Drawing on creative process theory and using data from a scenario-based experiment and a multi-source, three-wave field survey, this study examines the dual pathways through which workplace use of generative artificial intelligence (AI) affects employee creativity. The findings show that generative AI use enhances creativity by increasing employees' absorptive capacity, but also suppresses it by weakening employees' cognitive elaboration; employees' interpersonal interaction mitigates the negative effect of generative AI use on cognitive elaboration and strengthens its positive effect on absorptive capacity. Generative AI use thus has a double-edged-sword effect on employees' creative thinking processes and creative performance, and interpersonal interaction plays an important complementary role in human-AI interaction contexts.
The probabilistic diffusion model (DM), which generates content through inference over a recursive chain structure, has emerged as a powerful framework for visual generation. After pre-training on enormous amounts of unlabeled data, the model needs to be properly aligned to meet the requirements of downstream applications, so efficiently aligning the foundation DM is a crucial task. Contemporary methods are based either on Reinforcement Learning (RL) or on truncated Backpropagation (BP). However, RL and truncated BP suffer from low sample efficiency and biased gradient estimation, respectively, resulting in limited improvement or, worse, complete training failure. To overcome these challenges, we propose the Recursive Likelihood Ratio (RLR) optimizer, a zeroth-order-informed fine-tuning paradigm for DMs. The zeroth-order gradient estimator enables rearrangement of the computation graph within the recursive diffusive chain, making the RLR's gradient estimator unbiased with lower variance than other methods. We provide theoretical guarantees for the performance of the RLR, and extensive experiments on image and video generation tasks validate its superiority. Furthermore, we propose a novel prompt technique that pairs naturally with the RLR to achieve a synergistic effect. See our implementation at https://github.com/RTkenny/RLR-Opimtizer. Copyright © 2025, The Authors. All rights reserved.
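The core idea the abstract points to, estimating the gradient of a terminal reward through a recursive sampling chain without backpropagating through it, can be illustrated with a generic antithetic zeroth-order estimator. This is a minimal sketch under my own assumptions, not the paper's RLR optimizer: the toy chain, the reward, the perturbation scale, and all function names are illustrative only.

```python
import numpy as np

def rollout_chain(theta, x_T, n_steps, rng):
    """Toy 'diffusion-like' recursive chain: each step applies a
    parameterized update plus noise. Stands in for a real DM sampler
    (hypothetical, for illustration only)."""
    x = x_T
    for _ in range(n_steps):
        x = x - 0.1 * (theta @ x) + 0.01 * rng.standard_normal(x.shape)
    return x

def reward(x, target):
    """Toy terminal reward: negative squared distance to a target sample."""
    return -np.sum((x - target) ** 2)

def zeroth_order_grad(theta, x_T, target, n_steps, sigma=0.05, n_pairs=32, seed=0):
    """Antithetic zeroth-order estimate of the gradient of E[reward] w.r.t.
    theta; only forward rollouts are needed, no backprop through the chain."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(theta)
    for _ in range(n_pairs):
        eps = rng.standard_normal(theta.shape)
        r_plus = reward(rollout_chain(theta + sigma * eps, x_T, n_steps, rng), target)
        r_minus = reward(rollout_chain(theta - sigma * eps, x_T, n_steps, rng), target)
        grad += (r_plus - r_minus) / (2.0 * sigma) * eps
    return grad / n_pairs

# usage: one gradient-ascent step on the terminal reward
rng = np.random.default_rng(1)
theta = 0.1 * rng.standard_normal((8, 8))
x_T = rng.standard_normal(8)
theta += 1e-3 * zeroth_order_grad(theta, x_T, target=np.zeros(8), n_steps=20)
```

Because the estimator touches the chain only through forward evaluations of the reward, the graph of intermediate denoising states never needs to be stored or differentiated, which is the general property that makes zeroth-order alignment of long recursive chains attractive.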
This study investigates the policy learning problem in observational studies, where the treatment can be multivalued and the propensity scores are unknown. We approximate the optimal policy in a global policy class of infinite complexity (VC/Natarajan) dimension using a sequence of sieve policy classes of finite complexity dimension. The optimal policy within each sieve class is estimated by maximizing the empirical welfare, constructed through the doubly robust moment condition and cross-fitting. To select a suitable sieve space, we maximize the penalized empirical welfare, with the penalty determined by either the Rademacher complexity or a holdout method. We establish oracle inequalities that characterize the bias-variance tradeoff achieved by the data-driven policy estimator. We also investigate two specific sieve choices: (a) a monotone single-index model and (b) a systematic discretization method that draws on conventional sieve results for smooth functions, such as linear sieves and deep neural networks. In the empirical study, we apply our method to examine the policy of assigning individuals to job training of different lengths.
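To make the cross-fitted, doubly robust empirical welfare criterion concrete for a multivalued treatment, here is a minimal sketch using generic scikit-learn learners. The choice of random forests as nuisance models, the propensity clipping threshold, and the fold structure are my own illustrative assumptions; the paper's sieve search and complexity penalization operate on top of an objective of this general form.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

def doubly_robust_scores(X, D, Y, n_treatments, n_folds=5, seed=0):
    """Cross-fitted doubly robust scores Gamma[i, d]: an estimate of the
    outcome unit i would obtain under treatment d."""
    Gamma = np.zeros((len(Y), n_treatments))
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    for train, test in kf.split(X):
        prop = RandomForestClassifier(random_state=seed).fit(X[train], D[train])
        proba = np.clip(prop.predict_proba(X[test]), 0.01, 1.0)  # trimmed propensities
        col = {c: j for j, c in enumerate(prop.classes_)}        # treatment -> column
        for d in range(n_treatments):
            mask = D[train] == d
            mu = RandomForestRegressor(random_state=seed).fit(X[train][mask], Y[train][mask])
            mu_hat = mu.predict(X[test])
            ipw = (D[test] == d) * (Y[test] - mu_hat) / proba[:, col[d]]
            Gamma[test, d] = mu_hat + ipw
    return Gamma

def empirical_welfare(Gamma, assignments):
    """Average doubly robust score under the treatments a candidate policy assigns."""
    return Gamma[np.arange(len(assignments)), assignments].mean()
```

A candidate policy's empirical welfare is then `empirical_welfare(Gamma, policy(X))`; maximizing this quantity, plus a complexity penalty, over increasingly rich sieve policy classes corresponds to the selection step described in the abstract.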
This paper studies tourism demand under structural change and, building on a vector autoregression with exogenous variables (VARX) model, proposes a segmented combination forecasting method. Unlike most existing studies, which build combination forecasting models on the full data set, this paper brings the time dimension into the combination forecast: variables from different time periods are treated as independent units, and the combination forecasting model is constructed on segmented time-series data sets. The method uses tourists' web search behavior as exogenous variables to forecast tourist numbers and captures the differentiated effects of these exogenous variables at different points in time, particularly their dynamics under sudden shocks such as the COVID-19 pandemic. Empirical results show that the segmented combination of VARX models achieves higher accuracy in forecasting China's outbound tourist numbers, with the gain attributable to accounting for the period-specific effects of the exogenous variables. An ex-post analysis, in particular the out-of-sample forecast of China's outbound tourism trend for 2024, further indicates that as the global tourism market gradually recovers, China's outbound tourist numbers will follow a positive upward growth trajectory. This conclusion is consistent with trend analyses in the publicly available literature, further confirming the practical value of the proposed forecasting method.
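As a rough illustration of what a segmented VARX combination forecast could look like in code, the sketch below fits a VARX on each time segment and averages the resulting forecasts. This is not the paper's actual procedure: the equal-weight default, the segment breakpoints, and the choice to re-apply each segment's estimated coefficients to the full sample before forecasting are all assumptions made for illustration.

```python
import numpy as np
from statsmodels.tsa.statespace.varmax import VARMAX

def segmented_varx_combination(endog, exog, breakpoints, exog_future, steps, weights=None):
    """Illustrative segmented VARX combination forecast.

    endog: DataFrame of endogenous series (e.g., outbound tourist numbers
    for several destinations); exog: DataFrame of search indices aligned
    with endog; breakpoints: list of row indices that split the sample
    into segments; exog_future: DataFrame with `steps` rows of future
    exogenous values.
    """
    bounds = [0] + list(breakpoints) + [len(endog)]
    forecasts = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        # estimate period-specific VARX coefficients on one segment
        res = VARMAX(endog.iloc[lo:hi], exog=exog.iloc[lo:hi], order=(1, 0)).fit(disp=False)
        # re-apply the segment's coefficients to the full sample so that
        # every segment model forecasts the same future horizon
        res_full = res.apply(endog, exog=exog)
        forecasts.append(np.asarray(res_full.forecast(steps=steps, exog=exog_future)))
    if weights is None:
        weights = np.full(len(forecasts), 1.0 / len(forecasts))  # equal weights by default
    return sum(w * f for w, f in zip(weights, forecasts))
```

In practice the combination weights would typically be chosen from out-of-sample accuracy on a validation window rather than set equal, so that segments whose coefficient estimates transfer better to the current regime receive more weight.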