Machine learning approaches now achieve impressive generation capabilities in numerous domains such as image, audio, or video. However, most training \& evaluation frameworks revolve around the idea of strictly modelling the original data distribution rather than trying to extrapolate from it. This prevents such models from diverging from the original distribution and, hence, from exhibiting creative traits. In this paper, we propose various perspectives on how this challenging goal could be achieved, and provide preliminary results on our novel training objective called \textit{Bounded Adversarial Divergence} (BAD).