How sensitive should machine learning models be to changes in their inputs? We tackle the question of model smoothness and show that it is a useful inductive bias which aids generalization, adversarial robustness, generative modeling and reinforcement learning. We survey current methods of imposing smoothness constraints and observe that they lack the flexibility to adapt to new tasks, that they do not account for data modalities, and that they interact with losses, architectures and optimization in ways that are not yet fully understood. We conclude that further advances in the field hinge on finding ways to incorporate data, tasks and learning into our definitions of smoothness.
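As a concrete illustration (not taken from the paper), one widely used way to impose a smoothness constraint is to penalize the norm of the model's input gradients, a soft Lipschitz-style regularizer. The sketch below assumes PyTorch; the architecture, data and penalty weight `lam` are illustrative placeholders, not choices made in the text.

```python
# Minimal sketch: input-gradient-norm penalty as a soft smoothness constraint.
# All names (model, lam, data shapes) are illustrative assumptions.
import torch
import torch.nn as nn

def gradient_norm_penalty(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Mean squared L2 norm of d(output)/d(input), discouraging high local sensitivity."""
    x = x.clone().requires_grad_(True)
    out = model(x).sum()  # one backward pass yields per-example input gradients
    (grads,) = torch.autograd.grad(out, x, create_graph=True)
    return grads.flatten(1).norm(dim=1).pow(2).mean()

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
x, y = torch.randn(32, 16), torch.randn(32, 1)
lam = 0.1  # strength of the smoothness prior; as the abstract argues, task-dependent
loss = nn.functional.mse_loss(model(x), y) + lam * gradient_norm_penalty(model, x)
loss.backward()
```

The fixed scalar `lam` highlights the limitation the abstract raises: the constraint's strength is set independently of the task, the data modality and the rest of the training setup.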