Recently, diffusion models (DMs) have demonstrated their advantageous potential for generative tasks. Widespread interest exists in incorporating DMs into downstream applications, such as producing or editing photorealistic images. However, the practical deployment and unprecedented power of DMs raise legal issues, including copyright protection and the monitoring of generated content. In this regard, watermarking has been a proven solution for copyright protection and content monitoring, but it remains underexplored in the DM literature. Specifically, DMs generate samples from longer tracks and may have newly designed multimodal structures, necessitating modification of conventional watermarking pipelines. To this end, we conduct comprehensive analyses and derive a recipe for efficiently watermarking state-of-the-art DMs (e.g., Stable Diffusion), via training from scratch or finetuning. Our recipe is straightforward but involves empirically ablated implementation details, providing a solid foundation for future research on watermarking DMs. Our code: https://github.com/yunqing-me/WatermarkDM.
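As a toy illustration of the conventional watermarking pipelines the abstract refers to (and not the recipe proposed in this paper), a classic least-significant-bit (LSB) scheme embeds a binary message directly into pixel values and recovers it by reading the parity of those pixels. The function names and the choice of a grayscale `uint8` image here are illustrative assumptions:

```python
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a binary message into the least-significant bits of the
    first bits.size pixels of a uint8 image (toy watermarking sketch)."""
    flat = image.flatten().astype(np.uint8)  # flatten() returns a copy
    assert bits.size <= flat.size, "message longer than image capacity"
    # Clear each target pixel's lowest bit, then OR in the message bit.
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits.astype(np.uint8)
    return flat.reshape(image.shape)

def decode_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Recover the first n_bits message bits from the pixel parities."""
    return (image.flatten()[:n_bits] & 1).astype(np.uint8)
```

Such pixel-level schemes assume direct control over the final image; as the abstract notes, DMs generate samples through long sampling tracks and multimodal conditioning, which is why the paper instead studies watermarking the model itself via training from scratch or finetuning.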