This work introduces a new music generation system, AffectMachine-Classical, that is capable of generating affective Classical music in real time. AffectMachine was designed to be incorporated into biofeedback systems (such as brain-computer interfaces) to help users become aware of, and ultimately mediate, their own dynamic affective states. That is, the system was developed for music-based MedTech to support real-time emotion self-regulation in users. We provide an overview of the rule-based, probabilistic system architecture, describing its main components and what makes them novel. We then present the results of a listener study conducted to validate the system's ability to reliably convey target emotions to listeners. The findings indicate that AffectMachine-Classical is highly effective at communicating various levels of Arousal ($R^2 = .96$) to listeners, and is also quite convincing in terms of Valence ($R^2 = .90$). Future work will embed AffectMachine-Classical into biofeedback systems to leverage the efficacy of the affective music for emotional well-being in listeners.