Music is a powerful medium for influencing listeners' emotional states, and this capacity has driven a surge of research interest in AI-based affective music generation in recent years. Many existing systems, however, are black boxes that are not directly controllable, making them less flexible and adaptive to users. We present \textit{AffectMachine-Pop}, an expert system capable of generating retro-pop music according to arousal and valence values, which can either be pre-determined or derived from a listener's real-time emotional state. To validate the efficacy of the system, we conducted a listening study demonstrating that AffectMachine-Pop is capable of generating affective music at target levels of arousal and valence. The system is tailored for use either as a tool for generating interactive affective music based on user input, or for incorporation into biofeedback or neurofeedback systems to assist users with emotion self-regulation.