Speaker
Seung-Goo Kim
(Max Planck Institute for Empirical Aesthetics)
Description
We present an ongoing study to create a large-scale multimodal dataset for the precise modelling of music-evoked emotions, titled "ManyMusic 🎶." The dataset is designed to (1) include 1,080 full-length musical pieces spanning diverse genres and eliciting a wide range of emotions; (2) extensively sample EEG, fMRI, and behavioural signals from selected individuals across multiple sessions to capture subjective experiences with high precision; and (3) be publicly released to support interdisciplinary research on the neuroscience of musical emotions using AI-based approaches.
This poster provides an overview of the project, summarizes key data-quality metrics, and outlines future directions.
Author
Seung-Goo Kim
(Max Planck Institute for Empirical Aesthetics)
Co-authors
Pablo Alonso-Jiménez
(Universitat Pompeu Fabra, Spain)
Dmitry Bogdanov
(Universitat Pompeu Fabra, Spain)
Daniela Sammler
(Max Planck Institute for Empirical Aesthetics)