Report


Periodic Reporting for period 2 - MUSICAL-MOODS (A mood-indexed database of scores, lyrics, musical excerpts, vector-based 3D animations, and dance video recordings)

Teaser

Musical-Moods aimed at enabling the capacity to classify and recognize emotions and mental states from multimedia data in interactive and intelligent music systems. Examples of application include the profiling of users and databases for creative and media industries...

Summary

Musical-Moods aimed at enabling the capacity to classify and recognize emotions and mental states from multimedia data in interactive and intelligent music systems. Examples of application include the profiling of users and databases for creative and media industries, improving access for citizens and researchers to musical heritage, services for audio on demand, education and training activities, music therapy, and music making.

OBJECTIVES and CONCLUSIONS
1) Dept. of Cognitive Sciences, University of California, Irvine (UCI): A multimodal game with a purpose (M-GWAP) for Internet users was developed and deployed online, drawing on a preliminary lyrics corpus of public-domain opera works and on transcribed language data from interviews with dancers.
2) Dept. of Dance, UCI: A multimodal database (audio, video, motion capture, and language data) was realized for mood indexing in terms of dancers' embodied cognition and for automatic music generation.
3) Dept. of Electronic Engineering, University of Rome Tor Vergata (UNITOV): A music mood classification model was implemented by leveraging domain experts' knowledge.

Work performed

PUBLICATIONS
- Journal articles:
Paolizzo, F., Pichierri, N., Casali, D., Giardino, D., Matta, M. & Costantini, G. (2019). Multilabel Automated Recognition of Emotions Induced Through Music. arXiv:1905.12629 [cs.SD].
Paolizzo, F. & Johnson, C. G. (2017) [2019]. Creative Autonomy Through Salience and Multidominance in Interactive Music Systems: Evaluating an Implementation. In: Journal of New Music Research (under review). arXiv:1711.11319v2 [cs.HC].
Alessandrini M., Micarelli A., Viziano A., Pavone I., Costantini G., Casali D., Paolizzo F. & Saggio, G. (2017). Body-worn triaxial accelerometer coherence and reliability related to static posturography in unilateral vestibular failure. In: Acta Otorhinolaryngol Italica. Vol.37 (3), pp. 231–236. arXiv:1907.11166 [physics.med-ph].
Costantini, G., Casali, D., Paolizzo, F., Alessandrini, M., Micarelli, A., Viziano, A. & Saggio, G. (2018). Towards the enhancement of body standing balance recovery by means of a wireless audio-biofeedback system. In: Medical Engineering & Physics. DOI: 10.1016/j.medengphy.2018.01.008. arXiv:1907.11542 [eess.SP].

- Conference articles:
Paolizzo, F. (2019). M-GWAP: An Online and Multimodal Game With A Purpose in WordPress for Mental States Annotation. arXiv:1905.12884 [cs.CL].
Paolizzo, F. & Johnson, C. G. (2018). Autonomy in the Interactive Music System VIVO: A New Framework. arXiv:1711.11319v1 [cs.HC].
Paolizzo, F. (2017). Enabling Embodied Analogies in Intelligent Music Systems. In: Proc. of A Body of Knowledge Conference: Embodied Cognition and the Arts. Irvine: University of California. arXiv:1712.00334 [cs.HC].

RESEARCH/DISSEMINATION
- Music Works/Call for artists:
Dept. of Music, University of California, Irvine (UCI), Music and Motion Lab. Chapter I & II (with Nicole Mitchell). Music Co-Director, Performer & informatics. 2018 & 2019.
Concert series: Claire Trevor School of the Arts UCI, xMPL. Emerse (with John Crawford, Lisa Naugle and Alan Terriciano). Multimedia performance series. Music Performer & informatics. 2016.

- Concerts:
Claire Trevor School of the Arts UCI, xMPL. Jazz: The House that America Built — Part II. Multimedia Dance Play. Music Director, Conductor, Performer and informatics. 2017.
Dept. of Dance UCI. JamXchange. (with Sharon Wray et al.). Various media. Concert. Music Performer & informatics. 2017.
CalIT2, Irvine. Pathways to Possible Worlds. New media performance. Composer, Performer & informatics. 2016.

DISSEMINATION
- Teaching:
“Contemporary Music Ensemble”, Dept. of Music, UCI. Co-taught with Kojiro Umezaki and Stephen Tucker. 2018.
“Dance Improvisation”, Dept. of Dance, UCI. With Lisa Naugle. 2016.
“Mood Technology for Creative Practice”, Dept. of Dance, UCI. Taught for three academic terms, 2017–2018.

- International meeting:
“Music, Computation and Emotions”. University of Rome Tor Vergata (UNITOV), Master in Sonic Arts.

- Invited speaker:
CalIT2, Irvine. Pathways to Possible Worlds. 2016.
Consiglio Nazionale delle Ricerche, Istituto di Calcolo e Reti ad Alte Prestazioni. The Musical-Moods Dataset: Multimodal Information Retrieval and Learning Through Human/Computational Creativity. 2019.
Talk at ICIT, Music Dept., UCI, Colloquium Series. http://music.arts.uci.edu/icit/icit-colloquium-fabio-paolizzo/

- Music work/Call for artists:
Theatre of Tor Bella Monaca, Rome. Sempre Libera / Always Free. (feat. Giancarlo Schiaffini, Eugenio Colombo et al.). Multimedia concert. Director, Composer, Music Performer & informatics. Summer School “Performing the Space: Integration among the Arts”. 2016.
Theatre of Tor Bella Monaca, Rome. Sempre Libera 2, Embodied. (feat. Lisa Naugle, John Crawford, Alipio Neto and DTM2 dance ensemble). Intermedia concert. Director, Composer, Music Conductor & Performer, & informatics. 2017.
Claire Trevor School of the Arts UCI, xMPL. Automata Embodied. (feat. Stephen Tucker, Kojiro Umezaki, Lukas Ligeti and the Contemporary Music Ensemble). Multimedia concert.

Final results

The consortia have already engaged in multiple follow-up proposals: three H2020 proposals, one multi-campus US-based funding proposal, and one national funding proposal (total budget for UNITOV as coordinating institution: 8.6 million). Strong impact is foreseen both in the specific fields investigated by the project and in other fields that can benefit from combining machine learning with multimodal information retrieval or language modelling.

ACHIEVEMENTS
1) Annotation game: Paolizzo, F., Powers, A. & Pearl, L. (2019). M-GWAP: Multimodal Game-With-A-Purpose. [online computer program]. UCI, UNITOV. This online game-with-a-purpose supports cognitive modelling and the classification or prediction of mood from multimedia exposure and natural language processing ("mindprints"). It is deployed for WordPress in PHP/JavaScript and will be used to generate future language-data annotations for a Musical-Moods spin-off project. Demo: https://goo.gl/Mquqaz
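The core idea of a game-with-a-purpose is that agreement between independent players turns raw play data into reliable labels. The sketch below illustrates this aggregation step with a simple majority vote; the function and data names are illustrative and do not reflect the actual M-GWAP schema or PHP codebase.

```python
from collections import Counter, defaultdict


def aggregate_annotations(annotations, min_votes=3):
    """Majority-vote mood label per clip from (clip_id, mood) player annotations.

    Returns {clip_id: mood} only for clips with at least `min_votes` votes,
    mirroring the GWAP principle that agreement between independent
    players signals a reliable annotation.
    """
    votes = defaultdict(Counter)
    for clip_id, mood in annotations:
        votes[clip_id][mood] += 1
    labels = {}
    for clip_id, counter in votes.items():
        if sum(counter.values()) >= min_votes:
            labels[clip_id] = counter.most_common(1)[0][0]
    return labels


# Hypothetical player annotations: clip1 reaches the vote threshold, clip2 does not.
raw = [("clip1", "joy"), ("clip1", "joy"), ("clip1", "sadness"),
       ("clip2", "calm"), ("clip2", "calm")]
print(aggregate_annotations(raw))  # {'clip1': 'joy'}
```

A production GWAP would typically weight votes by player reliability rather than counting them equally; the threshold here stands in for that quality control.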
2) Musical-Moods dataset: Interactive electroacoustic music excerpts and scores, vector-based 3D animations and video recordings of dance improvisation, and language mindprints of participants were produced with 12 professional dancers in a green-screen environment equipped with a 30-camera Vicon motion capture system and the VIVO interactive music system. More than 100 multimedia clips and over 300 minutes of material per media type were realized, totaling over 1 TB of audio, video and motion capture data. At: https://github.com/fabiopaolizzo/musical-moods
3) Mood classification of music files and associated data: The model achieved a mean classification accuracy of 88% and a root mean square error improvement of 0.44 over the state of the art.
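For context on the reported figures, mean accuracy and root mean square error for a multilabel mood classifier can be computed as in the minimal sketch below, assuming binary ground-truth label vectors and probabilistic predictions; this is an illustration of the metrics, not the project's actual evaluation pipeline.

```python
import math


def multilabel_metrics(y_true, y_pred, threshold=0.5):
    """Mean per-label accuracy and RMSE for multilabel mood predictions.

    y_true: list of binary label vectors (one entry per mood tag).
    y_pred: list of predicted probabilities, same shape as y_true.
    """
    n = sum(len(row) for row in y_true)
    correct = 0
    sq_err = 0.0
    for t_row, p_row in zip(y_true, y_pred):
        for t, p in zip(t_row, p_row):
            # Accuracy: threshold the probability into a 0/1 decision.
            correct += int((p >= threshold) == bool(t))
            # RMSE: squared distance between probability and true label.
            sq_err += (t - p) ** 2
    return correct / n, math.sqrt(sq_err / n)


# Hypothetical labels over three mood tags for two music excerpts.
y_true = [[1, 0, 1], [0, 1, 0]]
y_pred = [[0.9, 0.2, 0.6], [0.1, 0.7, 0.4]]
acc, rmse = multilabel_metrics(y_true, y_pred)
```

Reporting both metrics is common in this setting: accuracy reflects the thresholded decisions, while RMSE also rewards well-calibrated probabilities.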

Website & more info

More info: http://www.musicalmoods2020.org/.