
Periodic Reporting for period 1 - MorpheuS (Hybrid Machine Learning – Optimization techniques To Generate Structured Music Through Morphing And Fusion)

Teaser

State-of-the-art generation/improvisation systems include the Continuator, OMax, and Mimi. These systems generate music that sounds good on a low (note-to-note) level but lacks the critical structure and direction necessary for long-term coherence. In this project, we have...

Summary

State-of-the-art generation/improvisation systems include the Continuator, OMax, and Mimi. These systems generate music that sounds good on a low (note-to-note) level but lacks the critical structure and direction necessary for long-term coherence. In this project, we have addressed this challenge by creating a system that generates music compositions based on structural templates. Our novel approach deploys machine learning methods in an optimization context to morph existing musical pieces into new ones and to fuse disparate styles. We have developed a novel hybrid framework that combines the strengths of optimization techniques with machine-learned models to generate music with long-term structure, such as recurrent patterns. In doing so, we have also developed a mathematical model for calculating the tonal tension of music, which proved to be a useful tool for generating affective music.
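To illustrate the hybrid idea, here is a deliberately simplified sketch: a local search that mutates single notes and keeps changes that bring a candidate piece's tension curve closer to a target profile. The `tension` surrogate (pitch-class spread), the segment length, and all function names are illustrative stand-ins, not the project's actual implementation, which uses a richer tension model and search strategy.

```python
import random

def tension(segment):
    """Toy surrogate for tonal tension: pitch-class spread of a segment.
    (MorpheuS uses a spiral-array-based tension model; this stands in for it.)"""
    pcs = sorted({p % 12 for p in segment})
    return max(pcs) - min(pcs) if pcs else 0

def profile(piece, seg_len=4):
    """Tension value per consecutive segment of the piece."""
    return [tension(piece[i:i + seg_len]) for i in range(0, len(piece), seg_len)]

def cost(piece, target):
    """Squared distance between the piece's tension profile and the target."""
    return sum((a - b) ** 2 for a, b in zip(profile(piece), target))

def optimize(piece, target, iters=2000, seed=0):
    """Hill-climbing sketch: mutate one note at a time, keep improvements."""
    rng = random.Random(seed)
    best = list(piece)
    best_cost = cost(best, target)
    for _ in range(iters):
        cand = list(best)
        i = rng.randrange(len(cand))
        cand[i] = rng.randrange(60, 72)  # swap in a random MIDI pitch (C4-B4)
        c = cost(cand, target)
        if c < best_cost:
            best, best_cost = cand, c
    return best, best_cost
```

In the real system, the objective also rewards preserving the repeated patterns of a template piece, so the search optimizes tension and long-term structure jointly rather than tension alone.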

Work performed

One of the main contributions of this work is the development of a multidisciplinary technique for generating musical pieces with structure and long-term coherence. The MorpheuS system implements a novel mathematical model for calculating tonal tension in music. This model is then used to generate new music with a prespecified, or learned, tension profile. The generated (affective) music advances the state of the art, as it is optimized to contain the repeated patterns and themes of a template piece, found with a state-of-the-art pattern detection algorithm. Musical compositions created by MorpheuS have been performed by Prof. Elaine Chew in Stanford, London, Brighton and Cambridge.
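The tension model builds on the spiral array, a geometric model of tonality in which each pitch class is a point on a helix indexed by its position along the circle of fifths; one tension measure in this family is the "cloud diameter", the largest distance between the pitch points sounding in a segment. The sketch below illustrates that idea only; the helix constants (`r`, `h`) are assumed values, and the actual MorpheuS model uses additional measures and calibrated parameters.

```python
import itertools
import math

def pitch_point(fifths_index, r=1.0, h=0.4):
    """Place a pitch class on a helix, indexed by circle-of-fifths position
    (C=0, G=1, D=2, ...). Radius r and rise h are assumed, uncalibrated values."""
    k = fifths_index
    return (r * math.sin(k * math.pi / 2),
            r * math.cos(k * math.pi / 2),
            k * h)

def cloud_diameter(points):
    """Largest pairwise distance among a segment's pitch points: consonant
    chords cluster tightly on the helix, dissonant ones spread out."""
    return max(math.dist(a, b) for a, b in itertools.combinations(points, 2))
```

A consonant triad such as C-E-G occupies nearby circle-of-fifths positions (0, 4, 1) and yields a small diameter, while a chromatic cluster such as C-B-C# (positions 0, 5, 7) spreads much further, so its diameter, and hence its modeled tension, is larger.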

Finally, we explored the power of deep learning methods for capturing musical style. These models, which include a semantic vector space model and long short-term memory models that implement a novel, musically inspired image representation, confirm the huge potential of deep learning in this field. The researcher also organized the first International Workshop on Deep Learning and Music, held jointly with the International Joint Conference on Neural Networks in Anchorage, US. The presence of 50+ registered participants, and speakers from both Google and Pandora, testifies to the potential of deep learning techniques for music composition.

To facilitate the work of music analysts and music cognition researchers, and to demonstrate the developed tension model, we built an interactive multimodal website, called IMMA, that visualises audio and score characteristics in sync with the score and the audio of a performance. We plan to further extend this platform into a widely used music analysis platform and score repository.

Final results

The MorpheuS system is the first system to generate music with a specified musical tension, making it well suited for future applications in computer games, film, and stock music for advertising. As the generated music is among the first to contain long-term structure and recurring patterns in this unique way, it offers new opportunities for the digital music industry and the field of automatic composition. Music generated with such long-term structure can be integrated into our daily lives and become a valid form of music composition for laymen, through smartphones or online applications.

Website & more info

More info: http://dorienherremans.com/morpheus.