Researchers at the University of Michigan are using modern big data techniques to transform how we understand, create and interact with music.
Four U-M research teams, including two led by CSE researchers, will receive support for projects that apply data science tools such as machine learning and data mining to the study of music theory, performance, social media-based music making, and the connection between words and music. The funding is provided under the Data Science for Music Challenge Initiative through the Michigan Institute for Data Science.
Each project will receive $75,000 over one year. The two projects led by CSE researchers are described below.
Understanding and Mining Patterns of Audience Engagement
Modern mobile and web audio technologies remove many of the technical barriers to large-scale audience participation in music concerts. Interactive music applications, which shape a live performance and turn the audience into a connected ensemble, can be distributed instantly to audience members, allowing them to generate music from their smartphones. However, designing interactions that encourage and sustain audience participation over time remains an ongoing challenge.
As a foundation for this work, the team plans to leverage its existing live performance system, Crowd in C[loud], which combines an interactive audience UI for generating music, an interface that lets an expert musician orchestrate the audience's participation, and an ephemeral social network that supports musical collaboration.
The team also plans to develop a suite of data-driven computational methods to help them understand audiences' behavior during interactive music performances through analysis of large-scale user-to-user interaction data. The data from individual audience members can provide important evidence of the extent to which each participant is engaged with the performance.

This work will extend the researchers' understanding of how participants interact with each other and generate musical content during live performances, leading to insights on how to better facilitate audience engagement in large-scale participatory systems beyond music, e.g., classrooms, public events, and academic conferences.
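The article does not describe the team's actual analysis methods, but the idea of deriving a per-participant engagement indicator from interaction logs can be sketched as follows. This is a hypothetical illustration: the event-log format (timestamped user interactions) and the scoring rule (fraction of time windows in which a participant was active) are assumptions, not the project's method.

```python
from collections import defaultdict

def engagement_scores(events, window=30.0):
    """Score each participant by how consistently they interact.

    events: list of (timestamp_seconds, user_id) pairs -- a hypothetical
    interaction-log format for a participatory performance.
    A user's score is the fraction of time windows (of `window` seconds)
    in which they interacted at least once; 1.0 means continuously engaged.
    """
    if not events:
        return {}
    end = max(t for t, _ in events)
    n_windows = int(end // window) + 1
    active = defaultdict(set)  # user_id -> set of window indices with activity
    for t, user in events:
        active[user].add(int(t // window))
    return {user: len(wins) / n_windows for user, wins in active.items()}
```

For example, with a 30-second window, a participant who taps at 0s and 35s is active in both windows (score 1.0), while one who taps only at 5s is active in one of two (score 0.5).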
The Sound of Text
Music and words often come together, in the millions of songs and soundtracks that delight us, and yet for most of the words in the world, their music is silent.
CSE Prof. Rada Mihalcea; Anıl Çamcı, assistant professor of performing arts technology; Sile O'Modhrain, associate professor of performing arts technology; and CSE research fellow Jonathan Kummerfeld will develop data-intensive algorithms that leverage existing alignments between words and music to produce a musical interpretation for any text. They will do this by building a large aligned collection of text and music, drawing on publicly available digital collections of songs and lyrics and leveraging automatic algorithms for data alignment. They will also develop novel neural network-based algorithms for text-to-music generation, building on recent advances in sequence-to-sequence deep learning to uncover patterns of connection between language and music that can be used in the generation process.
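The project's learned models are not described in the article, but the core notion of mapping a word sequence to a note sequence can be illustrated with a toy stand-in. The deterministic word-to-pitch rule below is purely illustrative and is not the team's neural approach; it only shows the input/output shape a text-to-music generator works with.

```python
def text_to_notes(text, base_midi=60):
    """Toy text-to-music mapping (illustrative only, not the project's model).

    Each word is mapped to a MIDI pitch derived from its letters, so the same
    word always yields the same note -- a deterministic stand-in for the
    learned text-to-music alignment a sequence-to-sequence model would produce.
    base_midi=60 is middle C; pitches stay within one C-major octave above it.
    """
    scale = [0, 2, 4, 5, 7, 9, 11]  # C-major scale degrees in semitones
    notes = []
    for word in text.lower().split():
        degree = sum(ord(c) for c in word if c.isalpha()) % len(scale)
        notes.append(base_midi + scale[degree])
    return notes
```

A real sequence-to-sequence model would replace this rule with an encoder over the text and a decoder over musical events, trained on the aligned text-music collection the team plans to build.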
At the end of the project, the team will organize a public event that both communicates and demonstrates its outcomes, consisting of research presentations on data science topics interleaved with musical performances generated by translating the text describing the research into music.
Posted: May 29, 2018