Information

Type
Seminar / Conference
Location
Ircam, Salle Igor-Stravinsky (Paris)
Duration
48 min
Date
November 6, 2019

The aim of this research project is to model the multivariate information structures inherent in multiple sound signals using different machine learning methods. Here, we consider a structure to be any underlying sequence that constitutes a higher-level abstraction of an original input sequence. In musical audio signals, this includes both the high-level properties of sound mixtures (e.g., chord progressions, key changes, thematic organization) and the resulting audio signal (e.g., the emergent timbral properties well known in orchestration).

Our application case is to develop software that interacts in real time with a musician by inferring expected structures (e.g., a chord progression).
To achieve this goal, we divided the project into two main tasks: a listening module and a symbolic generation module. The listening module extracts the musical structure played by the musician, whereas the generative module predicts musical sequences based on the extracted features.
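The two-module design can be sketched at the symbolic level. The following is a minimal, hypothetical illustration, not the project's actual implementation: the listening module is stubbed out as chord labels assumed to be already extracted from audio, and the generative module is approximated by a first-order Markov model that predicts the most likely next chord.

```python
from collections import Counter, defaultdict

def train_markov(chord_sequences):
    """Count first-order transitions between chord labels."""
    transitions = defaultdict(Counter)
    for seq in chord_sequences:
        for prev, nxt in zip(seq, seq[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_next(transitions, chord):
    """Return the most frequent successor of `chord`, or None if unseen."""
    if chord not in transitions:
        return None
    return transitions[chord].most_common(1)[0][0]

# Toy corpus: chord labels the listening module might have extracted.
corpus = [
    ["C", "Am", "F", "G", "C"],
    ["C", "F", "G", "C"],
    ["Am", "F", "G", "C"],
]
model = train_markov(corpus)
print(predict_next(model, "G"))  # -> C
```

In the real system the prediction step would condition on richer extracted features (key, thematic context) rather than a single preceding chord, but the pipeline shape — extract symbols, then predict the continuation — is the same.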


Tristan Carsault: Structure discovery in multivariate musical audio signals through machine learning


