CompMusic Seminar
9:30 Gerhard Widmer (Johannes Kepler University, Linz, Austria)
"Con Espressione! - An Update from the Computational Performance Modelling Front"
Computational models of expressive music performance have been the target of considerable research efforts over the past 20 years. Motivated by the desire to gain a deeper understanding of the workings of this complex art, various research groups have proposed different classes of computational models (rule-based, case-based, machine-learning-based) for different parametric dimensions of expressive performance, and various studies have demonstrated that such models can provide interesting new insights into this musical art. In this presentation, I will review recent work that has carried this research further. I will mostly focus on a general modelling framework known as the "Basis Mixer", and show various extensions of this model that have gradually increased the modelling power of the framework. However, it will also become apparent that there are still serious limitations and obstacles on the path to comprehensive models of musical expressivity, and I will briefly report on a new ERC project entitled "Con Espressione", which expressly addresses these challenges. Along the way, we will also hear about a recent musical "Turing Test" that is said to demonstrate that computational performance models have now reached a level where their interpretations of classical piano music are qualitatively indistinguishable from true human performances -- a story that I will quickly try to put into perspective ...
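To make the basis-function idea behind models like the "Basis Mixer" concrete, here is a minimal sketch in Python: an expressive parameter such as per-note loudness is predicted from score-derived basis functions. The specific features, the linear least-squares fit, and the toy data are illustrative assumptions, not the actual model.

    import numpy as np

    # Illustrative sketch of a basis-function performance model: each score
    # note is mapped to a vector of score-derived basis functions, and an
    # expressive parameter (here, loudness) is predicted as a learned
    # weighted combination of those bases. All feature choices are made up.

    def basis_functions(note):
        """Map a score note to hypothetical basis features."""
        return np.array([
            note["pitch"] / 127.0,                # normalised MIDI pitch
            note["duration_beats"],               # notated duration in beats
            1.0 if note["beat"] == 0.0 else 0.0,  # downbeat indicator
            1.0,                                  # bias term
        ])

    def fit_weights(notes, loudness):
        """Least-squares fit: find w so that basis_functions(n) . w ~ loudness."""
        Phi = np.stack([basis_functions(n) for n in notes])
        w, *_ = np.linalg.lstsq(Phi, loudness, rcond=None)
        return w

    # Toy data: three notes with loudness values measured from a performance.
    notes = [
        {"pitch": 60, "duration_beats": 1.0, "beat": 0.0},
        {"pitch": 64, "duration_beats": 0.5, "beat": 1.0},
        {"pitch": 67, "duration_beats": 2.0, "beat": 0.0},
    ]
    loudness = np.array([0.8, 0.5, 0.9])

    w = fit_weights(notes, loudness)
    print(basis_functions(notes[0]) @ w)  # predicted loudness for the first note

The actual framework and its extensions are considerably richer; the sketch only shows the shape of the problem: score features in, expressive performance parameters out.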
The increasing availability of music data, networks, and computing resources has the potential to profoundly change the methodology of musicological research towards a more data-driven, empirical approach. However, many questions remain unanswered regarding the technology, data collection and provision, metadata, analysis methods, and legal aspects. This talk will report on an effort to address these questions in the Digital Music Lab project, and present the outcomes achieved, the lessons learnt, and the challenges that emerged in the process.
Science and technology play an increasingly vital role in how we experience, compose, perform, share and enjoy musical audio. The invention of recording in the late 19th century is a profound example: for the first time in human history, it disconnected music performance from listening, and it gave rise to a new industry as well as new fields of scientific investigation. But musical experience is not just about listening. Human minds make sense of what we hear by categorising and by making associations, cognitive processes which give rise to meaning or influence our mood. Perhaps the next revolution akin to recording is therefore in audio semantics. Technologies that mimic our abilities and enable interaction with audio on human terms are already changing the way we experience it. The emerging field of Semantic Audio lies at the confluence of several key fields: signal processing, machine learning, and Semantic Web ontologies that enable knowledge representation and logic-based inference. In my talk, I will put forward that synergies between these fields provide a fruitful, if not necessary, way to account for human interpretation of sound. I will outline music- and audio-related ontologies and ontology-based systems that have found applications on the Semantic Web, as well as intelligent audio production tools that enable linking musical concepts with signal processing parameters in audio systems. I will then describe my recent work demonstrating how web technologies may be used to create interactive performance systems that enable mood-based audience-performer communication, and how semantic audio technologies enable us to link social tags and audio features to better understand the relationship between music and emotions. I will hint at how some principles used in my research also enhance scientific protocols, ease experimentation and facilitate reproducibility. Finally, I will discuss challenges in fusing audio and semantic technologies and outline some future opportunities they may bring about.
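As a concrete illustration of the ontology side of Semantic Audio, the sketch below uses Python's rdflib to describe a recording with the Music Ontology and query it with SPARQL. The mo:, foaf: and dc: terms are real vocabulary; the URIs, titles and names are hypothetical placeholders.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import FOAF, RDF

    # Illustrative sketch: describe a recording with the Music Ontology so
    # that musical concepts become linkable, queryable Semantic Web data.
    # All example URIs, titles and names below are hypothetical.
    MO = Namespace("http://purl.org/ontology/mo/")
    DC = Namespace("http://purl.org/dc/elements/1.1/")

    g = Graph()
    track = URIRef("http://example.org/track/1")    # hypothetical URI
    artist = URIRef("http://example.org/artist/1")  # hypothetical URI

    g.add((track, RDF.type, MO.Track))
    g.add((track, DC.title, Literal("Example Etude")))
    g.add((artist, RDF.type, MO.MusicArtist))
    g.add((artist, FOAF.name, Literal("Example Performer")))
    g.add((track, FOAF.maker, artist))

    # A SPARQL query over the graph, as a Semantic Web client might issue it.
    q = """SELECT ?name ?title WHERE {
               ?t a mo:Track ; dc:title ?title ; foaf:maker ?a .
               ?a foaf:name ?name .
           }"""
    for row in g.query(q, initNs={"mo": MO, "dc": DC, "foaf": FOAF}):
        print(row.name, "-", row.title)

Once musical metadata is expressed this way, the same graph can be linked to audio features or social tags and queried with logic-based inference, which is the kind of synergy the talk describes.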