AES-Midlands Workshop on Intelligent Music Production

Date: 8 Sep 2015
Time: 13:30

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Audio Engineering and Music Production are inherently technical disciplines, often involving extensive training and a significant investment of time. The processes involved require knowledge of signal processing and audio analysis, and can present barriers to musicians and non-specialists. The emerging field of Intelligent Music Production addresses these issues through the development of systems that map complex processes to intuitive interfaces and automate elements of the processing chain.

The event will give an overview of some of the tools and techniques currently being developed in the field, and offer insight for audio engineers, producers and musicians looking to gain access to new technologies. It will consist of presentations from leading academics, with additional posters and demonstrations.

Schedule:

1.30pm: Registration and Coffee
2.00pm: Josh Reiss, Queen Mary University of London. Intelligent Music Production: Challenges, Frontiers and Implications.
2.45pm: Hyunkook Lee, University of Huddersfield. Perceptually motivated 3D music production.

3.30pm: Coffee/Tea (+ posters)

3.45pm: Brecht De Man, Queen Mary University of London. Understanding The Mix: Learning music production practices through subjective evaluation.
4.30pm: Sean Enderby, Birmingham City University. Making Music Production More Accessible using Semantic Audio Analysis.

5.15pm: Sandwiches (+ posters/demos)

5.45pm: Alessandro Palladini, Music Group UK. Smart Audio Effects for Live Audio Mixing.
6.30pm: Alex Wilson, University of Salford. Navigating the “mix-space” in multitrack recordings.

The event is completely free and open to everyone. To attend, please register here.

Abstracts:

2.00pm: Intelligent Music Production: Challenges, Frontiers and Implications.
Josh Reiss, Queen Mary University of London.

In recent years there have been tremendous advances towards the creation of intelligent systems that are capable of performing audio production tasks which would typically be done manually by a sound engineer. These systems build on the emerging fields of multitrack signal processing, machine listening and cross-adaptive digital audio effects, as well as exploiting new knowledge regarding the psychoacoustics and perception of multitrack audio content. Here, we give an overview of these approaches, the challenges in the field and the current research directions. We also discuss the implications of this research; whether it is even possible to automate creative production tasks, and what this might mean for practicing musicians and sound engineers.
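
As a concrete illustration of the kind of task such systems automate (a minimal sketch for this announcement, not a method from the talk), the snippet below balances the perceived levels of multitrack stems by equalising a simple RMS loudness estimate. The stem names, the RMS-based loudness proxy and the -23 dBFS target are all illustrative assumptions.

```python
# Illustrative sketch of one automatic mixing task: level-balancing stems.
# The RMS loudness proxy and the target level are simplifying assumptions,
# not the approach presented in the talk.
import numpy as np

def rms_db(x, eps=1e-12):
    """Crude loudness proxy: RMS level in dBFS."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)) + eps)

def balance_gains(stems, target_db=-23.0):
    """Return per-stem linear gains that bring each stem's RMS to target_db."""
    return {name: 10 ** ((target_db - rms_db(x)) / 20) for name, x in stems.items()}

# Two seconds of noise stand in for real stems (hypothetical content).
fs = 44100
rng = np.random.default_rng(0)
stems = {"kick": 0.5 * rng.standard_normal(2 * fs),
         "vocal": 0.05 * rng.standard_normal(2 * fs)}

gains = balance_gains(stems)
mix = sum(g * stems[name] for name, g in gains.items())
print({k: round(float(v), 3) for k, v in gains.items()})
```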

2.45pm: Perceptually motivated 3D music production.
Hyunkook Lee, University of Huddersfield.

Next-generation multichannel audio formats employ height and/or overhead channels in order to provide the audience with a three-dimensional sense of immersion in sound reproduction. The psychoacoustic principles for vertical stereophony are fundamentally different to those for horizontal stereophony, and therefore new methods are required for the effective recording and rendering of 3D multichannel sound. This talk will introduce recent research into perceptually motivated methods for 3D audio recording, upmixing and downmixing, and discuss their applications to intelligent 3D music production.

3.45pm: Understanding The Mix: Learning music production practices through subjective evaluation.
Brecht De Man, Queen Mary University of London.

This talk gives an overview of the methodology and results of PhD research on mixing music. In this work, mixing ‘best practices’ are uncovered, confirmed or contradicted, primarily through the analysis and perceptual evaluation of real-world mixes.

4.30pm: Making Music Production More Accessible using Semantic Audio Analysis.
Sean Enderby, Birmingham City University.

Music production is an inherently technical discipline, which often requires extensive training. In this talk, we present initial developments from the SAFE project (semanticaudio.co.uk), in which we use semantic audio analysis to provide an intuitive in-DAW platform for audio processing.
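
By way of illustration only (the descriptor-to-parameter mappings below are invented for this announcement and are not SAFE project data), semantic control of an effect can be as simple as a lookup from a descriptive term to a set of processor parameters:

```python
# Toy illustration of semantic audio control: mapping descriptive terms to
# EQ parameters. The terms and values are invented for the example and are
# not taken from the SAFE project's data.
from typing import Dict, Tuple

# Hypothetical mapping: descriptor -> (centre frequency in Hz, gain in dB, Q)
DESCRIPTOR_TO_EQ: Dict[str, Tuple[float, float, float]] = {
    "warm":   (250.0,  3.0, 0.7),
    "bright": (8000.0, 4.0, 0.8),
    "muddy":  (300.0, -4.0, 1.0),  # a corrective cut rather than a boost
}

def eq_settings_for(term: str) -> Tuple[float, float, float]:
    """Look up peak-filter settings for a semantic descriptor."""
    try:
        return DESCRIPTOR_TO_EQ[term.lower()]
    except KeyError:
        raise ValueError(f"No mapping for descriptor: {term!r}")

print(eq_settings_for("warm"))  # -> (250.0, 3.0, 0.7)
```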

5.45pm: Smart Audio Effects for Live Audio Mixing.
Alessandro Palladini, Music Group UK.

Despite the recent advances in audio mixing offered by digital technologies, live mixing still presents many challenges: low-latency and real-time processing constraints, limited setup time, suboptimal acoustics and unexpected changes, to name just a few. Intuitive interfaces and processing tools that offer a fast and reliable workflow are therefore a key selling point of many modern digital mixing consoles. At Midas, we believe that the potential of advanced signal processing and artificial intelligence has not yet been fully exploited. In this presentation we will talk about our approach to the development of smart interfaces and smart audio effects for live audio mixing, and demonstrate our first commercially available products.

6.30pm: Navigating the “mix-space” in multitrack recordings.
Alex Wilson, University of Salford.

Despite continued research and development of automated music production tools, relatively little is known about the nature of quality perception in music production. This talk will describe a statistical analysis of a large dataset of alternative human-made mixes, carried out to determine the dimensions of mix variation and how they relate to quality. The formulation of a “mix-space” will also be described, which provides insight into how level balances are achieved in a simple mixing exercise, as well as a conceptual framework for the future study of the complex art of mix engineering.
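
As a loose illustration of the idea (an assumed reading of the term for this announcement, not necessarily the talk's formulation), a level-only "mix-space" can be obtained by discarding overall playback level and keeping only the relative balance between tracks:

```python
# Illustrative sketch: representing a level-only mix as a point in a
# "mix-space" of relative balances. Normalising the gain vector discards
# overall playback level, so mixes that differ only in master volume map
# to the same point. This reading of the term is an assumption made for
# illustration.
import numpy as np

def mix_space_point(gains):
    """Project a vector of per-track linear gains onto the unit sphere."""
    g = np.asarray(gains, dtype=float)
    norm = np.linalg.norm(g)
    if norm == 0:
        raise ValueError("at least one track must have non-zero gain")
    return g / norm

# Two mixes that differ only by a 6 dB master-level change...
mix_a = mix_space_point([1.0, 0.5, 0.25])
mix_b = mix_space_point([2.0, 1.0, 0.5])
print(np.allclose(mix_a, mix_b))  # True: same balance, same mix-space point
```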