2nd AES Workshop on Intelligent Music Production

Date: 13 Sep 2016
Time: 09:00

Location: Willoughby Lecture Theatre, Charterhouse Square, London EC1M 6BQ

See the Venue and travel section below for a location map.



A one-day workshop organised by the British Section of the Audio Engineering Society and the Centre for Digital Music at Queen Mary University of London.

Audio Engineering and Music Production are inherently technical disciplines, often requiring extensive training and a significant investment of time. The processes involved demand knowledge of signal processing and audio analysis, which can present barriers to musicians and non-specialists. The emerging field of Intelligent Music Production addresses these issues through the development of systems that map complex processes to intuitive interfaces and automate elements of the processing chain.

The event will provide an overview of some of the tools and techniques currently being developed in the field, while offering insight for audio engineers, producers and musicians looking to gain access to new technologies. The day will consist of presentations from leading academics, industry keynotes, and additional posters and demonstrations.

For more information about the event, please get in touch with the workshop chairs:

Brecht De Man
Joshua D. Reiss


Registration

Register on the Eventbrite page: http://wimp.eventbrite.co.uk/


Programme

09:00 Registration and coffee
09:50 Introduction
10:00 Keynote 1: Bryan Pardo
10:40 Owen Campbell – ADEPT: A Framework for Adaptive Digital Audio Effects
11:00 Iver Jordal – Evolving Artificial Neural Networks for Cross-Adaptive Audio Effects
11:20 Steven Gelineck – Exploring Visualisation of Channel Activity, Levels and EQ for User Interfaces Implementing the Stage Metaphor for Music Mixing
11:40 Simon Zagorski-Thomas – Sonic Cartoons and Semantic Audio Processing: Using invariant properties to create schematic representations of acoustic phenomena
12:00 Lunch and posters
13:20 Keynote 2: François Pachet
14:00 Keita Arimoto – Identification of Drum Overhead-Microphone Tracks in Multi-Track Recordings
14:20 Dominic Ward – Loudness Algorithms for Automatic Mixing
14:40 Stylianos Mimilakis – New Sonorities for Jazz Recordings: Separation and Mixing using Deep Neural Networks
15:00 Gerard Roma – Music remixing and upmixing using source separation
15:20 Coffee and posters
16:00 Panel session: Future Directions in Intelligent Sound Engineering
17:20 End


Proceedings

Download all proceedings as a ZIP file.

Editors: Brecht De Man and Joshua D. Reiss

Aurélien Antoine, Duncan Williams and Eduardo Miranda Towards a timbre classification system for orchestral excerpts [pdf] [BibTeX]
Keita Arimoto Identification of drum overhead-microphone tracks in multi-track recordings [pdf] [BibTeX]
Carsten Bönsel, Jakob Abeßer, Sascha Grollmisch and Stylianos Ioannis Mimilakis Automatic best take detection for electric guitar and vocal studio recordings [pdf] [BibTeX]
Andrew Bourbon and Simon Zagorski-Thomas Sonic cartoons and semantic audio processing: Using invariant properties to create schematic representations of acoustic phenomena [pdf] [BibTeX]
Owen Campbell, Curtis Roads, Andrés Cabrera, Matthew Wright and Yon Visell ADEPT: A framework for adaptive digital audio effects [pdf] [BibTeX]
Brecht De Man, Nicholas Jillings, David Moffat, Joshua D. Reiss and Ryan Stables Subjective comparison of music production practices using the Web Audio Evaluation Tool [pdf] [BibTeX]
Brecht De Man and Joshua D. Reiss The Open Multitrack Testbed: Features, content and use cases [pdf] [BibTeX]
Emmanuel Deruty Goal-oriented mixing [pdf] [BibTeX]
Christopher Dewey and Jonathan P. Wakefield Audio interfaces should be designed based on data visualisation first principles [pdf] [BibTeX]
Sean Enderby, Thomas Wilmering, Ryan Stables and György Fazekas A semantic architecture for knowledge representation in the digital audio workstation [pdf] [BibTeX]
Steven Gelineck and Anders Kirk Uhrenholt Exploring visualisation of channel activity, levels and EQ for user interfaces implementing the stage metaphor for music mixing [pdf] [BibTeX]
Nicholas Jillings and Ryan Stables JSAP: Intelligent audio plugin format for the Web Audio API [pdf] [BibTeX]
Iver Jordal Evolving neural networks for cross-adaptive audio effects [pdf] [BibTeX]
Hyunkook Lee Towards the development of intelligent microphone array designer [pdf] [BibTeX]
Sean McGrath, Adrian Hazzard, Alan Chamberlain and Steve Benford An ethnographic exploration of studio production practice [pdf] [BibTeX]
Kirk McNally What the masters teach us: New approaches in audio engineering and music production education [pdf] [BibTeX]
Adib Mehrabi, Simon Dixon and Mark Sandler Towards a comprehensive dataset of vocal imitations of drum sounds [pdf] [BibTeX]
Stylianos Ioannis Mimilakis, Estefanía Cano, Jakob Abeßer and Gerald Schuller New sonorities for jazz recordings: Separation and mixing using deep neural networks [pdf] [BibTeX]
Gerard Roma, Emad M. Grais, Andrew J. R. Simpson and Mark D. Plumbley Music remixing and upmixing using source separation [pdf] [BibTeX]
Spyridon Stasis, Jason Hockman and Ryan Stables Descriptor sub-representations in semantic equalisation [pdf] [BibTeX]
Simon Waloschek, Axel Berndt, Benjamin W. Bohl and Aristotelis Hadjakos Accelerating the editing phase in music productions using interactive scores [pdf] [BibTeX]
Dominic Ward and Joshua D. Reiss Loudness algorithms for automatic mixing [pdf] [BibTeX]
Alex Wilson and Bruno Fazenda An evolutionary computation approach to intelligent music production, informed by experimentally gathered domain knowledge [pdf] [BibTeX]

Keynote speakers

Bryan Pardo, head of the Northwestern University Interactive Audio Lab, is an associate professor in the Northwestern University Department of Electrical Engineering and Computer Science and acting head of the Cognitive Systems division. Prof. Pardo received an M.Mus. in Jazz Studies in 2001 and a Ph.D. in Computer Science in 2005, both from the University of Michigan. He has authored over 60 peer-reviewed publications and is an associate editor for IEEE Transactions on Audio, Speech, and Language Processing. He has developed speech analysis software for the Speech and Hearing department of the Ohio State University, statistical software for SPSS, and worked as a machine learning researcher for General Dynamics. While finishing his doctorate, he taught in the Music Department of Madonna University. When he’s not programming, writing or teaching, he performs throughout the United States on saxophone and clarinet at venues such as Albion College, the Chicago Cultural Center, the Detroit Concert of Colors, Bloomington Indiana’s Lotus Festival and Tucson’s Rialto Theatre.

François Pachet is director of SONY Computer Science Laboratory Paris, where he leads the music research team, which conducts research on interactive music listening, composition and performance. Since its creation, the team has developed several award-winning technologies (constraint-based spatialization, intelligent music scheduling using metadata) and systems (MusicSpace, PathBuilder, the Continuator for interactive music improvisation, etc.). His current goal is to create a new generation of authoring tools able to boost individual creativity. These tools, called Flow Machines, abstract “style” from concrete corpora (text, music, etc.) and turn it into a malleable substance that acts as a texture. Applications range from music composition to text or drawing generation, and probably much more.

Panel session: “Future Directions in Intelligent Sound Engineering”

Chair: Ryan Stables (Birmingham City University)

Panel members: Henry Bourne (Calrec Audio), Andy Farnell (Mogees), Bruno Fazenda (University of Salford), Alessandro Palladini (Music Group)


Committee

Workshop chairs: Brecht De Man and Joshua D. Reiss

Technical chair: David Moffat

Poster chairs: Di Sheng and Emmanouil Chourdakis

Social media: Rod Selfridge

AV tech: Adán Benito Temprano

Audio recording: Axel Drioli, Benedict Sanderson and Jim Donaldson

Video recording: Matthew Cheshire

Stage managers: Will Wilkinson and Saurjya Sarkar

Photography: Alessia Milo

Logistics: Matthew White


Pictures

Pictures of the workshop by Alessia Milo can be found on the Centre for Digital Music web page.

Venue and travel

The nearest London Underground stations are Barbican and Farringdon, on the Hammersmith & City, Metropolitan and Circle lines. Click the Charterhouse Square campus map to open the vector image in a new tab/window.

First edition

The first edition of the AES Workshop on Intelligent Music Production was held at Birmingham City University. See the event page at http://www.semanticaudio.co.uk/events/wimp2015/ for more information and to watch the talks.
