Forthcoming Meetings

Discussions on Acoustic Modelling

Date: 3 Sep 2015
Time: 18:30

Location: Anglia Ruskin University
East Road, CB1 1PT

This session will include three short papers, on the topics of acoustic absorber design, the use of virtual acoustics for the study of medieval drama, and the application of computer models to the study of cathedrals.

Holistic Acoustic Absorber Design: from modelling and simulation to laboratory testing and practical realisation
Dr Rob Toulson (CoDE Research Institute, Anglia Ruskin University)
Dr Silvia Cirstea (VERU, Anglia Ruskin University)

Mathematical models for many acoustic absorption methods have previously been developed; however, there is very little accessible data describing how those models perform in a practical implementation of the design. This paper describes the development of a novel slotted-film sound absorber and presents the results at each design iteration. Initially, a number of mathematical models are considered in order to optimise the design. The modelled designs are laboratory tested with an impedance-tube system and, finally, the practical acoustic absorber design is tested in an ISO-accredited reverberation chamber. The results presented demonstrate that the simulation and impedance-tube results match very closely, whereas the practical implementation performs less well in terms of acoustic absorption.

Medieval Drama Acoustics
Dr Mariana Lopez (CoDE Research Institute, Anglia Ruskin University)

Research on pre-seventeenth-century theatre acoustics has focused either on Greek and Roman or on Elizabethan theatre, leaving aside the variety of performance venues used in medieval times. This paper focuses on research on the York Mystery Plays, a series of plays that were performed in the streets of York (UK) from the fourteenth to the sixteenth century. Impulse response measurements taken on site and a multiplicity of computer models are analysed in terms of ISO 3382-1:2009 parameters. A methodology for the study of medieval performance spaces is presented, and the results demonstrate that the organisers of medieval plays were very likely aware of the impact of the staging configuration on acoustics, allowing them to make decisions that considered the aural dimension of the plays.
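For readers unfamiliar with the ISO 3382-1 parameters mentioned above, the core computation behind them can be sketched briefly. The following is a minimal illustrative example, not taken from the paper itself: estimating the reverberation time T30 from a measured room impulse response via Schroeder backward integration, with a line fitted to the -5 dB to -35 dB portion of the decay curve. The function name and details are our own.

```python
import numpy as np

def t30_from_impulse_response(ir, fs):
    """Estimate reverberation time T30 (seconds) from a room impulse response.

    Schroeder backward integration of the squared IR gives the energy decay
    curve; a line fitted to its -5 dB to -35 dB range is extrapolated to a
    60 dB decay, following the approach outlined in ISO 3382-1.
    """
    energy = np.asarray(ir, dtype=float) ** 2
    # Schroeder integration: energy remaining after each sample
    edc = np.cumsum(energy[::-1])[::-1]
    edc_db = 10.0 * np.log10(edc / edc[0])

    # Linear fit over the -5 dB to -35 dB evaluation range
    mask = (edc_db <= -5.0) & (edc_db >= -35.0)
    t = np.arange(len(energy)) / fs
    slope, _ = np.polyfit(t[mask], edc_db[mask], 1)

    # Time for a 60 dB decay at the fitted slope
    return -60.0 / slope
```

Applied to a synthetic exponentially decaying noise burst with a known decay rate, the estimate recovers the nominal reverberation time to within a few percent, which is a useful sanity check before analysing measured responses.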

Acoustics of large places of worship: Spanish Cathedrals
Lidia Alvarez-Morales (PhD Candidate, University of Seville)

The last few years have seen an increase in the interest in the acoustic behaviour of heritage buildings and how such studies can increase our understanding of the cultural use of the space. This work describes the methodology used for studying the acoustic environment of the Catholic cathedrals of southern Spain, nowadays conceived as multifunctional enclosures. Sound propagation in these large reverberant spaces is assessed through the analysis of monaural and binaural impulse responses measured throughout the audience area considering sound source positions that correspond to the liturgical, musical, and cultural activities that take place in the temple nowadays. Furthermore, a 3D simulation model is created for each cathedral, which is used to assess the influence of occupancy and to evaluate different rehabilitation options and acoustic treatments.

AES-Midlands Workshop on Intelligent Music Production

Date: 8 Sep 2015
Time: 13:30

Location: Room 203, Birmingham City University
Millennium Point

Audio Engineering and Music Production are inherently technical disciplines, often involving extensive training and a significant investment of time. The processes involved implicitly require knowledge of signal processing and audio analysis, and can present barriers to musicians and non-specialists. The emerging field of Intelligent Music Production addresses these issues through the development of systems that map complex processes to intuitive interfaces and automate elements of the processing chain.

The event will provide an overview of some of the tools and techniques currently being developed in the field, whilst providing insight for audio engineers, producers and musicians looking to gain access to new technologies. The event will consist of presentations from leading academics, with additional posters and demonstrations.


1.30pm: Registration and Coffee
2.00pm: Josh Reiss, Queen Mary University of London. Intelligent Music Production: Challenges, Frontiers and Implications.
2.45pm: Hyunkook Lee, University of Huddersfield. Perceptually motivated 3D music production.

3.30pm: Coffee/Tea (+ posters)

3.45pm: Brecht De Man, Queen Mary University of London. Understanding The Mix: Learning music production practices through subjective evaluation.
4.30pm: Sean Enderby, Birmingham City University. Making Music Production More Accessible using Semantic Audio Analysis.

5.15pm: Sandwiches (+ posters/demos)

5.45pm: Alessandro Palladini, Music Group UK. Smart Audio Effects for Live Audio Mixing.
6.30pm: Alex Wilson, University of Salford. Navigating the “mix-space” in multitrack recordings.

The event is completely free and open to everyone. To attend, please register via the event page.


2.00pm: Intelligent Music Production: Challenges, Frontiers and Implications
Josh Reiss, Queen Mary University of London.

In recent years there have been tremendous advances towards the creation of intelligent systems that are capable of performing audio production tasks which would typically be done manually by a sound engineer. These systems build on the emerging fields of multitrack signal processing, machine listening and cross-adaptive digital audio effects, as well as exploiting new knowledge regarding the psychoacoustics and perception of multitrack audio content. Here, we give an overview of these approaches, the challenges in the field and the current research directions. We also discuss the implications of this research: whether it is even possible to automate creative production tasks, and what this might mean for practising musicians and sound engineers.
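As a loose illustration of the cross-adaptive idea mentioned above (our own toy sketch, not one of the systems discussed in the talk), the fragment below derives each track's gain from statistics of the whole session, so that every control value depends on the content of every track; all names here are hypothetical.

```python
import numpy as np

def equal_loudness_gains(tracks):
    """Toy cross-adaptive balancer: compute per-track gains that bring
    every track to the mean RMS level of the session.

    Each gain depends on all tracks (via the session mean), which is the
    defining property of a cross-adaptive effect: one track's processing
    is driven by analysis of the others.
    """
    rms = np.array([np.sqrt(np.mean(t ** 2)) for t in tracks])
    target = rms.mean()
    return target / rms

def mix(tracks, gains):
    """Sum the gain-scaled tracks into a single mono mix bus."""
    return sum(g * t for g, t in zip(gains, tracks))
```

A real system would of course use a perceptual loudness model rather than plain RMS, and would handle masking, panning and dynamics, but the control structure is the same.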


2.45pm: Perceptually motivated 3D music production.
Hyunkook Lee, University of Huddersfield.

Next-generation multichannel audio formats employ height and/or overhead channels in order to provide the audience with a three-dimensional immersive experience in sound reproduction. The psychoacoustic principles of vertical stereophony are fundamentally different from those of horizontal stereophony, and new methods are therefore required for the effective recording and rendering of 3D multichannel sound. This talk will introduce recent research into perceptually motivated methods for 3D audio recording, upmixing and downmixing, and discuss their applications to intelligent 3D music production.


3.45pm: Understanding The Mix: Learning music production practices through subjective evaluation.
Brecht De Man, Queen Mary University of London.

An overview of the methodology and results of PhD research on mixing music. In this work, mixing ‘best practices’ are uncovered, confirmed or contradicted, primarily through the analysis of real-world mixes and the perceptual evaluation of those mixes.


4.30pm: Making Music Production More Accessible using Semantic Audio Analysis.
Sean Enderby, Birmingham City University.

Music production is an inherently technical discipline which often requires extensive training. In this talk, we present initial developments from the SAFE project, in which we use semantic audio analysis to provide an intuitive in-DAW platform for audio processing.


5.45pm: Smart Audio Effects for Live Audio Mixing.
Alessandro Palladini, Music Group UK.

Despite the recent advances in audio mixing offered by digital technologies, live mixing still presents many challenges: low-latency and real-time processing constraints, limited setup time, suboptimal acoustics and unexpected changes, to name just a few. Intuitive interfaces and processing tools that offer a fast and reliable workflow are therefore a key selling point of many modern digital mixing consoles. At Midas, we believe that the potential of advanced signal processing and artificial intelligence has not yet been fully exploited. In this presentation we will talk about our approach to the development of smart interfaces and smart audio effects for live audio mixing, and demonstrate our first commercially available products.


6.30pm: Navigating the “mix-space” in multitrack recordings.
Alex Wilson, University of Salford.

Despite continued research and development of automated music production tools, relatively little is known about the nature of quality perception in music production practices. This talk will describe a statistical analysis of a large dataset of alternative human-made mixes, carried out to determine the dimensions of mix variation and how they relate to quality. The formulation of a “mix-space” will also be described, which provides insight into how level balances are achieved in a simple mixing exercise, as well as a conceptual framework for the future study of the complex art of mix engineering.
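As a rough sketch of what such a space can look like (an assumption on our part, not necessarily the talk's formulation), one can factor overall level out of a vector of per-track gains so that only the balance between tracks remains, and then compare mixes by the angle between their balance vectors:

```python
import numpy as np

def to_mix_space(gains):
    """Project a vector of per-track gains onto the unit sphere, so that
    two mixes differing only in overall level map to the same point and
    distances reflect balance differences only."""
    g = np.asarray(gains, dtype=float)
    return g / np.linalg.norm(g)

def balance_distance(gains_a, gains_b):
    """Angle (radians) between two mixes in the normalised balance space."""
    cos = np.clip(np.dot(to_mix_space(gains_a), to_mix_space(gains_b)), -1.0, 1.0)
    return np.arccos(cos)
```

Under this construction, turning every fader up by the same factor leaves a mix's position unchanged, while moving one fader relative to the others moves it, which is exactly the separation of "level" from "balance" that a statistical study of alternative mixes needs.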

Large scale sound system design

Date: 16 Sep 2015
Time: 18:00

Location: WSP Holborn, WC2A 1AF
70 Chancery Lane

A presentation by Simon Kahn of Mott MacDonald.

Large scale sound systems are required for entertainment, for information transmission, or for emergency communication in spaces such as large buildings, stations and airport terminals, shopping malls, arenas and stadiums, and at festivals. This presentation will discuss the principles of designing systems and the challenges and opportunities of complex systems and spaces.

NOTE: As this is a joint event with the IOA (Institute of Acoustics), it will be held on Wednesday 16 September. The event is free to both members and the general public, but because it is a joint IOA/AES event, numbers are limited and places will be allocated on a first-come, first-served basis. To register, please go to the registration page at: society-evening-meeting-tickets-18201989641

If you register and are no longer able to attend, please cancel your ticket via the same link or contact the organisers so that your place can be offered to someone else.