Meetings Archive – 2016

Glasgow Royal Concert Hall Extension (Talk/Tour)

Date: 21 Jan 2016
Time: 18:30

Location: Glasgow Royal Concert Hall
Killermont Street

The extension to Glasgow’s Royal Concert Hall has recently opened, providing a world-class home for the Royal Scottish National Orchestra. AES Scotland invites acoustics enthusiasts from the group (30 people max) to the venue, where Principal Architect Graeme Baillie and acoustician Luke Robertson from ARUP will provide a tour of the new venue and talk through the building’s design, functionality and acoustic setup.

The event is arranged for 21st January 2016, starting at 6:30pm. Attendees should arrive at the Killermont Street entrance of the Concert Hall in good time for the start of the tour. Places are strictly limited (for the convenience of the venue holders) and will be allocated on a first come, first served basis. To avoid disappointment, bookings are restricted to a maximum of 2 tickets per person.

Please note that registration for this event is only open to AES Members. Please register here:

If you are not a member but would like to join, you can sign up here:

Listening to science: how music and sound are reshaping scientific investigations

Date: 7 Jan 2016
Time: 18:30

Location: Lab 309, Anglia Ruskin University
East Road

Talk by Domenico Vicinanza

Data sonification is one of the fastest growing disciplines, and its study is a fascinating journey through art, technology and science. From Kepler’s experiments in the 17th century to the study of sound waves from the sun, and from understanding the basics of human physiology to helping to detect cancer, music and sound have supported scientists in many different ways. Science and music are probably two of the most intrinsically linked disciplines in the spectrum of human knowledge, and their synergy is transforming the way researchers work, collaborate and think.

In technical terms, auditory perception of complex, structured information can offer several advantages over visual representations in temporal, amplitude and frequency resolution. These advantages include the capability of the human ear to detect patterns, recognise timbres and follow different strands at the same time. Using sound and audible signals to represent information therefore opens up an alternative, or complement, to visualisation techniques: in a natural way, different, interdependent variables can be rendered into sound in such a way that a listener gains relevant insight into the represented information or data.
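As a minimal sketch of the parameter-mapping idea described above (an illustration only, not code from the talk), a data series can be made audible by mapping each value onto the frequency of a short sine tone and writing the result to a WAV file using only the Python standard library:

```python
import math
import struct
import wave

SAMPLE_RATE = 44100

def sonify(data, wav_path, lo_hz=220.0, hi_hz=880.0, note_sec=0.25):
    """Map each data point linearly onto a pitch range and write sine tones to a WAV file."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    frames = bytearray()
    for value in data:
        freq = lo_hz + (value - lo) / span * (hi_hz - lo_hz)
        for n in range(int(SAMPLE_RATE * note_sec)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack('<h', int(sample * 32767))  # 16-bit PCM
    with wave.open(wav_path, 'wb') as wav:
        wav.setnchannels(1)            # mono
        wav.setsampwidth(2)            # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A rising-then-falling series becomes a rising and falling melody.
sonify([0, 1, 2, 3, 2, 1, 0], 'sonified.wav')
```

Richer sonifications map several interdependent variables at once, e.g. one variable to pitch, another to loudness, a third to timbre, which is exactly the multi-strand listening the human ear is good at.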

This presentation will start with a brief history of sonification for science and then continue describing how auditory display can provide a unique support to disciplines like particle physics, astronomy, biomechanics and neurosciences.

Thinking Smart – Audio Post production workflow for Drama

Date: 12 Jan 2016
Time: 18:30

Location: King’s College London, Edmond J Safra Lecture Theatre
London, WC2R 2LS

Talk by Mike Wood (Dirty Dog Audio Ltd)

This talk will look at how new technology and processes have changed the audio workflow in high-end TV drama. Changes in technology have affected all parts of the workflow: working exclusively ‘in the box’ and the removal of the requirement for stem mixes, ADR processes transformed by new features in digital audio workstations, and the use of remote collaboration to improve time management, particularly with overseas clients. Mike will illustrate with clips and examples from recent projects.

Mike Wood is director of Dirty Dog Audio Ltd, and is a freelance Sound Editor and Dubbing Mixer with over 20 years of expertise in Feature Film Sound Editorial and Television Audio Post Production. He gained his technical grounding at the start of his career with the BBC, has spent time in the Music Recording sector, and now acts as a Supervising Sound Editor in Feature Film and high-end TV drama. He has credits on over 40 Feature Films, including Road to Perdition, Hotel Rwanda, Tomb Raider and The Iron Lady, and is currently working on “Da Vinci’s Demons” Seasons 2 & 3 for Starz and BBC Worldwide.

(The Sound of) Creativity

Date: 2 Feb 2016
Time: 18:30

Location: Lab 113, Anglia Ruskin University
Anglia Ruskin University, East Road

Speaker: Dr Damian Murphy, Reader in Audio and Music Technology; University of York Research Theme Champion for Creativity

Creativity is a key driver of modern, dynamic societies. In the UK the creative economy provides jobs for 2.5 million people, more than financial services, advanced manufacturing or construction, and it is one of the few industrial areas where the UK has a credible claim to be world-leading. Creativity-themed research can range from the molecular and cellular workings of the brain through to the linguistic, cultural, aesthetic and cognitive dimensions of what it is to be creative.


This seminar will start by exploring how unlikely intellectual collaborations are often the most productive, drawing inspiration from the development of computer music as a discipline through the work of musician, composer and innovator, John Chowning. His research into FM synthesis left a lasting academic legacy for his host institution of Stanford University while also delivering one of their most successful-ever examples of economic impact.


This seminar will highlight leading projects from across the University of York’s past and more recent portfolio, including the recently announced £18 million national Digital Creativity Hub. The DC-Hub sets out to develop transformative projects for the UK’s digital economy in collaboration with industrial and cultural partners across the fields of computer games, interactive media and the convergent space between these disciplines. Furthermore, the creative application of audio technology will play a key role across the full remit of DC-Hub projects.

Live Notation and Composition

Date: 10 Mar 2016
Time: 18:30

Location: Lab 214, Anglia Ruskin University
Anglia Ruskin University, East Road, CB1 1PT

Richard Hoadley will discuss his research and practice in connection with dynamic notation, including the use of the Kinect, the Leap and other proprietary interfaces with dancers and other performers.  In Richard’s dance, music and text pieces ‘Quantum Canticorum’ (2013) and ‘Semaphore’ (2014) he introduced notation-generating algorithms influenced by the dancers’ physical movements.  A project still in development, ‘Choreograms’, also seeks to investigate and implement dance notations.  While the general idea of mapping movement is far from novel, the use of movement to generate music notation dynamically, which can then be performed live, is less common.  It enables the live reflection and synchronisation of movement in both acoustically performed and electronically generated music and sound.

Similar technologies have been used to create the composition ‘How To Play the Piano’ (2015), written for and performed by the pianist Philip Mead.  For this piece the composer and writer Katharine Norman was commissioned to write and record a new original poem.  This forms the backbone of the piece, providing audiovisual prompts and dynamic music notation, and in the process becoming a guided improvisation which very much reflects a performer’s character and experience.  Katharine’s own recording of the poem also provides a significant part of the music, as the voice’s amplitude and frequency are mirrored in live audio and notation.

Richard will also be discussing contrasting methods of notation projection and display for performers (musicians, dancers and actors) and audiences.

RF Best Practice: Science or Black Magic?

Date: 17 Feb 2016
Time: 18:30

Location: BBC Scotland, Glasgow
40 Pacific Quay
Glasgow, G51 1DA

We are pleased to announce another event for AES Scotland, featuring a talk by Mr George-Tolonen from Shure, one of the industry’s experts on radio frequency for sound applications. As a consequence of the continuous erosion of UHF spectrum, wireless microphones have become a mainstay topic for many sound professionals. This seminar will unravel some of the mysteries around RF, covering how different types of wireless systems operate, from analogue to digital and from UHF to 2.4GHz, and will leave engineers confident in their dealings with radio mics and IEMs, which are now an integral, yet often misunderstood, part of the pro-audio skill base.

Registration for this event is open to all (AES Members and Non Members). Register freely here:

Presenter biography: Mr George-Tolonen is the Pro Audio Group Manager at Shure Distribution UK, with over 17 years of experience in the professional audio industry. At Shure UK, he has a particular emphasis on large-system RF co-ordination, as well as work with the Product Development team. Mr George-Tolonen is also a Steering Committee member of BEIRG (British Entertainment Industry Radio Group) and has been at the forefront of discussions and developments with Ofcom to secure a future for wireless equipment in the professional audio industry in the UK.

The Great British Recording Studios

Date: 23 Mar 2016
Time: 18:00

Location: Gorbals Sound
97 Pollokshaws Road

AES Scotland and JAMES are very pleased to bring to you a special evening event at Gorbals Sound with Howard Massey.  Howard Massey is a longtime audio journalist, consultant to the professional audio industry, and author of the recently published book, The Great British Recording Studios. Massey’s previous books include Behind The Glass and Behind The Glass Volume II, collections of interviews with the world’s leading engineers and producers. He is also the co-author of legendary Beatles engineer Geoff Emerick’s autobiography Here, There and Everywhere. Formerly a touring/session musician and songwriter, Massey learned his craft as a recording engineer in England in the 1970s, working at studios such as Trident and Pathway. He lives in New York and has lectured extensively at colleges and universities throughout the U.S.

Talk abstract: Some of the most important and influential recordings of all time were created in British studios during the 1960s and 1970s—iconic places like Abbey Road, Olympic, Trident, Decca, Pye, IBC, Advision, AIR, and Apple. This presentation will unravel the origins of the so-called “British Sound” and celebrate the people, equipment, and innovative recording techniques that came out of those hallowed halls, including rare photographs, videos, and musical examples.

This is a rare opportunity to see Howard talk in Scotland. Register early to avoid disappointment.  Registration is free and open to all (AES Members and Non Members):

Event sponsored by JAMES, Mediaspec and Gorbals Sound

Audio programming with JUCE: what could possibly go wrong? / AES UK AGM

Date: 9 Feb 2016
Time: 18:30

Location: ROLI Ltd
2 Glebe Road
London E8 4BD

Lecture by Julian Storer (Head of Software Architecture, JUCE) and Timur Doumler (JUCE Senior Software Engineer).

NOTE: This lecture is open to the public, as are all our lectures. However, it will be preceded by the AES UK’s Annual General Meeting, to which only members are invited. AGM starts at 6:30pm (early bird non-members will be served refreshments in the old tech hive room until the end of the general meeting). Lecture starts at 7:00pm (everyone’s invited).

PARKING: please note that there is ample parking on and around Kingsland Road and it goes off-meter after 6.30pm.

This talk will consist of two parts. In the first, Julian Storer will look at what the platform covers and its design philosophy. In the second, Timur Doumler will take a deeper technical dive into real-time audio programming and its perils.

JUCE is the leading engine for the creation of audio applications on all platforms. JUCE provides tools that give you everything you need to create music software, including libraries for audio, MIDI, GUI, graphics and more. The core of JUCE is the Introjucer, a unique project management tool, which provides the key to seamless cross-platform development. JUCE is used by over 400 companies, including leaders such as InMusic, KORG, Cycling ’74, Arturia, PreSonus and Image-Line.

Julian Storer (Head of Software Architecture, JUCE)
An experienced C++ developer and consultant, Jules is the author of the JUCE framework, which powers the apps and plugins made by hundreds of audio tech companies. He also created the Tracktion DAW, used by thousands of musicians for over a decade.

Timur Doumler (JUCE Senior Software Engineer)
Formerly developing Kontakt at Native Instruments, Timur now works closely with Jules and the rest of the JUCE team to further develop the JUCE framework. Timur holds a PhD in Astrophysics and loves learning languages, progressive rock, and science fiction.

Object Based Radio: Effects On Production and Audience Experience

Date: 8 Mar 2016
Time: 18:30

Location: SAE Institute London
297 Kingsland Road
London E8 4DD

Tony Churnside
Storytelling originated in small gatherings of people around campfires, with stories passed down through generations. Story creators connected with small audiences directly, and responded to audience reactions by subtly changing their stories and how they presented them.

Then technology disrupted this – the printing press, the record player, the radio – all enabled broadcasting to massive audiences, but that one-to-one connection with audiences was lost. Now there has to be a single “definitive” version of a story, one that works for a whole mass audience. But this single “correct” version is a compromise which tends to appeal to the centre of a normally distributed audience and ignores the fringes.

But new technology means this no longer need be the case. We can create audio content that can flex and adapt to an individual listener, in a similar way to someone telling stories around a campfire. Tony Churnside draws on his previous experience with BBC R&D, his current role with the BBC’s new Radiophonic Workshop, his collaboration with Bjork at MoMA, and his ground-breaking work in production workflows, to explore what new audio experiences could be generated if we started designing audio content differently.
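One way of designing audio content differently is the object-based approach: instead of fixing a single mix at broadcast time, each sound is delivered as a separate object with metadata, and the mix is rendered per listener. The sketch below is a hedged illustration of that idea only (the function, object fields and preference names are all hypothetical, not the BBC’s actual system):

```python
def render(objects, preferences, n_samples):
    """Mix audio objects, scaling each by the listener's per-category gain (default 1.0)."""
    mix = [0.0] * n_samples
    for obj in objects:
        gain = preferences.get(obj['category'], 1.0)
        for i, sample in enumerate(obj['samples'][:n_samples]):
            mix[i] += gain * sample
    return mix

# Each sound travels as a labelled object rather than baked into one stereo mix.
objects = [
    {'category': 'dialogue',   'samples': [0.2, 0.2, 0.2]},
    {'category': 'background', 'samples': [0.1, 0.1, 0.1]},
]

# One stream, two different renders: a listener who struggles to hear speech
# can turn the background down instead of receiving a fixed compromise mix.
default_mix    = render(objects, {}, 3)
accessible_mix = render(objects, {'background': 0.25}, 3)
```

The compromise for the “centre of the distribution” becomes just one possible render among many, with the fringes served by their own.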

The Virtual Singing Studio: A tool for exploring musical performance and interaction through real-time room acoustic simulations

Date: 6 Apr 2016
Time: 18:30

Location: Lab 214, Anglia Ruskin University
Anglia Ruskin University, East Road, CB1 1PT

Jude Brereton, Audio Lab, Department of Electronics, University of York

The physical characteristics of a music performance venue influence the experience of music for the listener and performing musician alike. Indeed, the acoustic characteristics of the venue influence not only the listener’s perception of the music, but also many attributes of the performance itself, since a musician will alter their performance in response to the acoustic feedback they receive from the concert hall.  To facilitate the investigation of the influence of acoustic environments on singing performance, a Virtual Singing Studio (VSS) has been developed. It offers an interactive room acoustic simulation in real time, using established auralisation techniques, allowing a singer to perform in an ordinary room and hear him/herself as if singing in a real performance venue.
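At the heart of auralisation is convolving a dry (anechoic) signal with the impulse response of the simulated room. A real-time system like the VSS needs low-latency techniques such as partitioned convolution, but the underlying operation can be sketched offline in a few lines (an illustration of the general technique, not the VSS implementation):

```python
def auralise(dry, room_ir):
    """Offline auralisation: convolve a dry signal with a room impulse response.

    The output at each sample is the sum of every earlier input sample,
    weighted by how the room responds that many samples later.
    """
    out = [0.0] * (len(dry) + len(room_ir) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(room_ir):
            out[i + j] += x * h
    return out

# A single-sample 'click' played through a toy impulse response (direct sound
# followed by two quieter reflections) returns the room's echo pattern.
clicked = auralise([1.0], [1.0, 0.0, 0.5, 0.25])
# clicked == [1.0, 0.0, 0.5, 0.25]
```

Swapping in impulse responses measured (or modelled) for different venues is what lets the singer in an ordinary room hear themselves as if in each of those spaces.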

This talk will introduce the design and implementation of the VSS and report on results demonstrating that professional singers rated the room acoustic simulation as highly plausible, and judged it to be authentic in comparison to singing at the real performance venue. It will also outline analyses of singing performance, comparing the tempo, vibrato and intonation characteristics of singing in the real and virtual performance spaces.

Audience members will also be able to try out the Virtual Singing Studio for themselves!