Meetings Archive – 2014

Perceptually Optimized Sound Zones

Date: 21 Jan 2014
Time: 19:00

Location: PATS / BA Building, University of Surrey
University of Surrey
Guildford

Members of the Audio Engineering Society and the Institute of Acoustics are jointly invited to a free technical demonstration and a studio tour at the University of Surrey, Guildford.

The flier for the event, with further information, is available here.

Note that Eventbrite registration is required here so that the University can ascertain numbers in advance.

There is ample car parking space on the campus in the evening; otherwise the location is a fifteen-minute walk from Guildford Station. A campus map is here.


AGM; Volterra kernel based sampling and the future of convolution audio software

Date: 11 Feb 2014
Time: 18:30

Location: SAE Institute London
297 Kingsland Road
London E8 4DD

NOTE. This lecture is open to the public, as are all our lectures. However, it will be preceded by the AES UK’s Annual General Meeting, to which only members are invited. Refreshments will be served from 6pm; the AGM will be at 6.30pm; the lecture will begin at 7pm.

For those attending the AGM, limited car parking spaces are available in the local area, which are free after 6.30.

Lecture by Giancarlo Del Sordo and Antonello Punturi, Acustica Audio.

A recording of this lecture is available here (mp3).

The lecture will show the evolution of Acustica Audio’s Volterra kernel-based sampling software Nebula — a project which originated in the desire to improve on existing in-the-box audio mix setups without resorting to outboard gear. Starting as a research project into dynamic convolution, it resulted in the creation of a fast software engine capable of processing large numbers of impulse responses. This approach was later replaced by Volterra kernels: a mathematical generalisation, an improvement over dynamic convolution, and a way of overcoming existing patents.
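
For readers unfamiliar with the mathematics, a Volterra series extends ordinary (first-order) convolution with higher-order kernels that capture nonlinear, level-dependent behaviour. The display below is a standard textbook form, not a description of Nebula's internals:

    y[n] = \sum_{k} h_1[k]\,x[n-k] \;+\; \sum_{k_1, k_2} h_2[k_1, k_2]\,x[n-k_1]\,x[n-k_2] \;+\; \dots

The first sum is the familiar linear convolution with an impulse response; the higher-order kernels h_2, h_3, … model the distortion that a single static impulse response cannot capture.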

A pure software approach allowed for the quick development of an active Web 2.0 community, enabling dynamic collaboration between forward-thinking developers, beta testers, audio engineers and equipment samplers from around the world.

The division of competences among developers (library development, skin/graphic development and code/software development) allows for the creation of a new generation of high-quality plugins and a new kind of business model.

This lecture will include audio demonstrations of the latest Nebula software, and will look at the core aspects of the Volterra kernel design and the possibilities for future development.


Comparative results between loudspeaker measurements using a tetrahedral enclosure and other methods

Date: 11 Mar 2014
Time: 18:30

Location: Main meeting room (Room 130), Huxley Building, Imperial College
180 Queen's Gate
London SW7 2RH

Lecture by Geoff Hill, Hill Acoustics Limited.

A recording of this lecture is available here (mp3).


A major problem for the loudspeaker and transducer industries throughout the world has been an inability to rely upon measurements routinely exchanged between suppliers and customers. New systems are now in use, giving a unique and stable test environment with the opportunity to standardize and compare results between measurement sites.

Tetrahedral measurement enclosures work because the shape eliminates standing waves, while acoustic foam damps any remaining high frequencies. The enclosure, together with interchangeable sub-baffles, rigidly defines the measurement geometry, ensuring rapid and accurate change-over and repeatable measurements. Results across the design, production and customer chain are thus comparable unit-to-unit throughout the world to an unprecedented degree.

This lecture updates a paper published in 2013 with further comparative measurements, and compares them to results by other people and methods.


Infrasound in Vehicles: Theory, Measurement and Analysis / Social evening

Date: 8 Apr 2014
Time: 18:30

Location: Dolby Europe’s London Office
4–6 Soho Square
London W1D 3PZ

Please note that this is a short preview of Professor Vanderkooy’s Berlin paper (approximately 30 minutes with questions) followed by a social evening.

Lecture by John Vanderkooy.

A recording of this lecture is available here (mp3).

Infrasound (IS) in cars is quite strong and may be responsible for health effects. This paper presents measurements and simplified mechanisms for the production of IS in vehicles. Four mechanisms are proposed:

  1. turbulence from the moving vehicle or other traffic, infusing through the vents,
  2. flexing of the body causing volume changes,
  3. acceleration of the vehicle, causing an inertial reaction from the enclosed and external air,
  4. pressure variations due to altitude changes.

The acoustic pressure from these mechanisms can be simplified by the fact that IS wavelengths are much larger than the size of the vehicle. Measurements are presented and analyzed to elucidate the acoustic contribution of each mechanism.
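
For orientation, the likely magnitude of two of these mechanisms can be estimated from standard acoustics; the figures below are illustrative and are not taken from the paper:

    \Delta p \approx -\rho_0\, g\, \Delta h \approx -12\ \mathrm{Pa}\ \text{per metre of climb} \qquad \text{(mechanism 4)}

    \frac{\Delta p}{p_0} \approx -\gamma\, \frac{\Delta V}{V_0} \qquad \text{(mechanism 2, adiabatic compression of the cabin air)}

A 10 m change in road altitude thus corresponds to a pressure change of roughly 120 Pa, far larger than typical audible sound pressures but occurring at sub-hertz rates, which is why it contributes to infrasound rather than audible noise.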


Application of Measured Directivity Patterns to Acoustic Array Processing

Date: 13 May 2014
Time: 18:30

Location: Imperial College
South Kensington Campus
London SW7 2AZ

Lecture by Mark Thomas, Microsoft Research.

The desire for hands-free telephony and immersive teleconferencing/telepresence systems has sparked much interest in signal processing for microphone and loudspeaker arrays. Many of the tools that are now commonplace in acoustic arrays have roots in entirely different fields; most beamforming techniques stem from phased-array antenna technology and much of Fourier acoustics is borrowed from quantum theory, both emerging around the beginning of the 20th Century. While the impact of these tools is undeniable, many design procedures nevertheless make assumptions that are reasonable for the original scenarios but are not so valid in the acoustic case. For instance, transducers may be assumed to be omnidirectional, have a flat frequency response, or be matched in sensitivity. Ideally the design of processing algorithms should account for non-ideal behavior by using measured directivity and radiation patterns that can vary significantly in the real world. In this talk we investigate how practical measurements can be made, how array signal processing can benefit from this information in the acoustic beamforming scenario, and other applications in the field of audio and acoustics.
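
As a concrete illustration of that final point, the sketch below (not taken from the talk) computes minimum-variance distortionless-response (MVDR) beamformer weights for a single frequency bin. The only change needed to exploit real transducer behaviour is to substitute a measured steering vector for the idealised omnidirectional plane-wave model; the array size and values here are hypothetical.

    import numpy as np

    def mvdr_weights(R, d):
        """MVDR weights for one frequency bin.
        R : (M, M) spatial covariance matrix of the microphone signals
        d : (M,) steering vector for the look direction, ideally taken from
            measured directivity/impulse responses rather than an ideal model."""
        Rinv_d = np.linalg.solve(R, d)
        return Rinv_d / (d.conj() @ Rinv_d)

    # Hypothetical 4-microphone example at one frequency bin.
    M = 4
    rng = np.random.default_rng(0)
    # A "measured" steering vector: mismatched gains and phases, unlike the ideal model.
    d_measured = rng.uniform(0.8, 1.2, M) * np.exp(1j * rng.uniform(0, 2 * np.pi, M))
    A = np.eye(M) + 0.1 * (rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M)))
    R = A @ A.conj().T                      # Hermitian, positive-definite covariance
    w = mvdr_weights(R, d_measured)
    print(abs(w.conj() @ d_measured))       # distortionless constraint: prints ~1.0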

Biography

Mark Thomas received the M.Eng. degree in Electrical and Electronic Engineering and the Ph.D. degree from Imperial College London, London, U.K., in 2006 and 2010, respectively. His research interests include signal processing for speech, including source modelling and speaker identification, and particularly multichannel acoustic signal processing for beamforming, acoustic echo cancellation, dereverberation, and spatial audio capture and rendering. He is currently a Researcher with Microsoft Research, Redmond, USA. Dr. Thomas is a member of the IEEE Signal Processing Society, the Acoustical Society of America and the Audio Engineering Society.

Venue details

The lecture will be held in The Gabor Seminar Room, Level 6, Department of Electrical and Electronic Engineering. A campus map is here.

Instructions:

  1. The nearest tube is South Kensington.
  2. From Exhibition Road, enter the campus through the revolving doors in the entrance hall (large yellow arrow on campus map).
  3. Cross the entrance hall and go out through the rear revolving door. The Electrical and Electronic Engineering Building is immediately facing you across the plaza.
  4. Enter the EEE building and take the lift to Level 6. The Gabor Seminar room is signposted.


The Acoustics Behind Sonic Wonderland

Date: 8 Jul 2014
Time: 18:00

Location: WSP Holborn
70 Chancery Lane
London WC2A 1AF

Lecture by Trevor Cox, University of Salford.
Joint lecture between the AES and IOA.
6.00 pm IOA AGM (All Welcome) / 6.30 pm Evening Presentation

Sonic Wonderland is a popular science book about the most remarkable sounds in the world. In this talk, Trevor will look at some of the detailed acoustic science behind a few of the wonders, picking examples that required first-hand research of his own. It will begin by solving the mystery of Echo Bridge, something that first appeared in the Journal of the Acoustical Society of America in the 1940s. Trevor will present measurements that he made at the badly tuned musical road in California. To finish, he will look at the detailed acoustic properties of the world’s ‘longest echo’, and explain why the record isn’t for the longest reverberation time.


A Comprehensive Overview of the Dolby Atmos System

Date: 22 May 2014
Time: 18:30

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Lecture by James Shannon, Dolby Labs, UK.

The talk will give an introduction to the technologies behind the Dolby Atmos system. Dolby Atmos uses an innovative layered approach to building a soundtrack. A base layer consists largely of static ambient sounds, which are mixed using the familiar channel-based method. Layered on top of these are dynamic audio elements that can be positioned and moved precisely to correspond to the images onscreen. Metadata records how these elements should behave during playback—behaviour that best matches the director’s intent, regardless of theatre configuration. This dual-layer approach gives film creators greater power and freedom while ensuring audiences a consistent experience in any theatre.


High Resolution: Capturing the Moment

Date: 10 Jun 2014
Time: 18:30

Location: De Vere Holborn Bars
138-142 Holborn
London, EC1N 2NQ

Lecture by Bob Stuart (Meridian Audio) and Peter Craven (Algol Applications).

We see increasing interest in High Resolution analogue and digital audio, and in recordings employing higher sample rates or bit depths than CD. Yet this trend seems unstructured, and progress appears stalled in coding concepts established in the late 1990s.

In this lecture we hope to show how recent findings in neuroscience, perception and signal processing seem, at least to us, to point in a new direction — one which promises higher quality than ever — a different framework and direction for audio in the 21st century.

Bob and Peter have collaborated on analogue and digital audio for over four decades; their work together has included fundamental topics in dither, noise-shaping, coding, lossless compression, archiving and playback. This lecture is a rare chance to glimpse them in action and to see some recent and provocative thinking.


Making sense of the beat: How humans use information across the senses to coordinate movements to a beat.

Date: 20 Aug 2014
Time: 18:00

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Lecture by Dr. Mark Elliott, School of Psychology, University of Birmingham.
Room 203, Millennium Point, Birmingham City University, Curzon St, Birmingham, B4 7XG.
 

People will nod or tap along to the beat of a song, often without even thinking about it. This demonstrates the strong links between auditory rhythms and movement in humans. However, the brain often uses multiple senses, including vision and touch rather than just sound alone, to define events in time. While synchronising movements to the beat appears a very simple thing to do, the brain continuously has to deal with conflicting information from these senses and correct for errors such that we continue to move in time with the beat. In this talk I will present the research we have carried out in the Sensory Motor Neuroscience (SyMoN) Lab at the University of Birmingham, to understand how we keep in time with the beat. In particular, I will discuss the models that describe how we combine information across the senses and correct the mistakes we make. I will further talk about how we have used the models developed to understand how a string quartet keeps in time together, how a DJ convinces us they performed a seamless mix and how excited crowds end up bouncing around in synchrony.
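
For readers who want a concrete starting point, a widely used family of models in this area is linear phase correction, in which each inter-tap interval is shortened or lengthened by a fraction of the previous asynchrony. The sketch below simulates such a model with made-up parameter values; it is a generic textbook formulation, not the specific models developed in the SyMoN Lab.

    import numpy as np

    def simulate_tapping(n_taps=50, period=500.0, alpha=0.5, noise_sd=20.0, seed=1):
        """Linear phase-correction model of tapping to an isochronous beat.
        Each produced interval is the beat period minus alpha times the previous
        asynchrony, plus timing noise (all values in milliseconds, illustrative)."""
        rng = np.random.default_rng(seed)
        asynchronies = np.zeros(n_taps)
        a = 0.0                                   # current tap-minus-beat asynchrony
        for n in range(n_taps):
            asynchronies[n] = a
            interval = period - alpha * a + rng.normal(0, noise_sd)
            a = a + interval - period             # asynchrony relative to the next beat
        return asynchronies

    print(np.std(simulate_tapping(alpha=0.5)))    # partial correction keeps drift bounded
    print(np.std(simulate_tapping(alpha=0.0)))    # no correction: asynchronies random-walk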

 


A Comprehensive Overview of Game Audio

Date: 3 Nov 2014
Time: 15:00

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Room 203, Millennium Point, Birmingham City University, Curzon St, Birmingham, B4 7XG.

Overview:
AES-Midlands is hosting an afternoon of presentations introducing the field of video game audio. From simple casual games all the way to AAA blockbusters, audio plays a major role in the gamer’s experience. The interactive nature of games and the technical limitations of the platforms they run on contrast significantly with linear media such as TV and film. Presented by industry professionals from both creative and technical backgrounds, these presentations are ideal for those interested in the industry.

The event is free of charge and open to everyone (members and non-members). Free tickets are available here.

Schedule:
[3.00 – 3.45] The Sonic Journey, Andy Grier (FreeStyleGames/Activision)
[3.45 – 4.15] Pew-Pew! Boom-Boom! Kapow!, Andy Grier (FreeStyleGames/Activision)
[4.15 – 5.00] If a tree falls in the forest and no one is around to hear it, how many channels does it use?, Jethro Dunn (Codemasters)
[5.00 – 5.30] Break
[5.30 – 6.15] Mixing for the Unknown, Edward Walker (Sounding Sweet)
[6.15 – 7.00] Sound Bytes, Aristotel Digenis (FreeStyleGames/Activision)
[7.00 – 7.45] Creating a Virtual World in Real-Time, Jon Holmes (Rare/Microsoft)
[7.45 – 8.00] Coffee
[8.00 – 8.30] To ∞ and beyond…, Aristotel Digenis (FreeStyleGames/Activision)

Abstracts:

3.00 – 3.45: The Sonic Journey
Andy Grier – Lead Audio Designer – FreeStyleGames/Activision
A brief historical audio tour from the first bleeps and bloops of Pong, through the various synthesis methods of the console generations that you or your parents played on, right up to the groundbreaking technology found in current-generation video games. Andy will highlight key moments in game audio history which helped shape the industry, whilst engaging the audience in a comparison of where we’ve come from vs. where we currently stand.

3.45 – 4.15: Pew-Pew! Boom-Boom! Kapow!
Andy Grier – Lead Audio Designer – FreeStyleGames/Activision
So what does sound design for video games entail? Andy will discuss the task of producing the raw source material and content that goes into a video game, covering topics such as traditional sound design, Foley art, field recording, synthesis, voice-over production and music composition, to name a few. It’s the harmonious sum of all these components, brought together, that creates the audio vision for the game.

4.15 – 5.00: If a tree falls in the forest and no one is around to hear it, how many channels does it use?
Jethro Dunn – Senior Audio Designer – Codemasters
We have sound… now what? Jethro discusses how sound is implemented in games, from basic authoring and triggering using audio middleware to the complex data-driven simulation systems of modern games. He will also discuss how hardware limitations can affect design decisions and examine some approaches to budgeting and optimisation which can allow sound designers to exceed those limitations.
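
To make the budgeting idea concrete, the sketch below shows a naive voice-limiting pass of the sort middleware performs every frame: voices are ranked by importance and only the top N are rendered, with the remainder tracked "virtually" until a slot frees up. The field names and scoring heuristic are hypothetical and do not describe any particular engine.

    def cull_voices(voices, max_voices=32):
        """Split active voices into an audible set within the channel budget
        and a 'virtual' set that is tracked silently."""
        ranked = sorted(voices, key=lambda v: v["priority"] * v["loudness"], reverse=True)
        return ranked[:max_voices], ranked[max_voices:]

    # Hypothetical usage: three voices competing for a two-voice budget.
    voices = [
        {"name": "engine",   "priority": 1.0, "loudness": 0.8},
        {"name": "gunshot",  "priority": 2.0, "loudness": 0.9},
        {"name": "birdsong", "priority": 0.2, "loudness": 0.3},
    ]
    audible, virtual = cull_voices(voices, max_voices=2)
    print([v["name"] for v in audible])   # ['gunshot', 'engine']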

5.00 – 5.30: Break
Sandwiches will be served in Room 405

5.30 – 6.15: Mixing for the Unknown
Edward Walker – Game Audio & Post Production Sound Engineer/Director – Sounding Sweet
Mixing audio for games is no longer something that ‘just happens’ towards the end of production. It has become a key area of development which offers some fantastic creative opportunities. Mixing has the power to make or break the audio presentation of your game! In this session we will be exploring the challenges that mixing for non-linear audio presents. We will be looking at how one mix can be adapted to sound great across multiple platforms while also conforming to loudness requirements and console limitations. What techniques can we learn from film dubbing mixers that could be employed in a 3D game audio mix, and what can the film dubbing mixer learn from us game audio guys that would help with positioning in Dolby Atmos and Auro 3D? How many speakers are enough? Where does the average consumer place them? What happens when my carefully sculpted surround mix is only experienced through a mere TELEVISION SPEAKER!

6.15 – 7.00: Sound Bytes
Aristotel Digenis – Lead Audio Programmer – FreeStyleGames/Activision
While new game console generations offer great gains in computational power, audio programmers still need to be creative in how best to implement audio algorithms in games. Computational complexity aside, algorithms that may be suitable for offline processing often need to be reviewed and adjusted for the interactive and real-time nature inherent in games. Aristotel will cover engaging topics including spatial audio, environmental audio, codec selection, and toolsets for authoring interactive audio.
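
As one example of the kind of adjustment involved, the sketch below restructures convolution (for instance with a reverb impulse response) into fixed-size blocks using overlap-add, so the work can be spread across real-time audio callbacks. It is a generic technique sketch, not a description of any particular title's audio engine.

    import numpy as np

    def overlap_add_convolve(x, h, block=256):
        """Convolve signal x with impulse response h one block at a time."""
        n_fft = 1
        while n_fft < block + len(h) - 1:         # FFT size avoiding circular aliasing
            n_fft *= 2
        H = np.fft.rfft(h, n_fft)                 # impulse response transformed once
        out = np.zeros(len(x) + len(h) - 1)
        for start in range(0, len(x), block):     # one iteration per audio callback
            seg = x[start:start + block]
            y = np.fft.irfft(np.fft.rfft(seg, n_fft) * H, n_fft)
            n = min(n_fft, len(out) - start)
            out[start:start + n] += y[:n]         # overlap-add the block's tail
        return out

    # Sanity check against direct convolution.
    rng = np.random.default_rng(0)
    x, h = rng.standard_normal(2000), rng.standard_normal(300)
    print(np.allclose(overlap_add_convolve(x, h), np.convolve(x, h)))   # True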

7.00 – 7.45: Creating a Virtual World in Real-Time
Jon Holmes – Audio Engineer – Rare/Microsoft
It’s a really exciting time for audio in games. Technology has rapidly advanced to the point where we have the fewest restrictions ever on what we are capable of doing. The last generation of games consoles in particular have allowed game audio programmers to really flex their technical creativity. Jon will talk about how much audio really goes into a modern AAA game and how the available technology is used to make it sound dynamic and fully immersive.

7.45 – 8.00: Coffee
Coffee will be served in Room 405

8.00 – 8.30: To ∞ and beyond…
Aristotel Digenis – Lead Audio Programmer – FreeStyleGames/Activision
What next…? That is the question. This closing talk will go over just some of the areas being researched both by game audio companies as well as academic institutions, and suggest how game audio may benefit from them.


Augmenting the piano keyboard: From the lab to the stage

Date: 9 Sep 2014
Time: 18:30

Location: Arts One, Queen Mary University of London (Mile End campus)
London

Lecture by Andrew McPherson, Queen Mary University of London.

Magnetic resonator piano demo

This talk presents two augmented musical instruments which extend the capabilities of the familiar keyboard. The magnetic resonator piano (MRP) is an electronically-transformed acoustic grand piano. Electromagnets induce vibrations in the strings, creating infinite sustain, crescendos from silence, harmonics, pitch bends and new timbres, all controlled intuitively from the piano keyboard.

The TouchKeys add multi-touch sensing to the surface of any electronic keyboard. Capacitive touch sensors measure the position and contact area of the fingers on each key, allowing the performer to add vibrato, pitch bends and timbre changes to each note independently just by moving the fingers on the keys during performance. The mappings between touch data and sound have been designed to avoid interference with traditional keyboard technique, so the TouchKeys can build on the expertise of trained pianists with minimal relearning.
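
To give a flavour of the mechanics involved in sending that extra expression over MIDI, the sketch below converts a pitch offset in semitones (derived, say, from finger motion along a key) into a standard 14-bit MIDI pitch-bend message. It illustrates the generic MIDI encoding only; it is not the TouchKeys firmware or its actual mapping.

    def bend_value(semitones, bend_range=2.0):
        """Map a pitch offset in semitones to a 14-bit pitch-bend value (0..16383),
        where 8192 means no bend and bend_range is the receiver's configured range."""
        semitones = max(-bend_range, min(bend_range, semitones))
        return int(round(8192 + semitones / bend_range * 8191))

    def bend_message(channel, value):
        """Build the 3-byte MIDI pitch-bend message (status, LSB, MSB) for channel 0-15."""
        return bytes([0xE0 | channel, value & 0x7F, (value >> 7) & 0x7F])

    # Hypothetical usage: a finger slide of +0.5 semitones on channel 1.
    print(bend_message(0, bend_value(0.5)).hex())   # 'e00050' -> status, LSB, MSB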

Both instruments have established a continuing musical presence outside the research lab. In addition to presenting the instrument designs, this talk will discuss recent collaborations with the London Chamber Orchestra and the band These New Puritans using the magnetic resonator piano, and a Kickstarter crowd-funding campaign for the TouchKeys which raised support for producing and distributing TouchKeys instruments to musicians in 20 countries.


Cutting Edge Research - from City University and King's College London

Date: 14 Oct 2014
Time: 18:30

Location: Performance Space, City University London
College Building, St John Street, EC1V 4PB
London

This month’s lecture will showcase cutting edge research from City University’s Music Informatics Research group and King’s College London’s Centre for Telecommunications Research. The evening will include a drinks reception and a selection of technology-based creative works from the Music Department at City.

Music Informatics Research in the Department of Computer Science

The Music Informatics Research Group in the Department of Computer Science has worked since 2005 on analysing music as audio and symbolic data (scores, MIDI). At City we bring together expertise in machine learning, signal processing, computer science and musicology to develop intelligent music analysis and processing methods. We work on challenges such as audio transcription, music audio similarity, music voice separation, chord recognition and melody models for generation and classification of music. Our work focuses on analysing and recognising musical structure, and we are interested in particular in the integration of audio and symbolic music representations and processing. We use our methods in interdisciplinary applications such as music education software and a game interface for data collection, as well as large-scale processing of music audio and scores for musicology and music retrieval.

Posters:

  • Automatic Music Transcription: Methods and Applications
  • Big Data for Musicology and Music Retrieval
  • Music Audio Similarity Models and a Game with a Purpose
  • Music Language Models

Composition Research in the Department of Music

At City we recognise that the ways in which composers create and share work are shifting and changing. Traditionally delineated boundaries between the fields of scored concert music, studio composition, and media composition are increasingly dissolving to form a broad and fluid landscape for contemporary composers. For the music department at City, this broad field of contemporary composition encompasses notated and digital music, sound arts, improvisation, interdisciplinary practices and numerous points of intersection between these areas. At present staff and students are engaged in practice-led research in instrumental composition, live electronic performance, multichannel studio composition, interdisciplinary and collaborative research (notably in music and dance, and music and film), and sound installation theory and practice. Composition in this broad sense forms a critical strand of research in music at City alongside the department’s other strengths in musicology, ethnomusicology and performance research.

Audio Lab, Centre for Telecommunications Research, King’s College London

The research of the Audio Lab at King’s College London is centred on multichannel systems for perceptual sound field synthesis and reproduction. The field of spatial sound has so far been mainly geared towards creating special effects and providing a pleasing listening experience, rather than rooted in solid engineering or science. Notable exceptions include ambisonics and WFS, which unfortunately haven’t penetrated the market yet. At King’s, we established a scientific framework for the analysis and design of multichannel systems based on concise modelling of underlying psychoacoustic phenomena. That framework enabled the development of a new multichannel audio technology which improves over state-of-the-art systems in terms of accuracy and stability of the auditory perspective. We also developed a super-real-time software implementation for virtual reality applications, based on further psychoacoustic approximation, as well as a new class of underlying higher-order microphones.

Posters:

  • Perceptual Sound Field Recording, Reproduction, and Synthesis
  • Efficient Synthesis of Room Acoustics Via Scattering Delay Network
  • A New Class of Higher Order Differential Microphones
  • A Computational Model for the Prediction of Localisation Uncertainty


Spot the Odd Song Out: A System for Music Similarity Estimation

Date: 10 Dec 2014
Time: 18:30

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Dr. Daniel Wolff, City University London

Music similarity estimation is a key topic in Music Information Retrieval. In scenarios such as music exploration or recommendation, user satisfaction depends on the agreement between the user and the system on which music is more and which is less similar. The perceived similarity is specific to the individual user and influenced by a number of factors such as cultural background, age and education. We will discuss how to adapt similarity models to the relative similarity data collected from users, using machine learning techniques or metric learning.
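
As a minimal illustration of metric learning from such relative data, the sketch below learns non-negative per-feature weights for a weighted Euclidean distance so that, for each triplet "A is more similar to B than to C", the learned distance from A to B ends up smaller than that from A to C. It is a toy formulation for illustration, not the models used in this research.

    import numpy as np

    def learn_weights(X, triplets, lr=0.01, epochs=100, margin=0.1):
        """X: (n_songs, n_features) array; triplets: list of (a, b, c) index tuples
        meaning 'song a is more similar to song b than to song c'."""
        w = np.ones(X.shape[1])
        for _ in range(epochs):
            for a, b, c in triplets:
                d_ab = (X[a] - X[b]) ** 2            # per-feature squared differences
                d_ac = (X[a] - X[c]) ** 2
                if w @ (d_ab - d_ac) + margin > 0:   # constraint violated (or too tight)
                    w -= lr * (d_ab - d_ac)          # hinge-loss gradient step
            w = np.clip(w, 0.0, None)                # keep the metric valid
        return w

    # Hypothetical usage with random features and a single collected judgement.
    X = np.random.default_rng(0).standard_normal((3, 4))
    print(learn_weights(X, [(0, 1, 2)]))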

At this point, there are few similarity datasets available for training and evaluation of such systems. We will present the “Spot the Odd Song Out” game, which collects relative similarity judgements from users on triplets of songs: players are asked to choose one song as the “odd song out”. This data is annotated with user attributes such as age, location and spoken language. The game is designed as multi-player and rewards blind agreement between players. Based on the CASimIR API, it has been extended to multiple question types and scenarios, including annotations of tempo, rhythm and further classification. Game URL: http://goo.gl/6sNcmm

The event is completely free, but it would be great if you could register so we can manage the room/coffee more effectively: https://www.eventbrite.com/e/aes-midlands-lecture-spot-the-odd-song-out-a-system-for-music-similarity-estimation-tickets-14134182721

Daniel Wolff recently finished his PhD on “Similarity Model Adaptation and Analysis using Relative Human Ratings” at the Music Informatics Group of City University London, and is now researching in the Digital Music Lab project. Apart from modelling music similarity, his past research includes feature extraction from audio with a focus on periodic patterns, as well as computational bioacoustics with a focus on birdsong recognition. He is furthermore an active musician in the City University Experimental Music Ensemble. Author’s homepage: http://www.soi.city.ac.uk/~abdz038


Reimagining the piano: ROLI and the Seaboard

Date: 11 Nov 2014
Time: 18:30

Location: ROLI Ltd
2 Glebe Road
London E8 4BD

Lecture by Ben Supper, ROLI Ltd.

ROLI created the Seaboard as an evolution of the piano. The Seaboard sets out to endow the traditional piano keyboard with far greater powers of musical expression, without alienating existing musicians.

What is involved in producing such a musical instrument? In developing the Seaboard, ROLI has had to overcome several classes of problems. There are organisational difficulties: to build and mature a design and manufacturing company from scratch, and to reconcile the ambition of a nascent company with the realities of its capability. Alongside this are several classes of technical problems: to develop and refine new technologies; to find new materials and to learn to manufacture with them, and to bend the aged MIDI specification to handle more expressive information. There are social aspects to this task, too: ROLI has formed partnerships with synthesiser manufacturers, working groups, and industry pioneers to shape the Seaboard and provide it with an infrastructure of support.

This lecture picks out a few of these challenges, providing a perspective on how a new company sets out to innovate within the music industry.

Ben Supper has lectured to the AES in various guises: as a PhD student at the University of Surrey, as a panellist discoursing on digital signal processing, and a few times as the inventor of Focusrite’s VRM system. As Head of Research at ROLI since 2013, his mission is to provide other engineers with a rewarding and stable career, and the industry in general with a more interesting future.



Christmas lecture: Object-based audio - The future of broadcast?

Date: 9 Dec 2014
Time: 18:30

Location: BBC New Broadcasting House, London
Portland Place
London

Lecture by Dr. Frank Melchior, Head of Audio Research, BBC R&D

The audio research group at BBC R&D has for the past four years been developing the next generation of audio for broadcast. Working with academic partners in the BBC Audio Research Partnership, the group has created novel audience experiences based on new forms of audio content representations. This work has led to the launch of large-scale funded projects involving a significant number of researchers and international industrial partners.

Dr. Melchior will outline a number of public trials of object-based audio and binaural experiences which demonstrate his vision of the future of broadcast. His presentation will concentrate on the audience experience, but will also detail the technological challenges from a content provider perspective. In addition to the latest research results from the BBC and its academic partners, the talk will highlight four projects in depth:

  • An interactive web-based football broadcast, which includes the ability to adjust the level of commentary versus background.
  • An immersive radio drama, produced in stereo, surround and immersive audio versions based on a single mix.
  • An object-based binaural production system, used to create and deliver a binaural experience to the audience, which received very positive feedback.
  • A variable-length documentary, whose duration can be adjusted without compromising on the quality of the storytelling.
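
To illustrate the basic idea behind the first trial listed above, the sketch below mixes object-based audio in which each object carries metadata including a role tag, and a single user control rescales only the commentary objects. The data layout is hypothetical and is not the BBC's production format.

    import numpy as np

    def render(objects, commentary_gain_db=0.0):
        """Sum equal-length audio objects; objects tagged as commentary are scaled
        by a user-controlled gain, leaving the rest of the mix untouched."""
        mix = np.zeros_like(objects[0]["signal"])
        for obj in objects:
            gain = obj["gain"]
            if obj.get("role") == "commentary":
                gain *= 10 ** (commentary_gain_db / 20)   # the listener's balance control
            mix += gain * obj["signal"]
        return mix

    # Hypothetical usage: crowd bed plus commentary, with commentary turned down 6 dB.
    crowd      = {"signal": np.random.default_rng(0).standard_normal(48000), "gain": 1.0, "role": "bed"}
    commentary = {"signal": np.random.default_rng(1).standard_normal(48000), "gain": 1.0, "role": "commentary"}
    quieter_commentary_mix = render([crowd, commentary], commentary_gain_db=-6.0)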


Scotland Christmas Lecture: The Psychology of Sound and Music

Date: 17 Dec 2014
Time: 11:00

Location: Glasgow Caledonian University (City Campus)
70 Cowcaddens Road
Glasgow G4 0BA

Lecture by Dr. Don Knox and guests

This Christmas lecture will be a first-time collaboration between the AES Scottish Branch, Glasgow Caledonian University and BBC Scotland. “The Psychology of Sound and Music” will feature questions like “Why do you like certain types of music and dislike others?”, “Why do some sounds make you jump and others soothe you?”, and others that will be answered in a fun lecture with lots of demos and interaction.

If you would like to attend, book your place by registering at:

https://www.eventbrite.co.uk/e/audio-engineering-societyaesglasgow-caledonian-universitygcu-christmas-lecture-the-psychology-of-tickets-14445965271

Can’t make it to Glasgow? Thanks to the support of BBC Scotland, the lecture will be streamed live on the Internet at: https://www.youtube.com/watch?v=xIfmvO4XOlg