Forthcoming Meetings

The Rise and Reign of Voice-Centric Media

Date: 14th December 2018
Time: 18:30
Venue: Queen Mary University of London, Mile End Road, London, E1 4NS

AES London is delighted to welcome Kirsty Gillmore! The past few years have seen a worldwide explosion in audience numbers for audiobooks, podcasts and audio dramas. Kirsty Gillmore takes us through the reasons behind the growing popularity of voice-centric productions and what this means for sound designers looking to work in this area.

Kirsty Gillmore is a sound designer, voice actor and award-nominated voice demo producer with close to twenty years' experience in professional audio across theatre, opera, post-production, radio drama and audiobooks. She trained in music and sound in New Zealand and moved to London in 2002. After several years working at the BBC in a variety of roles, she left in 2010 to establish her own sound design and voice production company, Sounds Wilde. As a voice actor she can be heard on audiobooks, video games, audio dramas and a variety of other international projects. Kirsty is the European Co-Director for SoundGirls and a contributor to SoundGirls.org, ProSoundWeb and TheatreArtLife.

You can register to attend the event through this link.


Automotive Sound, towards a 360 degree multi-content experience

Date: 15th January 2019
Time: 18:00
Venue: Palmerston Lecture Theatre, Solent University, Southampton

The car sound system market is possibly the single largest buyer of transducers in the world, and it certainly represents a substantial share of the audio amplifier market as well. Designs already offer customers multi-dimensional experiences, with ceiling or headrest channels and speakers, but what is the event horizon beyond which manufacturers are trying to look? What scenarios become possible once the debate about self-driving cars settles into a set of regulations freeing drivers from their (previously) necessary duties?

Dr Ludo Ausiello has worked across the industry in product development and R&D, including roles at Tannoy, Harman Automotive and Premium Sound Solutions. He is currently a Senior Lecturer in Audio and Acoustic Engineering at Solent University, specialising in transducer design, audio systems design, audio production, and acoustics in the built environment.

To register for this meeting, follow this link.


Say what? How manipulations of object-based audio can improve speech intelligibility in multi-media

Date: 16th January 2019
Time: 18:30
Venue: Department of Theatre, Film and Television, University of York

Speech intelligibility refers to the proportion of the original spoken signal within a sound stream that can be understood by the listener. Issues relating to speech intelligibility affect not only the estimated 11 million people in the UK currently thought to have some form of hearing impairment, but also normal-hearing listeners: a dialogue stream can be perfectly audible but not understood, and even when the intelligibility of a dialogue stream is high, it might not be accepted by the listener because of other issues such as poor sound quality or personal taste.

Object-based audio (OBA) and the forthcoming initial roll-out of 5G mobile networks are two technological innovations with the potential to improve not only audio accessibility for multi-media platforms, but also to provide greater levels of personalization, immersion, and interactivity.

The OBA approach is to capture and transmit individual audio objects, comprising stems and their corresponding metadata; a renderer at the user end then creates the audio mix based on the metadata, the number of listening devices available and their configuration. The rendering process can be adapted to an individual's needs or preferences, making OBA a much more flexible approach than channel- or scene-based audio.
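As a rough sketch of why this flexibility matters for intelligibility, the short Python example below renders the same transmitted objects in two ways: once with the producer's default gains, and once with a per-listener dialogue boost. The object fields, function names and the +6 dB figure are illustrative assumptions, not part of any OBA standard or of the S3A renderer.

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class AudioObject:
    """One audio object: a stem plus the metadata a renderer needs."""
    samples: np.ndarray   # mono stem, float32
    gain: float           # producer's default mix gain (linear)
    is_dialogue: bool     # content label carried in the metadata

def render_mix(objects, dialogue_boost_db=0.0):
    """Sum objects into a mono mix, optionally boosting dialogue objects.

    dialogue_boost_db models a per-listener preference: a hearing-impaired
    listener might request, say, +6 dB on dialogue relative to the rest.
    """
    boost = 10 ** (dialogue_boost_db / 20)
    mix = np.zeros_like(objects[0].samples)
    for obj in objects:
        g = obj.gain * (boost if obj.is_dialogue else 1.0)
        mix += g * obj.samples
    return mix

# The same transmitted objects yield two different renders at the user end.
rng = np.random.default_rng(0)
dialogue = AudioObject((0.1 * rng.standard_normal(48000)).astype(np.float32), 1.0, True)
effects = AudioObject((0.1 * rng.standard_normal(48000)).astype(np.float32), 0.8, False)
default_mix = render_mix([dialogue, effects])          # producer's balance
accessible_mix = render_mix([dialogue, effects], 6.0)  # +6 dB dialogue boost
```

Because only the metadata changes, the broadcaster transmits one set of objects and each listener's renderer produces the mix that suits them.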

This talk will present some of the latest research emerging from the University of Salford and S3A project into manipulations of object-based audio, in particular highlighting how this approach can improve speech intelligibility for both headphone and loudspeaker reproduction for multi-media platforms.

Bio:
Philippa Demonte is currently a PhD Acoustics & Audio Engineering student at the University of Salford. Her career path thus far has been unconventional, but the common link has been a love of sound. Active involvement in student radio and electro-acoustic music composition led to a decade-long career with a record company, and then… volcano and geyser seismo- and aero-acoustics. A serendipitous tweet two years ago then brought Philippa into the realm of psychoacoustics and back to an interest in audio engineering.

You can register for this talk through this link.


Behind the Scenes at Celtic Connections

Date: 22nd January 2019
Time: 18:00
Venue: GCU Student Association Building, Cowcaddens Road, Glasgow, G4 0BA

This joint AES Scotland, Celtic Connections and GCU STEAM (Science, Technology, Engineering, Arts and Mathematics) event will feature a range of industry professionals explaining and demonstrating the latest sound, lighting and broadcasting technology used for Celtic Connections on Campus, the popular lunchtime concert series. The event is aimed primarily at those aged 16-24 who might be considering this area as a career, but is open to anyone with an interest in live sound events.

The event is free but ticketed; tickets will be available via Eventbrite shortly.


In Conversation with Ellie Williams

Date: 23rd January 2019
Time: 18:30
Venue: Department of Theatre, Film and Television, University of York

Ellie Williams is a field sound recordist specialising in wildlife documentaries.

She has worked for the BBC Natural History Unit for 16 years, first in production and then as a freelance recordist. She is currently working on BBC2’s ‘First Year on Earth’ series, which has taken her to an ancient monkey temple in Sri Lanka, the wildlife-rich Samburu National Reserve in Kenya and the remote fjords of Iceland.

During her career in wildlife television she has camped in deep snow, hauled her kit up mountain ridges on the edge of the Arctic Circle, followed fluking sperm whales and waded knee-deep across a crocodile-inhabited river.

Ellie has always been fascinated by sound – from recording with her father’s dictaphone as a child to gigging in bands and working on stage at festivals.

“I don’t think I’ll ever lose the child-like wonder that I feel when listening to the world through headphones. Everything becomes hyper-real – the rushing of a waterfall, the deep rumble of an elephant, the warmth of the human voice. My mission is to accurately and creatively capture location sound and in doing so help immerse the viewer into the world of the film, adding authenticity and depth to the moving images”.

To register for the event, please visit our Eventbrite page.


In Conversation with Anna Bertmark

Date: 6th February 2019
Time: 18:30
Venue: Department of Theatre, Film and Television, University of York

We are delighted to be welcoming Anna Bertmark to York!

Anna Bertmark is a sound designer and supervising sound editor. Originally from Sweden, she has over 15 years' experience in sound post-production in the UK and has worked on films, TV drama, documentaries, commercials and XR projects (VR and 360° video).

She started her career working for sound designer Paul Davies (Hunger, We Need to Talk About Kevin, ’71), where she learnt dialogue and SFX editing while assisting on films such as The Queen and The Proposition.

She won the BIFA for Best Sound for God’s Own Country and is a mentor to up-and-coming sound designers, having been mentored herself by Dawn Airey, CEO of Getty Images. She is also part of BIFA’s Nomination Committee and has served as Vice Chair of AMPS (Association of Motion Picture Sound).

Some of her credits include:
Gwen (TIFF Official selection 2018)
God’s Own Country
Walk With Me
You Were Never Really Here
Adult Life Skills
The Goob
Lilting

You can register for the event here.


Developments in Auditorium Acoustic Design. Rob Harris – Rob Harris Design Ltd

Date: 13th February 2019
Time: 18:00
Venue: Palmerston Lecture Theatre, Solent University, Southampton

In auditorium acoustics, objective science and engineering are employed to elicit subjective perceptions and emotions. This lecture discusses the auditorium acoustic design of concert halls, opera houses and theatres, presented by one of the UK's leading concert hall designers. It starts by describing some fundamental requirements for audiences, performers and other stakeholders, considering both scientific and other factors. The lecture will then look at how the art and science have developed over the last 35 years, including the use of scale and computer modelling, the paradigm shift of auralisation and the integration of architecture and creative digital audio. The scientific and engineering responses to changing artistic and audience needs will be examined.

Rob Harris started out as a stage lighting designer and sound engineer in London's West End, mixing major musicals such as Hair and Jesus Christ Superstar. After studying for an MSc in Acoustics at the ISVR, University of Southampton, Rob moved into the acoustics industry, working for Arup for 33 years and ending up as a Director of Acoustics, Theatre Design and Arts and Culture. He has since set up his own business, Rob Harris Design Limited, and is a visiting professor at the ISVR, University of Southampton.

Rob’s auditorium acoustic design credits include Bridgewater Hall Manchester, City Recital Hall Sydney, Glyndebourne Opera House, Bruges Concertgebouw, Wales Millennium Centre Cardiff, the Royal Opera House London, Copenhagen Opera House, Oslo Opera House, Kings Place recital hall London, the Bord Gáis Energy Theatre Dublin, Kristiansand Performing Arts Centre and the orchestral rehearsal/performance hall at Glasgow Royal Concert Hall. Current projects include the Royal Opera House London, a new opera house in the Middle East, a large theatre in Shanghai and the West Kowloon Cultural District in Hong Kong.

To register for this meeting, follow this link.


Audio Definition Modelling in Broadcast. Joint meeting with SMPTE. David Marston, BBC R&D.

Date: 19th March 2019
Time: 18:00
Venue: Palmerston Lecture Theatre, Solent University, Southampton

The Audio Definition Model is a specification of metadata that can be used to describe object-based audio, scene-based audio and channel-based audio. It can be included in BWF WAVE files or used as a streaming format in production environments. This talk will discuss the development and application of the Audio Definition Model, including the BBC Audio Toolkit.
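To give a flavour of what this metadata looks like, here is a minimal sketch in Python that assembles an ADM-flavoured XML fragment. The element names (audioProgramme, audioContent, audioObject) follow ITU-R BS.2076, but the IDs and structure here are heavily simplified and illustrative only; a real ADM document, carried for example inside a BWF file, contains far more detail (channel and pack formats, track UIDs, timing and positional metadata).

```python
import xml.etree.ElementTree as ET

# Heavily simplified, ADM-flavoured sketch: element names follow
# ITU-R BS.2076, but this is not a complete or valid ADM document.
adm = ET.Element("audioFormatExtended")
ET.SubElement(adm, "audioProgramme",
              audioProgrammeID="APR_1001", audioProgrammeName="Drama")
ET.SubElement(adm, "audioContent",
              audioContentID="ACO_1001", audioContentName="Dialogue")
ET.SubElement(adm, "audioObject",
              audioObjectID="AO_1001", audioObjectName="Narrator")

print(ET.tostring(adm, encoding="unicode"))
```

The point of the hierarchy is that a renderer can address the "Dialogue" content independently of the rest of the programme, which is what makes the personalisation described in the previous talk possible.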

Dave Marston attended the University of Birmingham, where he gained a B.Eng in Electronic Engineering. After a short spell working for Galatrek designing uninterruptible power supplies, he joined Ensigma, specialising in DSP programming and speech coding research. In 2000 he joined BBC R&D, initially working on DAB and then moving on to audio coding research and testing. Among the many areas of audio research Dave has been involved in, subjective testing has been an important one. He has been involved in EBU projects over several years and was chairman of the FAR-BWF group (improving the BWAV file format). Another area of his expertise is semantic audio, including managing the M4 (Making Musical Mood Metadata) collaborative project. Over recent years Dave has been involved in the development of the Audio Definition Model (ADM), a metadata model used to describe future audio formats. He developed the ADM from the initial concept and has turned it into an international standard that is now being adopted across the audio industry. He has also been involved in two recent EU-funded projects: ICoSOLE (leading the BBC work) and ORPHEUS.

Registration for this meeting will be available from January.


2019 AES International Conference on Immersive and Interactive Audio

Date: 27th – 29th March 2019
Venue: University of York, UK

Call for Papers

Immersive audio systems are ubiquitous, ranging from macro-systems installed in cinemas, theatres and concert halls to micro-systems for domestic and in-car entertainment, VR/AR and mobile platforms. New developments in human-computer interaction, in particular head and body motion tracking and artificial intelligence, are paving the way for adaptive and intelligent audio technologies that promote audio personalisation and heightened levels of immersion. Interactive audio systems can now readily utilise non-tactile data such as listener location, orientation, gestural control and even biometric feedback such as heart rate or skin conductance to intelligently adjust immersive sound output. Such systems offer new creative possibilities for a diverse range of applications, from virtual and augmented reality through to automotive audio and beyond, opening up exciting new opportunities for artists and audio producers to create compelling immersive experiences.

This three-day conference will explore the unique space where interactive technologies and immersive audio meet, and aims to exploit the synergies between these fields. The conference will include keynote lectures, technical paper sessions, tutorials and workshops, as well as technological and artistic demonstrations of immersive and interactive audio.
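As a toy illustration of the head-tracked interaction described above, the sketch below shows the core of world-locked rendering: each virtual source is counter-rotated against the tracked head yaw before a binaural renderer selects the matching HRTF. The function name and angle convention are our own illustrative assumptions, not any particular system's API.

```python
def world_locked_azimuth(source_az_deg: float, head_yaw_deg: float) -> float:
    """Counter-rotate a virtual source against tracked head yaw so it stays
    fixed in the room rather than turning with the listener's head.
    Angles in degrees, positive to the right; result wrapped to [-180, 180)."""
    return (source_az_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# A source straight ahead (0°) appears 30° to the left after the listener
# turns their head 30° to the right; a binaural renderer would then apply
# the HRTF for -30° azimuth.
print(world_locked_azimuth(0.0, 30.0))   # -30.0
```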

Deadline for papers is 1st October 2018

Deadline for proposals for Workshops and Tutorials is 1st November 2018

You can find the Call for Papers here.