Say what? How manipulations of object-based audio can improve speech intelligibility in multi-media
Date: 16th January 2019
Venue: Department of Theatre, Film and Television, University of York
The object-based audio (OBA) approach is to capture and transmit individual audio objects, consisting of stems and their corresponding metadata; a renderer at the user end then creates the audio mix based on the metadata, the number of listening devices available, and their configuration. The rendering process can be adapted to an individual’s needs or preferences, making OBA a much more flexible approach than channel- or scene-based audio.
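The idea can be illustrated with a minimal sketch of metadata-driven rendering. This is purely illustrative: the data layout, the per-object gain metadata, and the `render` function are assumptions for the example, not the API of any actual OBA renderer.

```python
# Minimal sketch of object-based audio rendering (illustrative only;
# the object format and metadata fields here are assumptions).

def render(objects, n_channels):
    """Mix audio objects into n_channels using each object's metadata.

    Each object is a dict: {"samples": [...], "gains": {channel: gain}}.
    Because the mix is created at the user end, the same objects can be
    re-rendered for any loudspeaker count or listener preference.
    """
    length = max(len(o["samples"]) for o in objects)
    out = [[0.0] * length for _ in range(n_channels)]
    for obj in objects:
        for ch, gain in obj["gains"].items():
            if ch >= n_channels:
                continue  # skip channels this playback setup lacks
            for i, s in enumerate(obj["samples"]):
                out[ch][i] += gain * s
    return out

# Example: boost the dialogue object for intelligibility, attenuate music.
dialogue = {"samples": [0.1, 0.2], "gains": {0: 2.0, 1: 2.0}}
music = {"samples": [0.5, 0.5], "gains": {0: 0.5, 1: 0.5}}
stereo = render([dialogue, music], n_channels=2)
```

Because the gains live in metadata rather than being baked into a fixed mix, a renderer could raise the dialogue object alone — one route by which OBA can improve speech intelligibility.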
This talk will present some of the latest research emerging from the University of Salford and S3A project into manipulations of object-based audio, in particular highlighting how this approach can improve speech intelligibility for both headphone and loudspeaker reproduction for multi-media platforms.
Philippa Demonte is currently a PhD Acoustics & Audio Engineering student at the University of Salford. Her career path thus far has been unconventional, but the common link has been a love of sound. Active involvement in student radio and electro-acoustic music composition led to a decade-long career with a record company, and then… volcano and geyser seismo- and aero-acoustics. A serendipitous tweet two years ago then brought Philippa into the realm of psychoacoustics and back to an interest in audio engineering.
You can register for this talk through this link.