Forthcoming Meetings

Music X Architecture - Gestalt

Date: 27 Feb 2019
Time: 18:30

Location: Hogg Lecture Theatre – L294, Second Floor, University of Westminster, 35 Marylebone Road, London NW1 5LS

Gestalt discuss their work interacting with space and architecture: the different approaches they use and the processes they work with when creating these pieces, and how new technology is opening up different possibilities for exploration, expanding and intersecting the traditional constructs of a music release, performance, exhibition or gallery space.

Gestalt

A collaborative audio-visual project directed and curated by composers Joel Wells and Abi Wade, with a focus on creating experimental music works and soundscapes that have an intrinsic relationship to visual art.

Wells and Wade take an alternative and experimental approach to sound, composition, performance and instrumentation: classical instruments are subverted in innovative ways, preparing the piano with found objects, manipulating vocals to create ambient soundscapes, and exploring the cello percussively using beaters, contact microphones and unusual bowing techniques. The pair also experiment extensively with field recordings, new technologies and the creation of unique sample instruments. Tuned percussion such as the kalimba and slate marimba is recorded and re-sequenced using analog and digital hardware, including Elektron’s infamous Octatrack sampler and the Analog Rytm drum machine, to become warped ‘fourth world’ polyrhythms or textural drones.

 



Audio Definition Modelling in Broadcast. Joint meeting with SMPTE, David Marston, BBC R&D.

Date: 19 Mar 2019
Time: 18:00

Location: Palmerston Lecture Theatre, Southampton Solent University, East Park Terrace, Southampton


The Audio Definition Model is a specification of metadata that can be used to describe object-based audio, scene-based audio and channel-based audio. It can be included in BWF WAVE files or used as a streaming format in production environments. This talk will discuss the development and application of the Audio Definition Model, including the BBC Audio Toolkit.
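
For readers unfamiliar with the ADM, the short Python sketch below builds a minimal ADM-style XML fragment of the kind carried in the axml chunk of a BWF/BW64 file. It is an illustration only: the element names follow ITU-R BS.2076, but the IDs, names and position values are placeholders invented for this example, and the sketch does not use the BBC Audio Toolkit or any other ADM library.

# Illustrative sketch only: a minimal ADM-style XML fragment (ITU-R BS.2076
# element names) of the sort a BWF/BW64 file carries in its axml chunk.
# All IDs, names and coordinates below are placeholder values.
import xml.etree.ElementTree as ET

root = ET.Element("audioFormatExtended")

# One programme containing one content item, which references one audio object.
programme = ET.SubElement(root, "audioProgramme",
                          audioProgrammeID="APR_1001",
                          audioProgrammeName="ExampleProgramme")
ET.SubElement(programme, "audioContentIDRef").text = "ACO_1001"

content = ET.SubElement(root, "audioContent",
                        audioContentID="ACO_1001",
                        audioContentName="ExampleContent")
ET.SubElement(content, "audioObjectIDRef").text = "AO_1001"

# An object-based element: a single object tied to one channel format.
obj = ET.SubElement(root, "audioObject",
                    audioObjectID="AO_1001",
                    audioObjectName="ExampleObject")
ET.SubElement(obj, "audioPackFormatIDRef").text = "AP_00031001"

channel = ET.SubElement(root, "audioChannelFormat",
                        audioChannelFormatID="AC_00031001",
                        audioChannelFormatName="ExampleChannel",
                        typeLabel="0003", typeDefinition="Objects")

# A block format describes the position of the object over a span of time.
block = ET.SubElement(channel, "audioBlockFormat",
                      audioBlockFormatID="AB_00031001_00000001",
                      rtime="00:00:00.00000", duration="00:00:05.00000")
for name, value in (("azimuth", "-30.0"), ("elevation", "0.0"), ("distance", "1.0")):
    ET.SubElement(block, "position", coordinate=name).text = value

ET.indent(root)  # pretty-print; requires Python 3.9+
print(ET.tostring(root, encoding="unicode"))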

Dave Marston attended the University of Birmingham, where he achieved a B.Eng in Electronic Engineering. After a short spell working for Galatrek designing uninterruptible power supplies, he joined Ensigma, specialising in DSP programming and speech coding research. In 2000 he joined BBC R&D, initially working on DAB and then moving on to audio coding research and testing. Among the many areas of audio research Dave has been involved in, subjective testing has been particularly important. He has taken part in EBU projects over several years and was chairman of the FAR-BWF group (improving the BWAV file format). Another area of his expertise is semantic audio, including managing the M4 (Making Musical Mood Metadata) collaborative project. Over recent years Dave has been involved in the development of the Audio Definition Model (ADM), a metadata model used to describe future audio formats. He has developed the ADM from the initial concept and turned it into an international standard that is now being adopted across the audio industry. He has also been involved in two recent EU-funded projects: ICoSOLE (leading the BBC work) and ORPHEUS.

Registration for this meeting will be available from January.


2019 AES International Conference on Immersive and Interactive Audio

Date: 27 Mar 2019
Time: 00:00

Location: University of York, York

2019 AES International Conference on Immersive and Interactive Audio

27 – 29 March 2019, University of York, UK

Call for Papers

Immersive audio systems are ubiquitous and range from macro-systems installed in cinemas, theatres and concert halls to micro-systems for domestic, in-car entertainment, VR/AR and mobile platforms. New developments in human-computer interaction, in particular head and body motion tracking and artificial intelligence, are paving the way for adaptive and intelligent audio technologies that promote audio personalisation and heightened levels of immersion. Interactive audio systems can now readily utilise non-tactile data such as listener location, orientation, gestural control and even biometric feedback such as heart rate or skin conductance, to intelligently adjust immersive sound output. Such systems offer new creative possibilities for a diverse range of applications from virtual and augmented reality through to automotive audio and beyond. This opens up exciting new opportunities for artists and audio producers to create compelling immersive experiences.

This three-day conference will explore the unique space where interactive technologies and immersive audio meet and aims to exploit the synergies between these fields. The conference will include keynote lectures, technical paper sessions, tutorials and workshops, as well as technological and artistic demonstrations of immersive and interactive audio.

Deadline for papers is 1st October 2018

Deadline for proposals for Workshops and Tutorials is 1st November 2018

You can find the Call for Papers here.