Meetings Archive – 2015

AES67 and Audio Networking in IT-based Studios

Date: 10 Feb 2015
Time: 18:30

Location: Dolby Europe’s London Office
4–6 Soho Square
London W1D 3PZ

NOTE: This lecture is open to the public, as are all our lectures. However, it will be preceded by the AES UK’s Annual General Meeting, to which only members are invited. Refreshments will be served from 6pm; the AGM will be at 6.30pm; the lecture will begin at 7pm. The lecture is being hosted at Dolby’s London theatre. After the AGM and the lecture, Dolby will host a screening of Gravity in Dolby Atmos.

6:00pm – drinks served at Dolby’s reception (open both to members and non-members)

6:30pm – AGM in the screening room (non-members asked to remain in reception lobby until the end of the AGM)
7:00pm – AES67 lecture by Mark Yonge (open both to members and non-members)
8:30-9:00pm – screening of Gravity in Dolby Atmos (open both to members and non-members)

Lecture by Mark Yonge, AES Standards Committee

The recent AES67 standard for audio streaming over IP networks provides interoperability between proprietary networking systems, and offers a basis for large-scale studio operations.

In many large-scale installations, audio signals for production and postproduction are now carried over IP networks. The IEEE 1588 precision time protocol offers the prospect of media clocks that are coherent throughout an IP network without the need for separately distributed sync pulses.
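The idea of a network-coherent media clock can be sketched in a few lines: AES67 defines the media clock as a sample count from the shared PTP epoch, so any node that knows the PTP time can compute the same RTP timestamp independently. The sketch below illustrates that relationship only; the `offset` parameter stands in for the per-stream constant a real sender would signal, and the values are illustrative.

```python
# Sketch: deriving an RTP media-clock timestamp from shared PTP time,
# in the spirit of AES67 (media clock = samples since the PTP epoch).
# 'offset' is a hypothetical stand-in for the per-stream constant that
# a real implementation signals out of band.

def rtp_timestamp(ptp_time_s: float, sample_rate: int = 48000,
                  offset: int = 0) -> int:
    """Sample count since the PTP epoch, truncated to the 32-bit RTP field."""
    samples = int(ptp_time_s * sample_rate) + offset
    return samples % 2**32

# Two nodes whose PTP clocks agree to within a microsecond agree on this
# timestamp to within a twentieth of a sample at 48 kHz -- no separately
# distributed sync pulses required.
print(rtp_timestamp(1_423_593_000.0))
```

Because every device derives the count from the same shared clock, streams from different vendors can be aligned sample-accurately at the receiver.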

Self-contained sound studios must now consider a significant technology shift.

This lecture will discuss the role of audio networks and the rationale behind AES67, together with results from recent interoperability testing of practical implementations.

Could AES67 become the new AES/EBU?


Perceptual Sound Field Reconstruction and Coherent Synthesis

Date: 13 Jan 2015
Time: 18:30

Location: King’s College London, Nash Lecture Theatre, K0.02
Room K0.02, Strand Building
London, WC2R 2LS

Lecture by Zoran Cvetkovic, Professor of Signal Processing at King’s College London.

Imagine a group of fans cheering their team at the Olympics from a local pub, who want to feel transported to the arena by experiencing a faithful and convincing auditory perspective of the scene they see on the screen. They hear the punch of the player kicking the ball and are immersed in the atmosphere as if they were watching from the sideline. Alternatively, imagine a small group of classical music aficionados following a broadcast from the Royal Opera at home, who want the experience of listening from the best seats at the opera house. Imagine finally having a surround sound system with room simulators that actually sound like the spaces they are supposed to synthesise, or watching a 3D nature film in a home theatre where the sound closely follows the movements one sees on the screen. Imagine also a video game capable of providing a convincing dynamic auditory perspective that tracks a moving player and responds to their actions, with virtual objects moving and acoustic environments changing. Finally, place all this in the context of visual technology that is moving firmly in the direction of “3D” capture and rendering, where enhanced spatial accuracy and detail are key features. In this talk we will present a technology that enables all these spatial sound applications using low-count multichannel systems.

Zoran Cvetkovic is Professor of Signal Processing at King’s College London. He received his Dipl. Ing. and Mag. degrees from the University of Belgrade, Yugoslavia, the M.Phil. from Columbia University, and the Ph.D. in electrical engineering from the University of California, Berkeley. He held research positions at EPFL, Lausanne, Switzerland (1996), and at Harvard University (2002–04). Between 1997 and 2002 he was a member of the technical staff at AT&T Shannon Laboratory. His research interests are in the broad area of signal processing, ranging from theoretical aspects of signal analysis to applications in audio and speech technology, and neuroscience. From 2005 to 2008 he served as an Associate Editor of IEEE Transactions on Signal Processing.


The audio engineering revolution in mobile consumer devices

Date: 10 Mar 2015
Time: 18:30

Location: Red Bull Studios
155-171 Tooley Street
London SE1 2JP

Lecture by Michael Page (System Architect, Cirrus Logic)

The audio and acoustic engineering in mobile devices has changed beyond recognition in the last few years, and a great deal of the audio engineering has been inspired by established pro-audio technology. In this lecture, Michael Page will lead a tour of some of the most important technical advancements in mobile device audio quality, including HiFi-quality playback, advanced multi-mic DSP for clear voice in very noisy environments, and loudspeaker protection and enhancement, among other features. We will also take a peek into the potential future of mobile devices, and what it means for audio engineering.

Michael Page is a System Architect at Cirrus Logic, the market-leading supplier of audio ICs for the world’s top mobile device and consumer electronics brands.


Technical Visit to the BBC Maida Vale Studios, London

Date: 21 Apr 2015
Time: 18:00

Location: Maida Vale Studios
120-129 Delaware Road, W9 2LG
London


21st of April; meet at reception at 18:00 for an 18:30 start. The tour will last about two hours, after which we will head to the nearest pub for drinks and discussion.

NOTE: This event is sold out and its waiting list is also already full. We are doing our best to arrange more places for this visit, but at present this seems unlikely to happen.

This visit is for AES members only.

Maida Vale Studios is a complex of seven BBC studios (of which five are in regular use) on Delaware Road, Maida Vale, London.

It has been used to record thousands of classical music, popular music and drama sessions for BBC Radio 1, BBC Radio 2, BBC Radio 3, BBC Radio 4 and BBC Radio 6 Music from 1946 to the present. On 30 October 2009, BBC Radio 1 celebrated 75 Years of Maida Vale by exclusively playing 75 tracks recorded at the studios over the years.

  • Studio MV1 is one of the largest recording spaces available in the UK. Equipped with a Studer D950 digital desk, MV1 is currently home to the BBC Symphony Orchestra. It was also used by the BBC Radio Orchestra on some of its larger sessions until the early 1990s.
  • Studio MV2 had its technical installation decommissioned some years ago. It currently provides rehearsal space for the BBC Singers and the BBC Symphony Chorus.
  • Studio MV3 is another large studio (equal in size to MV2). With an SSL 9000J series analogue desk installed, MV3 is used for a large number of BBC Radio 2 programmes and some BBC Radio 1 session recordings and live audience shows. Bing Crosby made his last recording session in this studio in 1977, three days before he died of a heart attack on a golf course in Spain.
  • Studio MV4 is a smaller studio with a vocal booth and balcony. Utilising an SSL 9000J series analogue desk, MV4 was home to the Peel sessions and has continued to be used to record BBC Radio 1 sessions for the shows that have replaced John Peel’s.
  • Studio MV5 is now one of two spaces used for the Live Lounge and plays host to a large number of current pop acts.
  • Studio MV6 is a drama studio still in regular use to produce programmes for BBC Radio 4.
  • Studio MV7 was a drama studio but is now decommissioned and used for tape storage.


Industrial Electroacoustic Transducer Design

Date: 12 May 2015
Time: 18:30

Location: SAE Institute London
297 Kingsland Road
London E8 4DD

Time: 18:30 for 19:00 lecture start. Location: on arrival at reception you will be directed to the Canteen on the 4th floor, where coffee and tea will be served for all guests. The lecture will be held in Lecture Theatre 1, on the 1st floor.

Lecture by Kelvin Griffiths (Director, Electroacoustic Design)

Electroacoustic transducers are found in applications as varied as mobile phones, laptops, cars, fire alarm systems and headphones, each presenting its own design targets and challenges. In some situations, heightened levels of performance are expected of loudspeakers that may hitherto have served an application adequately, in response to new markets and the extended functionality of the host device. This includes a growing class of problems around integrating loudspeaker hardware into environments not primarily designed for high-quality sound reproduction, with the balance of performance and cost an ever-present factor.

Traditional electroacoustic design approaches using lumped elements are only partly useful in predicting acoustic performance, and a more accurate and effective process is required to provide a fuller insight into system concepts earlier in the design process. The lecture will illustrate electroacoustic transducers in diverse scenarios and discuss modern design methodologies that provide valuable information on both the transducer behaviour and the acoustic problems posed by system integration.


Music & Code: Programming Languages, Algorithms and Contemporary Trends in Computer Music

Date: 30 Apr 2015
Time: 19:00

Location: The Performance Hub, University of Wolverhampton
University of Wolverhampton
Walsall

Speaker: Marinos Koutsomichalis

Eventbrite link: http://www.eventbrite.com/e/aes-midlands-march-lecture-marinos-koutsomichalis-tickets-16231933149

The long-lasting relationship between music and algorithms was formalised in the latter part of the 20th century through the development of specialised programming languages. While the first generation of such systems focused on remediating existing musical paradigms, it nevertheless laid the groundwork for a more profound re-interpretation and re-evaluation of the fundamentals of music. The subsequent computerisation of society has led to an era in which most aspects of contemporary culture are either computer-driven or dependent on computer-specific paradigms. Accordingly, specialised music/audio-centric programming languages, in conjunction with advances in contemporary media theory, have prompted an all-inclusive exploration of music in both technical and æsthetic respects and, moreover, impelled new and ground-breaking compositional paradigms.

In that vein, this lecture attempts a brief overview of the most important programming languages, both historical and contemporary, that are relevant to music composition and audio synthesis. It further discusses the most prominent algorithms and compositional methodologies found in this context. More importantly, it scrutinises how we moved from what was intended to be an interface to existing musical ideas to an era in which computer code largely shapes the æsthetics of electronic/electroacoustic music and sound-art, both within academia and in the broader experimental music scene, as well as in several popular genres (albeit to a lesser extent).

The event will be held in room WH123 at The Walsall Campus, University of Wolverhampton.


The NESS Project: Large Scale Physical Modeling Sound Synthesis

Date: 14 May 2015
Time: 18:30

Location: Reid Concert Hall, Edinburgh
Bristo Square
Edinburgh

Lecture by Stefan Bilbao and Colleagues from the Acoustics and Audio Group at Edinburgh University

The NESS project, funded by the European Research Council, runs jointly between the Acoustics and Audio Group and the Edinburgh Parallel Computing Centre at the University of Edinburgh between 2012 and 2017. It is concerned with physical modelling sound synthesis on a large scale, using classical time-stepping methods for what ultimately become very large simulation problems. Particular systems under study include brass instruments, percussion instruments, and stringed instruments such as violins and guitars; another important area is the modelling of 3D spaces, both for room reverberation modelling and in order to embed such instruments in a fully spatialised virtual environment. The goals here are manifold: to increase synthetic sound quality, to offer simplified user control, and to allow flexible exploration of new instrument designs while retaining sound output with an acoustic character. In spirit, such an approach is analogous to similar developments in computer graphics, but in more technical regards it presents distinct challenges. One centres on adapting numerical designs to the constraints of human audio perception, in order to avoid audible artefacts; another, due to the relatively high audio sample rates, is computational cost, particularly for systems in 3D, and in this regard fast implementations on multicore and GPU hardware are under development. Sound examples and video demonstrations will be presented.
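The flavour of the time-stepping methods described above can be shown at toy scale with the 1D wave equation (an ideal plucked string) solved by an explicit finite-difference scheme at an audio sample rate. This is a generic textbook scheme, not code from the NESS project, and all parameter values are illustrative.

```python
import numpy as np

# Toy time-stepping synthesis: explicit finite-difference scheme for the
# 1D wave equation (ideal string, fixed ends), run at audio rate.
# Parameters are illustrative, not taken from the NESS codebase.

SR = 44100          # audio sample rate (Hz)
c  = 200.0          # wave speed (m/s) -- sets the pitch
L  = 1.0            # string length (m)

k = 1.0 / SR                   # time step (s)
h = c * k                      # grid spacing at the stability (CFL) limit
N = int(L / h)                 # number of grid intervals
lam2 = (c * k / (L / N))**2    # Courant number squared; must be <= 1

u_prev = np.zeros(N + 1)
u      = np.zeros(N + 1)
u[N // 2] = 1.0                # "pluck": initial displacement at the middle

out = np.zeros(SR // 10)       # synthesise 100 ms of output
for n in range(len(out)):
    u_next = np.zeros(N + 1)   # endpoints stay zero (fixed boundaries)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + lam2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next      # advance one sample
    out[n] = u[N // 4]         # read the output at a "pickup" position

# 'out' now holds a short synthesised string tone at 44.1 kHz.
```

The NESS systems are vastly larger (3D rooms, nonlinear instruments), but the structure is the same: a state update per audio sample, which is why computational cost and GPU implementations matter.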

http://www.ness-music.eu

Stefan Bilbao (B.A. Physics, Harvard, 1992; M.Sc. and Ph.D. Electrical Engineering, Stanford, 1996 and 2001) is currently a Reader in the Acoustics and Audio Group at the University of Edinburgh. His main research interest is the development of numerical methods for physical modelling sound synthesis.

Other presenters include: Charlotte Desvages, Paul Graham, Alan Gray, Brian Hamilton, Reg Harrison, Kostas Kavoussanakis, James Perry and Alberto Torin.

The event is free but we ask you to register at the following page:

Eventbrite - AES-Scotland: The NESS Project


Immersive audio - status and challenges

Date: 9 Jun 2015
Time: 18:30

Location: King’s College London, Nash Lecture Theatre, K0.02
Room K0.02, Strand Building
London, WC2R 2LS

Time: Tea and coffee will be served at 6:30pm; the lecture will start at 7:00pm.

Lecture by Francis Rumsey (Editor for the AES Journal and Chairman of the AES Technical Council)

The history of sound reproduction is peppered with attempts to get fully immersive audio off the ground in more than a niche way. In the 1940s there was a version of the Fantasound system that had a ceiling loudspeaker, many IMAX theatre installations have had a top-centre “voice of God” loudspeaker, and Ambisonic systems since the 1970s have promised the possibility of “periphony”. The majority of commercial installations, however, both in the cinema and in the home, remained resolutely “horizontal only” until recently. Now there is renewed interest in what some have called 3D audio, or what is increasingly termed immersive audio, with renewed commercial motivation and a number of competing formats, centred on the movie industry but also relevant to other areas of entertainment. In this lecture the history and development of immersive audio are reviewed, with a look at emerging trends, standards activity, and remaining challenges.


Audio and Music Hackathon (SOLD OUT)

Date: 18 Jul 2015
Time: 11:00

Location: Media and Arts Technology Studios, Queen Mary University of London
Mile End Rd
London

The AES is supporting this weekend of hacking audio and music devices and software at Queen Mary University of London.

The event is sponsored by AES sustainable member Harman, providing food and bringing hackable wireless loudspeaker systems. Other partners, like Abbey Road Red and Big Bear Audio, are also providing hackable kit and prizes, and supporting evening events.

This event is now fully sold out. Add your name to the waitlist at: http://www.eventbrite.com/e/audio-and-music-hackathon-london-tickets-17183538426

Newbies and experts all welcome!

Please note that this is a two day event.  It will start on Saturday, July 18, 2015 at 11:00 AM and finish on Sunday, July 19, 2015 at 7:00 PM (BST).


AES-Midlands Workshop on Intelligent Music Production

Date: 8 Sep 2015
Time: 13:30

Location: Room 203, Birmingham City University
Millennium Point
Birmingham

Audio Engineering and Music Production are inherently technical disciplines, often involving extensive training and the investment of time. The processes involved implicitly require knowledge of signal processing and audio analysis, and can present barriers to musicians and non-specialists. The emerging field of Intelligent Music Production addresses these issues via the development of systems that map complex processes to intuitive interfaces and automate elements of the processing chain.

The event will provide an overview of some of the tools and techniques currently being developed in the field, whilst providing insight for audio engineers, producers and musicians looking to gain access to new technologies. The event will consist of presentations from leading academics, with additional posters and demonstrations.

Schedule:

1.30pm: Registration and Coffee
2.00pm: Josh Reiss, Queen Mary University of London. Intelligent Music Production: Challenges, Frontiers and Implications.
2.45pm: Hyunkook Lee, University of Huddersfield. Perceptually motivated 3D music production.

3.30pm: Coffee/Tea (+ posters)

3.45pm: Brecht De Man, Queen Mary University of London. Understanding The Mix: Learning music production practices through subjective evaluation.
4.30pm: Sean Enderby, Birmingham City University. Making Music Production More Accessible using Semantic Audio Analysis.

5.15pm: Sandwiches (+ posters/demos)

5.45pm: Alessandro Palladini, Music Group UK. Smart Audio Effects for Live Audio Mixing.
6.30pm: Alex Wilson, University of Salford. Navigating the “mix-space” in multitrack recordings.

The event is completely free and open to everyone. To attend, please register here.

See http://www.semanticaudio.co.uk/events/wimp2015/ for videos of the talks.

Abstracts:

2.00pm: Intelligent Music Production: Challenges, Frontiers and Implications
Josh Reiss, Queen Mary University of London.

In recent years there have been tremendous advances towards the creation of intelligent systems that are capable of performing audio production tasks which would typically be done manually by a sound engineer. These systems build on the emerging fields of multitrack signal processing, machine listening and cross-adaptive digital audio effects, as well as exploiting new knowledge regarding the psychoacoustics and perception of multitrack audio content. Here, we give an overview of these approaches, the challenges in the field and the current research directions. We also discuss the implications of this research; whether it is even possible to automate creative production tasks, and what this might mean for practicing musicians and sound engineers.
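One of the simplest tasks such systems automate is setting fader levels so that tracks sit at comparable loudness. The sketch below uses plain RMS for brevity; real systems described in this literature use perceptual loudness models (e.g. ITU-R BS.1770) and cross-adaptive rules, and all signals here are synthetic stand-ins.

```python
import numpy as np

# Toy automatic-mixing step: choose per-track fader gains that bring
# every track to equal RMS level. Plain RMS keeps the sketch short;
# real systems use perceptual loudness measures and cross-adaptive rules.

def auto_fader_gains(tracks: list) -> list:
    """Return linear gains that bring each track to the mean RMS level."""
    rms = [np.sqrt(np.mean(t**2)) for t in tracks]
    target = np.mean(rms)
    return [target / r for r in rms]

rng = np.random.default_rng(0)
stems = [0.5  * rng.standard_normal(44100),   # stand-in for a loud stem
         0.05 * rng.standard_normal(44100)]   # stand-in for a quiet stem
gains = auto_fader_gains(stems)
mixed = sum(g * t for g, t in zip(gains, stems))
# After gain, both stems sit at the same RMS level within the mix.
```

The research challenge discussed in the talk is precisely what this sketch glosses over: deciding what the "right" balance is, which depends on perception, genre and context rather than a single energy target.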

 

2.45pm: Perceptually motivated 3D music production.
Hyunkook Lee, University of Huddersfield.

Next-gen multichannel audio formats employ height and/or overhead channels in order to provide the audience with a three-dimensional sense of immersion in sound reproduction. Psychoacoustic principles for vertical stereophony are fundamentally different from those for horizontal stereophony, and therefore new methods are required for the effective recording and rendering of 3D multichannel sound. This talk will introduce recent research into perceptually motivated methods for 3D audio recording, upmixing and downmixing, and discuss their applications to intelligent 3D music production.

 

3.45pm: Understanding The Mix: Learning music production practices through subjective evaluation.
Brecht De Man, Queen Mary University of London.

An overview of the methodology and results of PhD research on mixing music. In this work, mixing ‘best practices’ are uncovered, confirmed or contradicted, primarily by analysing real-world mixes and through perceptual evaluation of those mixes.

 

4.30pm: Making Music Production More Accessible using Semantic Audio Analysis.
Sean Enderby, Birmingham City University.

Music production is an inherently technical discipline, which often requires extensive training. In this talk, we present initial developments from the SAFE project (semanticaudio.co.uk), in which we use semantic audio analysis to provide an intuitive in-DAW platform for audio processing.

 

5.45pm: Smart Audio Effects for Live Audio Mixing.
Alessandro Palladini, Music Group UK.

Despite the recent advances in audio mixing offered by digital technologies, live mixing still presents many challenges: low-latency and real-time processing constraints, limited setup time, suboptimal acoustics and unexpected changes, to name a few. Intuitive interfaces and processing tools that offer a fast and reliable workflow are therefore a key selling point of many modern digital mixing consoles. At Midas, we believe that the potential of advanced signal processing and artificial intelligence has not yet been fully exploited. In this presentation we will talk about our approach to the development of smart interfaces and smart audio effects for live audio mixing, and demonstrate our first commercially available products.

 

6.30pm: Navigating the “mix-space” in multitrack recordings.
Alex Wilson, University of Salford.

Despite continued research and development of automated music production tools, relatively little is known about the nature of quality perception in music production practices. This talk will describe a statistical analysis of a large dataset of alternative human-made mixes, undertaken to determine the dimensions of mix variation and how they relate to quality. The formulation of a “mix-space” will also be described, which provides insight into how level balances are achieved in a simple mixing exercise, as well as a conceptual framework for the future study of the complex art of mix engineering.


Discussions on Acoustic Modelling

Date: 3 Sep 2015
Time: 18:30

Location: Anglia Ruskin University
Anglia Ruskin University East Road, CB1 1PT
Cambridge

This session will include 3 short papers on the topics of acoustic absorber design, the use of virtual acoustics for the study of medieval drama and the application of computer models to the study of cathedrals.

Holistic Acoustic Absorber Design: from modeling and simulation to laboratory testing and practical realization
Dr Rob Toulson (CoDE Research Institute, Anglia Ruskin University)
Dr Silvia Cirstea (VERU, Anglia Ruskin University)

Mathematical models for many acoustic absorption methods have previously been developed; however, there is very little accessible data describing how those models perform in a practical implementation of the design. This paper describes the development of a novel slotted-film sound absorber and presents the results at each design iteration. Initially, a number of mathematical models are considered in order to optimise the design. The modelled designs are laboratory tested with an impedance tube system and, finally, the practical acoustic absorber design is tested in an ISO-accredited reverberation chamber. The results demonstrate that the simulation and impedance tube results match very closely, whereas the acoustic absorption of the practical implementation is lower.
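The impedance-tube step mentioned above is commonly done with the two-microphone transfer-function method (ISO 10534-2): the measured transfer function between the microphones yields the reflection coefficient and hence the normal-incidence absorption coefficient. A sketch of that calculation follows; the geometry values are illustrative, not those of the paper's rig.

```python
import numpy as np

# Sketch of the two-microphone transfer-function method (ISO 10534-2)
# for an impedance tube. H12 is the measured transfer function between
# the two microphones; geometry values below are illustrative.

def absorption_coefficient(H12: complex, freq: float,
                           s: float = 0.05,    # mic spacing (m)
                           x1: float = 0.10,   # sample to farther mic (m)
                           c: float = 343.0) -> float:
    """Normal-incidence absorption coefficient at one frequency."""
    k = 2 * np.pi * freq / c
    H_i = np.exp(-1j * k * s)     # transfer function of the incident wave
    H_r = np.exp(+1j * k * s)     # transfer function of the reflected wave
    R = (H12 - H_i) / (H_r - H12) * np.exp(2j * k * x1)
    return 1.0 - abs(R)**2        # alpha = 1 - |R|^2

# Sanity check: a perfectly absorbing sample reflects nothing, so the
# measured H12 equals the incident-wave transfer function and alpha = 1.
f = 500.0
k = 2 * np.pi * f / 343.0
print(absorption_coefficient(np.exp(-1j * k * 0.05), f))  # -> 1.0
```

In practice the measurement sweeps frequency and averages repeated runs, which is where the gap between modelled and realised absorption reported in the paper shows up.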

Medieval Drama Acoustics
Dr Mariana Lopez (CoDE Research Institute, Anglia Ruskin University)

Research on pre-seventeenth century theatre acoustics has focused either on Greek and Roman or Elizabethan theatre, leaving aside the variety of performance venues used in medieval times. This paper focuses on research on the York Mystery Plays, a series of plays performed in the streets of York (UK) from the fourteenth to the sixteenth century. The application of on-site impulse response measurements and the design of a multiplicity of computer models are analysed according to ISO 3382-1:2009 parameters. A methodology for the study of medieval performance spaces is presented, and results demonstrate that organisers of medieval plays were very likely aware of the impact of the staging configuration on acoustics, allowing them to make decisions that considered the aural dimension of the plays.
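The ISO 3382-1 parameters cited above are computed from measured or simulated impulse responses. The core step for reverberation time is Schroeder backward integration of the squared impulse response, followed by a line fit over part of the decay. The sketch below demonstrates this on a synthetic exponential decay standing in for a measured response; the fit range and sample rate are illustrative.

```python
import numpy as np

# Sketch: reverberation time from an impulse response, the core of the
# ISO 3382-1 parameters. Schroeder backward integration gives the energy
# decay curve (EDC); a line fit over -5..-25 dB is extrapolated to 60 dB.

def t60_from_ir(ir: np.ndarray, fs: int,
                lo_db: float = -5.0, hi_db: float = -25.0) -> float:
    edc = np.cumsum(ir[::-1]**2)[::-1]        # Schroeder backward integral
    edc_db = 10 * np.log10(edc / edc[0])
    i0 = int(np.argmax(edc_db <= lo_db))      # first sample below -5 dB
    i1 = int(np.argmax(edc_db <= hi_db))      # first sample below -25 dB
    slope = (edc_db[i1] - edc_db[i0]) / ((i1 - i0) / fs)   # dB per second
    return -60.0 / slope                      # time to decay 60 dB

fs = 8000
t = np.arange(fs) / fs                        # 1 s of synthetic IR
ir = np.exp(-3 * np.log(10) * t / 0.5)        # decays 60 dB in 0.5 s
print(round(t60_from_ir(ir, fs), 2))          # ~0.5 s, as constructed
```

Real measurements add octave-band filtering, noise-floor handling and averaging over source and receiver positions, but the estimator is the same.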

Acoustics of large places of worship: Spanish Cathedrals
Lidia Alvarez-Morales (PhD Candidate, University of Seville)

The last few years have seen increasing interest in the acoustic behaviour of heritage buildings and in how such studies can improve our understanding of the cultural use of space. This work describes the methodology used for studying the acoustic environment of the Catholic cathedrals of southern Spain, which are nowadays conceived as multifunctional enclosures. Sound propagation in these large reverberant spaces is assessed through the analysis of monaural and binaural impulse responses measured throughout the audience area, considering sound source positions that correspond to the liturgical, musical and cultural activities that take place in the buildings today. Furthermore, a 3D simulation model is created for each cathedral and used to assess the influence of occupancy and to evaluate different rehabilitation options and acoustic treatments.


Large scale sound system design

Date: 16 Sep 2015
Time: 18:00

Location: WSP Holborn, WC2A 1AF
70 Chancery Lane
London

A presentation by Simon Kahn of Mott MacDonald.

Large scale sound systems are required for entertainment, for information transmission, or for emergency communication in spaces such as large buildings, stations and airport terminals, shopping malls, arenas and stadiums, and at festivals. This presentation will discuss the principles of designing systems and the challenges and opportunities of complex systems and spaces.
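One back-of-envelope calculation underlying such designs is how level falls off with distance: a point source loses 6 dB per doubling of distance, which is why large venues turn to line arrays and distributed, delayed fills rather than a single louder source. A small sketch with illustrative figures:

```python
import math

# Inverse-square law sketch for sound system coverage planning.
# Figures are illustrative, not from the presentation.

def spl_at(d: float, spl_1m: float) -> float:
    """SPL at distance d (metres) for a point source measured at 1 m."""
    return spl_1m - 20 * math.log10(d)

# A source producing 120 dB SPL at 1 m:
for d in (1, 10, 100):
    print(f"{d:>3} m: {spl_at(d, 120.0):.1f} dB SPL")
# Each doubling of distance costs 6 dB, so covering the back of an arena
# from the stage alone would need enormous levels at the front -- hence
# line arrays (closer to 3 dB per doubling in their near field) and
# delayed fill loudspeakers.
```

Real designs layer on directivity, air absorption, room acoustics and intelligibility targets, which is where the challenges discussed in the presentation arise.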

NOTE: As this is a joint event with the IOA (Institute of Acoustics), it will be held on Wednesday the 16th of September. The event is free to both members and the general public. As this is a joint IOA/AES event, numbers are limited, and places will be allocated on a first-come basis. To register, please go to: http://www.eventbrite.co.uk/e/institute-of-acoustics-and-audio-engineering-society-evening-meeting-tickets-18201989641

If you register and are no longer able to attend the event, please cancel your ticket via the same link or contact Vicky.Stewart@atkinsglobal.com so that your place can be offered to someone else.


Open Source Entrepreneurship and the OWL

Date: 13 Oct 2015
Time: 18:30

Location: Eng 209, Engineering Building, Queen Mary University of London
Mile End Road
London

A talk by Martin Klang on the subject of “building an online community of musicians and coders around an embedded audio device”.

Martin will explain and demonstrate how the Hoxton OWL project is integrating the hardware experience with their online community, using Web Audio to run patches in the browser and Web MIDI to control the device, and how they are extending the hardware with visual, interactive virtual interfaces.

Time: 18:30 for 19:00 lecture start.

Getting there: The closest tube stations are Stepney Green (to the left, 5 minute walk, District and Hammersmith & City lines) and Mile End (to the left, 8 minute walk; Central, District and Hammersmith & City lines). Several bus routes stop at the campus as well.
Please note that no general parking is available on campus and the surrounding area is a controlled parking zone.

 

QMUL Engineering Building

The entrance of the Engineering Building


AES-Midlands Workshop on Networked Audio

Date: 4 Nov 2015
Time: 13:00

Location: Room MP388 Birmingham City University
Millennium Point
Birmingham

Audio-over-network technology is becoming increasingly popular in fields such as professional music production and live sound. Digital networks are also fast becoming essential platforms for supporting live music performance. Within this field there are many interesting technical challenges, such as the management of quality of service, interoperability and latency. This workshop brings industry experts and academics together to share their views and experiences of audio networking.

Location: Rooms MP388/MP391, Birmingham City University, Millennium Point, Birmingham

Provisional Schedule (04-11-15)

  • 13:00-13:30 Registration
  • 13:30 – 14:15, Tony Green (CBA Springwood) – “Building a network infrastructure to support Music Festivals: Challenges and Issues”
  • 14:15 – 15:00, Simon Short (Focusrite) – “Building a network for professional audio production and live audio: the RedNet/Dante approach”

Abstract: “This presentation will highlight how Dante has become a continuously requested standard in the Industry, along with its methods and implementation. This relates to recording studio practices, live audio and live recording to name a few”

  • 15:00 – 15:15, Coffee break
  • 15:15 – 16:00, John Grant (AES, Senior Fellow) – “Reclaiming digital from the IT industry”

Abstract: “The fundamental differences between the requirements of live digital media (AV) and functions such as web browsing and downloading of files (IT) are often not well understood. IP-based networking was developed for IT, but is increasingly being used for AV and other functions which are sensitive to latency and packet loss. The talk will examine the differences between AV and IT, and outline a system which provides an appropriate service to each kind of traffic. It will also briefly survey the work on this topic in a number of standards bodies, including a new initiative by the mobile industry: their next generation (5G) aims to provide sub-millisecond latency on the air interface, and they recognise they need a better system than the current IP-based transport to give adequate end-to-end performance.”

  • 16:00 – 16:30, “Ask the Experts” panel session: the audience can put any audio networking questions to the panel members.
  • 16:30 – 17:30, Networking, with refreshments and demos

 

Registration is free, everyone is welcome. Please register here.

 


Front of House Sound Special

Date: 4 Nov 2015
Time: 18:30

Location: Edinburgh Napier University (Merchiston Campus)
Lecture Theatre F12
Edinburgh

PLEASE NOTE ROOM CHANGE: Due to the demand for tickets, this event will now take place in ROOM A17 on Merchiston Campus, Edinburgh Napier University.

The next AES Scotland event will be a special event focused on Front of House sound. It will feature two talks by staff from BBC Scotland and The Warehouse Sound. Registration is free and everyone is welcome. Please register here.

 

Noise levels for FOH engineers

Andrew Britton from BBC Scotland explains the law and regulations for front of house sound

A life in Front of House sound

Derek Blair and Pete Harris from Warehousesound explain what it’s like working for one of the UK’s busiest PA companies

 

An event flyer is available here: AES Scotland FOH Seminar

Provisional Schedule

18.30 doors open/networking

19.00 Safe sound levels for FOH Andrew Britton

19.45 What is it like to work FOH – a real life experience. Pete Harris and Derek Blair from Warehouse

20.30 Questions

21.00 Finish


Analogue Compression - Theory and Practice

Date: 12 Nov 2015
Time: 18:30

Location: British Grove Studios
20 British Grove
London


This event will consist of short presentations, panel discussion and demonstrations of various analogue compressor designs.

 

Panellists: 

• Tim de Paravicini – Founder of EAR Yoshino
• Carlos Lellis – Programme Coordinator at Abbey Road Institute / Author / Independent Music Professional
• David Stewart – Studio Manager, British Grove Studios
• Brian Gibson – 1967–1998: Technical Department at Abbey Road; 1998–present: freelance technical engineer specialising in EMI equipment
• Charlie Slee – AES British Section Committee / Founder of Big Bear Audio
• Nik Georgiev – AES British Section Committee Chair / Lecturer in Audio and Music Production at SAE Institute London / Plug-in Developer at Acustica Audio

 

The event is likely to run longer than our usual monthly lectures (estimated to finish at 9:30pm).

Please keep in mind that, due to expected high demand and the limited number of places, this event is available to members only and requires registration. Tickets are allocated on a first-come, first-served basis.

 

AES Members can book tickets here. Please note that there are no more tickets available and you could only add your name to the wait list. If a ticket becomes free the first on the list will get an automatic notification to confirm attendance within 6h. Without confirmation after 6h the ticket will automatically be offered to the next person on the list.

 

Tickets booked by non-members will be canceled without any notice.

 

If you register and are no longer able to attend the event, please cancel your ticket by using the registration page so that your place can be offered to someone else.

 


Audio Signal Processing with E-Textiles

Date: 26 Nov 2015
Time: 18:30

Location: Lab 214, Anglia Ruskin University
Anglia Ruskin University, East Road, CB1 1PT
Cambridge

Becky Stewart will give an introduction to the field of electronic textiles, focusing on sharing her work combining music with textiles. She will show Solo Disco Scarf and Sound Direction, both developed as tools to teach simple analogue audio signal processing through sewing and crocheting circuits. She will also show her current work using textile interfaces as capacitive touch control surfaces for music and dance performance.

Becky Stewart is an engineer, developer, and educator working with physical computing and specialising in e-textiles. She completed her PhD in Acoustics, Spatial Audio and Interfaces for Music with the Centre for Digital Music at Queen Mary, University of London in 2010, an MSc in Music Technology at the University of York in 2006, and a BMus in Music Engineering Technology and Computer Science at the University of Miami in 2005. Through Anti-Alias Labs she works on creative technology projects, and she is currently working with Di Mainstone on the Human Harp, a project to transform suspension bridges into playable harps that has been covered by the BBC and The New York Times. Becky is also a co-founder and the Head of Learning at Codasign, an education company that teaches coding and electronics workshops in museums and art galleries. She recently released the book Adventures in Arduino (Wiley), teaching programming and electronics to ages 11–15.


Lecture by Chris Watson and Christmas Social

Date: 8 Dec 2015
Time: 20:00

Location: Dolby Europe’s London Office
4–6 Soho Square
London W1D 3PZ


20:00 – 21:15 – Drinks/Social networking

21:15 – 22:30 – Lecture and sound demos by Chris Watson, one of the world’s leading recordists of wildlife and natural phenomena.

NOTE: This event is open both to members and non-members. Registration is not required.


Catering and venue for this event are kindly provided by Dolby Labs Europe.


Good vibrations: bringing radio drama to life 

Date: 16 Dec 2015
Time: 18:30

Location: Lab 214, Anglia Ruskin University
Anglia Ruskin University, East Road, CB1 1PT
Cambridge

There has been relatively little disruption to sound design and radio production methods since the introduction of digital and non-linear audio editing in the 90s. However, new technologies and audience demand for new experiences mean audio production and sound design are currently undergoing a mini-revolution.

In this presentation Eloise Whitmore will discuss how technological change has impacted the production process and how, in turn, sound design influences storytelling. The talk will cover the resurgence in demand for binaural audio and look at cutting-edge production methods such as object-based audio and 3D sound design. It will be illustrated by Eloise’s work on a project with Björk at MoMA, and numerous projects for the BBC.


AES Scotland Christmas Lecture 2015

Date: 15 Dec 2015
Time: 10:30

Location: Edinburgh Napier University (Craiglockhart Campus)
Glenlockhart Road
Edinburgh

Can’t make it to Edinburgh? Thanks to the support of BBC Scotland, the lecture will be streamed live on the Internet: AES Scotland Christmas Lecture Live Stream


Title: You are not supposed to do that!! – how audio recording has adapted to advances and changes in technology

This year the AES Scotland Christmas lecture will be given by Dr Paul Ferguson and Mr Dave Hook of Edinburgh Napier University.

Paul and Dave will take a light-hearted look at how the creativity of users moves music technology in directions unimagined by the original designers. They will look at such things as tape, autotuning, turntables and electronic hacking, and end up with some possibilities for audio across distance. Event flyer here.

The event will take place on Tuesday 15th Dec (audience seated by 10:30am) and will be held in the iconic Lindsay Stewart Lecture Theatre on the Craiglockhart Campus. You can register for free using our Eventbrite page. All welcome.


Inaugural AES South/Joint IoA Southern Branch: Dolby Atmos Demonstration

Date: 15 Dec 2015
Time: 18:15

Location: Southampton Solent University
Southampton Solent University, East Park Terrace
Southampton

Lecture by James Shannon (Dolby Laboratories)

This is the inaugural lecture of the AES Southern Group, held jointly with the IoA Southern Branch. Anyone interested in participating in AES South is encouraged to come along.

This lecture is presented by James Shannon from Dolby Laboratories, who will be discussing spatial audio in cinema and demonstrating examples of the newly installed Dolby Atmos system in the cinema at Southampton Solent University. The lecture will be followed by a screening in Dolby Atmos of Roger Waters’ ‘The Wall’, released earlier this autumn. This is a rarity, as it has been screened in very few UK cinemas! The event will be held in Lecture Theatre 1, above Costa Coffee at the East Park Terrace entrance.

Advance registration for this event is required.  Registration will close 48 hours before the event. https://www.eventbrite.co.uk/e/joint-ioa-southern-branch-and-inaugural-aes-southern-group-meeting-tickets-19275180584    Event is 6:15pm for a 7pm start.   As it is the last event before Christmas, wine and mince pies will be provided before the event!