Meetings Archive – 2012

Ambisonics — the Once and Future System?

Date: 21 Feb 2012
Time: 18:00

Location: School of Music, University of Leeds
University of Leeds
12 Cavendish Rd

Lecture by Dave Malham, Music Research Centre, University of York.

Ambisonics has been around as a system since the early 1970s, although its basics in some ways date back to Alan Blumlein’s work on stereo and Harry F. Olson’s development of directional microphones at the start of the 1930s.

Tarred with the same brush as the Quadraphonics debacle of the 1970s, it was kept alive by a small band of enthusiasts who realised the much greater capabilities inherent in the system. This continued to be the case until the advent of low-cost digital technology towards the end of the 20th century meant that it became, at last, accessible to many more people. In the past decade far more papers have been published on Ambisonics and Ambisonics-related subjects than in the whole of the preceding three decades. Does this mean it has finally triumphed?

Dave Malham has written VST plug-ins for Ambisonic processing, the ‘MRC Stereometer’ (a K-system metering plug-in), and (with Matt Paradis) the ‘ambilib’ Ambisonic processing library for PD and Max/MSP. He also holds a patent, WO02085068, for the Ambisonic Sound Object Format. Dave teaches digital audio, signal preservation, sound spatialisation and recording techniques on the Music Technology MA course at York.


Technical Visit to BBC Scotland's TV Studios

Date: 16 Feb 2012
Time: 18:30

Location: To be confirmed
To be confirmed
UK

Our regional group in Scotland has arranged a Technical Visit to BBC Scotland’s studio facilities at Pacific Quay, Glasgow on Thursday 16th February at 6.30pm. BBC staff will talk about the design and work of the TV studios, followed by a tour of the facility.


Loss of our Musical Heritage? – The Rise of the Digital Remaster

Date: 22 Mar 2012
Time: 19:00

Location: Anglia Ruskin University, Room Mel001
Mellish Clark Building
Cambridge CB1 1PT

Lecture by John Ward and Bill Campbell, Digital Remasters.

Teaching Music Production and Sound Engineering requires students to be able to access and hear milestone recordings from the past to inform their learning and practice. In a wider context, the discerning audiophile also wishes to hear such recordings as close as possible to the original studio masters. Unfortunately, to some extent, all they can now purchase are digital remasters. Remasters are marketed mainly as improvements to the original releases, but in many cases this claim is very debatable.

Recordings such as Pink Floyd’s Dark Side of the Moon, Miles Davis’ Kind of Blue, David Bowie’s Hunky Dory, Queen’s A Night at the Opera and Led Zeppelin’s Physical Graffiti are seminal works which listeners should be able to hear in a way that reveals the passion in the performance and the skill and artistry in the engineering and production. This paper will argue that extreme digital remastering is robbing people of access to the true beauty of highly important and seminal recordings, presenting them instead with modern remasters that in some cases lose much of the feel of the originals. It will also suggest that such radical remastering is actually cultural vandalism that would not be tolerated in other art forms – imagine the outcry if the Mona Lisa were retouched in such a way that all the blues were overemphasised, the contrast reduced and the brightness increased.

There are a number of ways remastering is approached.

One is to attempt to ‘clean up’ the original mix, removing tape hiss and repairing tape dropouts, generally removing the “patina”, but without any radical alteration of EQ or dynamics. This method does not particularly trouble the author, although some may argue that it is an “Intentional Fallacy”.

Another is to alter the studio master quite radically with digital EQ, compression and limiting. It is this latter approach that is most widely used, and which the author finds most questionable; examples of it will form the main focus of the presentation to ASARP.

Other sometimes quite radical methods are used, especially on very old recordings.

The talk will be illustrated with A/B comparisons between high-quality recordings of original vinyl releases from the author’s own extensive collection and the digital remasters available on CD. These will include snippets of some of the recordings named above and others, and will demonstrate how in some cases the feel, groove and soul of the originals have been altered. The recordings from LPs have been made at 24-bit/96 kHz resolution and dithered down to 16-bit/44.1 kHz for playback, to enable direct comparisons with tracks from the remastered CDs. Comparative analyses of dynamic range and frequency spectra will be presented to show quantitatively and qualitatively how digital remastering alters the sound compared to the originals, in some cases reducing the dynamic range to increase loudness and boosting high frequencies to produce a false perception of higher fidelity. These analyses will be used to explain the demonstrable perceived differences in voices, instruments, and rhythmic feel and groove.
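
As a purely illustrative aside (these are not the presenters' tools, and the file names are hypothetical), the kind of quantitative comparison described above can be sketched in a few lines of Python: a crest-factor figure and a simple short-term level spread for an original transfer versus its remaster.

# Rough sketch of a dynamic-range comparison between two versions of a track.
# Not the presenters' method; file names are hypothetical placeholders.
import numpy as np
import soundfile as sf

def level_stats(path, window_s=3.0):
    x, fs = sf.read(path)
    if x.ndim > 1:                 # mix to mono for a simple single-figure view
        x = x.mean(axis=1)
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x**2))
    crest_db = 20 * np.log10(peak / rms)
    # Short-term RMS in consecutive windows, then the spread between the
    # quiet (10th percentile) and loud (95th percentile) parts of the track.
    n = int(window_s * fs)
    frames = x[: len(x) // n * n].reshape(-1, n)
    st_rms_db = 20 * np.log10(np.sqrt(np.mean(frames**2, axis=1)) + 1e-12)
    spread_db = np.percentile(st_rms_db, 95) - np.percentile(st_rms_db, 10)
    return crest_db, spread_db

for label, path in [("vinyl transfer", "original_24_96.wav"),
                    ("CD remaster", "remaster_16_44.wav")]:
    crest, spread = level_stats(path)
    print(f"{label}: crest factor {crest:.1f} dB, level spread {spread:.1f} dB")

A heavily limited remaster would typically show a markedly lower crest factor and a narrower level spread than the vinyl transfer.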


Whose voice is it anyway?

Date: 13 Mar 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by Jeff Bloom, Synchro Arts.

A recording of this lecture is available here (45MB mp3)

Editing audio to fix timing and tuning problems has now become so commonplace that listeners would be hard-pressed to know when the timing, pitch or other characteristics of a recorded actor or singer have been manipulated to be more accurate or to simply sound better.

However, even with sophisticated software tools, in many situations the editing work required to achieve such polished precision can still be tedious and time-consuming, and can require considerable skill.

In this talk new processing techniques will be demonstrated which offer automated and precise solutions to certain common situations. These techniques involve automatically extracting, from an accurate ‘guide’ voice or instrument recording, selected characteristics such as timing, pitch, vibrato and loudness, and imposing these features on other less accurate recordings of similar performances.

This approach has many applications in consumer and professional audio processing products, including the following…

For professional applications:
1) Double and triple (or more) tracks can be made quickly to match an accurate lead vocal, with adjustable precision.
2) Alternative performance characteristics can be transferred to a lead vocal.
3) Prosodic features (including timing, inflection and stress) of recorded dialogue can be replaced with different but natural-sounding features transferred from another recording.

For consumers:
4) In websites or mobile applications, recordings of amateur singers can be automatically transformed to take on the characteristics of a professional vocalist.
5) A language student’s recording of his or her attempt at mimicking a teacher’s recorded sentence can be modified so that the student’s timing and pitch sound like the teacher’s, providing constructive feedback.
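
As a purely illustrative sketch of the general idea (a textbook dynamic time warping pass over short-term energy envelopes, not Synchro Arts' actual algorithm), aligning a less accurate take against a 'guide' might look like this in Python:

# Toy guide-based time alignment: dynamic time warping over RMS envelopes.
# Illustrative only; real products use far richer features and resampling.
import numpy as np

def envelope(x, frame=512):
    """Short-term RMS envelope (one value per frame)."""
    n = len(x) // frame * frame
    return np.sqrt(np.mean(x[:n].reshape(-1, frame) ** 2, axis=1))

def dtw_path(guide_env, dub_env):
    """Return the minimum-cost alignment path between two envelopes."""
    g, d = len(guide_env), len(dub_env)
    cost = np.abs(guide_env[:, None] - dub_env[None, :])
    acc = np.full((g, d), np.inf)
    acc[0, 0] = cost[0, 0]
    for i in range(g):
        for j in range(d):
            if i == 0 and j == 0:
                continue
            best = min(acc[i - 1, j] if i else np.inf,
                       acc[i, j - 1] if j else np.inf,
                       acc[i - 1, j - 1] if i and j else np.inf)
            acc[i, j] = cost[i, j] + best
    # Trace back from the end to recover the frame-to-frame mapping.
    path, i, j = [(g - 1, d - 1)], g - 1, d - 1
    while i or j:
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((m for m in moves if m[0] >= 0 and m[1] >= 0),
                   key=lambda m: acc[m])
        path.append((i, j))
    return path[::-1]  # list of (guide_frame, take_frame) pairs

The resulting frame-to-frame path indicates how much each region of the second take must be stretched or compressed to sit on the guide's timing; imposing pitch, vibrato or loudness would follow an analogous matching step on those features.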

Jeff Bloom — who in 1984 invented the first audio time-alignment algorithms upon which these new techniques are based — will also chart the history of automatic time alignment in dialogue replacement and music applications.


DSP – Why so Hard?

Date: 10 Apr 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by Peter Eastty, Oxford Digital.

A recording of this lecture is available here (51MB mp3)

If you’ve ever wondered why audio DSP programming is so hard when the algorithms are so simple, this is the place for you. Hundreds of strange and wonderful audio processors have been developed over the past four decades and the presenter has struggled with dozens of them.

In order to learn from our mistakes, this master class will tour examples of gross bad practice (suitably anonymised to protect the guilty), and in doing so we’ll extract some general principles useful to those who will design audio DSPs in the future. As a practical example of what can be achieved, we’ll go from simulator-based algorithm development to listening to production-quality code in a matter of minutes.
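
To illustrate the contrast in the title, the core algorithms really can be written in a handful of lines in a high-level language; the difficulty the lecture addresses lies in getting code like this onto production DSP hardware. A minimal, purely illustrative biquad filter in Python (not taken from the lecture):

# Minimal direct-form I biquad: the kind of 'simple algorithm' the title
# refers to. The hard part is running it efficiently on a real audio DSP.
def biquad(x, b0, b1, b2, a1, a2):
    """Filter x with y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]."""
    y = []
    x1 = x2 = y1 = y2 = 0.0
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y.append(yn)
    return y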


Applications of thermionic valves in modern recording studios

Date: 25 Apr 2012
Time: 19:00

Location: Anglia Ruskin University, Room Mel001
Mellish Clark Building
Cambridge CB1 1PT

Lecture by Charlie Slee, Thermionic Culture.

In a world where digital technology has transformed the recording studio and where outboard equipment has been replaced by the DAW, this lecture will look at the importance of valve outboard equipment in the modern recording studio.

It will give an introduction to circuit design and the challenges faced when using thermionic valves in the recording studio, and will show their benefits and excellence when used properly.

We will take an in-depth look at classic designs and their design philosophies, use detailed circuit analysis to give an understanding of different topologies and their uses, and explore performance improvements using modern techniques and technologies.


Visible Sound

Date: 8 May 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by Ian Butterworth, National Physical Laboratory (NPL).

A recording of this lecture is available here (mp3)

Acoustic designers have increasingly accurate and rapid predictive tools at their disposal, enabling ever more impressive, high-fidelity acoustic products. However, the acoustic validation of such techniques still relies upon scanning a physical microphone through the sound field, either by laborious manual adjustment or with complex automated positioning hardware. The difficulties of this approach usually result in limited spatial sampling and thus limited validation.

Ian Butterworth will be presenting recent work from the NPL that has enabled the rapid, remote and non-perturbing measurement of a sound field using laser-based techniques. The methods he has developed allow you to see the propagating waves, much as you would see a surface wave move in a ripple tank.

Exploiting the acousto-optic effect through the use of high sensitivity Laser Scanning Vibrometers, and a novel experimental setup, the new Rapid Acousto-Optic Scanning (RAOS) technique allows a sound field to be autonomously scanned, providing time-gated spatially distributed data that gives a picture of the propagating acoustic field in the time domain. Furthermore, complex time-averaging can also be employed to show averaged directivity maps.

The complexities of scanning a 3-dimensional field are discussed with comparison to computer simulations, and studies of various acoustic sources and reflective artefacts, such as QRD diffusers, are shown.

NPL’s Acoustics department undertakes leading-edge research to develop new and improved measurement methods, and holds the UK primary standard for various acoustic measurements, covering Sound-in-Air, Ultrasonics, and Underwater Acoustics.


Hearing is Believing

Date: 15 May 2012
Time: 18:00

Location: School of Music, University of Leeds
University of Leeds
12 Cavendish Rd

Lecture by Dr Rob Toulson, Anglia Ruskin University and RT Sixty Ltd

A number of concepts of analogue and digital audio can be discussed and researched with reference to simple sine wave examples and ideal signal flow scenarios. However, real audio signals are complex waveforms involving detailed frequency spectra and transient envelopes.

This lecture discusses some of the simple theory associated with digital sampling, dynamic range compression, comb filtering and equalisation, and evaluates the difference between simple sine wave analysis and the results found when using real audio examples. In many cases the taught theory is an excellent grounding for educational purposes, but often – in reality – it is simply what we hear that truly matters.
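
A small illustrative example (not taken from the lecture) of the sine-versus-real-audio point, using a delay-and-add comb filter: a single sine wave lands either on a peak or in a notch and so tells only part of the story, whereas broadband material exposes the whole comb at once.

# Delay-and-add comb filter y[n] = x[n] + x[n - delay].
# Magnitude response: |H(f)| = |1 + exp(-j*2*pi*f*delay/fs)| = 2*|cos(pi*f*delay/fs)|
import numpy as np

fs, delay = 48000, 48                              # 48-sample delay = 1 ms at 48 kHz
freqs = np.array([100, 500, 1000, 1500, 2000])     # example probe frequencies
mag = np.abs(1 + np.exp(-2j * np.pi * freqs * delay / fs))
for f, m in zip(freqs, mag):
    print(f"{f:5d} Hz: {20*np.log10(m + 1e-12):8.1f} dB")
# A 500 Hz sine sits exactly in a notch, while a 1 kHz sine is boosted by 6 dB:
# individual sines tell very different stories from a broadband signal, which
# is shaped by every notch and peak at once.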


Spatial Audio Applications

Date: 24 Apr 2012
Time: 18:30

Location: Glasgow Caledonian University (City Campus)
70 Cowcaddens Road
Glasgow G4 0BA

The evening will feature two lectures based on posters presented at the recent UK conference, Spatial Audio in Today’s 3D World.

An Ambisonic Format Reproduction Array Standard
Andrew J. Horsburgh

The focus of this paper is to detail the tools and listening-environment requirements for using the Ambisonic format within common digital audio production workflows. By specifying layouts, while allowing for qualitative preferential changes, Ambisonic speaker arrays of up to third-order reproduction can be used in line with current audio listening standards. These standards relate to speaker positioning, sound pressure level and frequency response, as well as optimal acoustical conditions. Ambisonics allows for dynamic speaker feed decoding, using complex matrices to optimise the soundfield reproduction for the given speaker array. The layouts can be planar (horizontal) or periphonic (spherical). Standardising Ambisonic speaker array layouts will ensure increased accuracy in translation between production and listening environments.

The adoption of Ambisonics as an array-layout-independent, programme-material-agnostic, universally acceptable reproduction format requires a symbiotic process similar to that followed by other discrete audio formats. The adoption of the ITU-R BS.775-1 (5.1) layout specification for film and music is one such instance of a format ensuring quality across both production and listening environments. Beyond suggesting standard listening environments, listening levels and speaker calibration, the author provides an insight into the optimisation of numerically large speaker arrays.
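
For readers unfamiliar with the matrix decoding mentioned above, a textbook first-order horizontal example (not taken from the paper, and using one common channel ordering and W-scaling convention) looks like this in Python:

# First-order horizontal B-format (W, X, Y) decoded to a regular speaker ring.
# One common scaling convention is assumed; real decoders optimise further.
import numpy as np

def ring_decoder(n_speakers):
    az = 2 * np.pi * np.arange(n_speakers) / n_speakers   # speaker azimuths
    # One row per speaker, one column per B-format channel (W, X, Y).
    return np.column_stack([np.full(n_speakers, 1 / np.sqrt(2)),
                            np.cos(az),
                            np.sin(az)]) * (2 / n_speakers)

theta = np.deg2rad(45)                                     # source panned to 45 degrees
bformat = np.array([1 / np.sqrt(2), np.cos(theta), np.sin(theta)])
feeds = ring_decoder(8) @ bformat                          # one gain per loudspeaker
print(np.round(feeds, 3))

Higher-order and irregular arrays use larger matrices whose coefficients must be optimised, which is where questions of standardised layouts and listening conditions arise.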

A prototype third order Ambisonic delay plug-in using randomising and oscillating positional encoding
Robert E. Davis and D. Fraser Clark

Ambisonics is a scalable format for spatial sound reproduction with extensive applications in sound design and music composition; however, there remains a lack of practical DAW plug-in tools to enable Ambisonics to be utilised for content production. The present paper will discuss the design and implementation of a prototype third-order Ambisonic delay plug-in effect. The plug-in allows a sound source to be distributed throughout the soundfield using randomising and oscillating positional encoding.

The design of the processing system will be explained and its implementation using a graphical prototyping environment will be outlined. Test data is presented to describe the output of the plug-in in relation to an arbitrary input signal, and suggestions for further improvements of the plug-in are given.
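
As an illustration of the encoding idea only (a first-order sketch for brevity; the plug-in described above is third order, and this is not its implementation), 'randomising and oscillating positional encoding' of delayed repeats might be sketched as:

# Each delayed repeat of a mono input is encoded into first-order B-format at
# an azimuth that oscillates over time, with a random offset per tap.
import numpy as np

fs = 48000
t = np.arange(fs * 2) / fs                     # two seconds of time axis
mono = np.random.randn(len(t)) * 0.1           # placeholder input signal

rate_hz, depth_rad, n_taps, tap_delay = 0.5, np.pi / 2, 4, int(0.25 * fs)
rng = np.random.default_rng(0)

W = np.zeros(len(t)); X = np.zeros(len(t)); Y = np.zeros(len(t))
for k in range(1, n_taps + 1):
    delayed = np.concatenate([np.zeros(k * tap_delay), mono])[: len(t)]
    # Azimuth oscillates sinusoidally and gets a random offset per tap.
    az = depth_rad * np.sin(2 * np.pi * rate_hz * t) + rng.uniform(0, 2 * np.pi)
    gain = 0.7 ** k                            # simple feed-forward decay per tap
    W += gain * delayed / np.sqrt(2)
    X += gain * delayed * np.cos(az)
    Y += gain * delayed * np.sin(az)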


Audio Over IP - How it can work

Date: 12 Jun 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by Peter Weitzel, Weitzel.tv

A recording of this lecture is available here (mp3)

Peter Weitzel will relate his experiences of sending audio over IP during the past 13 years. He will share the lessons he and his team at the BBC (later Siemens) learned using various systems to share audio feeds with broadcasters in the UK and USA.

Peter’s lecture aims to demystify Audio over IP and show that there can be some surprisingly cost effective solutions for an audio engineer with a good grasp of the fundamentals, and who observes three basic rules when designing, building and operating Audio (or Video) over IP systems.


Thoughts from an Audio Networking Agnostic

Date: 10 Jul 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by Andy Cooper, Yamaha Commercial Audio Support Centre – Europe.

A recording of this lecture is available here (mp3)

For designers of audio systems this can be considered an exciting time – or a confusing and bewildering one. As digital audio networking has become the norm during the past 10 years or so, a number of protocols have emerged, while others are still being developed. This has led to many new and great possibilities… and to many uncomfortable incompatibilities. All offer significant advantages over the analogue alternative, but which types of audio networking will still be with us in another 10 years?

This lecture, presented by someone who has first-hand experience of virtually all the major networking formats, will aim to demystify the differences, show the strengths and uncover the weaknesses of each protocol.



Loudspeaker Design: Tradition Versus Science

Date: 10 Jan 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

Lecture by John Watkinson.

We regret that no recording is available of this lecture.

Despite enormous progress in understanding how the human auditory system works, most present-day loudspeakers cling to outmoded and discredited techniques that have not changed in decades.

The availability of advanced materials and design tools means that the task of advanced speaker design has never been easier, but the necessary steps simply are not taken.

This presentation will look at the criteria for accurate sound reproduction and will show that these criteria can be met. Demonstrations of some alternative loudspeaker designs will be given.


Meeting at SSL, Begbroke: Analogue - five years to obsolescence (1986) / Codecs, Compression and the Pro-Codec

Date: 26 Jun 2012
Time: 17:30

Location: Solid State Logic
25 Spring Hill Road
Begbroke, Kidlington, OX5 1RU

Two lectures by SSL’s Niall Feldman and Josep Maria Sola from Sonnox.

We look forward to welcoming many of our members from the Midlands area to this event. Non-members are also welcome to attend.

The lectures will be preceded by a members-only tour of SSL’s studio facilities, starting at 4.00pm. (Registration for the tour is essential. See here for details.)


Vintage Sound Recording Technology: Practicality, Source and Place

Date: 11 Jun 2012
Time: 19:00

Location: Anglia Ruskin University, Room Mel001
Mellish Clark Building
Cambridge CB1 1PT

Lecture by Dr Samantha Bennett, University of Westminster.

As part of a wider, post-doctoral study into the use of technological precursors in contemporary sound recording, this talk examines vintage technologies and their use value. The discussion draws upon primary research into vintage systems in the modern workplace, with case-study examples including Liam Watson’s Toerag Studios, Lewis Durham’s Evangelist Studios and Steve Albini’s Electrical Audio Studios, amongst others. What are the practical issues surrounding technological precursors and current recording practice? What are the benefits and challenges of integrating vintage systems with modern DAWs? From where are sound recording precursors sourced, and how are they maintained?

Also considered are cultural issues surrounding technological ‘iconicity’: the misperception of vintage systems as ‘memorabilia’ or as symbols of nostalgia, as well as their monetary value and the second-hand equipment market.

Dr Samantha Bennett is Senior Lecturer in Commercial Music Production at the University of Westminster. Having completed her AHRC-funded PhD in recording techniques at the University of Surrey, she is currently working on her first book, Modern Records, Maverick Methods – Technology and Process in Contemporary Record Production, to be published by the University of Michigan Press as part of their ‘Tracking Pop’ series. Additionally, she is collaborating with Professor Allan Moore on a project concerning the technological and processual ‘shaping’ of recorded popular song. The project contributes to the AHRC Centre for Musical Performance as Creative Practice, based at the Universities of Oxford and Cambridge and King’s College London.


Moving forward in audio - The impact of new technologies on the production and consumption of recorded music (panel session)

Date: 9 Oct 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

A recording of this panel session is available here (mp3)

With the advent of low-cost domestic recording software and hardware, and the increased ease of digital music distribution and consumption, it has become easier than ever to record, distribute and buy professional audio.

We’ve invited a group of young professionals who are actively involved in the production and marketing of music to discuss how the proliferation of new computer-based recording and creation technologies is affecting those involved in the different facets of the recorded music industry – from those creating the technology to professional artists using it. Specific reference will be made to traditional recording techniques and equipment, and the changing role of studios, producers and engineers in today’s industry.

Our panel members include those who have been involved in the development of best-selling audio equipment, and the recording and production of top-selling independent and major artists.

Chair: Will Evans, A&R/audio engineer – Focusrite, Tape Club Records, AES UK

Nigel R Glasgow, producer, engineer, tour manager – B Flat Productions, Red Bull Studios

Jean-Baptiste Thiebaut, head of technology – Sea Labs

Alex Robinson, label/artist management  – The Other Hand, Stones Throw

Kwes, recording artist/producer – Warp Records



Cutting Edge Research - from University of Southampton

Date: 13 Nov 2012
Time: 18:30

Location: Royal College of Pathologists
2 Carlton House Terrace
London SW1Y 5DG

A recording of the introductory presentation to the work of the ISVR is available here (mp3)

For this year’s Cutting Edge Research event we have invited students from the University of Southampton’s Institute of Sound and Vibration Research to present and demonstrate some of their current projects.

This is a great opportunity for our members with industry experience to discuss, offer advice and opinions, and perhaps even challenge the research conducted by these audio industry practitioners of the future. We would particularly encourage our Sustaining Members to participate in this event.

Abstract:

The Institute of Sound and Vibration Research (ISVR) at the University of Southampton was founded in 1963. It is an internationally recognised centre of excellence for research and training in acoustics and vibration. The ISVR is committed to improving our understanding of acoustics and vibration and their impact on the wellbeing of the community, and the quality and performance of engineering products.

The Virtual Acoustics and Audio Engineering team of the ISVR is part of the Fluid Dynamics and Acoustics research group. The team has been working for the last twenty years on using signal processing techniques and on developing electroacoustical solutions to improve the quality of sound reproduction, with special attention to the spatial attributes of the reproduced sound scene.

Our investigation method is threefold: firstly we conduct research into the physics of sound reproduction, understanding how sound is captured and generated by electroacoustical transducers, how acoustic waves propagate and interact with the space in which sound is reproduced and how multiple sources of sound interact.

Secondly, we study and model the human perception of sound, especially with respect to localization, and we attempt to relate this to the physical quantities that we can observe. Finally, we develop and apply signal processing techniques, often based on the theory of inverse problems, in order to control the reproduced acoustic field and to improve the quality of sound reproduction.

A large part of our research has focused on binaural audio reproduction, which led to the development of systems such as the Stereo Dipole and, more recently, the Optimal Source Distribution. These technologies allow for optimal cross-talk cancellation without compromising audio quality or the dynamic range of reproduction. Further work has seen the successful application of these reproduction techniques to auralisation tasks.
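
By way of background (a generic textbook sketch, not the ISVR's specific Stereo Dipole or Optimal Source Distribution designs), cross-talk cancellation filters are often obtained by regularised inversion of the speaker-to-ear transfer matrix, computed bin by bin:

# Regularised inversion for 2-ear, 2-loudspeaker cross-talk cancellation:
# H = (C^H C + beta*I)^-1 C^H, evaluated independently at each frequency bin.
import numpy as np

def xtc_filters(C, beta=1e-2):
    """C: (n_bins, 2, 2) speaker-to-ear transfer functions. Returns H: (n_bins, 2, 2)."""
    H = np.zeros_like(C)
    I = np.eye(2)
    for k in range(C.shape[0]):
        Ck = C[k]
        H[k] = np.linalg.solve(Ck.conj().T @ Ck + beta * I, Ck.conj().T)
    return H

# Toy plant: ipsilateral paths of gain 1, contralateral paths of gain 0.6
# with a small extra delay expressed as a phase shift per bin.
n_bins = 256
phase = np.exp(-2j * np.pi * np.arange(n_bins) * 4 / 512)
C = np.zeros((n_bins, 2, 2), dtype=complex)
C[:, 0, 0] = C[:, 1, 1] = 1.0
C[:, 0, 1] = C[:, 1, 0] = 0.6 * phase
H = xtc_filters(C)
# Applying H to the binaural signals drives C @ H towards the identity, so each
# ear receives mostly its intended channel; beta keeps the inversion well behaved.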

Further intensive research efforts have been dedicated to gaining a solid and rigorous understanding of the problem of multi-channel audio reproduction, which has in turn led to the development of novel sound field reproduction algorithms. These find their application in the telecommunications and professional audio industry.

We have also worked on the development of numerical models of the human auditory process for sound localization. Excellent results have been achieved with our 2D (horizontal) localization model, and we are currently working towards extending this model to also identify perceived source elevation. The quality of reproduction of low-frequency signals has also been investigated and novel methods for quantifying low-frequency response accuracy have been developed.

Our most recent work is related to compact arrays of loudspeakers and of microphones. The former allow for the reproduction of sound in localized areas and for the delivery of diversified audio material, including purist stereo, to multiple listeners. We use microphone arrays to tackle the reciprocal problem, namely to identify and separate different sources of sound, thus enabling us to capture the spatial attributes of a desired sound scene.

Our work has attracted a variety of national and international collaborators and sponsors. We have had the privilege to collaborate with companies such as Samsung, BBC, KEF, Meridian Audio, Martin Audio and ETRI, among others.

On the occasion of the Cutting Edge Research Evening, Prof Philip Nelson, Dr Keith Holland and Dr Filippo Fazi will present a brief overview of our research activity. This will be followed by audio demonstrations and posters presented by the researchers and doctoral students of our team.

We all look forward to this unique opportunity to interact with an expert audience from both industry and academia, to discuss our work, and to receive your feedback.



AES Technical Visit - Dolby Atmos

Date: 26 Nov 2012
Time: 18:30

Location: Dolby Europe’s London Office
4–6 Soho Square
London W1D 3PZ

On Monday 26th November AES members will have an opportunity to hear the revolutionary new Atmos immersive audio system, courtesy of its creators, Dolby Laboratories.

The demonstration will take place in Dolby Europe’s state-of-the-art preview theatre at the company’s recently opened Soho Square offices in the heart of London’s West End.


Atmos combines traditional channel-based surround sound with individual sound ‘objects’ to create an immersive experience using up to 64 channels, which can be tailored for an individual cinema. (More information on Atmos can be found here.)

The time is 6.30pm for a prompt 7.00pm start.

Places are limited for this members-only event so early booking is recommended.

How to book:
Following a successful trial by our Scotland group, we will in future be using EventBrite to handle the bookings for Technical Visits. Simply click on Book Now below to start the process.
Please note that this is the only way to book a place, but you do not need to create an EventBrite account in order to obtain your ticket. It will be sent as an attachment to your confirmation e-mail. Just print this out and bring it with you.

Book Now
(AES members only)


133rd AES Convention Review

Date: 11 Dec 2012
Time: 18:30

Location: Red Bull Studios
155-171 Tooley Street
London SE1 2JP

Couldn’t go to the 133rd AES Convention in San Francisco? Want a first-hand account of the highlights?

For December’s London meeting we’re assembling a panel of AES members who attended the U.S. convention at the end of October. They will discuss what they learned in the papers sessions, highlight the products that caught their eye in the exhibition, and give their opinions on the new initiatives being tried this year such as the Project Studio Expo. In the Q&A that follows, attendees will have the opportunity to ask the panellists about the areas that particularly interest them.

Moderator: Zenon Schoepe, Editorial Director, Resolution

Crispin Murray, Wisseloord Studios

Mandy Parnell, Black Saloon Studios

Rob Toulson, Anglia Ruskin University

John Richards, Oxford Digital

We will also be providing some ‘festive cheer’ in the form of mulled wine and mince pies, courtesy of John Emmett’s Broadcast Project Research. Cheers, John!

Please note the different venue for December’s meeting – Red Bull Studios, located near the southern end of Tower Bridge. The nearest station is London Bridge; from there, walk approximately half a mile along Tooley Street, or take bus 47, 343 or RV1 from Stop R to City Hall.


Audio work at the MRC Institute of Hearing Research

Date: 28 Nov 2012
Time: 18:30

Location: Glasgow Caledonian University (CPD Building)
70 Cowcaddens Road
Glasgow G4 0BA

Lecture by Michael Akeroyd, Alan Boyd, Owen Brimijoin and Bill Whitmer, MRC Institute of Hearing Research, Glasgow Royal Infirmary.

Abstract:

The MRC Institute of Hearing Research conducts multi-disciplinary research on hearing and hearing disorders, from basic research of scientific discovery to translation into clinical practice. The Scottish Section of IHR, based in the Glasgow Royal Infirmary, concentrates on hearing impairment, its consequences, and the benefits offered by hearing aids, with the aim of understanding the problems encountered by hearing-impaired people and what hearing aids can do.

This talk will give an overview of the Scottish Section of IHR and its work. Three of the IHR scientists will then give short talks on some of its current projects, including motion tracking and hearing aids.


The psychology of technology: Multidisciplinary approaches to developing user centred audio and music technology

Date: 29 Oct 2012
Time: 18:30

Location: Glasgow Caledonian University (CPD Building)
70 Cowcaddens Road
Glasgow G4 0BA

Lecture by Dr Don Knox, senior audio technology lecturer, Glasgow Caledonian University

Abstract:

Music listening behaviour has changed. People now have access to large music collections on portable devices, computers and tablets, and are increasingly using music to accompany their daily activities. Music listening is increasingly goal-directed, and has been found to enhance task performance and affect behaviour. Listening to our favourite music has also been shown to have a range of beneficial effects, including significant effects on health and wellbeing.

New and developing technology for browsing and interaction with large music databases has the potential to support the needs and preferences of the listener, and go far beyond simply browsing by genre, artist and track title. In order to advance this area, audio engineers must embrace a multidisciplinary view of the technology and the human being who will ultimately benefit from it.

The lecture will be preceded by the Annual Report on the AES Scotland group by its chair, Dr Elena Prokofieva, Edinburgh Napier University


CANCELLED!! The Sound of Stonehenge

Date: 20 Nov 2012
Time: 18:00

Location: School of Music, University of Leeds
University of Leeds
12 Cavendish Rd

Due to problems with the venue we have been obliged to postpone this lecture until the New Year. We sincerely apologise to everyone who had planned to attend but we regret that this is entirely outside our control and it is now too late to organise an alternative venue.

Lecture by Dr Bruno Fazenda, Acoustics Research Centre, University of Salford

Abstract:

Stonehenge is the largest and most complex ancient stone circle known to mankind. In its original form, the concentric shape of stone rings would have surrounded an individual both visually and aurally. It is an outdoor space and most archaeological evidence suggests it did not have a roof. However, its large, semi-enclosed structure, with many reflecting surfaces, would have reflected and diffracted sound within the space, creating an unusual acoustic field for Neolithic man.

This presentation describes acoustic measurement studies taken at the Stonehenge site in the United Kingdom and at a full-size, fully reconstructed replica site in Washington State, USA. The aim of the research is to understand the acoustics of this famous stone circle and discuss whether it would have had striking effects.

Features of the acoustic response and state-of-the-art modelling will be presented and used to discuss the existence or otherwise of audible effects such as flutter echoes, low frequency resonances and whispering gallery effects. A description of an auralisation system based on Ambisonic and wave field synthesis technology will be given. A stereo rendition of the sound of Stonehenge will then be presented to the audience.


Hearing is Believing?

Date: 28 Nov 2012
Time: 19:00

Location: Anglia Ruskin University, Room Mel001
Mellish Clark Building
Cambridge CB1 1PT

Lecture by Dr Rob Toulson, Anglia Ruskin University, Cambridge

Abstract:

This lecture will discuss how a number of concepts of analogue and digital audio can be considered and evaluated with reference to simple sine wave examples and ideal signal flow scenarios. However, real audio signals are complex waveforms involving detailed frequency spectra and transient envelopes, so there is often a point where the theory alone gives insufficient grounding to make a professional judgement on sound.

A number of live recording projects will be presented, and the many compromises of theoretical best practice will be highlighted and discussed. The presentation will consider the notion that ‘if it sounds right, it is right’ and identify important aspects of professional practice which are often required alongside a theoretical understanding in order to be a successful recording engineer. Finally, a number of research topics related to the limitations of our hearing will also be considered, in order to discuss where and when we can truly believe our ears.

Rob Toulson is Acting Director of the Cultures of the Digital Economy Research Institute at Anglia Ruskin University, Cambridge. Rob is a music producer and recording engineer with a number of years’ experience of teaching and developing innovative research projects in audio engineering. His first degree is in Engineering and his PhD is in Digital Signal Processing. He is a Chartered Engineer and an active member of the Audio Engineering Society, the UK Music Producers Guild and the Institution of Engineering and Technology.