Meetings Archive – 2012
Date: 21 Feb 2012
Lecture by Dave Malham, Music Research Centre, University of York.
Ambisonics has been around as a system since the early 1970s, although its basics in some ways date back to Alan Blumlein’s work on stereo and Harry F. Olson’s development of directional microphones at the start of the 1930s.
Tarred with the same brush as the Quadraphonics debacle of the 1970s, it was kept alive by a small band of enthusiasts who realised the much greater capabilities inherent in the system. This continued to be the case until the advent of low cost digital technology towards the end of the 20th Century meant that it became, at last, accessible to many more people. In the past decade far more papers have been published on Ambisonics and Ambisonics-related subjects than in the whole of the preceding three decades. Does this mean it has finally triumphed?
Dave Malham has written VST plug-ins for Ambisonic processing, the ‘MRC Stereometer’, a K-system metering plug-in, and (with Matt Paradis) the ‘ambilib’ Ambisonic processing library for PD and Max/MSP. He also holds a patent, WO02085068, for the Ambisonic Sound Object Format. Dave teaches digital audio, signal preservation, sound spatialisation and recording techniques on the Music Technology MA course at York.
Date: 16 Feb 2012
Location: BBC Scotland, Pacific Quay, Glasgow
Our regional group in Scotland has arranged a Technical Visit to BBC Scotland’s studio facilities at Pacific Quay, Glasgow on Thursday 16th February at 6.30pm. BBC staff will talk about the design and work of the TV studios, followed by a tour of the facility.
Date: 22 Mar 2012
Lecture by John Ward and Bill Campbell, Digital Remasters.
Teaching Music Production and Sound Engineering requires students to be able to access and hear milestone recordings from the past to inform their learning and practice. In a wider context, the discerning audiophile also wishes to hear such recordings as close as possible to the original studio masters. Unfortunately, in many cases all they can now purchase are digital remasters. Remasters are marketed mainly as improvements on the original releases, but in many cases this claim is highly debatable.
Recordings such as Pink Floyd’s Dark Side of the Moon, Miles Davis’ Kind of Blue, David Bowie’s Hunky Dory, Queen’s A Night at the Opera and Led Zeppelin’s Physical Graffiti are seminal recordings which listeners should be able to hear in a way that reveals the passion in the performance and the skill and artistry in the engineering and production. This paper will argue that, in some cases, extreme digital remastering is robbing people of access to the true beauty of highly important and seminal recordings, presenting them instead with modern remasters that lose much of the feel of the originals. It will also suggest that such radical remastering is a form of cultural vandalism that would not be tolerated in other art forms – imagine the outcry if the Mona Lisa were retouched so that all the blues were overemphasised, the contrast reduced and the brightness increased.
There are a number of ways remastering is approached.
One is to attempt to ‘clean up’ the original mix – removing tape hiss, repairing tape dropouts and generally removing the “patina” – but without any radical alteration of EQ or dynamics. This method does not particularly trouble the author, although some may argue that it constitutes an “Intentional Fallacy”.
Another is to quite radically alter the studio master with digital EQ, compression and limiting. It is the latter approach that is most widely used and which the author finds most questionable, and examples of this approach to remastering will form the main focus of the presentation to ASARP.
Other sometimes quite radical methods are used, especially on very old recordings.
The talk will be illustrated with A/B comparisons of high quality recordings of original vinyl releases from the author’s own extensive collection with the digital remasters available on CD. These will include snippets of some of the recordings named above and others, and will demonstrate how in some cases the feel, groove and soul of the originals have been altered. The recordings from LPs have been made at 24/96 resolution and dithered down to 16/44.1 resolution for playback, to enable direct comparison with tracks from remastered CDs. Comparative analyses of dynamic range and frequency spectra will be presented to show quantitatively and qualitatively how digital remastering alters the sound compared to the originals, in some cases reducing the dynamic range to increase loudness and boosting high frequencies to produce a false perception of higher fidelity. These analyses will be used to explain the demonstrable perceived differences in voices, instruments, and rhythmic feel and groove.
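The two processes described above – requantising a 24/96 transfer to 16 bits with dither, and comparing dynamic range numerically – can be sketched as follows. This is a minimal illustrative example, not the author’s actual method: the TPDF dither and the crest-factor (peak-to-RMS) measure are assumptions chosen for brevity.

```python
import numpy as np

def tpdf_dither_to_16bit(x, rng=np.random.default_rng(0)):
    """Requantise a float signal in [-1, 1) to 16-bit precision with TPDF dither."""
    lsb = 1.0 / 32768.0                                          # one 16-bit step
    dither = (rng.random(x.shape) - rng.random(x.shape)) * lsb   # triangular, +/- 1 LSB
    return np.round((x + dither) * 32767.0) / 32767.0

def crest_factor_db(x):
    """Peak-to-RMS ratio in dB: a simple proxy for dynamic range."""
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(peak / rms)

# A toy 'original' with a moving loudness envelope, versus a 'remaster'
# squashed by crude loudness-war style limiting.
t = np.linspace(0, 1, 96000, endpoint=False)
original = 0.3 * np.sin(2 * np.pi * 110 * t) * (1 + 0.7 * np.sin(2 * np.pi * 2 * t))
remaster = np.clip(original * 3.0, -0.9, 0.9)

cd_version = tpdf_dither_to_16bit(original)
print(crest_factor_db(original) > crest_factor_db(remaster))  # True: limiting lowers crest factor
```

Dithered requantisation changes each sample by at most a couple of 16-bit steps, whereas the limiting stage visibly collapses the peak-to-RMS ratio – which is exactly the quantitative difference the A/B comparisons are intended to expose.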
Date: 13 Mar 2012
Lecture by Jeff Bloom, Synchro Arts.
A recording of this lecture is available here (45MB mp3)
Editing audio to fix timing and tuning problems has now become so commonplace that listeners would be hard-pressed to know when the timing, pitch or other characteristics of a recorded actor or singer have been manipulated to be more accurate or to simply sound better.
However, even with sophisticated software tools, in many situations the editing work required to achieve such polished precision can still be tedious and time consuming, and require considerable skill.
In this talk new processing techniques will be demonstrated which offer automated and precise solutions to certain common situations. These techniques involve automatically extracting, from an accurate ‘guide’ voice or instrument recording, selected characteristics such as timing, pitch, vibrato and loudness, and imposing these features on other less accurate recordings of similar performances.
This approach has many applications in consumer and professional audio processing products, including the following…
For professional applications:
1) Double and triple (or more) tracks can be made quickly to match, with adjustable precision, an accurate lead vocal.
2) Alternative performance characteristics can be transferred to a lead vocal.
3) Prosodic features (including timing, inflection and stress) of recorded dialogue can be transformed to have different but natural sounding features transferred from another recording.
4) In websites or mobile applications, recordings of amateur singers can be automatically transformed to have the characteristics of a professional vocalist.
5) A language student’s recording of his or her attempt at mimicking a teacher’s recorded sentence can be modified to make the student’s timing and pitch sound like the teacher’s and provide constructive feedback.
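The transfer principle underlying these applications can be sketched in miniature. The example below (an illustration only, not Synchro Arts’ algorithm) transfers just one of the characteristics mentioned – the short-term loudness envelope – from a ‘guide’ recording onto a second take, frame by frame:

```python
import numpy as np

def transfer_loudness(guide, dub, frame=1024, floor=1e-6):
    """Impose the guide's short-term loudness envelope on the dub.

    A deliberately minimal stand-in for the techniques described above:
    only the loudness contour is transferred, leaving timing and pitch
    untouched. Both signals are assumed to be already time-aligned.
    """
    n = min(len(guide), len(dub)) // frame * frame
    out = dub[:n].copy()
    for i in range(0, n, frame):
        g_rms = np.sqrt(np.mean(guide[i:i + frame] ** 2))
        d_rms = np.sqrt(np.mean(out[i:i + frame] ** 2))
        out[i:i + frame] *= g_rms / max(d_rms, floor)   # frame-wise gain match
    return out

# Toy example: a flat 'dub' take inherits the guide's crescendo.
t = np.linspace(0, 1, 8192, endpoint=False)
guide = np.sin(2 * np.pi * 220 * t) * t      # ramping loudness
dub = 0.2 * np.sin(2 * np.pi * 220 * t)      # constant loudness
matched = transfer_loudness(guide, dub)
```

The real techniques must of course also extract and map timing, pitch and vibrato, and do so without the frame-boundary artefacts a naive gain-stepping approach like this would produce.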
Jeff Bloom — who in 1984 invented the first audio time-alignment algorithms upon which these new techniques are based — will also chart the history of automatic time alignment in dialogue replacement and music applications.
Date: 10 Apr 2012
Lecture by Peter Eastty, Oxford Digital.
A recording of this lecture is available here (51MB mp3)
If you’ve ever wondered why audio DSP programming is so hard when the algorithms are so simple, this is the place for you. Hundreds of strange and wonderful audio processors have been developed over the past four decades and the presenter has struggled with dozens of them.
In order to learn from our mistakes, this master class will tour examples of gross bad practice (suitably anonymised to protect the guilty), and in doing so we’ll extract some general principles useful to those who will design audio DSPs in the future. As a practical example of what can be achieved, we’ll go from simulator-based algorithm development to listening to production-quality code in a matter of minutes.
Date: 25 Apr 2012
Lecture by Charlie Slee, Thermionic Culture.
In a world where digital technology has transformed the recording studio and where outboard equipment has largely been replaced by the DAW, this lecture will look at the importance of valve outboard equipment in the modern recording studio.
It will give an introduction to circuit design and the challenges faced when using thermionic valves in the recording studio, and will show their benefits and excellence when used properly.
We will take an in-depth look at classic designs and their design philosophies, use detailed circuit analysis to give an understanding of different topologies and their uses, and explore performance improvement using modern techniques and technologies.
Date: 8 May 2012
Lecture by Ian Butterworth, National Physical Laboratory (NPL).
A recording of this lecture is available here (mp3)
Acoustic designers have increasingly accurate and rapid predictive tools at their disposal, enabling ever more impressive, high-fidelity acoustic products. However, the acoustic validation of such techniques still relies on scanning a physical microphone through the sound field, either by laborious manual adjustment or with complex automated positioning hardware. The challenges of this approach usually result in limited spatial sampling and thus limited validation.
Ian Butterworth will present recent work from the NPL that enables the rapid, remote and non-perturbing measurement of a sound field using laser-based techniques. The methods he has developed allow you to see the propagating waves, much as you would see a surface wave move in a ripple tank.
Exploiting the acousto-optic effect through the use of high sensitivity Laser Scanning Vibrometers, and a novel experimental setup, the new Rapid Acousto-Optic Scanning (RAOS) technique allows a sound field to be autonomously scanned, providing time-gated spatially distributed data that gives a picture of the propagating acoustic field in the time domain. Furthermore, complex time-averaging can also be employed to show averaged directivity maps.
The complexities of scanning a 3-dimensional field are discussed with comparison to computer simulations, and studies of various acoustic sources and reflective artefacts, such as QRD diffusers, are shown.
NPL’s Acoustics department undertakes leading-edge research to develop new and improved measurement methods, and holds the UK primary standard for various acoustic measurements, covering Sound-in-Air, Ultrasonics, and Underwater Acoustics.
Date: 15 May 2012
Lecture by Dr Rob Toulson, Anglia Ruskin University and RT Sixty Ltd
A number of concepts of analogue and digital audio can be discussed and researched with reference to simple sine wave examples and ideal signal flow scenarios. However, real audio signals are complex waveforms involving detailed frequency spectra and transient envelopes.
This lecture discusses some of the simple theory associated with digital sampling, dynamic range compression, comb filtering and equalisation, and evaluates the difference between simple sine wave analysis and the results found when using real audio examples. In many cases the taught theory is an excellent grounding for educational purposes, but often – in reality – it is simply what we hear that truly matters.
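The gap between sine-wave theory and real signals is easy to demonstrate with one of the topics listed above. For a comb filter y[n] = x[n] + x[n − D], textbook analysis predicts complete nulls at odd multiples of fs/(2D): a sine placed exactly on a null all but vanishes, while a broadband signal is merely coloured. A minimal sketch (an illustration in the spirit of the lecture, not taken from it):

```python
import numpy as np

def comb(x, delay):
    """y[n] = x[n] + x[n - delay]; nulls at odd multiples of fs/(2*delay)."""
    y = x.copy()
    y[delay:] += x[:-delay]
    return y

def rms(x):
    return np.sqrt(np.mean(x ** 2))

fs, delay = 48000, 48                        # first null at fs/(2*48) = 500 Hz
t = np.arange(fs) / fs

sine_at_null = np.sin(2 * np.pi * 500 * t)   # lands exactly on a null
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)              # crude stand-in for a complex signal

print(rms(comb(sine_at_null, delay)) / rms(sine_at_null))  # near zero: the sine cancels
print(rms(comb(noise, delay)) / rms(noise))                # near sqrt(2): coloured, not cancelled
```

On programme material the comb is audible as colouration rather than silence, which is precisely the point: the idealised single-frequency result and the perceived effect on real audio are quite different things.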
Date: 24 Apr 2012
The evening will feature two lectures based on posters presented at the recent UK conference, Spatial Audio in Today’s 3D World.
An Ambisonic Format Reproduction Array Standard
Andrew J. Horsburgh
The focus of this paper is to detail the tools and listening environment requirements for using the Ambisonic format within common digital audio production workflows. By specifying layouts that allow for qualitative preferential changes, Ambisonic speaker arrays up to 3rd-order reproduction can be used in line with current audio listening standards. These standards relate to speaker positioning, sound pressure level and frequency response, as well as optimal acoustical conditions. Ambisonics allows for dynamic speaker feed decoding, using complex matrices to optimise the soundfield reproduction for the given speaker array. The layouts can be planar (horizontal) or periphonic (spherically based). Standardising Ambisonic speaker array layouts will ensure increased accuracy in translation between production and listening environments.
The adoption of Ambisonics as an array-layout-independent, programme-material-agnostic, universally acceptable reproduction format requires a symbiotic process similar to that of other discrete audio formats. The adoption of the ITU-R BS.775-1 (5.1) layout specification for film and music is one similar instance of cohesion between a format and quality assurance across both production and listening environments. Beyond suggesting standard listening environments, listening levels and speaker calibration, the author provides an insight into the optimisation of numerically large speaker arrays.
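The matrix decoding step mentioned above can be sketched at first order: a direction is encoded into horizontal B-format gains (W, X, Y), and speaker feeds are derived from the pseudo-inverse of the array’s encoding matrix. This is a simplified, horizontal-only illustration, not a production decoder; the 1/√2 weighting on W is one common convention among several.

```python
import numpy as np

def encode_fo(azimuth_rad):
    """First-order horizontal B-format encoding gains (W, X, Y).
    The 1/sqrt(2) weight on W is a common convention (conventions vary)."""
    return np.array([1.0 / np.sqrt(2.0),
                     np.cos(azimuth_rad),
                     np.sin(azimuth_rad)])

def decoder_matrix(speaker_azimuths):
    """Mode-matching decoder: pseudo-inverse of the array's 3 x N encoding matrix."""
    C = np.stack([encode_fo(a) for a in speaker_azimuths], axis=1)  # 3 x N
    return np.linalg.pinv(C)                                        # N x 3

# Square array at 45, 135, 225 and 315 degrees.
speakers = np.deg2rad([45, 135, 225, 315])
D = decoder_matrix(speakers)

# A source straight ahead (0 degrees) feeds the two front speakers equally.
feeds = D @ encode_fo(0.0)
```

Because the decoder is just a matrix fitted to the array geometry, the same B-format signal can be re-decoded for any layout – which is the array-independence the paper relies on when arguing for standardised listening arrangements.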
A prototype third order Ambisonic delay plug-in using randomising and oscillating positional encoding
Robert E. Davis and D. Fraser Clark
Ambisonics is a scalable format for spatial sound reproduction with extensive applications in sound design and music composition; however, there remains a lack of practical DAW plug-in tools enabling Ambisonics to be used for content production. The present paper will discuss the design and implementation of a prototype third order Ambisonic delay plug-in effect. The plug-in allows the distribution of a sound source throughout the sound-field using randomising and oscillating positional encoding.
The design of the processing system will be explained and its implementation using a graphical prototyping environment will be outlined. Test data is presented to describe the output of the plug-in in relation to an arbitrary input signal, and suggestions for further improvements of the plug-in are given.
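The oscillating positional encoding described above can be sketched at first order (the paper works at third order; first order keeps the example short). Each delay tap is panned to an azimuth driven by a low-frequency oscillator as it is encoded into B-format. The parameter names and structure below are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np

def oscillating_delay_fo(x, fs, delay_s=0.25, taps=4, feedback=0.5, lfo_hz=0.3):
    """Encode successive delay taps of a mono signal into first-order
    horizontal B-format (W, X, Y) at an azimuth that oscillates over time."""
    d = int(delay_s * fs)
    n = len(x) + taps * d
    out = np.zeros((3, n))
    t = np.arange(n) / fs
    az = np.pi * np.sin(2 * np.pi * lfo_hz * t)   # azimuth sweeps +/- 180 degrees
    for k in range(1, taps + 1):
        tap = np.zeros(n)
        tap[k * d : k * d + len(x)] = x * feedback ** k   # k-th decaying echo
        out[0] += tap / np.sqrt(2.0)                      # W (omni)
        out[1] += tap * np.cos(az)                        # X (front-back)
        out[2] += tap * np.sin(az)                        # Y (left-right)
    return out

# Usage: spatialise the echoes of a short decaying tone.
fs = 48000
t = np.arange(fs) / fs
mono = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)
bformat = oscillating_delay_fo(mono, fs)   # 3 x N array of (W, X, Y)
```

Randomised encoding, as in the paper, would simply replace the deterministic LFO with a random azimuth trajectory per tap.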
Date: 12 Jun 2012
Lecture by Peter Weitzel, Weitzel.tv
A recording of this lecture is available here (mp3)
Peter Weitzel relates his experiences sending Audio over IP during the past 13 years. He will share the lessons he and his team at the BBC (later Siemens) learned using various systems to share audio feeds with broadcasters in the UK and USA.
Peter’s lecture aims to demystify Audio over IP and show that there can be some surprisingly cost effective solutions for an audio engineer with a good grasp of the fundamentals, and who observes three basic rules when designing, building and operating Audio (or Video) over IP systems.