Meetings Archive – 2011
Date: 10 May 2011
Lecture by John Vanderkooy.
Active acoustic absorbers can replace low-frequency ‘passive’ absorption techniques. Passive techniques generally involve solving resonance problems in a room by introducing absorbing materials or resonant structures. General-purpose absorbers, comprising sheets of heavy material attached to rigid frames, are somewhat impractical in many rooms, because the size at which they become effective is between a quarter- and a half-wavelength of the frequency of interest (around six feet when treating a 50Hz resonance). Membrane absorbers and resonators are tuned to move when stimulated by certain frequencies, and hence to terminate standing waves. These can be relatively small, but because each reacts to only a narrow range of frequencies, several may be required to treat a room. We can alleviate serious problems in a room by equalising the loudspeakers that excite them, but this addresses only sound pressure level. Equalisation cannot treat the equally insidious phase and reverberation-time discontinuities that afflict specific frequencies.
Active absorption is effected by positioning subwoofers or full-range loudspeakers strategically, and driving them with a specially calculated signal that cancels sound over a wide band. This uses less space, and treats a greater range of frequencies, than a passive absorber could.
A relatively simple and theoretically ideal example of this is the ‘delay and cancel’ scheme. Taking a rectangular room, we can place two loudspeakers 25% and 75% of the distance along a wall, and drive them coherently. The images of these loudspeakers reflected in the other walls are evenly spaced, creating a plane wave along the room. This can be cancelled at the rear of the room using a similar arrangement of loudspeakers, delayed appropriately. The room then becomes effectively anechoic at low frequencies.
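The appropriate delay is simply the time a plane wave takes to travel the length of the room. Here is a minimal sketch, under assumptions that go beyond the talk’s description: the rear pair reproduces the front signal polarity-inverted, as in a conventional double bass array, and the room length and sample rate are illustrative figures.

```python
import numpy as np

FS = 48_000          # sample rate, Hz (illustrative)
C = 343.0            # speed of sound, m/s
ROOM_LENGTH = 8.0    # front-to-rear distance, m (the 8 m room below)

def rear_drive(front_signal: np.ndarray) -> np.ndarray:
    """Derive the rear-array feed: the front signal delayed by L/c
    and polarity-inverted, so the rear pair swallows the plane wave
    as it arrives rather than reflecting it."""
    delay_samples = int(round(ROOM_LENGTH / C * FS))  # ≈ 1120 samples
    rear = np.zeros_like(front_signal)
    rear[delay_samples:] = -front_signal[:-delay_samples]
    return rear
```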
Given a rectangular room of dimensions 8 × 7 × 3.5 metres and a reverberation time of one second, we can use the Sabine formula to calculate that the room contains 31.5 sabins (square metres of ideal absorption). The effective area of an active absorber is equal to:
A_abs = λ² / 4π
This is about 3.7 sabins at 50Hz (an extra 12% of absorption in this room), and 24 sabins at 20Hz (an extra 74%). The theoretical benefits of ideal active absorption are clear, but there are some practical difficulties. Firstly, an ideal loudspeaker is a point source radiator, and the wavefront that we want to treat is generally closer to a plane wave. How do we know that our absorber is not interacting elsewhere with the wave that we are attempting to absorb? Secondly, how does this treatment work in a room where the absorption signal itself is reflected?
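These figures are easy to verify. A quick check of the arithmetic above, using the metric Sabine constant of 0.161:

```python
import math

C = 343.0                    # speed of sound, m/s
V = 8 * 7 * 3.5              # room volume, m³
RT60 = 1.0                   # reverberation time, s

room_sabins = 0.161 * V / RT60            # Sabine formula: ≈ 31.6 sabins

for f in (50.0, 20.0):
    wavelength = C / f
    a_abs = wavelength ** 2 / (4 * math.pi)   # A_abs = λ²/4π
    print(f"{f:4.0f} Hz: {a_abs:4.1f} sabins "
          f"(an extra {100 * a_abs / room_sabins:.0f}%)")
# 50 Hz:  3.7 sabins (an extra 12%)
# 20 Hz: 23.4 sabins (an extra 74%)
```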
John Vanderkooy’s derivation of the driving voltage for an acoustic absorber was performed very rapidly at the lecture, but the brief answer is that both conditions are met without difficulty. The emerging formula for the absorbing signal is:
q(t) = (2πc/ρ) ∬ p(r,t) dt dt
where q(t) is the desired volume velocity of the active absorber loudspeaker. Thus, the cone velocity of the absorber is proportional to the double time integral of the pressure at the loudspeaker from the room. To produce this volume velocity, we could use a velocity-sensing coil on the loudspeaker in feedback, for example. The pressure p(r,t) must not be contaminated by the absorber signal itself, so we must know the absorber’s response and subtract it from the microphone signal. Eliminating the self-pressure of the loudspeaker (caused by its provision of the absorbing signal), and shaping the output transfer function to be both stable and correct for the loudspeaker, is a significant challenge.
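A minimal discrete-time sketch of this control law follows, with two assumptions of this report that the lecture did not specify: leaky integrators are used for stability, and the absorber’s self-pressure is removed by convolving the drive history with a separately measured impulse response (`self_ir` is hypothetical).

```python
import numpy as np

FS = 48_000
RHO, C = 1.2, 343.0      # air density kg/m³, speed of sound m/s
LEAK = 0.999             # leaky-integrator coefficient (assumption)

def absorber_drive(mic: np.ndarray, self_ir: np.ndarray) -> np.ndarray:
    """Return the target volume velocity q[n] from the microphone
    signal. `self_ir` is an assumed, separately measured impulse
    response from the absorber's own output to its microphone."""
    q = np.zeros_like(mic)
    i1 = i2 = 0.0        # first and second integrator states
    for n in range(len(mic)):
        # Subtract the absorber's own contribution to the mic
        # pressure, leaving only the external room pressure p(r,t).
        k = min(n + 1, len(self_ir))
        self_pressure = np.dot(self_ir[:k], q[n - k + 1:n + 1][::-1])
        p_ext = mic[n] - self_pressure
        # Double (leaky) time integration, scaled by 2πc/ρ.
        i1 = LEAK * i1 + p_ext / FS
        i2 = LEAK * i2 + i1 / FS
        q[n] = 2 * np.pi * C / RHO * i2
    return q
```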
In summary, active absorption is an acoustically valid way of treating low-frequency problems in real rooms, but there are considerable practical difficulties in doing it well. One practical barrier is the necessity for near zero-latency analogue-to-digital conversion and DSP in order to suppress the local absorber signal, read the instantaneous external pressure from the room, react to it, and hence calculate the desired absorption signal.
Report by Ben Supper
Date: 3 May 2011
Lecture by Gary Spittle, Cambridge Silicon Radio.
Date: 17 May 2011
Lecture by Simon Humphrey, The ChairWorks Studio.
Simon Humphrey is a recording engineer and producer based at the ChairWorks, a thriving independent recording facility in Castleford, near Leeds.
Simon will discuss the burning questions that ‘float’ around today when it comes to the role of the engineer, producer and recording studio. He will tackle that all-too-often-questioned subject of recording studios and their validity in a modern recording industry – ‘why do studios matter?’
Using the ChairWorks Studio as a blueprint, Simon will discuss ‘how they do it’. While some may think studios are the ‘kiss of death’, we will hear, perhaps most importantly, ‘why do they do it?’
Moving on to studio equipment, Simon puts forward the question, ‘Mixing out of the box, the new way?’ Computers could have consigned analogue mixers and the outboard rack to history, but they haven’t. Simon will discuss rack versus plug-ins, and explain how they work together with outboard on a mix.
Simon’s experience working in education has given students a privileged view of the inner workings of the music industry, and it seems students everywhere ask the same three ‘golden questions’. Simon will discuss and give his answers to these.
Throughout, Simon will be talking about his career over the past almost 40 years. It is an invaluable look at the career of a seasoned engineer whose work many see as the benchmark of engineering today.
Date: 25 May 2011
Lecture by Nick Durup, Sharps Redmore Partnership.
Date: 14 Jun 2011
Lecture and demonstration by John Lambert and Geert Bevin.
Developed in the UK, the Eigenharp represents a significant departure from previous electronic instruments. A wholly new sensor design, software system and physical form factor combine to make a highly expressive and inspiring experience for the musician who wishes to use software synthesis and sampling. The instrument has a growing following, and a number of important artists now own them, including film composer Hans Zimmer, Grammy award winner Imogen Heap and saxophonist Courtney Pine.
The talk will introduce the instrument and demonstrate its capabilities, before exploring the engineering challenges that were encountered and solved during its eight-year development. The sensor design, physical layout and communications system will be described, along with a discussion of the importance of different types of software instrument, the emerging Open Sound Control standard, and the use and limitations of MIDI in expressive environments.
John Lambert will be accompanied by Geert Bevin, a senior software engineer and musician at Eigenlabs who will be demonstrating and playing the instrument.
Time permitting, there will be an opportunity to try an Eigenharp at the end of the talk.
Date: 21 Jun 2011
Lecture by Walter Samuel.
Walter Samuel will talk about his life working as an engineer since the 1970s. He will explore how the role of the recording engineer has changed over the years and his experiences in the music industry. This is a rare insight into the career of such a distinguished practitioner. Walter will also demonstrate his recording techniques using DPA microphones.
Walter’s talk will be followed by a guided tour around the ChairWorks, a rare opportunity to see behind the scenes of this highly specified studio complex.
Date: 29 Jun 2011
Lecture by Paul Malpas, Engineered Acoustic Designs.
Paul Malpas has over 20 years’ experience in acoustic consultancy. For the majority of this time he has operated as a specialist designer responsible for delivering electroacoustic, audio system and speech intelligibility projects and solutions within multi-disciplinary design environments. Paul’s seminar will involve, amongst other things, discussion and demonstration of high-amplification speech systems for crowd management.
As always, non-AES members are welcome. Don’t forget to interact with us on Facebook too at www.facebook.com/aescambridge
Date: 19 Apr 2011
Lecture by Roger Quested.
**Free shuttle bus from PLASA Focus. Contact firstname.lastname@example.org to book your place.**
Roger Quested will use his knowledge of the world’s top studios and recordings to explain how to make a studio monitoring system produce the best possible surround mixes. Demonstrating on his company’s own Quested 5.1 V3110 System, he will discuss i) positioning of speakers and subwoofers, ii) the LFE channel and how best to integrate subwoofers into the system and environment, and iii) common mistakes to be avoided.
With experience of surround systems gained from working with such names as Hans Zimmer (Gladiator, Pearl Harbor, The Dark Knight), Hackenbacker Studios (Downton Abbey, Spooks, Shaun of the Dead) and Trevor Horn, this will be a chance for those hoping to maximise their surround mixes to understand the complex monitoring elements that affect all studio environments.
Date: 12 Apr 2011
Lecture by Keith Howard
It is a truism that harmonic distortion affects the perceived quality of an audio signal. It is less readily accepted that such distortion may sometimes be pleasant. In 1977, Hiraga’s article ‘Amplifier Musicality’ [1] controversially suggested that certain kinds of harmonic distortion may improve the perceived quality of a Hi‑Fi system. This notion is now dubbed ‘euphonic distortion’, although more than thirty years later, few, if any, new insights exist on the subject.
Less controversially, many recording engineers insist on specifying equipment that introduces certain types of harmonic distortion at high input levels, and then overdriving it: valve amplifiers and analogue tape, for example. The effect of this distortion is not obvious, but it imparts an elusive quality of warmth or complexity. Other types of harmonic distortion, such as Class B amplifier crossover distortion, are undoubtedly dysphonic: even tiny amounts of crossover distortion are audible, and very unpleasant.
Keith Howard has measured, characterised and emulated harmonic distortion in certain situations. This research led to a number of important conclusions, one of which gives this lecture its title: it is not sufficient to record only the level and spectrum of harmonic distortion in order to reproduce it effectively. Rendering the harmonic phase correctly is just as important.
The distortion algorithm that Keith Howard uses in his experiments is based around a waveshaping kernel. This is a mapping function which converts each input sample value to an output sample value. Being time and frequency invariant, it is a simplification of a class of systems commonly applied for non-linear signal processing. The mapping function may be controlled, using any of a number of methods, to produce a specified pattern of harmonics for an input sinusoid of a given amplitude. To add a second harmonic, for example, the following trigonometric identity is used as a starting point:
2 cos² x − 1 = cos 2x
So 2x² − 1 is the waveshaping kernel function, from which the d.c. on the output must be filtered if anything but a full-amplitude sinusoid is presented.
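A minimal sketch of this kernel in use (the signal parameters are arbitrary): a full-amplitude sinusoid passed through 2x² − 1 emerges as a pure second harmonic.

```python
import numpy as np

FS, F0, N = 48_000, 1_000, 4_800
t = np.arange(N) / FS
x = np.cos(2 * np.pi * F0 * t)       # full-amplitude input sinusoid

y = 2 * x ** 2 - 1                   # waveshaping kernel: 2x² − 1
y -= y.mean()                        # filter the d.c. from the output

spectrum = np.abs(np.fft.rfft(y * np.hanning(N)))
peak_hz = spectrum.argmax() * FS / N
print(f"dominant output component: {peak_hz:.0f} Hz")  # 2000 Hz
```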
For the third harmonic, a different identity is used:
4 cos³ x − 3 cos x = cos 3x
So 4x³ − 3x is used as the waveshaping kernel function to generate the third harmonic.
In waveshaping, the amplitude of a harmonic falls faster than that of the input signal, so that attenuating the input, even by only 3dB, changes the shape of the output wave.
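This follows directly from the kernel algebra: for an input A·cos x, the 2x² − 1 kernel yields a second harmonic of amplitude A², so each decibel of input attenuation removes two decibels of harmonic. A quick check of the 3dB case:

```python
import math

for input_db in (0.0, -3.0):
    a = 10 ** (input_db / 20)            # input amplitude
    h2_db = 20 * math.log10(a ** 2)      # second harmonic sits at A²
    print(f"input {input_db:+.0f} dB -> second harmonic {h2_db:+.1f} dB")
# input +0 dB -> second harmonic +0.0 dB
# input -3 dB -> second harmonic -6.0 dB
```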
The generation of waveshaping kernel functions may also be performed iteratively, using the recurrence relation for Chebyshev polynomials.
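A sketch of that iteration, using the recurrence T₀ = 1, T₁ = x, Tₙ₊₁ = 2x·Tₙ − Tₙ₋₁. Since Tₙ(cos θ) = cos nθ, each polynomial is the kernel for a pure n-th harmonic at full amplitude:

```python
import numpy as np

def chebyshev_kernel(order: int) -> np.poly1d:
    """Build T_n via the recurrence T_{n+1}(x) = 2x·T_n(x) − T_{n−1}(x).
    Waveshaping a full-amplitude sinusoid with T_n yields the pure
    n-th harmonic, since T_n(cos θ) = cos(nθ)."""
    t_prev, t_cur = np.poly1d([1.0]), np.poly1d([1.0, 0.0])  # T0, T1
    for _ in range(order - 1):
        t_prev, t_cur = t_cur, np.poly1d([2.0, 0.0]) * t_cur - t_prev
    return t_cur if order > 0 else t_prev

print(chebyshev_kernel(2))   # 2x² − 1
print(chebyshev_kernel(3))   # 4x³ − 3x
```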
Keith’s method of designing and applying waveshaping distortion is encapsulated in a free program called AddDistortion, available from the freeware page of his web site.
Beyond the second and third harmonics, the contributions of each polynomial order become strongly interdependent. For any input signal that is not a sinusoid at a certain predetermined amplitude, it is not possible to add a fourth harmonic without also introducing a second harmonic. The same is true for any harmonic beyond the third. Also, because the distortion kernel is derived from a series of continuous functions, discontinuities such as corners or jumps in the transfer characteristic cannot be modelled. A final complication is that the signal must be interpolated before waveshaping and decimated afterwards. This prevents aliasing distortion from occurring when the upper harmonics pass the Nyquist limit.
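A sketch of the interpolate–shape–decimate wrapper, using scipy’s polyphase resampler (the 8× ratio is an assumption; the lecture did not specify one):

```python
import numpy as np
from scipy.signal import resample_poly

def waveshape_oversampled(x: np.ndarray, kernel, ratio: int = 8) -> np.ndarray:
    """Interpolate, apply the nonlinearity, then decimate. The
    anti-alias filters in resample_poly stop harmonics that exceed
    the original Nyquist limit from folding back into the band."""
    up = resample_poly(x, ratio, 1)          # interpolate by `ratio`
    shaped = kernel(up)                      # memoryless waveshaping
    return resample_poly(shaped, 1, ratio)   # decimate back

# e.g. y = waveshape_oversampled(x, lambda v: 4 * v ** 3 - 3 * v)
```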
The ramifications of these limitations are far-reaching. For example, we could attempt to correct a system that distorts audio in a known way by applying pre-distortion to the input. However, this causes problems. If the system introduces a second harmonic, we might generate this harmonic in antiphase in the input so that it cancels the distortion product. However, the second harmonic introduced in the input will itself be distorted by the system, generating a fourth harmonic in the output, and very likely a third harmonic as an intermodulation product. We have eliminated the second harmonic, but possibly made the problem worse. If we then anticipate the fourth harmonic, there will be an eighth harmonic in the output, and so on. Such correction cannot therefore be performed using analogue circuitry; this rule was often advanced against the use of corrective feedback when that debate raged in the Hi-Fi community a few decades ago. However, a correct transfer characteristic may be derived carefully in the digital domain by generating a true inverse function, which is effective up to a certain maximum frequency.
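This cascade is easy to demonstrate numerically. In the sketch below, the system is modelled, purely for illustration, as adding a weak second harmonic via the 2x² − 1 kernel; pre-distorting with the antiphase harmonic removes the second harmonic but leaves new third and fourth harmonics behind.

```python
import numpy as np

FS, F0, N, EPS = 48_000, 1_000, 4_800, 0.1
t = np.arange(N) / FS
x = np.cos(2 * np.pi * F0 * t)

def system(v):                      # hypothetical device under correction
    return v + EPS * (2 * v ** 2 - 1)

predistorted = x - EPS * (2 * x ** 2 - 1)   # antiphase second harmonic

def harmonic_db(y):
    """Levels of harmonics 1-4 relative to the fundamental, in dB."""
    s = np.abs(np.fft.rfft(y - np.mean(y)))
    bins = [int(k * F0 * N / FS) for k in (1, 2, 3, 4)]
    return [round(20 * np.log10(s[b] / s[bins[0]] + 1e-15), 1) for b in bins]

print(harmonic_db(system(x)))            # strong 2nd harmonic (−20 dB)
print(harmonic_db(system(predistorted))) # 2nd gone; 3rd and 4th appear
```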
When more complicated signals are distorted by nonlinear functions, it is known that harmonic distortion is a very small part of the overall picture: Brockbank and Wass determined analytically that, for a signal containing thirty harmonic products, the intermodulation distortion generated by a nonlinearity in the system comprises 99% of the total distortion power [2]. Full measurement and analysis of intermodulation distortion requires at least as many components in the input signal as harmonics that are under scrutiny.
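The scale of this effect can be illustrated with a multitone test (an illustration of this report, not a measurement from the lecture): thirty tones on prime-numbered FFT bins are passed through a mild polynomial nonlinearity, and the distortion power is then split into harmonic and everything-else (intermodulation plus d.c.) components.

```python
import numpy as np

N = 1 << 16
# Thirty tones on prime-numbered FFT bins, so that harmonics and
# intermodulation products rarely share a bin.
bins = [211, 223, 227, 229, 233, 239, 241, 251, 257, 263,
        269, 271, 277, 281, 283, 293, 307, 311, 313, 317,
        331, 337, 347, 349, 353, 359, 367, 373, 379, 383]
n = np.arange(N)
x = sum(np.cos(2 * np.pi * b * n / N) for b in bins) / len(bins)

y = x + 0.2 * x ** 2 + 0.1 * x ** 3        # mild memoryless nonlinearity
spec = np.abs(np.fft.rfft(y - x)) ** 2     # distortion residual, by bin

harmonic_bins = {k * b for b in bins for k in (2, 3)}
harmonic_power = sum(spec[b] for b in harmonic_bins)
print(f"harmonic share of distortion power: "
      f"{harmonic_power / spec.sum():.1%}")   # well under 10%
```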
This method and these observations take us to a practical example of the importance of harmonic phase. Keith advanced three case studies, the first of which demonstrates the point effectively; the other two highlight the opportunities for wider research.
Case study 1: Crossover distortion
In 1975, James Moir performed a series of listening experiments in which a Class AB amplifier was biased at different levels, and the audibility of the resulting distortion measured [3]. Keith Howard’s first attempt to reproduce these results using a waveshaping kernel was not effective: amounts of distortion that would have been perceived as unacceptable in the listening test were barely audible in practice. The generated transfer characteristic looks nothing like crossover distortion, and has very little effect on a low-amplitude signal.
However, by alternating the polarity of the harmonic partials while keeping them at the same level, a more familiar crossover-like characteristic is revealed.
For a full-deflection sine tone, the two kernels would measure exactly the same on a spectrogram or a THD+N meter, but they are clearly not the same. The alternating-polarity waveform reproduces the results of Moir’s test satisfactorily, and keeps the distortion components far higher as the amplitude falls. It also proves that, when we are analysing or modelling distortion, we are interested just as much in the waveshaping function as in the absolute levels of the harmonic partials.
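A sketch of that comparison using Chebyshev coefficients as harmonic weights (the −40dB levels are invented for illustration; they are not Moir’s or Keith’s figures). Two kernels share the same odd-harmonic magnitudes, but the alternating-polarity version develops a crossover-like gain notch around zero:

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

m = 10 ** (-40 / 20)                      # each harmonic at −40 dB
coef_same, coef_alt = np.zeros(10), np.zeros(10)
coef_same[1] = coef_alt[1] = 1.0          # unity fundamental (T1)
for order, sign in zip((3, 5, 7, 9), (+1, -1, +1, -1)):
    coef_same[order] = m                  # all partials the same polarity
    coef_alt[order] = sign * m            # alternating polarity

x0 = 1e-3                                 # probe the region around zero
print(Ch.chebval(x0, coef_same) / x0)     # ≈ 1.04: near-linear
print(Ch.chebval(x0, coef_alt) / x0)      # ≈ 0.76: crossover-like notch
```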
Case study 2: Hysteresis in transformers
In addition to the nonlinearities caused by saturation, audio transformers exhibit an asymmetrical transfer characteristic caused by their magnetic memory (hysteresis). This characteristic is frequency dependent, and it makes the distortion very difficult to model, because phase shift is introduced into the signal as well as waveshaping. Keith suggested a number of ways in which this could be incorporated into the distortion model in future, for example by using two waveshaping kernels in quadrature.
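One possible reading of that suggestion, sketched here as an assumption of this report rather than a published algorithm: obtain a quadrature (90°-shifted) copy of the signal with a Hilbert transform and give each path its own kernel, so that the composite transfer can be asymmetric and phase dependent.

```python
import numpy as np
from scipy.signal import hilbert

def quadrature_waveshape(x, k_inphase, k_quadrature):
    """Shape the in-phase and quadrature parts of `x` separately and
    sum them, approximating a phase-dependent nonlinearity."""
    analytic = hilbert(x)          # x + j·(Hilbert transform of x)
    return k_inphase(analytic.real) + k_quadrature(analytic.imag)

# e.g. y = quadrature_waveshape(x, lambda v: v + 0.05 * v ** 2,
#                                  lambda v: 0.02 * v ** 3)
```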
Case study 3: Loudspeaker distortion
The mechanisms that cause loudspeaker distortion are split into many different types: some, such as the cone or spider hitting their maximum excursion, are proportional to the displacement of the loudspeaker; some, such as eddy currents, are proportional to the force applied to the coil; others are proportional to cone velocity. The problem of modelling this distortion therefore falls into the same category as hysteresis in transformers: non-linearities act in different ways at different phases of the signal, and a static waveshaping kernel is clearly of limited use.
1. Hiraga, J. ‘Amplifier Musicality’. Hi-Fi News & Record Review. Vol. 22(3), March 1977, pp.41–45.
2. Brockbank, R. A., and Wass, C. A. A. ‘Non-linear distortion in transmission systems’. J. I.E.E., Vol. 92, III, 17, 1945, pp.45–56.
3. Moir, J. ‘Crossover Distortion in Class AB Amplifiers’. 50th AES Conference, March 1975. Paper number L-47.
Howard, K. ‘Weighting Up’. Multimedia Manufacturer, September/October 2005, pp.7–11.
Report by Ben Supper
Date: 8 Mar 2011
A group of leading mastering engineers discuss the latest techniques and the challenges they face.
A recording of the workshop is available here (66MB mp3)
Now that digital recording technology has superseded analogue, is it the ‘perfect sound forever’ that we were promised at the launch of the CD back in 1982?
A group of leading mastering engineers discuss a range of topics encompassing:
Synchronisation: How come it’s the sound that is always out of sync? Why is it not the pictures?
Dither: Is it important any more? Can we hear the difference? What changed?
Compression: How loud does it need to be? What is required for the best results when broadcasting or digitally distributing data-compressed files?
Creation of a future proof archive: Just which of those 37 files labelled ‘Master – Final Version’ is actually the master, and whose responsibility is it to keep a record of this information?
On the panel are:
Crispin Murray – Metropolis Mastering (Moderator)
Mazen Murad – Metropolis Mastering
Ray Staff – AIR Mastering
David Woolley – Thornquest
They will share some experiences and advice, along with (hopefully) some amusing anecdotes about what to avoid in order to produce the best results.