Meetings Archive – 2011
Date: 10 May 2011
Lecture by John Vanderkooy.
Active acoustic absorbers can replace low-frequency ‘passive’ absorption techniques. Passive techniques generally involve solving resonance problems in a room by introducing absorbing materials or resonant structures. General-purpose acoustic absorbers, comprising sheets of heavy material attached to rigid frames, are somewhat impractical in many rooms, because the size at which they become effective is between a quarter- and a half-wavelength of the frequency of interest — around six feet when treating a 50Hz resonance. Membrane absorbers and resonators are tuned to move when stimulated by certain frequencies, and hence to terminate standing waves. These can be relatively small, but because they react to only a narrow range of frequencies, several may be required to treat a room. We can alleviate serious problems in a room by equalising the loudspeakers that excite them, but this addresses only problems of sound pressure level. Equalisation cannot treat the equally insidious phase and reverberation-time discontinuities that afflict specific frequencies.
Active absorption is effected by positioning subwoofers or full-range loudspeakers strategically, and driving them with a specially-calculated signal that cancels a large range of frequencies, using less space and treating a greater range of frequencies than a passive absorber could.
A relatively simple and theoretically ideal example of this is the ‘delay and cancel’ scheme. Taking a rectangular room, we can place two loudspeakers 25% and 75% of the distance along a wall, and drive them coherently. The images of these loudspeakers reflected in the other walls are evenly spaced, creating a plane wave along the room. This can be cancelled at the rear of the room using a similar arrangement of loudspeakers, delayed appropriately. The room then becomes effectively anechoic at low frequencies.
Given a rectangular room of dimensions 8 × 7 × 3.5 metres and a reverberation time of one second, we can use the Sabine formula to calculate that the room contains 31.5 sabins (square metres of ideal absorption). The effective area of an active absorber is equal to:
A_abs = λ² / 4π
This is about 3.7 sabins at 50Hz (an extra 12% of absorption in this room), and 24 sabins at 20Hz (an extra 74%). The theoretical benefits of ideal active absorption are clear, but there are some practical difficulties. Firstly, an ideal loudspeaker is a point source radiator, and the wavefront that we want to treat is generally closer to a plane wave. How do we know that our absorber is not interacting elsewhere with the wave that we are attempting to absorb? Secondly, how does this treatment work in a room where the absorption signal itself is reflected?
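As a quick check of the figures above, a short calculation (a sketch only, using the textbook speed of sound c = 343 m/s and the Sabine constant 0.161) reproduces the quoted absorption areas:

```python
from math import pi

def absorber_area(freq_hz, c=343.0):
    """Effective absorption area of an ideal active absorber,
    A_abs = wavelength^2 / (4 * pi), in sabins (m^2)."""
    wavelength = c / freq_hz
    return wavelength ** 2 / (4 * pi)

def sabine_absorption(volume_m3, rt60_s):
    """Total room absorption from the Sabine formula,
    RT60 = 0.161 * V / A, rearranged for A."""
    return 0.161 * volume_m3 / rt60_s

room = sabine_absorption(8 * 7 * 3.5, 1.0)   # the 8 x 7 x 3.5 m room above
for f in (50.0, 20.0):
    a = absorber_area(f)
    print(f"{f:5.0f} Hz: {a:4.1f} sabins, +{100 * a / room:.0f}%")
```

Running this gives roughly 3.7 sabins (+12%) at 50Hz and 23.4 sabins (+74%) at 20Hz, matching the figures in the report.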
John Vanderkooy’s derivation of the driving voltage for an acoustic absorber was performed very rapidly at the lecture, but the brief answer is that both conditions are met without difficulty. The emerging formula for the absorbing signal is:
q(t) = (2πc/ρ) ∫∫ p(r,t) dt dt
Where q(t) is the desired volume velocity of the active absorber loudspeaker. Thus, the cone velocity of the absorber is proportional to the double time integral of the pressure at the loudspeaker from the room. To produce this volume velocity we could use a velocity-sensing coil on the loudspeaker in feedback, for example. The pressure p(r,t) must not be contaminated by the absorber signal itself, so we must know the absorber response and subtract it from the microphone signal. Eliminating the self-pressure of the loudspeaker (caused by its provision of the absorbing signal), and shaping the output transfer function to be both stable and correct for the loudspeaker is a significant challenge.
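As an illustration only, not Vanderkooy's implementation, a discrete-time double integrator of the kind the formula implies might look like the sketch below, with the physical constant 2πc/ρ folded into a single hypothetical gain k:

```python
def absorber_drive(pressure, fs, k=1.0):
    """Running double time-integral of the external pressure,
    after q(t) = (2*pi*c/rho) * double-integral of p(r,t).
    k stands in for the physical constant 2*pi*c/rho; 'pressure'
    must already have the absorber's own self-pressure subtracted,
    as described above."""
    dt = 1.0 / fs
    first = 0.0    # integral of pressure
    second = 0.0   # integral of the first integral
    drive = []
    for p in pressure:
        first += p * dt
        second += first * dt
        drive.append(k * second)
    return drive
```

In practice a pure double integrator drifts, so its low-frequency gain would need limiting, and the loudspeaker's own transfer function must be corrected for: the significant challenge mentioned above.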
In summary, active absorption is an acoustically valid way of treating low-frequency problems in real rooms, but there are considerable practical difficulties in doing it well. One practical barrier is the necessity for near zero-latency analogue-to-digital conversion and DSP in order to suppress the local absorber signal, read the instantaneous external pressure from the room, react to it, and hence calculate the desired absorption signal.
Report by Ben Supper
Date: 3 May 2011
Lecture by Gary Spittle, Cambridge Silicon Radio.
Date: 17 May 2011
Lecture by Simon Humphrey, The ChairWorks Studio.
Simon Humphrey is a recording engineer and producer based at the ChairWorks, a thriving independent recording facility in Castleford, near Leeds.
Simon will discuss the burning questions that ‘float’ around today when it comes to the role of the engineer, producer and recording studio. He will tackle that all-too-often-questioned subject of recording studios and their validity in a modern recording industry – ‘why do studios matter?’
Using the ChairWorks Studio as a blueprint, Simon will discuss ‘how they do it’. While some may think studios are the ‘kiss of death’, we will hear, perhaps most importantly, ‘why do they do it?’
Moving on to studio equipment, Simon puts forward the question, ‘Mixing out of the box, the new way?’ Computers could have consigned analogue mixers and the outboard rack to history, but they haven’t. Simon will discuss rack versus plug-ins, and explain how the two work together on a mix.
Simon’s experience working in education has given students a privileged view of the inner workings of the music industry, and it seems students everywhere ask the same three ‘golden questions’. Simon will discuss and give his answers to these.
Throughout, Simon will be talking about his career over the past almost 40 years. It is an invaluable look, through the decades, at the career of a seasoned engineer whose work many see as a benchmark of engineering today.
Date: 25 May 2011
Lecture by Nick Durup, Sharps Redmore Partnership.
Date: 14 Jun 2011
Lecture and demonstration by John Lambert and Geert Bevin.
Developed in the UK, the Eigenharp represents a significant departure from previous electronic instruments. A wholly new sensor design, software system and physical form factor combine to make a highly expressive and inspiring experience for the musician who wishes to use software synthesis and sampling. The instrument has a growing following, and a number of important artists now own them, including film composer Hans Zimmer, Grammy award winner Imogen Heap and saxophonist Courtney Pine.
The talk will introduce the instrument and demonstrate its capabilities, before exploring the engineering challenges that were encountered and solved as part of its eight-year development process. The sensor design, physical layout and communications system will be described, along with a discussion of the importance of different types of software instruments, the emerging Open Sound Control standard and the use and limitations of MIDI in expressive environments.
John Lambert will be accompanied by Geert Bevin, a senior software engineer and musician at Eigenlabs who will be demonstrating and playing the instrument.
Time permitting, there will be an opportunity to try an Eigenharp at the end of the talk.
Date: 21 Jun 2011
Lecture by Walter Samuel.
Walter Samuel will talk about his life working as an engineer since the 1970s. He will explore how the role of the recording engineer has changed over the years and his experiences in the music industry. This is a rare insight into the career of such a distinguished practitioner. Walter will also demonstrate his recording techniques using DPA microphones.
Walter’s talk will be followed by a guided tour around The Chairworks, a rare opportunity to see behind the scenes of this highly specified studio complex.
Date: 29 Jun 2011
Lecture by Paul Malpas, Engineered Acoustic Designs.
Paul Malpas has over 20 years’ experience working within acoustic consultancy. For the majority of this time he has operated as a specialist designer responsible for the delivery of electroacoustic, audio system and speech intelligibility projects and solutions within multi-disciplinary design environments. Paul’s seminar will involve, amongst other things, discussion and demonstration of high-amplification speech systems for crowd management.
As always non-AES members are welcome. Don’t forget to interact with us on Facebook too at www.facebook.com/aescambridge
Date: 19 Apr 2011
Lecture by Roger Quested.
**Free shuttle bus from PLASA Focus. Contact email@example.com to book your place.**
Roger Quested will use his knowledge of the world’s top studios and recordings to explain how to make a studio monitoring system produce the best possible surround mixes. Demonstrating on his company’s own Quested 5.1 V3110 System, he will discuss i) positioning of speakers and subwoofers, ii) the LFE channel and how best to integrate subwoofers into the system and environment, and iii) common mistakes to be avoided.
With experience of surround systems gained from working with such names as Hans Zimmer (Gladiator, Pearl Harbour, The Dark Knight), Hackenbacker Studios (Downton Abbey, Spooks, Shaun Of The Dead) and Trevor Horn, this will be a chance for those hoping to maximise their surround mixes to understand the complex monitoring elements that affect all studio environments.
Date: 12 Apr 2011
Lecture by Keith Howard
It is a truism that harmonic distortion affects the perceived quality of an audio signal. It is less readily accepted that such distortion may sometimes be pleasant. In 1977, Hiraga’s article ‘Amplifier Musicality’ [1] controversially suggested that certain kinds of harmonic distortion may improve the perceived quality of a Hi‑Fi system. This notion is now dubbed ‘euphonic distortion’, although more than thirty years later, few, if any, new insights exist on the subject.
Less controversially, many recording engineers insist on specifying equipment that introduces certain types of harmonic distortion at high input levels, and then overdriving it: valve amplifiers and analogue tape, for example. The effect of this distortion is not obvious, but it imparts a diaphanous quality of warmth or complexity. Other types of harmonic distortion, such as Class B amplifier crossover distortion, are undoubtedly dysphonic: even tiny amounts of crossover distortion are audible, and very unpleasant.
Keith Howard has measured, characterised and emulated harmonic distortion in certain situations. This research led to a number of important conclusions. One of these gives this lecture its title: it is not sufficient to record only the level and spectrum of harmonic distortion to reproduce it effectively. Rendering the harmonic phase correctly is just as important.
The distortion algorithm that Keith Howard uses in his experiments is based around a waveshaping kernel. This is a mapping function which changes a certain input sample value to an output sample value. Being time and frequency invariant, this is a simplification of a class of systems that are commonly applied for non-linear signal processing. The mapping function may be controlled, using any of a number of methods, to produce a certain pattern of harmonics for an input sinusoid of a certain amplitude. To add a second harmonic, for example, the following trigonometric identity is used as a starting point:
2 cos² x − 1 = cos 2x
So 2x² − 1 is the waveshaping kernel function, from which the d.c. on the output must be filtered if anything but a full-amplitude sinusoid is presented.
For the third harmonic, a different identity is used:
4 cos³ x − 3 cos x = cos 3x
So 4x³ − 3x is used as a waveshaping kernel function to generate the third harmonic.
In waveshaping, the amplitude of a harmonic falls faster than that of the input signal, so that attenuating the input (in this case, by 3dB) changes the shape of the output wave.
The generation of wave shaping functions may also be performed iteratively using Chebyshev polynomials.
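To illustrate, a minimal sketch of Chebyshev-based kernel generation (using the standard recurrence T₀ = 1, T₁ = x, Tₙ₊₁ = 2xTₙ − Tₙ₋₁; not taken from AddDistortion):

```python
from math import cos

def cheb(n, x):
    """Chebyshev polynomial T_n(x) via the recurrence
    T_0 = 1, T_1 = x, T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x).
    As a waveshaping kernel it satisfies T_n(cos t) = cos(n*t),
    i.e. it maps a full-amplitude sinusoid to its nth harmonic."""
    prev, cur = 1.0, x
    if n == 0:
        return prev
    for _ in range(n - 1):
        prev, cur = cur, 2 * x * cur - prev
    return cur

x = cos(0.3)
print(cheb(2, x), 2 * x * x - 1)        # matches the 2x^2 - 1 kernel
print(cheb(3, x), 4 * x ** 3 - 3 * x)   # matches the 4x^3 - 3x kernel
print(cheb(3, x), cos(3 * 0.3))         # and yields the third harmonic
```

The recurrence reproduces the two kernels derived above and extends the same construction to any harmonic order.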
Keith’s method of designing and applying waveshaping distortion is encapsulated in a free program called AddDistortion, available from the freeware page of his web site.
Beyond the second and third harmonics, the fractions of each order of polynomial become strongly interdependent. For any input signal that is not a sinusoid at a certain predetermined amplitude, it is not possible to add a fourth harmonic without also introducing a second harmonic. The same is true for any harmonic beyond the third. Also, because the distortion kernel is derived from a series of continuous functions, discontinuities such as corners or jumps in the transfer characteristic cannot be modelled. A final complication is that the signal must be interpolated before waveshaping and decimated afterwards. This prevents aliasing distortion from occurring when the upper harmonics pass the Nyquist limit.
The ramifications of these limitations are significant. For example, we could attempt to correct a system that distorts audio in a known way, by applying pre-distortion to the input. However, this results in problems. If the system introduces a second harmonic, we might generate this harmonic in antiphase in the input so that it cancels the distortion product. However, the second harmonic introduced in the input will itself be distorted by the system, and will generate a fourth harmonic in the output, and very likely a third harmonic as an intermodulation product. We have eliminated the second harmonic, but possibly made the problem somewhat worse. If we then anticipate the fourth harmonic, there will then be an eighth harmonic in the output, and so on. Such correction cannot therefore be performed using analogue circuitry. This rule was often advanced in the argument against the use of corrective feedback when the debate raged in the Hi-Fi community a few decades ago. However, a correct transfer characteristic may be carefully derived in the digital domain by generating a true inverse function, which is effective at least until a certain maximum frequency is reached.
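A toy numerical example (the quadratic system and all constants here are invented for illustration) shows why naive antiphase pre-distortion cannot fully succeed, while a true digital inverse can:

```python
def system(x, eps=0.2):
    """Toy memoryless nonlinearity: a linear term plus a
    second-order (second-harmonic-generating) term."""
    return x + eps * x * x

def naive_predistort(x, eps=0.2):
    """Subtract the anticipated second-order product; the
    correction term is itself re-distorted by the system."""
    return x - eps * x * x

def inverse_predistort(x, eps=0.2, iters=30):
    """True inverse: solve u + eps*u^2 = x by fixed-point
    iteration, so that system(inverse_predistort(x)) ~ x."""
    u = x
    for _ in range(iters):
        u = x - eps * u * u
    return u

x = 0.5
print(system(naive_predistort(x)) - x)    # a residual error remains
print(system(inverse_predistort(x)) - x)  # essentially zero
```

The naive correction leaves higher-order residue, exactly as the argument above predicts; the iterated inverse drives the error to numerical precision.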
When more complicated signals are distorted by nonlinear functions, it is known that harmonic distortion is a very small part of the overall picture: Brockbank and Wass determined analytically that, for a signal containing thirty harmonic products, the intermodulation distortion generated by a nonlinearity in the system comprises 99% of the total distortion power [2]. Full measurement and analysis of intermodulation distortion requires at least as many components in the input signal as harmonics that are under scrutiny.
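A simple count of spectral lines (not of power, so it only gestures at Brockbank and Wass's analytical result) makes the point that intermodulation products vastly outnumber harmonic ones:

```python
from itertools import combinations

def second_order_products(n_tones):
    """For n incommensurate input tones through a second-order
    nonlinearity: n harmonic products (2*f_i) versus the sum and
    difference tones (f_i + f_j and |f_i - f_j|) for each pair."""
    n_pairs = len(list(combinations(range(n_tones), 2)))
    return n_tones, 2 * n_pairs

harm, imd = second_order_products(30)
print(harm, imd)  # 30 harmonic products, 870 intermodulation products
```

Even at second order alone, a thirty-tone signal produces 870 intermodulation lines against only 30 harmonic ones; higher orders tip the balance further.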
This method and these observations take us to a practical example of the importance of harmonic phase. Keith advanced three case studies, the first of which demonstrates the point effectively; the other two highlight the opportunities for wider research.
Case study 1: Crossover distortion
In 1975, James Moir performed a series of listening experiments in which a Class AB amplifier was biased at different levels, and the audibility of the resulting distortion measured [3]. Keith Howard’s first attempt to reproduce these results using a waveshaping kernel was not effective: amounts of distortion that would have been perceived as unacceptable in the listening test were barely audible in practice. The generated transfer characteristic looks nothing like crossover distortion, and has very little effect on a low-amplitude signal.
However, by alternating the polarity of the harmonic partials but keeping them at the same level, a more familiar characteristic is revealed:
For a full-deflection sine tone, these would measure exactly the same on a spectrogram or a THD+n meter, but they are clearly not the same. The resulting waveform reproduces the results of Moir’s test satisfactorily, and keeps the distortion components far higher as the amplitude falls. It also proves that when we are analysing or modelling distortion, we are interested just as much in the waveshaping function as in the absolute levels of the harmonic partials.
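A small numerical sketch (the harmonic levels here are invented, not Moir's or Keith's data) shows the underlying point: two sets of partials with identical magnitudes but different polarities share a magnitude spectrum yet describe different waveforms, and hence different waveshaping functions:

```python
from math import cos, pi

def wave(t, amps):
    """Sum of cosine harmonics; amps[k] is the (signed)
    amplitude of harmonic k + 1."""
    return sum(a * cos(2 * pi * (k + 1) * t) for k, a in enumerate(amps))

in_phase = [1.0, 0.3, 0.15, 0.08]        # all partials positive
alternating = [1.0, -0.3, 0.15, -0.08]   # even harmonics inverted

# Identical magnitude spectrum...
assert all(abs(a) == abs(b) for a, b in zip(in_phase, alternating))
# ...but clearly different time-domain waveforms:
max_diff = max(abs(wave(i / 200, in_phase) - wave(i / 200, alternating))
               for i in range(200))
print(max_diff)
```

A THD+n meter, which discards phase, cannot distinguish the two; the waveshaping function can.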
Case study 2: Hysteresis in transformers
In addition to the nonlinearities caused by saturation, audio transformers exhibit an asymmetrical transfer characteristic caused by their magnetic memory (hysteresis). As well as being frequency dependent, this characteristic makes modelling the distortion very difficult, because phase shift is introduced into the signal as well as wave shaping. Keith suggested a number of ways in which this could be incorporated into the distortion model in future, by using two waveshaping kernels in quadrature.
Case study 3: Loudspeaker distortion
The mechanisms that cause loudspeaker distortion are split into many different types: some, such as the cone or spider hitting their maximum excursion, are proportional to the displacement of the loudspeaker; some, such as eddy currents, are proportional to the force applied to the coil; others are proportional to cone velocity. The problem of modelling this distortion therefore falls into the same category as hysteresis in transformers: non-linearities act in different ways at different phases of the signal, and a static waveshaping kernel is clearly of limited use.
1. Hiraga, J. ‘Amplifier Musicality’. Hi-Fi News & Record Review. Vol. 22(3), March 1977, pp.41–45.
2. Brockbank, R. A., and Wass, C. A. A. ‘Non-linear distortion in transmission systems’. J. I.E.E., Vol. 92, III, 17, 1945, pp.45–56.
3. Moir, J. ‘Crossover Distortion in Class AB Amplifiers’. 50th AES Conference, March 1975. Paper number L-47.
Howard, K. ‘Weighting Up’. Multimedia Manufacturer, September/October 2005, pp.7–11.
Report by Ben Supper
Date: 8 Mar 2011
A group of leading mastering engineers discuss the latest techniques and the challenges they face.
A recording of the workshop is available here (66MB mp3)
Now that digital recording technology has superseded analogue, is it the ‘perfect sound forever’ that we were promised at the launch of the CD back in 1982?
A group of leading mastering engineers discuss a range of topics encompassing:
Synchronisation: How come it’s the sound that is always out of sync? Why is it not the pictures?
Dither: Is it important any more? Can we hear the difference? What changed?
Compression: How loud does it need to be? What is required for the best results when broadcasting or digitally distributing data compressed files?
Creation of a future proof archive: Just which of those 37 files labelled ‘Master – Final Version’ is actually the master, and whose responsibility is it to keep a record of this information?
On the panel are:
Crispin Murray – Metropolis Mastering (Moderator)
Mazen Murad – Metropolis Mastering
Ray Staff – AIR Mastering
David Woolley – Thornquest
They will share some experiences and advice, along with hopefully some amusing anecdotes of what to avoid in order to produce the best results.
Date: 5 Feb 2011
The symposium will be held on Saturday 5th February 2011 at the National Film and Television School, Beaconsfield, starting at 9.00am and finishing at 5.30pm. It is aimed at audio engineers seeking to broaden their horizons and students wishing to get a flavour of the wide range of visuals-related disciplines that make up today’s audio industry.
For more information about the event, the provisional programme and details of the various ways to register, please visit the A4V: Audio for Visuals web page.
Date: 11 Jan 2011
Lecture by Dr. Josh Reiss, Senior Lecturer, Centre for Digital Music, Queen Mary University of London.
A recording of the lecture is available here (81MB mp3)
The tools of our trade have transformed in the last twenty years, but the workflow of a mixing engineer is almost the same. A large proportion of the time and effort spent mixing down a multitrack recording is invested not in the execution of creative judgement, but in the mundane manipulation of equalisers, dynamics compressors, panning, and replay levels, so that the timbre and blend of individual channels is correct enough to attempt a balance.
There are two good reasons why much of this work has not already been automated. The first is that the task is not trivial: it is a highly parallel and cross-adaptive problem, and the correct value for every setting will depend to some extent on every other. The second reason is a resistance from those who assume that automating the mixdown process will either remove the requirement for a skilled hand and ear, or result in lazy use of automation to the extent that their careers or their integrity will be threatened. To make all music sound the same is not the goal of automation. Rather, automatic mixing will speed up the repetitive parts of an engineer’s job so that more effort can be expended on the art of production.
We need only look at the evolution of digital cameras to see what could be possible with audio. A typical consumer camera of twenty years ago would have had a fixed focal length and aperture, and perhaps an adjustable shutter speed. Now, multi-point auto-focus is a standard feature, the exposure time, aperture, and colour balance are adjusted automatically, a digital signal processor ameliorates camera shake, and so on. Poor shots may be recognised and retaken as many times as is necessary, because the photographer can immediately view their photograph. In spite of these enhancements, professional photographers still exist, and still need to be taught about the optics and anatomy of a camera. However, the emphasis of photographic discipline has shifted towards the creative side of the profession: there is less time spent setting up the camera and developing exposures, and more time in perfecting the technique and shot, and retouching the images.
There are, broadly speaking, four kinds of automatic sound processing tool:
Adaptive processing. Adaptive processes adjust instantaneously to the material that is being played through them. De-noisers and transient shapers are adaptive in nature.
Automatic processing. Automatic processes place some aspects of operation under user control, and make intelligent guesses about the positions of other controls. The ‘automatic’ mode on a dynamics compressor is such an example.
Cross-adaptive processing. A cross-adaptive tool must be aware of, and react to, every signal within the system. For example, the automatic level control on a public address system that adjusts to the ambient noise level may be cross-adaptive.
Reverse engineering tools. Deconstruction of a mix for historical reasons would involve taking the multitrack session master and the stereo master, and determining which processes must be applied to the former to derive the latter. It would be useful to automate some of this.
Adaptive mixing tools require two components: an accumulative feature extraction process, and a set of constrained control rules. Much of the difficulty of getting these tools right is in obtaining the correct information from the audio in the first place: to detect, for example, the pattern of onsets, the correct loudness, and thus precise masking information. The target for an equaliser can then be to reduce temporal and spectral masking, rather than to aim for a flat frequency response. Panning can be used to reduce spatial masking. A compressor can be inserted when the probability of a particular instrument being heard falls below a certain threshold, and it can be boosted to have a certain average loudness without its peak loudness exceeding a higher threshold.
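As a crude illustration of the cross-adaptive idea (equal RMS level stands in for the perceptual loudness and masking measures described above, and the signals are invented):

```python
from math import sqrt

def rms(samples):
    """Root-mean-square level of one track."""
    return sqrt(sum(s * s for s in samples) / len(samples))

def auto_faders(tracks, target=0.1):
    """Cross-adaptive fader settings: each gain depends on the
    level of its own track relative to a common loudness target,
    so every setting reacts to the material it is fed."""
    return [target / rms(t) for t in tracks]

loud = [0.5, -0.5, 0.5, -0.5]      # a hot track
quiet = [0.05, -0.05, 0.05, -0.05]
gains = auto_faders([loud, quiet])
print(gains)  # the quiet track is boosted, the loud one pulled down
```

A real system would replace RMS with loudness and masking models, and would constrain the gains with rules of the kind described above, but the feature-extraction-then-control structure is the same.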
Dr. Reiss played some examples of automated mixing from the Centre for Digital Music, showing us the system element by element. First, each instrument was manipulated in isolation. Then an automatic fader balance was performed. Finally, with one button, the compressor, equaliser, panning, and fader settings were set up for an entire multitrack jazz recording. The result was surprisingly effective, although the automatic nature of the balancing was clear. The vocals, for example, were somewhat quieter than custom usually allows, and the mix was equalised to a fairly flat spectrum whereas most commercial music is boosted at the top and bottom ends. Nevertheless, the power of automated mixing was effectively demonstrated – the result was perfectly reasonable for a monitor mix and, as the algorithms are perfected, the results will certainly improve further.
Suggestions and examples of other automatic tools were shown: an eliminator of feedback for live sound, which set itself the target of keeping the loop gain of the system below 0dB in every frequency band. It achieved this by finding the transfer function of the system and calculating its inverse. A plug-in for automatically correcting inter-channel delay was also demonstrated, which successfully reduced the artefacts created by spill between one microphone and another. The aim of these tools is again to free up the balance engineer’s hands and mind for the more creative aspects of live sound engineering.
The scope for further work in refining these tools is clear, although they already work impressively well. Informal blind testing has shown that it is hard to discern the automated mixes from those executed by students (at least, in short excerpts). In an act of subterfuge, Dr Reiss entered an automated mixdown into a student competition, and confessed his crime only after the competition was judged. Although the mix failed to win a place in the competition, it also failed to arouse the judges’ suspicion. Inevitably, technology will soon change our craft beyond recognition. Fortunately for us, the researchers appear no closer to developing a substitute for talent.
Report by Ben Supper
Date: 31 Mar 2011
Lecture by Dr Richard Hoadley and Sam Aaron.
Transducers for converting physical data into digital data and algorithmic procedures for generating and controlling audio have existed for many years, but it has been only recently that affordable systems and products able to unite the two have become widely accessible. This paper examines the development of hardware and software systems designed to explore the nature of movement and gesture in musical creation, performance and expression.
Our experience suggests that when we move, any resulting actions will reflect that behaviour. In digital systems that can detect these actions there is no direct causal link between event and action: any mapping has to be specifically implemented and might therefore be referred to as ‘metaphorical’. Such implementations can include desirable features that would be difficult or impossible to implement in reality. Such features have been described as ‘magical’.
This paper describes and analyses work seeking to investigate the amalgamation of these two areas in practice-led activities, where ‘magic’ and ‘delight’ equate with musical qualities usually considered to be aesthetic rather than technical. Music performance in particular uses unique methods of articulating and implementing expressive gesture through physical interaction with objects. Similar undertakings by other performers (such as dancers) and fine artists are also considered.
Richard Hoadley is a composer affiliated to the Digital Performance Laboratory at Anglia Ruskin University.
Sam Aaron leads Improcess, a collaborative research project exploring the combination of powerful sound synthesis techniques with tactile and linguistic user interfaces to build new forms of musical device with a high capacity for improvisation. His current main avenue of exploration is through the use of the monome, a grid of backlit buttons capable of bi-directional communication, and Overtone, a novel Clojure front-end to the SuperCollider server.
Date: 13 Dec 2011
Lecture by James Lewis and Neil Harris, HiWave Technologies.
A recording of the lecture is available here (34MB mp3)
The lecture is divided into two parts: the first from Chief Scientist Neil Harris and the second given by the CEO of HiWave Technologies (formerly NXT), James Lewis.
Neil explores issues relating to multi-modal human-computer interfaces. These employ our senses of sight, hearing and touch in concert. The development of more intuitive user interfaces demands an improved understanding of sensory fusion. The cost-effective provision of tactile feedback in such interfaces brings the emerging field of psychohaptics into focus. Examples will be given of 2D panel-based interfaces that incorporate bending-wave haptics technology.
James Lewis talks about the strategy of combining expertise in actuator design and electronics design under one roof. This joined-up thinking opens the door to commercial exploitation of previously unexplored technical synergies. Focusing on the consumer electronics market, examples will be given of audio products and touch input devices that can benefit from such synergies.
Date: 2 Dec 2011
Lecture by Dr Bruno Fazenda, University of Salford.
Stonehenge is the largest and most complex ancient stone circle known to mankind. In its original form, the concentric shape of stone rings would have surrounded an individual both visually and aurally. It is an outdoor space and most archaeological evidence suggests it did not have a roof. However, its large, semi-enclosed structure, with many reflecting surfaces, would have reflected and diffracted sound within the space, creating an unusual acoustic field for Neolithic man.
This presentation describes acoustic measurement studies taken at the Stonehenge site in the United Kingdom and at a full size and fully reconstructed replica site in Washington State, USA. The aim of the research is to understand the acoustics of this famous stone circle and discuss whether it would have had striking effects for its users in Neolithic times.
Features such as impulse response, reverberation time, reflections and speech transmission index will be presented and used to discuss the existence or otherwise of audible effects such as flutter echoes, low frequency resonances and whispering gallery effects. A description of an auralisation system based on ambisonic and wave field synthesis technology will be given. A stereo rendition of the sound of Stonehenge will then be presented to the audience.
After the lecture there will be a visit to the Christmas Market in the City Centre.
Date: 28 Nov 2011
Lecture by Dr Rob Toulson, Research Fellow, The Cultures of The Digital Economy Research Institute, ARU.
Digital culture has transformed our world, and is still changing the way we live. We are surrounded by technology: wireless networks, mobile devices, access to more data and information than we could ever need. One aspect of the government’s Digital Economy Act is aimed at ensuring that we make the most of this technology, as many devices are over-engineered for their initial design purpose. Mobile and wireless devices are being used for far more than their initial purpose, allowing remote access to technology, art and data. We are surrounded by digital hardware, so it is now possible to rapidly develop applications to generate wealth, improve efficiency and enhance our lives – this is the growing culture of our digital economy.
The hardware and infrastructure are already in place, so the cost is purely in innovating new ways to utilise the technology that surrounds us. We use our mobile systems for global positioning applications, audio and video processing, remote data and information retrieval, and even self-diagnosis for healthcare and medical applications. The present opportunity for innovation is at the crossroads where technology meets culture and creativity, and government-driven research and knowledge transfer funding is available to encourage this.
This lecture, a collaborative venture organised by CoDE, the Audio Engineering Society and Creative Front, will present a number of case studies in fields related to digital culture, to act as a foundation for brainstorming and collaboration between Cambridge innovators from academia and industry. The lecture will be followed by a sponsored wine reception and networking opportunity.
18:00 – Registration and seating for lecture
18:30 – Lecture by Dr Rob Toulson
19:45 – Wine reception and networking
21:00 – Close
Date: 27 Oct 2011
Lecture by Daniel Halford.
Daniel Halford has over 10 years’ experience working as a recording engineer and has made over 40 professional releases with a contemporary classical record label and also for various independent music groups. Several of his recordings have received critical acclaim for their sound quality.
He also lectures in music and sound design technology at the University of Hertfordshire and has taught the same subject at Southampton University. Daniel’s interests in audio and music are wide and varied and, in addition to his work as a recording engineer, Daniel works freelance on a wide range of projects including music technology workshops, video and mixed media work, live sound engineering, composition, radio production and sound design.
Date: 18 Oct 2011
Lecture by John Watkinson.
We regret that no recording is available of this lecture.
Despite enormous progress in understanding how the human auditory system works, most present-day loudspeakers cling to outmoded and discredited techniques that have not changed in decades.
The availability of advanced materials and design tools means that the task of advanced speaker design has never been easier, but the necessary steps simply are not taken.
This presentation will look at the criteria for accurate sound reproduction and will show that these criteria can be met. Demonstrations of some alternative loudspeaker designs will be given.
Date: 8 Nov 2011
A recording of Prof Mark Plumbley’s and Dr Josh Reiss’s introduction to the work of the C4DM is available here (12MB mp3)
The Centre for Digital Music (C4DM) at Queen Mary University of London is a world-leading multidisciplinary research group in the field of Music & Audio Technology. Since its founding members joined Queen Mary in 2001, the Centre has grown to become arguably the UK’s leading Digital Music research group. With its broad range of skills and a strong focus on making innovation usable, the Centre for Digital Music is ideally placed to work with industry leaders in forging new business models for the music industry.
C4DM has over 50 full-time members, including academic staff, research staff, research students and visitors. While the Centre for Digital Music has an underpinning in digital signal processing and computer science, it includes academic and research staff with a much wider range of skills. Many of the academic and research staff and students are themselves active musicians, and are connected into several different aspects of the music community. It is also experiencing rapid growth, and has recently hired four more permanent, research-active academic staff: two lecturers and two professors.
Our projects span many different disciplines, including Music Informatics, Machine Listening, Interactional Sound and Audio Engineering. We emphasize adventurous and trans-disciplinary research, pushing the boundaries of DSP, computer science, philosophy and psychology. We investigate topics such as music information retrieval, music scene analysis, semantic audio processing, object-based audio coding, human machine interaction and digital performance. Most of our research targets real users, seeking to build new algorithms into usable and useful software, realizing both that with public funding we have a duty to take research results to the wider public, and also that their engagement with us helps to take our research in new directions.
At this Cutting Edge Research evening, we will showcase our research with a collection of posters and demonstrations covering such diverse topics as Music 2.0: semantic web applications, audio source separation, music information retrieval, intelligent systems for sound engineering, automatic mixing, microphone artefact removal, audio score alignment, polyphonic music transcription and many more.
For additional information please visit: http://www.elec.qmul.ac.uk/digitalmusic
This is a great opportunity for our members with industry experience to discuss, offer advice and opinions, and perhaps even challenge the research conducted by these audio industry practitioners of the future. We would particularly encourage our Sustaining Members to participate in this event.
Date: 15 Sep 2011
For further details please go to the ‘News’ section.
Click here to download a .pdf of the full agenda.
Chameleon subwoofer arrays — Adaptable loudspeaker polar responses for improved control in the low-frequency band
Date: 11 Oct 2011
A recording of the lecture is available here (37MB mp3)
Lecture by Adam Hill.
Current work in the Audio Research Laboratory (ARL) at the University of Essex involves the development of a flexible, networked low-frequency response control system, termed a chameleon subwoofer array (CSA) due to its intrinsic capability of adapting to its acoustical surroundings. CSA technology provides multiple degrees of freedom for system correction, achieved either with multi-component subwoofers or with standard single-drive-unit subwoofers calibrated automatically.
CSA correction lends itself to a wide range of applications. Specifically, small-room home theater use will be highlighted, with the goal of minimizing spatial variation of the frequency response over a defined listening area, or alternatively of providing individualized control at a collection of points. Additionally, large-scale live sound applications will be discussed, jointly targeting spatial variation minimization in the audience area and low-frequency energy attenuation on stage.
Simulated examples will be presented using the ARL Finite-Difference Time-Domain (FDTD) acoustics simulation toolbox (available online: http://www.adamjhill.com/fdtd), along with a discussion of important considerations concerning the practical implementation of a CSA, including experimental results from the prototype system.
One of these practical implementation considerations addresses the common difficulty of controlling particular narrow frequency bands due to a system’s physical configuration. This problem necessitates a supplementary room correction technique exploiting the psychoacoustical capabilities of virtual bass (synthesis toolbox available online: http://www.adamjhill.com/vb), whereby the problematic narrow bands are filtered from the signal and replaced by harmonic components, thus subjectively reinforcing the signal in the missing bands. Common time-domain and frequency-domain virtual bass synthesis methods will be highlighted, along with a novel hybrid technique based on transient content detection; all have been evaluated in previously conducted listening tests.
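The missing-fundamental principle behind virtual bass can be illustrated with a toy sketch. This is not part of Adam Hill’s toolbox: the FFT masking approach and the 1/k harmonic weighting are illustrative assumptions only.

```python
import numpy as np

def virtual_bass(signal, fs, f_lo, f_hi, n_harmonics=3):
    """Toy virtual bass: remove a problem band by zeroing FFT bins,
    then reinsert its energy at harmonic positions (2f, 3f, ...)."""
    n = len(signal)
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    out_spec = spec.copy()
    out_spec[band] = 0.0  # notch the problematic narrow band
    # Reinsert each removed bin at its harmonics with 1/k weighting
    # (an arbitrary illustrative choice, not a perceptual model).
    for i in np.where(band)[0]:
        for k in range(2, 2 + n_harmonics):
            j = i * k
            if j < len(out_spec):
                out_spec[j] += spec[i] / k
    return np.fft.irfft(out_spec, n)
```

The ear’s tendency to infer a fundamental from its harmonic series is what makes the substituted components read, subjectively, as the missing bass.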
Date: 12 Jul 2011
A recording of the lecture is available here (145MB mp3)
Lecture by William McVicker.
William McVicker, organ curator at the Royal Festival Hall and an organ consultant, will provide an overview of the relationship between the acoustician and the architect, with special reference to the Royal Festival Hall.
The question of room acoustics and music will be explored, along with the ways in which our appreciation and understanding of live music has changed, as have the views of architects and acousticians.
Date: 8 Feb 2011
Lecture by Brian Gibson, TG Electronics.
A recording of the lecture is available here (52MB mp3)
Origins of the REDD51 Development
The Record Engineering Development Department was set up in 1955 under Len Page to develop stereo recording technologies. Up until then the simple 8-input mono consoles had required two sizeable racks of equipment (amplifiers, pre-amps, power supplies etc.). Following developments at EMI in Germany, the REDD17 console was built in 1958 with all the electronics integrated, with M+S working on channels 1/2 and 7/8 of the 8 channels – this model was only ever used for mobile recordings, and never in Abbey Road. The console was then expanded to have 4-channel monitoring to work with the EMI 4-track tape machine, and became the REDD37. These used the Siemens and Halske cassette-format 40 dB V72S amplifier. EMI then developed the REDD47 amplifier to replace the V72S, in order to be more self-reliant as a company and to reduce production costs – however these used five times the power, and consequently produced five times the heat! This combination became the REDD51 console, installed from 1963 onwards; two were installed at EMI’s studios in Abbey Road. Console production numbers are uncertain, as a number of consoles were built for other territories.
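The M+S (mid–side) working mentioned above records a sum and a difference signal rather than left and right, with a ‘spreader’ stage recovering the stereo pair. A minimal sketch of the principle (matrix scaling conventions vary; the 0.5 factors here are one common choice, not EMI’s documented values):

```python
def ms_encode(left, right):
    """Encode an L/R stereo pair into mid (M) and side (S) signals."""
    mid = 0.5 * (left + right)
    side = 0.5 * (left - right)
    return mid, side

def ms_decode(mid, side):
    """Recover left and right from M and S (the 'spreader' stage)."""
    return mid + side, mid - side
```

Identical left and right channels produce zero side signal, which is why an M+S recording degrades gracefully to mono.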
The main difference between the Siemens and the EMI REDD47 amplifiers is the latter’s requirement for external power supplies (the Siemens units are self-contained, with 240 V input); there are therefore six sizeable power supplies, two in each base and one in each upper unit.
Valve Architecture
An EF86 (or CV4085) pentode in an anti-vibration mount, followed by an E88CC dual-triode low-impedance cathode follower into the output transformer.
Operational Signal Path
The signal path is very much an ‘old school’ design, using a series of three 40 dB amplifiers with passive sections in between to provide attenuation, equalisation and panning. The panning is a novel configuration in that a mono signal is split in a hybrid transformer and then fed through a pair of faders in a differential arrangement. Four group outputs then feed a 4-track tape machine, with four tape returns for monitoring.
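The differential fader pair can be thought of as two ganged attenuators driven in opposition, one channel’s gain rising as the other falls. A sketch of the idea using a constant-power law (an illustrative choice; the actual Painton stud resistor values are not documented here):

```python
import math

def pan(mono, position):
    """Differential pan: two complementary attenuators on one mono
    source. position: 0.0 = full left, 1.0 = full right.
    Sine/cosine law keeps total power constant across the image."""
    theta = position * math.pi / 2
    return mono * math.cos(theta), mono * math.sin(theta)
```

At centre (position 0.5) both channels sit about 3 dB down, so the source neither bulges nor dips in level as it is swept across the image.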
The console is arranged as five separate sections that are held in position by locating pins and are clamped together, with a separate distribution for mains power. Signal distribution is by a set of plugs and sockets with a screw clamp to maintain integrity. This was simply so that the console could be broken down and transported easily, and hence could be used for mobile applications. In the left hand side of the console is a test panel and meter to fully test the operation of the REDD47 amplifiers in situ. The centre section of the console is filled with EMI ‘in house’ built transformers to buffer all stages.
Plug In Equalisers
The ‘Pop’ and ‘Classic’ units are removable, exchangeable modules. All are capacitor-and-inductor circuits, chosen for their phase characteristics, and are consequently quite lossy (hence the gain structure of the console) but great-sounding. The Pop equaliser is a peaking EQ centred on 5 kHz, whereas the Classic equaliser is a shelf with a 10 kHz centre frequency, offering up to 10 dB of cut or boost in 2 dB steps.
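As a point of comparison, the behaviour of a peaking equaliser can be sketched digitally with the well-known RBJ ‘Audio EQ Cookbook’ biquad; this is a modern IIR analogue, not the REDD’s passive LC circuit, and the Q value is an assumption:

```python
import math

def peaking_biquad(fs, f0, gain_db, q=1.0):
    """RBJ-cookbook peaking EQ coefficients (b, a), normalised a[0]=1."""
    a_lin = 10 ** (gain_db / 40)          # sqrt of linear gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * a_lin, -2 * math.cos(w0), 1 - alpha * a_lin]
    a = [1 + alpha / a_lin, -2 * math.cos(w0), 1 - alpha / a_lin]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def magnitude_at(b, a, f, fs):
    """|H(e^{jw})| of a biquad at frequency f."""
    w = 2 * math.pi * f / fs
    z = complex(math.cos(w), -math.sin(w))  # z^{-1}
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)
```

At the centre frequency this section delivers exactly the requested boost, falling back towards unity gain away from it; a shelf differs in holding its full gain above (or below) the turnover rather than peaking.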
There are fourteen faders arranged across the centre section: input faders 1–4, an aux channel (originally the echo return), the four track sends, another aux channel, and input faders 5–8.
Below the meters are the routing switches; beneath those are the ‘spreaders’ for the M+S channels, selectable via plug-in modules on the rear panel, and panpots for the other channels. Lower down are the echo sends (two, switchable) and the returns. Above the four track sends in the centre are the track routing switches, generally left 1, 2, 3, 4. The quadrant faders are Painton stud faders, and the majority of the controls are from the same source, with each stud connected to a separate precision resistor: quality first, cost second!
Limiting and Outboard
There was no limiting built into the console; however, operationally there tended to be a pair of Altec limiters on tape channels 1 & 2 for the rhythm tracks, and a pair of mono Fairchild 660s on 3 & 4 for the vocals.
Although the console was originally supplied with PPM metering, EMI replaced it with the VU meters that are still installed. The meter drive amplifier is self-powered from 240 V and is a transistor design. It features a +10, 0, -10, -20 control that can also be remotely operated via a relay array, with push buttons on the top panel to allow for auxiliary meter gain during quiet sections of classical music. Four VU meters are set in a bridge, or ‘penthouse’, with a phase correlation meter in the centre.
No One Needs More Than 8 Microphones
Len Page had decided that 8 microphones would be sufficient; however, quite a number of engineers began to require more, so a 4-channel sub-mixer was created, often routed into one of the echo faders (by now aux faders). An original unit was sourced from a former Abbey Road engineer, has been restored to original condition, and is used regularly.
The British Grove Console
This console came to light at a brokerage in Italy as “The Beatles Console”. British Grove Studios sent Dave Harries (a legendary Abbey Road, Beatles and Air Studios boffin) to verify its provenance, and he confirmed it was both complete and genuine. It was possible to determine that it had been installed in EMI’s Milan studios; however, it had languished unpowered for 30 years. Following negotiations it was purchased and brought back to England into the care of Brian Gibson for restoration.
Decades of dust had to be cleaned out, and it was very sticky! For the most part the console was completely dismantled and entirely cleaned with a small brush and soapy water – a very patient operation.
While for the most part the old components were simply cleaned, the electrolytic capacitors were all replaced for both sonic and safety reasons. There is a multi-stage capacitor that has in most cases been retained; however, a company in the USA was found to be remaking this particular model with identical values, so any suspect ones were replaced as a matter of course.
And So to British Grove
The console has pride of place in the main studio, alongside a later EMI TG console and a modern Neve 88R. It is used regularly on most recording sessions in the studios, often as an insert into the Neve console; its younger sibling the TG console is also used regularly in this fashion, to much acclaim, giving artists, engineers and producers probably the best of all worlds.
Report by Crispin Murray