Date: 30 Apr 2015
Speaker: Marinos Koutsomichalis
The long-lasting relationship between music and algorithms was formalised in the latter part of the 20th century through the development of specialised programming languages. While the first generation of such systems focused rather on remediating existing musical paradigms, it nevertheless laid the groundwork for the more profound re-interpretation and re-evaluation of the fundamentals of music that followed. The subsequent computerisation of society has led to our era, in which most aspects of contemporary culture are either computer-driven or dependent on computer-specific paradigms. Accordingly, specialised music- and audio-centric programming languages, in conjunction with advances in contemporary media theory, have prompted an all-inclusive exploration of music in both technical and æsthetic respects and, moreover, impelled new and ground-breaking compositional paradigms.
In that vein, this lecture attempts a brief overview of the most important programming languages, both historical and contemporary, that are relevant to music composition and audio synthesis. It further discusses the most prominent algorithms and compositional methodologies found in this context. More importantly, it scrutinises how we progressed from what was intended to be an interface to existing musical ideas all the way to an era in which computer code largely shapes and determines the æsthetics of electronic/electroacoustic music and sound-art, both within academia and within the broader experimental music scene, as well as in several popular music genres (albeit to a lesser extent).
The event will be held at The Walsall Campus, University of Wolverhampton.
Date: 12 May 2015
Lecture by Kelvin Griffiths (Director, Electroacoustic Design)
Electroacoustic transducers are found in applications as varied as mobile phones, laptops, cars, fire alarm systems, and headphones, each presenting individual design targets and challenges. In some situations, heightened levels of performance are expected of loudspeakers that may hitherto have served an application adequately, a response to new markets and the extended functionality of the host device. This includes a growing class of problems around integrating loudspeaker hardware into environments not primarily designed for high-quality sound reproduction, with the balance of performance and cost an ever-present factor.
Traditional electroacoustic design approaches using lumped elements are only partly useful in predicting acoustic performance; a more accurate and effective process is required to provide fuller insight into system concepts earlier in the design process. The lecture will illustrate electroacoustic transducers in diverse scenarios and discuss modern design methodologies that provide valuable information on both transducer behaviour and the acoustic problems posed by system integration.
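To make the lumped-element approach mentioned above concrete, the following is a minimal sketch of a classic moving-coil driver model, in which the mechanical mass, compliance, and damping are treated as a single resonant circuit reflected into the electrical domain. The function name and all parameter values are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

def driver_impedance(f, Re=6.0, Le=0.5e-3, Bl=5.0,
                     Mms=10e-3, Cms=1e-3, Rms=1.5):
    """Electrical input impedance of a moving-coil loudspeaker driver
    from a lumped-element (Thiele-Small style) model.
    Re: voice-coil resistance [ohm], Le: voice-coil inductance [H],
    Bl: force factor [T*m], Mms: moving mass [kg],
    Cms: suspension compliance [m/N], Rms: mechanical damping [N*s/m].
    All values are illustrative."""
    w = 2.0 * np.pi * f
    # mechanical impedance of the mass-compliance-damper system
    Zmech = Rms + 1j * w * Mms + 1.0 / (1j * w * Cms)
    # motional impedance reflected into the electrical side via Bl^2
    return Re + 1j * w * Le + (Bl ** 2) / Zmech
```

With these values the mechanical resonance falls near 50 Hz, where the impedance magnitude peaks well above the voice-coil resistance; such a model captures the low-frequency behaviour but, as the abstract notes, says little about the acoustic problems of system integration.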
Date: 14 May 2015
Lecture by Stefan Bilbao and Colleagues from the Acoustics and Audio Group at Edinburgh University
The NESS project, funded by the European Research Council, runs jointly between the Acoustics and Audio Group and the Edinburgh Parallel Computing Centre at the University of Edinburgh from 2012 to 2017. It is concerned with physical modelling sound synthesis on a large scale, using classical time-stepping methods for what ultimately become very large simulation problems. Particular systems under study include brass, percussion, and stringed instruments such as violins and guitars; another important area is the modelling of 3D spaces, both for room reverberation modelling and in order to embed such instruments in a fully spatialised virtual environment. The goals here are manifold: to increase synthetic sound quality, to offer simplified user control, and to allow flexible exploration of new instrument designs while retaining sound output with an acoustic character. In spirit, such an approach is analogous to similar developments in computer graphics, but in more technical regards it presents distinct challenges. One centres on adapting numerical designs to the constraints of human audio perception, in order to avoid audible artefacts; another major challenge, due to relatively high audio sample rates, is computational cost, particularly for systems in 3D, and in this regard fast implementations on multicore and GPU hardware are under development. Sound examples and video demonstrations will be presented.
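As a toy illustration of the classical time-stepping approach described above, the sketch below simulates an ideal (lossless) string via a finite-difference scheme for the 1D wave equation. This is not the NESS project's code; the function and its parameter values are illustrative assumptions only.

```python
import numpy as np

def simulate_string(f0=110.0, sr=44100, dur=0.5):
    """Minimal finite-difference time-stepping scheme for an ideal string
    of unit length (the 1D wave equation with fixed ends).
    f0: fundamental [Hz], sr: sample rate [Hz], dur: duration [s].
    All values are illustrative."""
    c = 2.0 * f0                  # wave speed such that f0 = c / 2L, L = 1
    k = 1.0 / sr                  # time step
    N = int(1.0 / (c * k))        # grid intervals; h >= c*k keeps the scheme stable
    h = 1.0 / N
    lam2 = (c * k / h) ** 2       # squared Courant number, <= 1

    x = np.linspace(0.0, 1.0, N + 1)
    u = np.minimum(x, 1.0 - x)    # triangular "pluck" as the initial shape
    u_prev = u.copy()             # zero initial velocity

    out = np.zeros(int(dur * sr))
    for n in range(len(out)):
        u_next = np.zeros(N + 1)  # endpoints stay fixed at zero
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + lam2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
        out[n] = u[int(0.3 * N)]  # read the output near one end
    return out
```

Even this one-dimensional case hints at the cost issue the abstract raises: every output sample requires an update over the whole spatial grid, and in 3D room acoustics the grid grows with the cube of the resolution, which motivates the project's multicore and GPU implementations.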
Stefan Bilbao (B.A., Physics, Harvard, 1992; MSc. and PhD., Electrical Engineering, Stanford, 1996 and 2001, respectively) is currently a Reader in the Acoustics and Audio Group at the University of Edinburgh. His main research interest is the development of numerical methods for physical modelling sound synthesis.
Other presenters include: Charlotte Desvages, Paul Graham, Alan Gray, Brian Hamilton, Reg Harrison, Kostas Kavoussanakis, James Perry and Alberto Torin.
The event is free but we ask you to register at the following page: