Article - Issue 62, March 2015
How to maximise loudspeaker quality
Dr Jack Oclee-Brown
Anechoic chambers are ideal for researching the sound that people hear at home. Pictured is the 3D sound field simulation equipment at the Division of Applied Acoustics, Chalmers University of Technology (c. 1974) © Mendel Kleiner
Ingenia asked Dr Jack Oclee-Brown, Head of Acoustics at KEF Audio, to outline the considerations that audio engineers need to make when developing high-quality speakers.
There has been a surge in the popularity of headphones in recent years, and ‘lossy’ compressed formats, such as mp3, allow thousands of songs to fit on convenient devices and pass easily through low-bandwidth connections. However, for those whose passion for music leads them to strive for the best possible sound reproduction, there is no substitute for a high-quality hi-fi system.
Loudspeakers are a key part of any hi-fi system. Arguably, loudspeakers have a greater impact on the quality of reproduced sound than any other component in an audio system. It is relatively straightforward to design a loudspeaker that relays sound in a way that we can hear and enjoy. But the real challenge lies in delivering one that can convince a listener that they are at a live venue, listening to the ‘real thing’.
ANATOMY OF A LOUDSPEAKER
The basic principle of operation of a loudspeaker driver is simple. A mechanical diaphragm is forced to vibrate back and forth in sympathy with the audio signal to be reproduced. This vibration is transferred to the air and generates soundwaves. With an electro-dynamic loudspeaker driver, the diaphragm is driven by an electro-magnetic motor. This consists of a coil of wire (the voice coil) directly attached to the diaphragm and located in a strong magnetic field.
When an electrical audio signal is fed to the voice coil, the current flowing in the magnetic field generates a Lorentz force on the coil that is transferred to the diaphragm, resulting in the required vibration. The diaphragm and voice coil are held in place by a flexible suspension system that allows the diaphragm to vibrate freely while keeping the voice coil centred in the motor. The suspension system also provides an air seal around the outside of the diaphragm. To maximise the strength of the magnetic field around the voice coil, steel parts are used to focus the field on the voice coil.
Various diaphragm geometries are used, depending on the application. The conical shape is very common, as it is geometrically stiff, which helps to avoid vibrational resonance within the working bandwidth of the driver.
It is extremely difficult for a single loudspeaker driver to effectively reproduce the entire audible frequency range. High-quality loudspeakers will typically use several loudspeaker drivers, each optimised to work in a sub-band of the overall frequency range with an electrical filter network (the crossover) to divide the incoming audio signal into different bands.
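The band-splitting action of a crossover can be sketched in a few lines of Python. This is a minimal illustration rather than a practical filter design: it uses a single-pole low-pass and its complementary high-pass (real crossovers typically use steeper, higher-order networks), and the cutoff frequency and sample rate below are arbitrary values chosen for the example.

```python
import math

def crossover(signal, fc, fs):
    """Split a signal into low and high bands using a first-order
    crossover: a one-pole low-pass plus its complementary high-pass."""
    # One-pole low-pass smoothing coefficient for cutoff fc at sample rate fs
    a = 1.0 - math.exp(-2.0 * math.pi * fc / fs)
    low, high, state = [], [], 0.0
    for x in signal:
        state += a * (x - state)   # one-pole low-pass
        low.append(state)
        high.append(x - state)     # complementary high-pass
    return low, high

# By construction, the two bands sum back to the original signal exactly,
# so no information is lost at the band split.
fs = 48000
sig = [math.sin(2 * math.pi * 100 * n / fs) for n in range(256)]
lo, hi = crossover(sig, fc=2000, fs=fs)
assert all(abs((l + h) - s) < 1e-9 for l, h, s in zip(lo, hi, sig))
```

In a real loudspeaker the low band would feed the larger driver and the high band the smaller one; higher-order filters are preferred because they reduce the overlap region where both drivers radiate at once.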
HEARING AND PERCEPTION
To understand what it takes to realistically reproduce the sound of a voice, an orchestra, or rock band, it is necessary to understand how our hearing works. Human hearing is extremely sensitive and able to detect a wide range of sounds. The audible frequency range is commonly estimated to be from 20 Hz to 20 kHz on steady sine wave tones, corresponding to wavelengths in air of 17 m and 17.5 mm respectively. Our perception of this range is approximately logarithmic. We perceive each doubling of frequency (or octave) as more or less equally spaced. In terms of dynamic range, our perception of loudness is also approximately logarithmic. Each amplitude doubling of a sound wave is perceived as a close-to-equal increment in the loudness. Because of this, audio engineers commonly use decibels to describe the amplitude of signals. For sound waves, we use sound pressure level (or dB SPL). The quietest sound that we can hear has a pressure amplitude of roughly 20 μPa (0 dB SPL) and the loudest − before it gets painful − has a pressure amplitude of around 63 Pa (130 dB SPL).
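The figures above can be checked with the standard conversion from pressure amplitude to decibels, using the 20 μPa reference from the text. A short Python sketch:

```python
import math

P_REF = 20e-6  # reference pressure: 20 micropascals, defined as 0 dB SPL

def spl(pressure_pa):
    """Convert a pressure amplitude in pascals to sound pressure level (dB SPL)."""
    return 20 * math.log10(pressure_pa / P_REF)

print(round(spl(20e-6)))  # threshold of hearing -> 0 dB SPL
print(round(spl(63)))     # close to the pain threshold -> 130 dB SPL
```

The factor of 20 (rather than 10) appears because sound power is proportional to pressure squared, so each doubling of pressure amplitude adds about 6 dB.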
Without the loudspeaker cabinet, air simply moves from the front of the diaphragm to the rear and little sound is generated. Forward movement of the diaphragm generates positive acoustic pressure at the front side of the loudspeaker diaphragm and negative acoustic pressure at the rear side
Another area where human hearing is particularly adept is estimating the direction and distance of sounds. In a controlled environment, tests have shown that we are able to judge the direction of origin to within about 1° for eye-level sources in front of us. Towards the sides, or above or below our heads, our judgement drops to around 15° accuracy. We think that this extreme acuity has evolved as a natural defence against predators, and it is largely thanks to us having one ear on either side of our field of vision. The ear and brain work together to process and compare signals from the left and right ears to determine where sounds come from. Small differences in the arrival time and the signal amplitude are all the brain needs to work this out. Interestingly, we can easily do this for several sources simultaneously and without any conscious effort.
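The size of the timing cues involved can be estimated with the simple plane-wave (sine-law) model of interaural time difference. This is a sketch only: the straight-line model ignores diffraction around the head, and the 18 cm ear-to-ear distance is an assumed typical value, not a figure from the article.

```python
import math

HEAD_WIDTH = 0.18   # assumed ear-to-ear distance in metres (illustrative)
C = 343.0           # speed of sound in air, m/s

def itd_seconds(azimuth_deg):
    """Interaural time difference for a distant source at the given azimuth,
    using the simple plane-wave model: delay = d * sin(angle) / c."""
    return HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / C

# A source just 1 degree off centre shifts the arrival times at the two
# ears by only about 9 microseconds - yet this is resolvable.
print(f"{itd_seconds(1) * 1e6:.1f} us")
```

That such tiny delays are usable at all illustrates how finely the auditory system compares the two ear signals.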
A SHORT HISTORY OF LOUDSPEAKERS
A loudspeaker converts an electrical audio signal into sound. Before the invention of the loudspeaker, sound reproduction was limited. Devices such as the phonograph could store sound waveforms mechanically etched in a groove. The sound could then be replayed by tracing the groove with a stylus attached to a radiating diaphragm. However, the movement of the diaphragm was very small and the reproduced sound very quiet. Using horns, the sound level could be amplified to some extent, such as in the iconic HMV gramophone, but even with these the sound level was limited.
The very first loudspeakers were invented during the development of the early telephone systems in the late 1800s. These were generally of very poor quality and only just suitable for intelligible voice reproduction. It was the advent of practical vacuum-tube amplification, in around 1912, that made the loudspeaker a truly practical device. For the first time, electrical audio signals could be amplified to a high enough power to directly drive electro-mechanical devices.
Many inventors around the beginning of the 20th century designed various devices that used a magnetic field to convert electrical audio signals into mechanical vibrations. However, it was Chester W Rice and Edward W Kellogg who, in 1924, came up with what would be the precursor to the electro-dynamic loudspeaker widely in use today. Their design used a moving coil of wire, positioned in a magnetic field generated by a powerful electromagnet, and attached to a conical diaphragm. Despite many and varied attempts, this basic arrangement for loudspeaker design has remained remarkably unchanged.
Studies suggest that bass performance contributes as much as 25% of listeners’ preference ratings when comparing different loudspeakers. There are a number of challenges to reproducing the entire audible frequency range without damage or distortion. The first difficulty stems from the large acoustic wavelengths at low frequencies. The sound output from a loudspeaker is proportional to the amount of air displaced by the drivers. For a constant sound pressure level, the required air displacement is proportional to the acoustic wavelength squared.
A 20 Hz sine wave has a corresponding acoustic wavelength of 17 m in air. A loudspeaker reproducing this signal at a moderate listening level (3 m listening distance at an SPL of 85 dB) would need to displace 1,000 cm³ of air at the signal peaks. At the very least, this means that the speaker must use high-quality, low-frequency drivers capable of moving a large amount of air. For true full-range performance, the low-frequency drivers must be large or be used in multiples.
A second problem is in controlling the sound that emanates from the rear side of the loudspeaker drivers. The diaphragm’s movement generates sound at both the front and rear of the driver, with the signal polarity reversed at the rear. A cabinet is necessary to contain the sound from the rear side of the diaphragm. Without the cabinet, negative sound from the rear of the diaphragm cancels the sound from its front side and very little low-frequency sound reaches the listener. There are several styles of loudspeaker cabinet. Some simply contain this rearward sound, while others convert some of the rear energy into useful output to augment the sound directly from the driver. All designs suffer from the same fundamental problem: unless the cabinet is large, the driver movement is restricted because of the pressure difference inside and outside the cabinet. As a consequence, the loudspeaker efficiency at low frequencies is also restricted. Put simply, a small loudspeaker cabinet cannot produce efficient sound output at low frequencies.
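The restriction a small cabinet imposes can be illustrated by treating the trapped air as a spring behind the cone, using the standard sealed-box stiffness formula k = ρc²S²/V. The cone area and cabinet volumes below are illustrative assumptions, not figures from the article.

```python
RHO = 1.2  # assumed air density, kg/m^3
C = 343.0  # speed of sound in air, m/s

def air_spring_stiffness(cone_area_m2, box_volume_m3):
    """Mechanical stiffness (N/m) of the air sealed in a cabinet,
    as seen by a driver cone of the given area: k = rho*c^2*S^2/V."""
    return RHO * C ** 2 * cone_area_m2 ** 2 / box_volume_m3

area = 0.02  # cone area of a typical bass driver, m^2 (illustrative)
k_50l = air_spring_stiffness(area, 0.050)  # 50-litre cabinet
k_10l = air_spring_stiffness(area, 0.010)  # 10-litre cabinet
print(k_10l / k_50l)  # the small cabinet's air spring is 5x stiffer
```

The stiffness is inversely proportional to cabinet volume, so shrinking the box makes the air spring proportionally harder for the driver to push against, cutting low-frequency efficiency exactly as the text describes.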
We spend the majority of our lives in enclosed spaces. These spaces have a dramatic effect on the sounds that reach our ears. Walls and objects inside a room reflect the soundwaves, often with relatively little attenuation. The signals that reach us are augmented by repetitions of the original signal. Provided that most of these reflections arrive within 40 ms of the initial sound, and that they have a reasonably similar frequency balance to the original, we hear them as a single sound event. This is known as the precedence effect. We have seemingly evolved to recognise this common acoustic effect. If an incoming sound is rapidly followed by another with the same characteristics, our auditory system recognises that it is likely to come from the same source.
Indeed, research shows that, when these repetitions are present in the right proportion and with a similar frequency balance to the original, they actually increase our ability to hear details in sounds. In a sense, they give our ears another chance to capture and understand the sound. The single perceived sound event is made up of the accumulated information from the entire set of signals.
With the repetitions removed, the effect can be quite disturbing. Many acoustic tests require a completely controlled ‘dead’ acoustic environment. One of the tricks up an acoustic engineer’s sleeve is the anechoic chamber. This is a specially treated room designed to minimise any reflections. Standing in an anechoic chamber for the first time is a strange experience: it is quite common for people to feel slightly disoriented and unsteady. The closest one can come to experiencing this acoustic situation outside the lab is probably standing on top of a snowy hill in perfectly still weather.
Sound reaching a listener can be roughly divided into (a) direct sound, and slightly later (b) early reflections that have made a single bounce before reaching the listener. Finally come the late reflections that include all other sound
Room acoustics is also an integral part of a musical performance, particularly for classical music where the sound of the recording location adds atmosphere and a sense of space to a recording. For electronic music too, producers often use acoustic effects to enhance the sound, such as artificial reverberation or echoes on instruments and voices. Without these details, recordings would be much drier and less interesting. These effects highlight what the record producer and the artist are trying to achieve: their intention is not to transport the performer to the listener’s room, but rather to transport the listener to the recording space. For acoustic performances, the listener wants to feel as if they are in the concert hall. For other types of music, such as electronic music, the concept can be more abstract, but the goal is the same: to transport the listener to another acoustic space.
Historically, there has been a great deal of research effort on understanding the acoustics of concert halls, performance spaces and recording spaces – see Auralisation – engineering sound for public places (Ingenia 40). However, there has been comparatively little research on the acoustical behaviour of domestic rooms where hi-fi systems are typically located – with two notable exceptions. The EUREKA project Archimedes, an EU-funded research project that ran from 1987 to 1993, specifically focused on identifying a basis for improved high-quality sound reproduction in domestic spaces. More recently, a body of work by Floyd Toole and Sean Olive of Harman International looked into how the characteristics of the loudspeaker can be linked to the perceived sound quality in domestic rooms. Although these projects took place around 10 years apart, and used different investigative methods, both agree that you can greatly improve the listener experience in relatively untreated rooms by carefully controlling how the loudspeaker directs sound into the listening room.
The directivity of a sound source describes how effectively the sound is radiated in various different directions. For example, a sound source with wide directivity radiates an approximately equal amount of sound in all directions, whereas a narrow directivity sound source only directs sound in specific directions. The extreme case is an omnidirectional source, which is one that radiates sound in all directions equally.
Directivity is a particularly important characteristic of a loudspeaker. Both the Archimedes and Toole/Olive studies identified it as critical for high-quality sound reproduction in domestic environments.
Hybrid FEM (finite element method) and BEM (boundary element method) model of a KEF high-frequency driver. This model is of a 1/18th segment of a high-frequency driver that is positioned in the centre of a mid-range driver. The left image shows the finite element mesh used for the computation. The right-hand image shows the computed instantaneous pressure field for an input frequency of 20 kHz. Soundwaves are visible leaving the tweeter dome (marked with dotted lines) and spreading out over the mid-range conical diaphragm surface towards the listener
In a typical domestic listening environment, a great proportion of the sound reaching the listener is indirect sound reflected off the walls. The sound reaching the listener roughly falls into three different categories. When the loudspeaker outputs a signal, the initial sound that reaches the listener is the ‘direct sound’ – without reflecting off any nearby surfaces. Generally, the listener will be sitting a short distance in front of the loudspeaker so this sound mostly emanates from the front of the loudspeaker.
Secondly, there are the ‘early reflections’, which reach the listener shortly after the direct sound. Early reflections are sounds that have left the loudspeaker in various other directions, reflected off a single room boundary and then reached the listener. Finally, there are the ‘late reflections’, which consist of the remainder of the sound reaching the listener, arriving a short time later still. This sound has reached the listener after more than one reflection from the room environment. The late reflected sound is roughly equivalent to the reverberation in a large room. The sounds contained within the late reflections will have left the loudspeaker at a large range of different angles.
The loudspeaker’s directivity largely determines the characteristics of the direct, early and late reflected sounds. Consequently, the loudspeaker’s directivity is critical in determining how a loudspeaker will sound in a typical domestic room. Sean Olive conducted a large number of listening tests using different loudspeaker designs with varied directivity. He was able to show a near-perfect correlation between listener preference and loudspeakers with smooth, resonance-free off-axis frequency responses. Loudspeakers with these characteristics result in early and late reflections that are closer to perfect repetitions of the direct sound, in which case the precedence effect combines them into a single auditory event. In terms of the experience, this allows the listener to hear more information in the recorded signal and even gives them a better sense of the recorded performance’s original acoustics.
Unfortunately, it is not easy to design a loudspeaker with this type of directivity characteristic. At low frequencies, drivers are more or less omnidirectional, outputting sound equally effectively in all directions. But when the acoustic wavelength is comparable in size to the driver diaphragm, the sound becomes highly directional. In the same upper frequency region, most drivers also tend to have significant vibrational resonances. Again, this is because the driver size is large compared to a wavelength. Using a number of differently sized drivers in the loudspeaker, each covering a sub-band of the overall frequency range, can help to give a more consistent directivity, but at the end of each sub-band there is a discontinuity in the dispersion between the smaller higher-frequency and the larger lower-frequency driver.
Another issue is that it is well-nigh impossible to get the separated drivers close enough to stop the adjacent drivers interfering acoustically with each other. The interference problem is worse for mid to high frequencies, when the acoustic wavelength is relatively small. When two sources output sound simultaneously, as happens at the transitions of the sub-bands, their output can sum both constructively and destructively. This causes an interference pattern in the radiated sound: in some directions the two soundwaves add, in others they subtract. The outcome is an abrupt discontinuity in the directivity at the edges of each sub-band.
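The lobing described above can be sketched with a far-field model of two identical in-phase sources. The 17 cm spacing and 2 kHz tone below are arbitrary illustrative values, chosen so that at 30° off axis the path difference between the two sources approaches half a wavelength and their outputs nearly cancel.

```python
import math

C = 343.0  # speed of sound in air, m/s

def relative_level(angle_deg, spacing_m, freq_hz):
    """Far-field pressure magnitude, relative to on-axis, of two identical
    in-phase point sources separated by spacing_m, at the given angle."""
    lam = C / freq_hz
    path_diff = spacing_m * math.sin(math.radians(angle_deg))
    # The two waves sum with a phase offset of 2*pi*path_diff/lam,
    # giving a combined magnitude proportional to |cos(pi*path_diff/lam)|
    return abs(math.cos(math.pi * path_diff / lam))

# Two drivers 17 cm apart reproducing the same 2 kHz tone (wavelength ~17 cm):
print(relative_level(0, 0.17, 2000))   # on-axis: 1.0, the sources add fully
print(relative_level(30, 0.17, 2000))  # 30 deg off-axis: deep cancellation
```

The angle of the nulls depends on frequency, which is why the interference at a crossover transition produces the abrupt directivity discontinuities the text describes.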
One solution to the directivity issue is to design drivers, or arrays of drivers, that have a matched directivity and source position at the transition frequencies. This is by no means a new idea. As early as 1940, Harry Olson suggested one such configuration in his book, Acoustics. The most common approach is to mount the mid-range and high-frequency drivers coaxially, with the high-frequency driver most commonly positioned behind the mid-range and with the sound passing through a hole in the centre of the mid-range driver. The design of such drivers is more complex than conventional approaches, because of the close proximity and interactions between the two drivers. Such designs are still relatively uncommon even in high-performance loudspeakers.
The KEF Blade loudspeaker is designed to have a finely controlled directivity pattern. The high-frequency and mid-frequency drivers are integrated into a combined Uni-Q array on the front of the loudspeaker. The low-frequency drivers are positioned symmetrically around it to mimic the output of an acoustic point source
KEF is one of the UK’s biggest high-end loudspeaker companies. Raymond Cooke founded the company in Maidstone in 1962, in a spare building on the premises of Kent Engineering Foundry, from which the company took its name. Cooke had previously worked at Wharfedale loudspeakers, but set up KEF in an effort to bring the then-fledgling plastics technology into loudspeaker driver design. The early products proved popular, and KEF soon built a strong reputation. Today, a dedicated team of engineers still designs KEF products at the original Maidstone site.
Following the Archimedes project, in 1989, KEF developed a new type of loudspeaker driver designed to overcome the directivity issues that thwarted traditional loudspeaker design. Rather than using two distinct mid- and high-frequency drivers, the two are integrated into a single device with the high-frequency driver mounted directly at the apex of the mid-range diaphragm cone. This configuration is known as the Uni-Q because the Q (a measure of directivity) of the two drivers is matched.
Tightly integrating two drivers in this way was previously very difficult because of the large amount of space occupied by the permanent magnet driver motor systems. It was not until the late 1980s that the Uni-Q configuration became possible, thanks to new rare-earth magnet materials, such as neodymium. These materials meant that it was possible to shrink the size of the high frequency driver so that it could be positioned within the central pole of a mid-range driver.
In the last few years, KEF has been looking to apply the same directivity considerations to the loudspeaker system as a whole, rather than just the mid-range and high-frequency region covered by the Uni-Q driver. This has led to the Blade series of loudspeakers. These loudspeakers are arranged so that as far as possible the origin of the sound and the directivity are consistent over the entire audible frequency spectrum.
Modern computational engineering tools allow accurate predictive simulation of many aspects of both loudspeaker driver and loudspeaker system performance. In recent years, there has been great progress in our understanding of loudspeaker driver behaviour at the limits of their operation. As a result, significant incremental improvements are still possible in loudspeakers, and the latest generation of hi-fi loudspeakers can offer remarkable performance. However, there is still a long way to go before a loudspeaker can match the distortion and frequency response of electronic devices.
New materials and manufacturing techniques, such as nanotechnology and 3D printing, are another avenue for improvement, potentially allowing greater flexibility in the geometry of the driver parts. This flexibility could be used to improve performance or to allow more versatile and compact diaphragm shapes.
Digital signal processing of the driver input signals can also offer potential improvements. Recent methods can predictively correct for distortion in bass drivers and monitor their behaviour to avoid damage, allowing the loudspeaker to be operated closer to its performance limits and maximising the output available from limited-size cabinets.
Fundamentally, though, the electro-dynamic loudspeaker driver has not changed in close to 100 years. Despite a number of interesting and promising-looking technologies, the electro-dynamic driver currently remains the most effective way to create significant acoustic displacement. The major result of this is that, while most other pieces of technology have become smaller and smaller, loudspeakers that can realistically produce the full audio band remain relatively large devices.
Dr Jack Oclee-Brown took his MEng degree in acoustical engineering and then his PhD at the Institute of Sound and Vibration Research, University of Southampton. Since 2004, he has been with KEF Audio, where he holds the position of Head of Acoustics. He is currently working on loudspeaker modelling, the development of software tools to aid loudspeaker design, and transducer design.