Graphics Processing Units (GPUs) are integral to PC graphics cards and capable of producing photorealistic 3D graphics for today's resource-hungry gamers. They can also be used to create music more realistic than any synthesized sound produced before.
So claims Chris Maynard, a computer scientist from the Edinburgh Parallel Computing Centre (EPCC), who believes GPUs will advance sound synthesis techniques that have stagnated for decades.
Maynard worked with Stefan Bilbao of the Edinburgh School of Music to recreate the musical sounds of bars, plates and 3D acoustic spaces.
Listen to the sound of a gong. This sound was synthesized entirely on GPUs: by solving the equations that describe the physical properties of the gong, the sound is computed accurately. Does it sound like the real thing?
They believe they can recreate the sounds of existing musical instruments and even invent new types of instrument. These new classes of virtual instruments will be richer and more complex than anything ever built in the real world.
"We're still listening to sounds produced by algorithms developed in the 1960s and 1970s. There's a huge unexplored world of sound out there," said Maynard.
GPUs best at mimicking sound propagation
To recreate musical sounds accurately, Maynard needed to update the synthesis code to run 100 times faster than before. Accurate sound synthesis can only be achieved with a technique called 'time stepping'. The physical behaviour of an instrument is governed by partial differential equations, which describe how changes to the system propagate through space and time - for example, how sound travels from a metal plate that has been struck with a hammer - and these equations must be solved approximately using numerical methods.
In these equations, continuous space is replaced with a grid of points and continuous time with discrete time steps. "The more points, the smaller the gaps, the better the approximation, but this means more computation," Maynard said.
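The time-stepping idea can be sketched in a few lines of code. The following is a minimal illustration, not the researchers' actual synthesis code: it steps the 1D wave equation (a vibrating string rather than the plates and bars the team models), and all names and parameter values are illustrative.

```python
import numpy as np

def simulate_string(n_points=200, n_steps=500, courant=0.5):
    """Finite-difference time stepping for the 1D wave equation.

    Continuous space becomes `n_points` grid points; continuous time
    becomes `n_steps` discrete steps.  `courant` = c*dt/dx must be
    <= 1 for this explicit scheme to be stable.
    """
    lam2 = courant ** 2
    # Three time levels are needed: previous, current, next.
    u_prev = np.zeros(n_points)
    u_curr = np.zeros(n_points)
    # Initial condition: a small 'pluck' in the middle of the string.
    u_curr[n_points // 2] = 1.0
    u_prev[:] = u_curr                      # start at rest (zero velocity)
    output = []
    for _ in range(n_steps):
        u_next = np.zeros(n_points)
        # Each interior point is updated from its neighbours at the
        # current step and its own value at the previous step.
        u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                        + lam2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
        # The endpoints stay at zero: a string clamped at both ends.
        u_prev, u_curr = u_curr, u_next
        output.append(u_curr[n_points // 4])  # 'listen' at one grid point
    return np.array(output)
```

Doubling `n_points` (and shrinking the gaps accordingly) improves the approximation exactly as Maynard describes, at the cost of more arithmetic per step.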
This algorithm can only run efficiently on hardware whose processor cores are closely connected, because values must be exchanged frequently between neighboring grid points. GPUs were selected as the ideal hardware: this kind of data-parallel problem can be computed far faster on GPU architectures than on their cousins, Central Processing Units (CPUs), which process operations largely sequentially.
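Why the grid update suits parallel hardware: every interior point depends only on values from the *previous* steps, so all points can be updated simultaneously. The sketch below uses NumPy vectorization as a stand-in for the GPU's data parallelism (it is an analogy, not the researchers' CUDA code); both functions compute the same stencil.

```python
import numpy as np

def step_serial(u_curr, u_prev, lam2):
    """Update grid points one at a time, as a purely sequential CPU would."""
    u_next = np.zeros_like(u_curr)
    for i in range(1, len(u_curr) - 1):
        u_next[i] = (2 * u_curr[i] - u_prev[i]
                     + lam2 * (u_curr[i + 1] - 2 * u_curr[i] + u_curr[i - 1]))
    return u_next

def step_parallel(u_curr, u_prev, lam2):
    """The same stencil as one data-parallel operation.  Because each
    point reads only previous-step values, no point's update depends on
    another point's *new* value - so all of them can be computed at
    once, which is exactly the pattern GPUs excel at."""
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + lam2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    return u_next
```

On a GPU, each grid point would map to one thread, with neighboring threads sharing data through fast on-chip memory - the "closely connected" property the algorithm needs.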
Modelling the instrument, not sampling the sounds
"We showed one can use GPUs to significantly accelerate the synthesis code for a particular example. This proof of concept was very important in obtaining substantial funding to embark on a much more ambitious project to use GPUs to synthesize sound for much larger and more complex systems," said Maynard.
To recreate high-quality sound from a computer, physical modelling synthesis is deemed the most realistic approach, but it is also the hardest: it simulates sound by mathematically describing the physical properties of musical instruments.
This is far more accurate than using samples when modelling acoustic properties, according to the researchers, though it is also far more computationally intensive - which is why GPU acceleration matters.
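To give a flavour of what "describing the physical properties" means in practice, here is a deliberately simple toy, not the team's finite-difference models: an ideal free bar (the physics behind a xylophone key) has modal frequencies in the well-known ratios 1 : 2.756 : 5.404 : 8.933 from beam theory, and summing damped sinusoids at those partials already produces a recognisably bar-like tone. The fundamental frequency and decay rates below are illustrative choices.

```python
import numpy as np

# Modal frequency ratios of an ideal free-free bar (a standard result
# from beam theory).  The fundamental is placed at 440 Hz purely for
# illustration.
BAR_PARTIALS = [1.0, 2.756, 5.404, 8.933]

def struck_bar(f0=440.0, duration=1.0, sr=44100):
    """Physical-modelling toy: a struck bar as a sum of exponentially
    decaying sinusoids at the bar's modal frequencies.  Higher modes
    are weaker and decay faster, as in a real wooden or metal bar."""
    t = np.arange(int(duration * sr)) / sr
    tone = np.zeros_like(t)
    for k, ratio in enumerate(BAR_PARTIALS):
        decay = np.exp(-t * 3.0 * (k + 1))      # faster decay for higher modes
        tone += decay * np.sin(2 * np.pi * f0 * ratio * t) / (k + 1)
    return tone / np.max(np.abs(tone))          # normalise to [-1, 1]
```

A sampler would need a recording for every pitch and striking strength; here, changing `f0` or the decay constants re-derives the sound from the model itself.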
New classes of virtual instruments
"We're not just looking to reproduce sounds of existing instruments, but to introduce new classes of virtual instruments, which have a natural acoustic character, but which are not rooted to any real world instrument," said Maynard.
"Take the xylophone ... this is a collection of bars. Imagine that you could connect these together in various ways - so that by hitting one bar, you could elicit a sound from another, or perhaps all. It is the connections which allow for a huge range of possible sounds - far beyond anything which could be done with actual bits of wood and metal," Bilbao said.
Maynard is now working with Bilbao on rendering traditional instruments including cymbals, gongs, and electromechanical keyboards such as the Clavinet and the Rhodes electric piano. They even plan to emulate large acoustic spaces such as concert halls, for which, Maynard said, "a synchronous architecture is required." High-performance computers fitted with plenty of GPUs would be suited to this task.
Starting in 2012, they plan to further develop the algorithms and the sounds they produce, with the goal of making virtual music in real time.