Moving forward by modelling the past...
Digital audio has brought the ease of manipulating sound to everyone with a computer, yet it seems musicians want things from the past, not the future: the sound of the first 'Fender Stratocaster' guitar, the 'pre-EB Stingray' bass guitar, the 'Fairchild 670 compressor' and the 'Pultec EQP-1A', for example, all make aficionados drool.
There is a huge market for recreating the sound of these classics, and the most affordable way is to use software rather than rebuild the instruments. Two promising ways are described here. One way, 'physical modelling', is to build a mathematical model in software of the hardware counterpart. This can be done with varying amounts of detail and sophistication, ranging from modelling a section of a hardware device down to modelling the individual components used. When building a physical model the researchers measure how the original alters the signal of various audio sources. They then create a mathematical formula for the software that best matches the performance of the actual hardware. It is important that they also do listening tests to fine-tune the model, as it is the 'sound' of the original that the user wants, not just a matching graph!
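To make that fitting step concrete, here is a minimal sketch in Python (using numpy and scipy): measure how the hardware changes signals at different input levels, pick a formula with a few adjustable parameters, then tune those parameters until the formula matches the measurements as closely as possible. The measurement numbers, the tanh-shaped saturation curve and the parameter names are invented for illustration; they are not the formula any real product uses.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical measurements: what level each input level comes out at after
# passing through the hardware unit (values invented for illustration).
input_level = np.linspace(-1.0, 1.0, 41)
measured_output = np.tanh(2.3 * input_level) * 0.8 + np.random.normal(0, 0.01, 41)

# Candidate formula for the model: a simple saturating curve with two
# parameters (drive and output gain) that we tune to match the measurements.
def saturation_model(x, drive, gain):
    return gain * np.tanh(drive * x)

# Fit the parameters so the formula matches the measured behaviour.
params, _ = curve_fit(saturation_model, input_level, measured_output)
drive, gain = params
print(f"fitted drive={drive:.2f}, gain={gain:.2f}")

# The fitted formula can now be applied to any audio signal in software.
def process(signal):
    return saturation_model(signal, drive, gain)
```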
Compared to the original, creating an instrument in software with physical modelling has some distinct advantages: you are not limited by the number of pieces of kit you own; just click to add another plugin. Furthermore, software is usually a lot cheaper than hardware and needs none of the latter's hands-on maintenance (well, OK, so there might be an update to install occasionally!). There is one drawback to the modelled version, though. Because the developers can only ever approximate the characteristics of the original, there always comes a point where the software designers say "that sounds close enough", but it's never 100 per cent exact.
A second method, called 'dynamic convolution', has been used by the company Focusrite, and a slightly different version by the company Acustica-Audio. It is based on the idea of taking an acoustical fingerprint of a room - extracting the acoustics of a room so that virtual versions of it can be created (see 'Sing in the Albert Hall'). Dynamic convolution takes a fingerprint not of a room but of the gear in question, for the different combinations of knob position and input level. As there are lots of combinations, this leads to a huge amount of information being stored. Often only a few positions are recorded to save time and file size, but the more combinations stored, the more accurate the result. Settings between those stored can also be reproduced by combining the fingerprints from the knob positions on either side. There are 44,100 samples in every second of CD-quality sound, and each individual sample can use a different one of those virtually created settings. This means that things like distortion and other characteristics of audio hardware can be reproduced more faithfully. Computers can only process so much information at once, though, so the fingerprints, which in theory are perfect, have to be cut down to the point where it is deemed that people can't hear the difference. This is called 'truncation'. So, like the modelling method, there is a point where the software is as close as possible, but it will never be 100 per cent perfect.
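As a rough illustration of the idea (not Focusrite's or Acustica-Audio's actual algorithms, which are far more sophisticated), here is a short Python sketch: a couple of invented fingerprints measured at different input levels, a rule for blending them to cover the levels in between, and a convolution loop that picks a blended fingerprint for every individual sample. The fingerprints are kept very short here, a crude stand-in for the 'truncation' described above.

```python
import numpy as np

# Hypothetical stored fingerprints (impulse responses) measured at two
# different input levels; real products store many more measurements.
fp_quiet = np.array([1.00, 0.30, 0.10, 0.03])
fp_loud  = np.array([0.85, 0.50, 0.25, 0.10])   # louder inputs distort more

def fingerprint_for(level):
    """Blend the stored fingerprints according to how loud the current
    input sample is, so levels between measurements are covered too."""
    mix = min(abs(level), 1.0)
    return (1 - mix) * fp_quiet + mix * fp_loud

def dynamic_convolve(signal):
    out = np.zeros(len(signal))
    for n, x in enumerate(signal):
        fp = fingerprint_for(x)                  # a fingerprint per sample
        end = min(n + len(fp), len(signal))
        out[n:end] += x * fp[:end - n]           # spread this sample's contribution
    return out

one_second = np.random.uniform(-1, 1, 44100)     # one second at CD quality
processed = dynamic_convolve(one_second)
```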
Which method is best? There's no clear answer. Dynamic convolution is the most likely to deliver the closest sound to the hardware, as measurements from the exact unit are used by the software, but it can be very CPU-intensive - it takes lots of computing power. That's why Focusrite sell it in a piece of hardware rather than as software to run on your own computer. On the other hand, if done well, physical modelling can come extremely close to the real deal and very often doesn't use much computing power at all, so lots of virtual hardware can be used at once. Either way, people always seem impressed and excited by a software version of a hardware classic, even though they have probably never heard (nor could afford!) the original, which means they won't know if it's not 100 per cent perfect anyway!