Musical Signal Models for Audio Rendering
Friday, February 2, 2001 - 12:45pm - 2:00pm
Julius Smith (Stanford University)
This talk will summarize several lines of research in music/audio signal processing that are applicable to audio compression and data reduction. While the techniques were originally motivated by the desire for realistic virtual musical instruments (including the human voice), the resulting rendering models can be transmitted efficiently to a receiver as a specialized software decoder, which is then driven by a very sparse data stream. In most cases, there is also a straightforward tradeoff between rendering quality and computational expense at the receiver. Since all models are built from well-behaved audio signal processing components, the distortion at low complexity levels tends to be high-level in character, sounding more like a different instrument or performance than like a distorted waveform.
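As an illustration of the idea (not drawn from the talk itself), the classic Karplus-Strong plucked-string model shows how a compact synthesis algorithm can be driven by a very sparse data stream: the "decoder" is a short delay-line loop, and the transmitted data per note is little more than a pitch and a duration. The function name and parameters below are a minimal sketch, not any particular codec's API.

```python
import random

def karplus_strong(frequency, sample_rate=44100, duration=1.0, seed=0):
    """Synthesize a plucked-string tone with the Karplus-Strong algorithm:
    a delay line initialized with noise, fed back through a two-point
    averaging (lowpass) filter. The control data needed to drive it is
    tiny -- essentially just a frequency and a duration."""
    rng = random.Random(seed)
    n = int(sample_rate / frequency)  # delay-line length sets the pitch period
    buf = [rng.uniform(-1.0, 1.0) for _ in range(n)]  # "pluck" = burst of noise
    out = []
    for i in range(int(sample_rate * duration)):
        x = buf[i % n]
        out.append(x)
        # the averaging filter models frequency-dependent string losses,
        # so the tone decays naturally without any per-sample control data
        buf[i % n] = 0.5 * (x + buf[(i + 1) % n])
    return out

# One note of A440: the entire "bitstream" is two numbers.
samples = karplus_strong(440.0, duration=0.5)
```

The quality/cost tradeoff mentioned above shows up naturally in such models: a longer or higher-order loss filter gives a more realistic decay at greater per-sample expense, while the cheap two-point average still sounds like a plausible string rather than a distorted waveform.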