music editing / decomposition

Alan Kay Alan.Kay at squeakland.org
Thu Sep 18 18:04:02 UTC 2003


Andy Moorer's (THX, NoNoise, etc.) Stanford thesis in the late 60s or 
early 70s showed how to use comb filters to track a single instrument 
in the middle of other instruments. Quite a bit better could be done 
today (and I'd be surprised if more and better "disentangling" work 
hasn't been done in the last 25 to 30 years). However, the 
combination of the cochlea and the brain is pretty amazing even so. 
The key was thought to be (and I think it still is) that it isn't 
really about harmonics per se, but about phase spaces (where "FM 
timbre" generation is an interesting analogy), and the physiological 
mechanisms are so acute because these mechanisms are the ones used to 
do "cocktail party listening" where one can home in on and follow a 
single conversation amongst many, and switch conversations at will.
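
To make the comb-filter idea concrete, here is a rough sketch (not 
Moorer's actual algorithm; it assumes a mono float signal and a known, 
fixed fundamental, and the function name and parameters are invented 
for illustration). A feedback comb filter whose delay equals one pitch 
period reinforces that note's harmonic series and relatively attenuates 
everything that doesn't line up with it:

import numpy as np

def comb_track(signal, sample_rate, fundamental_hz, feedback=0.9):
    """Toy feedback comb filter tuned to one pitch: the delay equals one
    period of fundamental_hz, so partials on that harmonic series are
    reinforced and everything else is relatively attenuated."""
    delay = int(round(sample_rate / fundamental_hz))   # samples per period
    out = np.zeros_like(signal, dtype=float)
    for n in range(len(signal)):
        fed_back = out[n - delay] if n >= delay else 0.0
        out[n] = signal[n] + feedback * fed_back
    return out * (1.0 - feedback)    # rough gain compensation

# e.g. emphasise the harmonics of A2 (110 Hz) in a 44.1 kHz mixture:
# tracked = comb_track(mix, 44100, 110.0)

Tracking a real instrument would of course need the delay to follow the 
melody note by note (and something much smarter than a single fixed 
comb), which is part of why the problem is hard.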

Cheers,

Alan

At 5:40 PM +0100 9/18/03, Gary McGovern wrote:
>Thanks for all those links and resources. I'm more than surprised this
>isn't more evolved. I thought that if the human ear can selectively pick
>out a track in a composition / song, then that track must have its own
>unique state and behavior, and it would just be a matter of defining that
>state and behavior and identifying those tracks electronically.
>
>Thanks again.
>
>Gary
>
>>>  From: [...] Gary McGovern
>>>  I've been told it doesn't exist, but I can't believe it.
>>
>>I'd be much more surprised if it *did* exist - most particularly for MP3.
>>
>>>  I've been looking for software that will take an MP3 or similar and break it
>>>  down into individual tracks (drums, guitar, synth, etc.) and a score.
>>
>>It's been done for single instruments; IIRC there's a hardware device out
>>there that will take a single, pitched note from eg. a guitar or flute and
>>turn the data into a MIDI stream.  That's feasible - although even in this
>>case it gets confused about fundamental/harmonic on occasions.  Take the
>>same guitar [bit difficult on a flute :-)] and play chords on it, and the
>>problem gets harder.  Consider, for example, a power chord of root, fifth
>>and octave.  The frequencies present in that get very confusing, as the
>>fifth is 1.5x the root.  Again, an analysis system that was expecting
>>guitar might be
>>able to deal with this if the harmonics had the expected strengths; but a
>>more general system might reckon it was listening to an oboe with very
>>little of the fundamental, for example (the overtones on an oboe are
>>actually higher amplitude than the fundamental).
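
A worked example of that overlap, with made-up but typical frequencies 
(an A power chord: root 110 Hz, just fifth at 1.5x = 165 Hz, octave 
220 Hz), assuming ideal harmonic series with no inharmonicity:

root, fifth, octave = 110.0, 165.0, 220.0      # Hz: root, 1.5x root, 2x root

def partials(f0, up_to=1200.0):
    """Ideal harmonic series of f0 up to a cutoff (no inharmonicity)."""
    return [f0 * k for k in range(1, int(up_to // f0) + 1)]

print(sorted(set(partials(fifth)) & set(partials(root))))
# [330.0, 660.0, 990.0]  -> every 2nd partial of the fifth coincides
print(set(partials(octave)) <= set(partials(root)))
# True  -> the octave adds no frequencies the root doesn't already have

So a peak-based analyser sees one merged harmonic comb rather than three 
clearly separate notes.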
>>
>>That outlines the problems for one instrument.  Now consider the problems
>>with more than one.  For example, I play in a folk band.  We have two
>>melodeon [accordion] players, each playing a melodeon with four reeds per
>>note (so four sets of slightly detuned harmonics per note) via chorus
>>units (which detune again, so maybe 8-12 detuned sources per melodeon).
>>One can
>>get some information from relative strengths of harmonics and, by
>>implication, stereo positioning - but it's hard.  Much, though not all,
>>modern music is processed in this way to thicken the sound; it makes life
>>more difficult.  Classical orchestral music has similar issues because of
>>multiple players of the parts, although something like a string quartet
>>might be a good starter for analysis.
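
To put rough numbers on that (the detune offsets below are invented, but 
a few cents either way is typical of reeds plus chorus): half a dozen 
slightly detuned copies of a single partial smear it into a cluster only 
a few Hz wide, beating against itself, and every partial of every note 
gets the same treatment.

partial_hz = 440.0                             # one partial of one note
detune_cents = [-9, -3, 0, 4, 8, 11]           # six slightly detuned sources
cluster = [partial_hz * 2 ** (c / 1200) for c in detune_cents]
print([round(f, 1) for f in cluster])          # roughly 437.7 .. 442.8 Hz
print(round(max(cluster) - min(cluster), 1), "Hz spread for this one partial")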
>>
>>MP3 makes all of this worse because it removes information.  Stereo
>>imaging blurs; harmonic strengths change; phase information is lost.
>>Actually, CD's pretty bad for that too on the high frequencies - ask Ian
>>Piumarta about the difference between good vinyl on a Linn turntable and
>>a CD!  Ideally, one needs more information rather than less.
>>
>>Having said all of this, there's plenty of work in the field.  An
>>interesting starting point might be:
>>
>>http://www.aigeek.com/geek/music/
>>
>>		- Peter
>>
>>

