Analog vs. DSP Time effects

Started by modsquad, May 11, 2007, 10:26:54 AM

Previous topic - Next topic

modsquad

A question occurred to me while driving in to work today.  I know from my experience with my DSP2101 (a programming nightmare) that digital distortions suck unless you absolutely have to have an over-processed sound.  However, I was thinking that for effects such as flanging, echo, chorus, reverb, etc. that really rely on "retiming" and cycling the signal, digital processing is a convenient and more flexible way of producing these effects.

Am I missing something, or is there a big difference in sound for the non-distortion, non-gain-based effects when using digital processing?  One of the things I like is having the built-in EQ, mixers, etc. in the DSP unit.   What I don't like is programming the algorithms in the darn thing.  Just wanted to get your opinions on it.  I am trying to determine whether I want to use the effects loop in it or not.
"Chuck Norris sleeps with a night light, not because he is afraid of the dark but because the dark is afraid of him"

Mark Hammer

Roger Mayer, in his interview for Dave Hunter's book on effects pedals, makes an interesting point that analog-vs-digital comparisons show up primarily during the decay portion of a note's lifespan.  His contention is that, where analog devices take a snapshot of the signal with what amounts to essentially "infinite resolution" no matter what the actual absolute amplitude of the signal is, analog-to-digital conversion has fewer bits of resolution to allot to the signal at lower amplitudes.  I don't know all that much about the state of A/D conversion these days, but I gather that reduction in resolution is much less of a problem when delta modulation is used (where what is encoded is the change between sample N and sample N+1).  Given that we've been content with 44.1 kHz sampling at 16-bit resolution on CDs for a while now, I imagine that drops in resolution during quieter parts of an echo decay are less of an issue when one is dealing with 24-bit resolution at 96 kHz sampling rates.
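To put rough numbers on that resolution argument, here is a minimal Python sketch (the 440 Hz tone, the 16-bit depth, and the -60 dB "echo tail" level are just illustrative choices, not from any particular converter): quantize the same sine at full scale and at a quiet echo-tail level, and count how many codes the converter actually has available to describe it.

```python
import numpy as np

def quantize(x, bits):
    """Round a [-1, 1] signal onto a signed integer grid of the given depth."""
    levels = 2 ** (bits - 1)              # 32768 steps per polarity at 16 bits
    return np.round(x * levels) / levels

fs = 44100
t = np.arange(fs) / fs
sine = np.sin(2 * np.pi * 440 * t)

for amp in (1.0, 0.001):                  # full scale vs. a -60 dB echo tail
    q = quantize(amp * sine, 16)
    codes = len(np.unique(q))             # distinct codes the signal exercises
    err = amp * sine - q
    snr = 10 * np.log10(np.mean((amp * sine) ** 2) / np.mean(err ** 2))
    print(f"amplitude {amp}: {codes} codes in use, SNR about {snr:.0f} dB")
```

The quiet tail ends up with only a few dozen codes and roughly 60 dB less signal-to-quantization-noise: the "fewer bits at low amplitude" effect Mayer describes.  Each extra bit of depth buys about 6 dB, which is why 24-bit conversion pushes the whole problem much further down.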

Digital distortion IS hard to do, partly because the transformation of the signal is not as straightforward as what happens with time delay and time modulation.  Those latter transformations are mapped onto the signal regardless of the moment-to-moment changes in signal level and spectral properties, the same way they are in the analog domain.  Distortion is one of those things where the properties of the "transformation" imposed on the signal depend on: a) the moment-to-moment amplitude shifts, b) the spectral content, and c) the "support" for handling those shifts on the part of the semiconductors, power supply, and other aspects of the circuit.  In effect, there is far more to "describe" in any algorithm for distortion than there is with respect to simple time shifting or even filtering.

One of the other differences I keep harping on is an almost incidental one.  Years ago, it was either Len Feldman or Julian Hirsch, or one of the other regulars in Stereo Review, who commented in response to the many complaints about digital recordings that were cropping up around 1980-1982.  Many audiophiles complained that digital recordings sounded "harsh", "brittle", "sterile".  In part, I imagine some of it was sour grapes, but Feldman or whomever made an interesting point that we were witnessing the growing pains of an engineering culture that was simply misapplying what they had been required to do in the analog context.  The point he made was that the whole approach to mic-ing and EQ-ing had evolved in the context of doing whatever was required to "cut through" on analog tape.  When the greater dynamic range and extended bandwidth of digital recording was being used, those same mic-ing and EQ-ing techniques would result in exaggerated treble, even though they might produce "balanced" treble when analog tape was used.  In other words, he was satisfied that the challenge was one of adapting to the new technology, rather than improving the technology in some manner.  The technology was ready for US, but we weren't ready for IT.

I sensed a similar sort of "getting used to it" in the transition to digital from analog.  As I am maybe a little too fond of pointing out, in the real world, reflected sound rarely, if ever, has the same bandwidth as the original signal.  Longer delays and later reflections are much duller because they have bounced off more, and more distant, surfaces, and those surfaces have absorbed the weakest parts of the spectrum - the treble.  Through happenstance rather than deliberate sonic design, the filtering needed to keep noise and clock whine out of analog delays imposes a certain amount of dulling that ends up mimicking the real world a little more effectively.  Because a digital code can be held in memory without error, full original bandwidth can be preserved at even the longest delay times in digital systems.  Historically, it seemed that digital devices aimed for maximum bandwidth (oops, almost typed in "nadwidth" there! :icon_wink: ), because, after all, THAT's what higher fidelity is....in the world of direct sound.  "Fidelity" and faithfully reproducing an auditory phenomenon (the sound of physical space) are not exactly the same thing, though, and many of the same criticisms of digital delays (compared to analog delays) have arisen through what I think was a process similar to what was described earlier about digital vs analog recording.  That is, there was a failure to adapt the approach to design such that the same aural result could be, or was, achieved using whatever advantages the new technology could provide.
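That incidental dulling is easy to put back deliberately in a digital delay: stick a simple lowpass inside the feedback loop, so each repeat passes through it one more time than the last.  A minimal Python sketch (the delay time, feedback amount, and 3 kHz cutoff are arbitrary illustrative values, not from any particular unit):

```python
import numpy as np

def dark_echo(x, fs, delay_ms=350.0, feedback=0.5, cutoff_hz=3000.0):
    """Feedback echo with a one-pole lowpass inside the loop.  Every trip
    around the loop filters the repeat one more time, so later repeats
    come back duller -- much as the anti-noise filtering in an analog
    delay does by accident."""
    d = int(fs * delay_ms / 1000)             # delay length in samples
    buf = np.zeros(d)                         # circular delay line
    y = np.zeros_like(x)
    lp = 0.0                                  # one-pole lowpass state
    a = np.exp(-2 * np.pi * cutoff_hz / fs)   # pole coefficient for the cutoff
    idx = 0
    for n in range(len(x)):
        wet = buf[idx]                        # sample written d samples ago
        lp = (1 - a) * wet + a * lp           # darken the recirculating signal
        y[n] = x[n] + lp
        buf[idx] = x[n] + feedback * lp       # filtered repeat re-enters the loop
        idx = (idx + 1) % d
    return y
```

Feed it an impulse and each successive repeat comes back both quieter and duller, which is closer to how reflections behave in a real room than a full-bandwidth digital repeat.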

This is all the long way of saying that distortion in the digital domain is hard to do convincingly because of algorithm complexity, and time-based effects are easy to do but just as easy to bugger up in the digital domain if you don't think enough about what you're trying to do.

modsquad

Quote from: Mark Hammer on May 11, 2007, 11:18:48 AM
As I am maybe a little too fond of pointing out, in the real world, reflected sound rarely, if ever, has the same bandwidth as the original signal.  Longer delays and later reflections are much duller because they have bounced off more, and more distant, surfaces, and those surfaces have absorbed the weakest parts of the spectrum - the treble.  Through happenstance rather than deliberate sonic design, the filtering needed to keep noise and clock whine out of analog delays imposes a certain amount of dulling that ends up mimicking the real world a little more effectively.  Because a digital code can be held in memory without error, full original bandwidth can be preserved at even the longest delay times in digital systems.  Historically, it seemed that digital devices aimed for maximum bandwidth (oops, almost typed in "nadwidth" there! :icon_wink: ), because, after all, THAT's what higher fidelity is....in the world of direct sound.  "Fidelity" and faithfully reproducing an auditory phenomenon (the sound of physical space) are not exactly the same thing, though, and many of the same criticisms of digital delays (compared to analog delays) have arisen through what I think was a process similar to what was described earlier about digital vs analog recording.  That is, there was a failure to adapt the approach to design such that the same aural result could be, or was, achieved using whatever advantages the new technology could provide.
It's interesting you say that.  My DSP2101 has an insane number of parameters on all the time-based effects, especially the reverb and echo.  After doing some reading in Anderton's book on effects processing, I see it's the engineers attempting to dial in what you are saying above.   I never really fooled with them too much and left them pretty much stock.   Digital sure makes it easy to get umpteen-stage phasers and choruses, though.  I fully understand what you are saying about digital distortion.  I suppose that's why the DSP2101 has two 12AX7s in the preamp section to allow for tube overdrive if you need it.

I am assuming the same theories, etc. apply to compression also.  Thanks Mark, great thoughts.  Probably will wipe the dust off the ole rack unit and play with the effects loop and using the digital time effects.
"Chuck Norris sleeps with a night light, not because he is afraid of the dark but because the dark is afraid of him"

A.S.P.

a vote for "resolution"-companders in digital delays?
Analogue Signal Processing

Mark Hammer

We should probably distinguish between "echo" and "reverb".  Echo was essentially the first digital effect, going back to those Deltalab Effectrons and early A/DA rack units, etc.  There, we had no real "processing" in the sense of transforming the signal in some manner, merely sampling as accurately as possible, shifting the samples along, and playing the samples back.  Digital reverb is a far more complex process, and involves not only delaying and playing back, but transforming/processing the various reflections so as to mimic the multiple reflective surfaces and reflections that happen in a real physical space.  I suppose there are always Q&D ways of doing it, but when something says "Cathedral" or "Hallway" among its reverb settings/programs, there are some built-in assumptions about how the signal decays in a qualitative sense, not just what the longest delay and decay times are.  Creating emulations of different physical spaces (including their reflective properties - marble walls and floors vs carpet and drywall) requires multiple parameters to be adjusted.
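The "no real processing" point shows in how little code a bare digital echo needs.  A toy Python sketch (the delay time and mix values are arbitrary): samples go into a buffer, sit there unchanged, and come back out N samples later, at full bandwidth.

```python
import numpy as np

def plain_echo(x, fs, delay_ms=300.0, mix=0.5):
    """Bare-bones digital echo: delay the samples, add them back.
    Nothing touches the stored samples, so the repeat keeps the full
    bandwidth of the original signal."""
    d = int(fs * delay_ms / 1000)         # delay length in samples
    delayed = np.concatenate([np.zeros(d), x])[:len(x)]
    return x + mix * delayed
```

A reverb program, by contrast, needs many such delay paths plus per-reflection filtering and density build-up, which is where the "Cathedral"-vs-"Hallway" parameter count comes from.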

It's a bit like drawing.  If all you want to do is show that someone HAS hair, then cartoon drawings with a marker may be more than sufficient.  If you want to present a sense of the texture of their hair, then you need to have more drawing technique and tools at your disposal.  Straightforward repeat-type echo is like marker-drawn cartoons or caricatures, where 3 or 4 parameters (time, repeats, mix) are sufficient to create the impression that there IS space.  Conveying just what sort of space requires more complex adjustment of many more parameters.

Just as an aside, if you look at the thread on DD-3 mods, you will see descriptions of different means of adjusting the tonal properties of the wet signal.  One of the things I like about those mods, whether in a simple digital pedal or an analog delay, is that they start to move in the direction of being able to create the feel of different spaces by altering the quality of the reflections/repeats.  Not exactly Lexicon reverb, but more complex and nuanced than 3-knob echo.

QSQCaito

Awesome explanation Mark.

It seems to me that, as has been said, digital sounds too perfect (in some cases), and that's not what people are looking for these days... where everything is labelled "vintage".

Maybe there could be a mix of both: make your perfect delay, and then with "analog components" make it sound as you wish. Is that possible?
D.A.C

David

Quote from: QSQCaito on May 11, 2007, 12:42:28 PM
Awesome explanation Mark.

It seems to me that, as has been said, digital sounds too perfect (in some cases), and that's not what people are looking for these days... where everything is labelled "vintage".

Maybe there could be a mix of both: make your perfect delay, and then with "analog components" make it sound as you wish. Is that possible?

Absolutely!  Scott Swartz's PT-80 is a digital soundalike of the analog AD-80 delay.

A.S.P.

Analogue Signal Processing

modsquad

I think where Mark nails it is with the explanation that digital sounds too "perfect" because people were recording and using the digital effects in the same way they did analog.  It's a different beast.  Everything now is digitally recorded to CD (okay, I know, most everything).  However, engineers are not using the same techniques that Mark mentions anymore, so you get a somewhat more "natural" sound.

Also, take for example the time-delay effects in the DSP, which have a lot of parameters that can be adjusted.  I think this is due to the fact that guitarists, engineers, etc. wanted a less "sterile" sound.  This allows us to "dial in" what we want to hear.   Of course, throw all that out when talking about digital distortion.
"Chuck Norris sleeps with a night light, not because he is afraid of the dark but because the dark is afraid of him"

Paul Perry (Frostwave)

Quote from: A.S.P. on May 11, 2007, 12:28:30 PM
a vote for "resolution"-companders in digital delays?
That's the 'idea of the day'!
I guess we could design an external box with analog compander that acts as a 'wrapper' for the digital fx core!
And there are bound to be some artifacts that (with luck!) will be a + as well :icon_wink:
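For what it's worth, the compander-wrapper idea can be sketched in a few lines of Python.  This is just a toy: it uses a μ-law-style curve applied sample-by-sample and a deliberately coarse 8-bit "delay line" so the effect is obvious, whereas the real companders used around BBDs (NE570/571 and kin) work on the signal envelope instead.

```python
import numpy as np

def mu_law(x, mu=255.0):
    """Compress: boost low levels before quantizing (instantaneous companding)."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_inv(y, mu=255.0):
    """Expand: exact inverse of mu_law."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

def quantize(x, bits):
    levels = 2 ** (bits - 1)
    return np.round(x * levels) / levels

# A -60 dB signal through a deliberately coarse 8-bit "delay line",
# with and without the compander wrapped around it.
x = 0.001 * np.sin(2 * np.pi * np.arange(1000) / 50)

plain = quantize(x, 8)                        # straight 8 bits: the signal vanishes
wrapped = mu_law_inv(quantize(mu_law(x), 8))  # companded: resolution survives

err_plain = np.mean((x - plain) ** 2)
err_wrapped = np.mean((x - wrapped) ** 2)
print(f"error power, plain: {err_plain:.2e}  companded: {err_wrapped:.2e}")
```

At this level the plain path rounds every sample to zero, while the companded path keeps a usable (if gritty) copy of the signal - and yes, the expander's artifacts are exactly where the "with luck, a plus" part would come in.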

dirk

Quote from: Mark Hammer on May 11, 2007, 11:18:48 AM
This is all the long way of saying that distortion in the digital domain is hard to do convincingly because of algorithm complexity, and time-based effects are easy to do but just as easy to bugger up in the digital domain if you don't think enough about what you're trying to do.

Echo, delay, reverb, EQing, and mixing are all linear processes. DSPs excel at linear processing; it's just adding and subtracting. To do this correctly you only need more bit depth, 32-bit floating point for example.

Distortion, tube emulation, and compression are all nonlinear processes. They are very difficult to do on DSPs because nonlinear processing creates extra frequencies, and those cause aliasing. To prevent those aliased frequencies you have to upsample a lot, which eats DSP power fast. Of course, you have to have the extra bit depth as well.
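The aliasing point can be demonstrated in a few lines of Python (the 3.1 kHz tone, the hard-clip threshold, and the 8x oversampling factor are arbitrary illustrative choices): clip a sine at the base rate, then do the same at 8x the rate with a brick-wall filter before decimating, and compare.

```python
import numpy as np

fs, f0, N = 48000, 3100.0, 4096
t = np.arange(N) / fs

def clip(x):
    return np.clip(x, -0.3, 0.3)          # hard clipper: generates lots of harmonics

# Naive: clip at the base rate.  Harmonics above fs/2 have nowhere to go
# and fold back down as inharmonic aliases.
naive = clip(np.sin(2 * np.pi * f0 * t))

# Oversampled: clip at 8x the rate, brick-wall filter at the target
# Nyquist, then take every 8th sample.
os = 8
t_hi = np.arange(N * os) / (fs * os)
spec = np.fft.rfft(clip(np.sin(2 * np.pi * f0 * t_hi)))
spec[len(spec) // os:] = 0                # remove everything above fs/2
clean = np.fft.irfft(spec)[::os]

alias = naive - clean                     # what folding back added
db = 10 * np.log10(np.mean(alias ** 2) / np.mean(clean ** 2))
print(f"alias energy about {db:.1f} dB relative to the signal")
```

The difference between the two versions is essentially the folded-back harmonic energy; in the naive version, for instance, the 11th harmonic at 34.1 kHz shows up as a spurious, inharmonic component near 13.9 kHz.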

rove

Quote from: Mark Hammer on May 11, 2007, 11:18:48 AM
One of the other differences I keep harping on is an almost incidental one.  Years ago, it was either Len Feldman or Julian Hirsch, or one of the other regulars in Stereo Review, who commented in response to the many complaints about digital recordings that were cropping up around 1980-1982.  Many audiophiles complained that digital recordings sounded "harsh", "brittle", "sterile".  In part, I imagine some of it was sour grapes, but Feldman or whomever made an interesting point that we were witnessing the growing pains of an engineering culture that was simply misapplying what they had been required to do in the analog context.  The point he made was that the whole approach to mic-ing and EQ-ing had evolved in the context of doing whatever was required to "cut through" on analog tape.  When the greater dynamic range and extended bandwidth of digital recording was being used, those same mic-ing and EQ-ing techniques would result in exaggerated treble, even though they might produce "balanced" treble when analog tape was used.  In other words, he was satisfied that the challenge was one of adapting to the new technology, rather than improving the technology in some manner.  The technology was ready for US, but we weren't ready for IT.
Being a recording engineer with limited experience with analog tape and extensive experience with a variety of digital formats, I find that this is still often the case.  I get comments that my recordings sound so great "for digital," or that people can't believe it's not tape, but the reality is that digital (at its best) simply takes what you put in and spits it back out.  People still mix for tape in the digital domain, though, and it can sound awful, because the bits don't need to cut through any magnetic particles with skewed frequency response  ;)

A.S.P.

yes, the bits don't have to cut through: they already are chopped sound!
Analogue Signal Processing