Comparison analog <-> digital distortion

Started by carrejans, March 09, 2009, 10:04:49 AM

Previous topic - Next topic

JDoyle

This is my own subjective, no-proof-whatsoever opinion, but I believe that the act of sampling a signal, no matter how many times per second, along with the tiny but real time it takes a processor to 'work' on the signal, leads to an inorganic sound that seems separate or 'independent' of the entire 'system' of the guitarist+FX+amp combination.

I think having something react in real time, to your ENTIRE signal, is vital to an organic feeling and sounding rig.

We all know that everything matters, and it all interacts, so in my mind cleaving out the guitar line, turning it into ones and zeros, sending that through a formula and reconstituting the result as the guitar signal, takes away the 'soul' and deprives your entire 'system' of the natural and real time changes that occur throughout.

Again, my opinion...

Jay Doyle

drk

I think digital effects don't necessarily need to sound exactly like analog ones. I just think of them as something different, with their own characteristics.

Mark Hammer

In science, one of the basic principles is that the more articulately and realistically you can describe a phenomenon in detail, the closer you can get to explaining it, and eventually predicting and controlling it. So, if I know in detail how a cancer cell behaves under various circumstances, the better able I become to explain why it behaves that way, and devise means to influence it.

Distortion is often portrayed in terms of static waveforms.  But as I keep reminding people, a guitar is not a steady-state signal generator.  So, the tone is a product of the interaction between the moment-to-moment properties of the signal, and what happens to the circuit properties, as a function of that ever-changing signal.  And it's not just the current signal level, relative to some diode forward voltage.  It is the current signal level relative to the last 2-3 milliseconds, and what that might or might not do to the battery, the behaviour of electrolytic caps, etc., PLUS the spectral content of the signal.  And remember as well that thicker strings have more output than thinner ones.

So, while comments regarding the use of "cheap DSP" chips are marginally applicable here, it is not so much the cheapness of the chip as much as it is the extent to which the chip is able to apply realistic algorithms in real time in a realistic way.  That requires speed, and yes resolution, but the world's most expensive 48-bit DSP allocated exclusively to producing distortion digitally will get you nowhere fast unless the algorithms are realistic and comprehensively descriptive of what normally happens to the signal in the analog/real world.  Remember, that is what one is trying to emulate in the digital domain.  Once upon a time, the limitations really WERE in the technology itself, with 8-bit processing at 16 kHz sample rates and such.  Even if you knew exactly what happened to the signal, moment to moment, the technology couldn't productively use that descriptive/algorithmic information.  At this point, however, the limitations are really in the human mind and the algorithmic description/depiction of what happens when guitar signals produce distortion.  The chips themselves are more than fast and precise enough to do the job. After all, they can reproduce already-recorded guitar distortion to our satisfaction.

What people dislike most about digital distortion, I think, is not the tone produced.  You can get some really nice tones.  Rather it is the way that the tone produced is, or feels, less responsive to the dynamics and shifting properties of one's playing.  In other words, you can't wring as many different sounds out of digital distortion with your fingers and pick as you can out of an analog distortion.

So, ultimately, what is missing is a complete description for the DSP chip of what to do when you encounter X, Y, and Z in the player's picking.  Case in point, there are often requests for identifying what distortion box was used on various 60's hits.  People try this one and that one and nothing ever seems to come close.  All unit-to-unit variation aside (and that is not inconsiderable), they forget that many of those hits were played by studio musicians on big-body jazz guitars using floating bridges and medium-to-heavy gauge strings.  The signal hitting the fuzzbox was entirely different, especially in terms of its attack-vs-decay-phase properties, so the behaviour of the circuit was also different.  Are current digital models articulate enough to produce such nuanced performance?  My hunch is "Not yet".

Again, this is not to say it is impossible.  The challenge really IS in the describing rather than the chips now.  If you want better digital distortion, you need to spend more time thinking about, and mapping, the properties of analog distortion under a wide range of conditions.  As it happens, most experts tend to be unable to describe how they do something, so we've been hampered in the task of creating the appropriate algorithms because we have been hampered in our ability to describe the phenomenon itself.  It's not just what shows up on the scope screen.  It's what you DID to produce that.

doc_drop

I played digital sims, both hardware and software for years. I recently built some ROG amp pedals as I was starting to get into this hobby. It is like night and day. The pedals beg me to play them, they just sound and feel good. The sims on the other hand beg me to tweak them, hoping to get a good sound. I feel like an idiot for not learning how to build years ago!

I do think it comes down to two things. The analog stuff is just more responsive to picking, guitar volume, etc. And not having any latency at all really, really helps it to feel, uh, immediate. I swear my playing has improved a bunch from the sheer pleasure of playing through pedals and not my DAW.

As far as the math, I'll take you guys' word for it...

iaresee

I think you need to be careful about what it is you're talking about: simulating distortion, or simulating an amplifier-speaker system (that may or may not be overdriving at any part of its signal chain).

There are things you can do in the digital domain that can't readily be done in the analog domain. Some things can't be done at all in the analog domain. And there are companies now taking advantage of this to great effect. For example the Source Audio SoundBlox Multiwave Guitar Distortion -- that's an all-digital distortion box that does a number of things analog has a hard time with (or can't do at all). Specifically the multi-band distortion (can be done in analog, but is harder -- you can't build perfect filters in analog, and incremental improvements come at great expense). And their foldback distortion algorithm.
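As a rough illustration of the multi-band idea (this is not Source Audio's actual algorithm; the crossover frequency, drive amount, and the simple one-pole band split are all assumptions for the sketch), a digital implementation might split the signal and clip each band independently:

```python
import numpy as np

def onepole_lowpass(x, fc, fs=48000.0):
    # Simple one-pole IIR lowpass; coefficient from the cutoff frequency
    a = np.exp(-2.0 * np.pi * fc / fs)
    y = np.zeros_like(x)
    z = 0.0
    for i, s in enumerate(x):
        z = (1.0 - a) * s + a * z
        y[i] = z
    return y

def two_band_distortion(x, split_hz=800.0, drive=8.0, fs=48000.0):
    low = onepole_lowpass(x, split_hz, fs)
    high = x - low                       # complementary high band
    # Clip each band independently, then recombine; low-band grit
    # no longer intermodulates with the highs the way it would in
    # a single full-range clipper
    return np.tanh(drive * low) + np.tanh(drive * high)
```

Doing the same split in analog takes real filter networks with component tolerances; in the digital domain the crossover is exact and essentially free.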

Here's a great paper they released on the Multiwave approach to digital distortion: http://www.sourceaudio.net/whitepapers/multiwave_distortion.pdf

If all you're concerned with is mimicking analog sounds in the digital domain, you're only seeing a very small piece of what can be done with digital distortion. The guys at Source Audio, they "get it". Digital gives you new possibilities and they're exploring those, my opinion here, to great effect. Why make digital reproduce what's already done in the analog domain? That's so short-sighted, really. You've got other companies out there doing nice bit-reduction and sampling-type distortion. Totally unique, digital-domain sounds. Onwards and upwards!

Cliff Schecht

Quote from: Mark Hammer on March 10, 2009, 01:16:22 PM
In science, one of the basic principles is that the more articulately and realistically you can describe a phenomenon in detail, the closer you can get to explaining it, and eventually predicting and controlling it. [...] It's not just what shows up on the scope screen.  It's what you DID to produce that.
Mark, I'm glad you wrote up a big long response. I was about to do the same but was dreading how long my response would be. This topic has so much breadth to cover that it could easily fill up a typical engineering text. Still, I'd like to tack onto what you've already stated.

Doing "real-time" digital signal processing (in quotes because it's always at least a sample behind) for something as dynamic as distortion is no simple task (duh!). As you've already stated, it's all about the algorithm used. I've seen some great attempts and some not-so-great ones. One of my favorite bad attempts was one I read about in a student's paper; they did a multi-FX unit as their senior project. Their distortion algorithm was literally "if x is greater than 2, out = 2; if x is less than -2, out = -2". Essentially, they were doing a static hard clip, which I never heard, but I guarantee it sounded TERRIBLE!

The real programmers know that in order to properly model an analog-type distortion, you have to understand and model everything that produces the non-linear distortion characteristic you are after. For diode-based distortion, it's about accurately modeling the I-V curve of whatever particular diode you like, as well as the resistance and capacitance that shift as you change the signal levels in the device. With a transistor device, you have to design your model according to a set of equations (and limits) that define the operation of a transistor. The tricky part here is modeling the dynamic changes that occur within a transistor as you use a small wiggle at the base to get a much larger one at the collector (and the steady-state parameters as well) so that everything sounds and acts natural. If you don't model the naturally exponential behaviour of a transistor device, for example, you start running into problems like unnatural-sounding decay. Plus, one has to factor in all of the components surrounding the transistor; these obviously play a vital role in the final sound heard at the output of your distortion box.
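As a concrete (and deliberately simplified) sketch of the diode part: the static I-V curve is the Shockley equation, and even ignoring the level-dependent resistance and capacitance mentioned above, modeling a diode-pair-to-ground clipper already means solving an implicit equation every sample. The diode parameters and resistor value below are illustrative assumptions, not taken from any particular pedal:

```python
import numpy as np

IS = 1.0e-12   # reverse saturation current (assumed, 1N4148-ish)
N = 1.8        # emission coefficient (assumed)
VT = 0.02585   # thermal voltage at room temperature
R = 2200.0     # series resistance feeding the clipper (assumed)

def diode_clipper(vin, iters=80):
    """Static antiparallel-diode-pair-to-ground clipper: solve
    vin = vout + R * Id(vout) per sample by Newton iteration.
    Inputs are assumed to be around a volt or so peak."""
    vout = np.clip(vin, -0.7, 0.7)  # sane starting guess near the knee
    for _ in range(iters):
        e_p = np.exp(vout / (N * VT))
        e_m = np.exp(-vout / (N * VT))
        i_d = IS * (e_p - e_m)               # odd-symmetric pair current
        di = (IS / (N * VT)) * (e_p + e_m)   # dId/dVout
        f = vout + R * i_d - vin
        vout = vout - f / (1.0 + R * di)
    return vout
```

Even this is still memoryless; the thread's point is that the real device adds level-dependent resistance and capacitance on top, so the effective threshold itself has memory.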

Sigh... There's a lot to write about this topic and not enough time!! I've got a 40 page paper due Thursday :icon_eek:..

Processaurus

Quote from: JDoyle on March 10, 2009, 12:37:36 PM
This is my own subjective, no-proof-whatsoever opinion, but I believe that the act of sampling a signal, no matter how many times per second, along with the tiny but real time it takes a processor to 'work' on the signal, leads to an inorganic sound that seems separate or 'independent' of the entire 'system' of the guitarist+FX+amp combination.

I think having something react in real time, to your ENTIRE signal, is vital to an organic feeling and sounding rig.


An interesting fact: each foot your ear is away from your speaker is going to add ~1ms delay between playing and hearing the note.  Analog latency...
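To put a number on that (using ~343 m/s for the speed of sound at room temperature):

```python
SPEED_OF_SOUND = 343.0  # m/s at roughly 20 degrees C
FOOT = 0.3048           # metres per foot

def acoustic_latency_ms(feet):
    # Time for sound to travel from the speaker cone to your ear
    return feet * FOOT / SPEED_OF_SOUND * 1000.0
```

One foot works out to about 0.89 ms, so standing ten feet from your amp already buys you roughly 9 ms of "analog latency", which is more than many digital processing chains add.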

Cliff Schecht

Quote from: Processaurus on March 11, 2009, 12:16:26 AM
An interesting fact: each foot your ear is away from your speaker is going to add ~1ms delay between playing and hearing the note.  Analog latency...

Plus you have amplitude and phase differences because of part (and speaker) tolerances :D.

Lurco

Quote from: Cliff Schecht on March 11, 2009, 12:40:56 AM
Plus you have amplitude and phase differences because of part (and speaker) tolerances :D.

Those (speaker) tolerances, as well as the x feet of distance, apply in a digital simulation setup too!

carrejans

Quote from: Mark Hammer on March 10, 2009, 01:16:22 PM
In science, one of the basic principles is that the more articulately and realistically you can describe a phenomenon in detail, the closer you can get to explaining it, and eventually predicting and controlling it. [...] It's not just what shows up on the scope screen.  It's what you DID to produce that.

Thank you Mark for this explanation. It really got me thinking. And that's what I want. ;-)

carrejans

Quote from: iaresee on March 10, 2009, 03:00:00 PM
I think you need to be careful about what it is you're talking about: simulating distortion, or simulating an amplifier-speaker system (that may or may not be overdriving at any part of its signal chain). [...]

For my Master's Thesis, I am currently designing a (multi)-effects processor for hexaphonic pickups. In this way I also reduce the intermodulation distortion. I think it might even give better results than the distortion box from Source Audio.
Thanks for the link; I had never heard of them.


carrejans

Quote from: Cliff Schecht on March 10, 2009, 07:39:47 PM
Sigh... There's a lot to write about this topic and not enough time!! I've got a 40 page paper due Thursday :icon_eek:..

Thanks for contributing to this thread. And good luck with the paper; it can be hard sometimes.  :P

g-sus

Check out: http://www.simulanalog.org/

There are some white papers and a Guitarsuite VST plugin available based on their research :)

Mark Hammer

Quote from: carrejans on March 11, 2009, 06:28:19 AM
Thank you Mark for this explanation. It really got me thinking. And that's what I want. ;-)
My pleasure.

What led me to this was some reading I had done years ago on the history of recording.  It seems that when Thomas Edison was doing the county fair circuit with his wax cylinder recording machine (where you would pay money to record your voice, or that of your girlfriend...presumably to impress your girlfriend), reporters couldn't say enough about how "lifelike" the recordings were.  Of course, we laugh at that claim now, but at the time people were simply impressed by the very fact of a replayable recording.  With time, the novelty wore off, at which point we can assume that some people started being able to say "Well, you know, it doesn't sound exactly like the person's voice in real life".  At which point, we can safely assume that other people started going "Hmmm, well just exactly what IS the difference between these recordings and 'real life', and what could we do to nudge the recorded voice closer in that direction?".

And from there we have the steady progression of audio history, as we discover stuff like bandwidth, group delay, intermodulation, and a whole host of parameters of both the signal itself and the technology used to process that signal.  So, for instance, there IS no crossover distortion in "real life", but there IS such a thing in the technology used to produce/reproduce sound, and it interferes with the "realism" of sound.  Similarly, aliasing is an artifact introduced by the use of clock-based sampling.  It does not exist in the acoustic world, but when we try and emulate the acoustic world, we go "Geez, you know that doesn't sound quite right", at which point we recognize that what lower sampling rates and lower-resolution sampling introduce to the signal interferes with it sounding "life-like".  So, we embark upon improving the sampling technology to the point where it does not interfere with what the human ear needs to hear in order to eliminate audible differences between the real acoustic world and pathetic attempts to mimic it.
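Aliasing from naive digital clipping is easy to demonstrate numerically: clip a sine whose harmonics exceed Nyquist, and the excess energy folds back to an inharmonic frequency. The sample rate and frequencies below are chosen purely to make the fold obvious:

```python
import numpy as np

fs = 16000                        # deliberately low sample rate
f0 = 3000                         # fundamental frequency in Hz
n = np.arange(fs)                 # one second of samples
x = np.sin(2 * np.pi * f0 * n / fs)
y = np.clip(3.0 * x, -1.0, 1.0)   # hard clip -> strong odd harmonics

# The 3rd harmonic is 9 kHz, above Nyquist (8 kHz), so it folds
# back to 16 - 9 = 7 kHz: a frequency unrelated to the note played.
spectrum = np.abs(np.fft.rfft(y)) / len(y)
```

With one second of signal, bin k of `spectrum` is k Hz, and the folded 9 kHz harmonic shows up at bin 7000. Oversampling before the nonlinearity, then filtering and decimating, is the standard cure, which is exactly the kind of "improve the technology until it stops interfering" progression described above.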

That's what I mean by this relationship between description, explanation, prediction, and control.  First, you characterize what something IS.  Then you say WHY it is that way.  Then you can say when it will be like this or that, at which point you can make it be like this or that.  But it all starts with careful observation and thorough description.

As for hex distortion, and in the spirit of what I said above, let us consider what does and doesn't take place when a circuit that can introduce harmonic content is applied to a multi-string, vs single-string, signal source.  You've mentioned intermodulation, but let's go beyond that.  Note that the sum total signal of the normal pickup is a composite of all the contributions that individual strings make to overall signal amplitude, whereas the clipping threshold of the normal distortion circuit is fixed.  What this means is that the extent of clipping is a function of not only how hard the string is plucked, but how many strings are strummed and which ones.  Thicker strings will move one inexorably closer to any clipping threshold, and if an unwound string comes along for the ride, then the harmonic changes applied to the composite signal get applied to it as well.  This is, in fact, the very basis for the TS-808 design and the mid-hump.  It was an attempt to compensate for the "extra push" provided by wound strings that would introduce disproportionate clipping across the fretboard.  By introducing a disadvantage to the wound strings through filtering, the intent was to produce relatively equivalent clipping from low to high E.

But all of that is a) in the analog world, not digital, and b) in the monophonic world, not hex.  So the question immediately comes up: "How could I sidestep the intermodulation that occurs with a monophonic signal, but at the same time mimic the additive effect that strumming 3 wound strings normally has on a distortion box?"  In other words, "Smoke on the Water" should sound more distorted played on the E, A, and D strings than played on the D, G, and B strings.  What this suggests is that six parallel non-interacting processing pathways is not necessarily the route to go.  There needs to be some "meta" data which allows the parameters of the individual processing paths to be summed in intelligent ways.
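A toy calculation of that asymmetry; the per-string levels and the threshold are made-up numbers, but the additive effect they illustrate is the real phenomenon:

```python
# Hypothetical peak amplitudes per string, wound strings hotter
strings = {"E": 0.40, "A": 0.35, "D": 0.30, "G": 0.22, "B": 0.18, "e": 0.15}
threshold = 0.5  # fixed diode-style clipping level (illustrative)

def headroom_used(played):
    # Ratio of summed string level to the fixed clipping threshold;
    # values above 1.0 mean the composite signal is clipping
    total = sum(strings[s] for s in played)
    return total / threshold

low_voicing = headroom_used(["E", "A", "D"])   # wound strings: deep into clipping
high_voicing = headroom_used(["D", "G", "B"])  # same riff higher up: less clipped
```

A hex processor summing "meta" data across its six paths would need to reproduce exactly this kind of position-dependent drive, rather than clipping each string against its own private threshold.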

There.  I think I just made you brew another pot of much stronger coffee. :icon_lol:  You're going to be up late tonight!

Caferacernoc

"Again, this is not to say it is impossible.  The challenge really IS in the describing rather than the chips now. [...] It's not just what shows up on the scope screen.  It's what you DID to produce that."

Bingo. It's the same reason we can't get JFETs or MOSFETs to sound exactly like tubes, or a silicon fuzz to be the same as a germanium one. We don't know exactly why the germanium does what it does. Is it frequency response, leakage, both? That makes it hard to simulate something we can't accurately describe.
Tube amps seem to have softer distortion than transistor stompboxes, yet tubes run at higher voltages, and higher voltages should increase headroom, not produce soft overdrive. Simulate diode-to-ground clipping in PSpice and it looks beautifully round and soft, yet it comes up way short of real life to our ears...
You got it Mark, we aren't measuring and accurately describing what we are trying to emulate.

Caferacernoc

Quote from: JDoyle on March 10, 2009, 12:37:36 PM
This is my own subjective, no-proof-whatsoever opinion, but I believe that the act of sampling a signal, no matter how many times per second, along with the tiny but real time it takes a processor to 'work' on the signal, leads to an inorganic sound that seems separate or 'independent' of the entire 'system' of the guitarist+FX+amp combination. [...]

Jay Doyle

Agreed. I'm kind of an audiophile. Most high-end stereo guys and gals think digital still has problems. Most don't think a $100 DVD player cuts the mustard, especially its AD and DA converters. Heck, most don't think a $1000 DVD player is really all that good! So I would seriously question the quality of the converters in a POD or my computer's soundcard. And I can't believe all these so-called analog delays on the market with a digital chip and a low-pass filter to simulate analog warmth. No thanks. I would prefer NOT to run my guitar signal through a $5 AD and DA converter.

MikeH

It boils down to: "Can you make a more life-like sculpture of my face using Legos or clay?"



The real answer is, if the legos were small enough you couldn't tell the difference.  But they don't make 'audio legos' that small... yet.
"Sounds like a Fab Metal to me." -DougH

iaresee

Quote from: MikeH on March 11, 2009, 03:47:28 PM
The real answer is, if the legos were small enough you couldn't tell the difference.  But they don't make 'audio legos' that small... yet.

Good analogy. But I think they do. Sampling at 96 kHz with 24 bits of quantization is more than adequate for our lame ears. It's just not available in consumer audio formats yet.
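The arithmetic behind that claim is straightforward:

```python
bits = 24
fs = 96000  # sample rate in Hz

# Ideal quantization SNR for a full-scale sine: 6.02 dB/bit + 1.76 dB
dynamic_range_db = 6.02 * bits + 1.76   # ~146 dB

# Highest representable frequency (Nyquist)
nyquist_hz = fs / 2                     # 48 kHz
```

That is roughly 146 dB of theoretical dynamic range against the ~120 dB span from threshold of hearing to pain, and a 48 kHz Nyquist limit against ~20 kHz of audible bandwidth. In lego terms, the bricks are already smaller than the ear can resolve; converter analog stages and the algorithms are the practical limits.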

Cliff Schecht

Quote from: Caferacernoc on March 11, 2009, 11:38:22 AM
Agreed. I'm kind of an audiophile. Most high-end stereo guys and gals think digital still has problems. Most don't think a $100 DVD player cuts the mustard, especially its AD and DA converters. Heck, most don't think a $1000 DVD player is really all that good! So I would seriously question the quality of the converters in a POD or my computer's soundcard. And I can't believe all these so-called analog delays on the market with a digital chip and a low-pass filter to simulate analog warmth. No thanks. I would prefer NOT to run my guitar signal through a $5 AD and DA converter.

I would prefer to run my guitar signal through a $5 A/D and D/A, because for that price you can get a stereo audio codec with 24-bit/96 kHz DACs and ADCs. If you're complaining about sound quality with one of these parts, then it's the software, not the hardware (or badly designed hardware).

Example $5 audio codec IC: http://focus.ti.com/lit/ds/symlink/tlv320aic23b.pdf