Digital/Analog Delay

Started by Khas Evets, April 06, 2005, 04:38:55 PM


Khas Evets

People seem to prefer analog delays to digital for their warmth and character. I've never built one because the delay times are a little too short. Sure, you could cascade BBDs, but that gets expensive.

I'm not sure if what we like about analog is the high-frequency roll-off or the 'sound' of BBDs. The PT-80 addresses the high-frequency roll-off issue but not the BBD sound.

So what if you combined digital and analog? Start with a digital line for ample delay time and finish it off with a 4096-stage (or shorter) BBD. Kind of like a zoom lens with optical and digital focal lengths. I know this increases part count and complexity, but it's probably cheaper than stringing up BBDs to get over a second of delay. Just a thought.


:wink:  :wink:  :wink:

Peter Snowberg

The Force is very strong in this one. ;)
Eschew paradigm obfuscation


I'd goooo....analog. Just play the same thing over and over again. lol. now THAT is DIY. (literally)


Make it quick, before you start seeing it in stores.

You could vary the clock to the short analog delay line to decide how "analog" it gets.  That would be a pretty cool knob.

G Kresge

Let me know if you actually build it, I'd like to buy one!


Sounds like a good idea to me.


Correct me if I'm wrong, but I remember reading that the Ibanez Digital Delay uses circuitry to add some analog distortion to the delayed signal.

Apart from that, it sounds like it could work, provided you can make the analog/digital combination cheaper than just chaining several BBDs. Shouldn't be too hard, since BBDs are about $8 apiece.


Sounds like something fun to try :)

As a side note, some people prefer digital delays live because they don't have that rolloff that causes some analog delays to get lost in the overall mix. An "analog-ness" pot would be great for experimenting with this (but label it Mojo!)
Disclaimer: I actually don't have an analog delay, so I can't verify this.


Very smart.  With a companding path out of your run-of-the-mill AD80 or DMM, a PT2399, and an MN3205 (or even an MN3207 if you don't really care about adding that much extra delay time), you could get some nice sounds.


Mark Hammer


As I detailed a little while back, folks like Roger Mayer have proposed that the audible difference between analog and digital delay often stems from the relative impact of "infinite resolution" (analog domain) vs. "underquantized sampling" (digital domain) on the decay of delayed signals.  As Mayer proposes, once you pass the initial attack transients and the signal starts to decline, it is coded into fewer bits (unless, I suppose, one uses something like delta modulation, but none of the all-in-one delay chips do that).  The result is that digital has more quantization error introduced during the decay phase.  He suggests that when "golden ears" types comment on analog/digital differences, their complaints are about the way notes die out with digital.
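Mayer's point about decay is easy to demonstrate numerically. Here's a minimal sketch (not any particular chip's converter, just a plain uniform 8-bit quantizer) showing that the quantizer's fixed step size becomes a much larger *fraction* of the signal as the envelope dies away:

```python
import math

def quantize(x, bits):
    """Uniform mid-tread quantizer over [-1, 1]."""
    levels = 2 ** (bits - 1)
    return round(x * levels) / levels

# A 440 Hz tone at three envelope levels: attack, decay, tail.
# The step size is constant, so the error RELATIVE to the
# envelope grows as the note fades -- Mayer's complaint.
bits = 8
for env in (1.0, 0.1, 0.01):
    errs = []
    for n in range(1000):
        x = env * math.sin(2 * math.pi * 440 * n / 48000)
        errs.append(abs(x - quantize(x, bits)))
    rel = sum(errs) / len(errs) / env   # error as a fraction of the envelope
    print(f"envelope {env:<5} mean relative error {rel:.4f}")
```

At full scale the relative error is tiny; two orders of magnitude down, the quantizer is only toggling between a handful of levels and the relative error balloons.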

All of this suggests to me that placing a digital delay BEFORE an analog BBD immediately eradicates any advantage from a hybrid by presenting the BBD with a signal that has already been "corrupted" by quantization.  That is, all the infinite resolution in the world in a BBD means bupkes if you feed it a pre-quantized signal.

On the other hand, going straight into a BBD and feeding the output of the BBD to a Princeton chip or similar, can still lose the advantages of analog as well because the signal still gets quantized by the digital chip.

Twer I, the ideal solution would be to have the following:

1) Analog chip first, then digital, with a tap point in between.
2) For shorter delays, analog only.  When the BBD is "tapped out" (I'm not Catholic, but if I were, I'd muscle my way to the front of the line and confess this linguistic sin to the Pope's corpse right now! :lol: ), it then gets fed to the digital chip for added delay time.  There would tend to be either a blank spot in the delay range due to the limits on minimum delay possible from the digital chip, or else some overlap.  For instance, if you couldn't achieve a digital delay shorter than 30msec, an MN3005 would take you out to 350msec, and for anything 380msec or more you'd use an additional digital chip, OR, you'd have the choice of setting the BBD to less than its full delay, and adding some fixed amount of digital delay on top of that.
3) MUCH more LP filtering for the digital chip than the analog one.  In nature, longer delays have more high-end loss anyway, so not only would you eliminate some of the effect of quantization error from the digital chip, but you'd produce more natural-sounding echoes too.
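For reference, the delay figures in point 2 follow directly from the BBD stage count: delay = stages / (2 × clock frequency), since each sample takes two clock phases to move one stage down the bucket chain. A quick sketch (the clock values are illustrative):

```python
def bbd_delay_ms(stages, clock_hz):
    # Each sample needs two clock phases per stage,
    # so total delay = stages / (2 * f_clock).
    return 1000.0 * stages / (2.0 * clock_hz)

# MN3005: 4096 stages.  A ~5.85 kHz clock gives roughly the 350 ms
# ceiling mentioned above; faster clocks trade delay for bandwidth.
for f in (5_850, 10_000, 40_000):
    print(f"MN3005 at {f:>6} Hz clock -> {bbd_delay_ms(4096, f):.0f} ms")
```

The same formula shows why a 200 ms setting buys you bandwidth: the clock (and hence the usable audio band before aliasing) rises as the delay shrinks.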

Of course something not raised yet is the issue of having two separate HF clocks in the same circuit.  The "heterodyning thing" has come up before, and while it is not insurmountable, it requires careful planning.

So, I repeat, for me the answer is "Nah".  I think the advantages would be few in comparison to the disadvantages or constraints.

Khas Evets

That's what I was concerned about. Whenever you do an A/D and D/A conversion, you pick up the negative artifacts of digital. I was thinking the analog chip after might soften it a little.


Hats off to you, Mark, I wasn't thinking of the digital actually being worse in that manner.  However, if you were using some sort of a nice 24-bit/44.1kHz system, like in a DD-20 (as if any of us could pull that off), and put that through an analog delay, that would be more of an ideal situation, I believe.


Mark Hammer

Quote from: ExpAnonColin... if you were using some sort of a nice 24bit 44.1khz system, like in a DD-20 (as if any of us could pull that off), and put that through an analog delay, this would be more of an ideal situation I believe.


Well, that's just the thing.  With a 24-bit/96kHz system, such as found in higher-end soundcards, I imagine there would be plenty of resolution "left over" once one reaches lower signal amplitudes.  Sixteen-bit width is not all THAT shabby either; it's what standard CDs use (16 bits at 44.1kHz).

As some sort of DIY project, though, one is thinking in terms of available chips like the Princeton series, and those babies are nowhere near that sort of sample rate or resolution.  Indeed, much of the decaying "tail" of a delayed note is likely going to be coded at 10 bits or less by one of those chips, so if Roger Mayer is correct (about analog-vs-digital differences), he is correct in spades in this case.

Quote from: KhasEvets... I was thinking the analog chip after might soften it a little.

Ah, but what is it that does the "softening"?  A big chunk of that is the lowpass filtering required to keep noise under control with a BBD.  Maybe all you REALLY want is something like a standard PT2399 or PT2396 circuit with improved and more flexible filtering capabilities.  If you know me, you'll know I advocate adding LPF in the regen loop for more natural-sounding repeats.  Maybe that's all you want.
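To illustrate the LPF-in-the-regen-loop idea, here's a minimal sketch of a digital delay line with a one-pole lowpass inside the feedback path, so each repeat comes back both quieter and duller, the way natural echoes do (all parameter values are made up for illustration):

```python
def delay_with_lpf_regen(x, delay_samples, feedback, lpf_coef):
    """Echo where each pass through the regen loop is lowpass filtered.

    lpf_coef in (0, 1]: 1.0 = no filtering, smaller = darker repeats.
    """
    buf = [0.0] * delay_samples   # circular delay line
    lp_state = 0.0
    out = []
    for n, sample in enumerate(x):
        delayed = buf[n % delay_samples]
        # one-pole lowpass sits in the feedback path only,
        # so the dry signal and first repeat stay bright
        lp_state += lpf_coef * (delayed - lp_state)
        buf[n % delay_samples] = sample + feedback * lp_state
        out.append(sample + delayed)
    return out

# Feed in an impulse: each successive repeat is attenuated by the
# feedback AND smeared/darkened by the lowpass.
x = [1.0] + [0.0] * 99
y = delay_with_lpf_regen(x, delay_samples=10, feedback=0.5, lpf_coef=0.3)
```

Making `lpf_coef` a panel control is essentially the "more natural repeats" mod in one knob.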

Incidentally, something that has never really been discussed openly is just what sort of delay times people actually use.  I imagine there is some cognizance of it somewhere, since things like Zooms and other digital mini-multieffects often incorporate a couple of preset delay times.  On the other hand, if you were offered the option to have delay-time presets produced to your specs, how many different delay times would suffice, and what would they be?  For instance, are people completely happy with something that simply produces slapback, something that produces discretely audible repeats in the 300-600msec range, and something that gives you 3 sec of sampling time?  I'm just curious about the most actively used portion of the range of delay times available.

Because of the limitations of BBDs, most of us whose taste for delay was forged in the era of the Memory Man and DM-2 grew up thinking in terms of less than half a second.  So, uh, supposing someone groomed on the 350msec ceiling gets their hands on a simple digital chip (I bought a couple of HT8955s a couple of years ago, and they can squeeze out 800msec), do they actually USE the zone between 350msec and 800msec?  Or do people approach their delay pedals like their fuzzes, with a couple of preferred settings?

Khas Evets

I should have been clearer with the term "softening". I meant that by taking the digital 'zipper' and running it through an analog chip, there could be some smoothing of the wave.

I agree with you on delay times; there are many examples of delay times slightly over 350ms (U2 and Pink Floyd). When you start approaching 1 second, though, the examples start to taper off (Fripp and Queen come to mind).

Peter Snowberg

I’m going to answer a most definite “yeah”. ;)

There is a great deal of misunderstanding in the discussion so far when it comes to digital. I see apples getting compared to dump-trucks (much less oranges) and then lots of discussion based on the erroneous comparison.

Being an embedded systems person professionally, I feel like this is one of the few places where I can step into a discussion around here as an "expert". :lol: Please excuse my hubris. ;)

Like a gifted art student, digital is often misunderstood.

We don’t stick to 741 opamps for everything and base all opamp discussions around 741 parameters as if nothing else existed. Let’s not do the same with digital.

Digital was applied in products before cost and available technology allowed it to be done at a level comparable or superior to analog, so it got a bad rap right from the start. Bean counters at effects companies added to this greatly, because it's the object of an effects company to sell pedals, and almost nobody would buy a $2800, 350ms delay.

In the digital world there are often several ways to get something done, just as there is in the analog world.

There are two distinct types of digital delays found in effects land. One uses memory which is as wide as the A/D & D/A, while the other uses a single bit (ever seen "1-bit D/A" on a nice CD player?) and a super-high sample rate.

The first type is called "successive approximation" because the conversion involves trying different output values until one very close to the actual input is found. It uses a register driving a D/A and a comparator to binary-search its way down to the value.

The second type is called "Delta-Sigma" or "Sigma-Delta". I think Motorola trademarks one name, so the other order is also in use. The Delta stands for the difference (change) between input and feedback, and the Sigma for the summation (integration) of that difference. The sample rate is increased by many times; 32 times is a good value to think of as an effects-level minimum, and 64 or 128 times is better still. The digital sample width is reduced to one bit: a zero means "less voltage" and a one means "more voltage". To put out 50% voltage, a sigma-delta converter output will look like a square wave at the sample frequency/2. That might sound awful, but the sample frequency might be 1MHz, so a simple RC filter is able to remove the digital artifacts to the point where they're far below the signal content (!) :shock:
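For the curious, a first-order sigma-delta modulator really is only a few lines. This toy sketch (constant input, no noise-shaping analysis, nothing chip-specific) shows the 50% case producing exactly the alternating stream described above, with a plain average playing the role of the RC filter:

```python
def delta_sigma(x, n):
    """First-order delta-sigma modulator for a constant input x in [0, 1].

    Returns n one-bit samples whose running average tracks x.
    """
    integrator, bits = 0.0, []
    for _ in range(n):
        integrator += x                      # Sigma: accumulate the input
        bit = 1 if integrator >= 1.0 else 0  # 1-bit quantizer
        integrator -= bit                    # Delta: subtract the fed-back output
        bits.append(bit)
    return bits

# 50% input -> a 0,1,0,1... stream (a square wave at Fs/2), and a
# simple averaging (lowpass) step recovers the original level.
stream = delta_sigma(0.5, 16)
print(stream)                       # [0, 1, 0, 1, ...]
print(sum(stream) / len(stream))    # 0.5
```

Any other input level just changes the density of ones in the stream, which is why a dumb analog lowpass on the output is enough to reconstruct the audio.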

When you use successive approximation analog to digital converters you are subject to the limitations that are inherent in the method and in the technologies available to make that method available to you, the circuit designer. The Delta-Sigma technology has problems of its own, but it has enormous advantages and the disadvantages are not what you might think.

One big disadvantage of the SA style is that you usually need tons of device pins, which is expensive. The integrated chips try to solve that by bringing the RAM on-chip; they can even benefit by using RAM that is wider than commonly available discrete chips. The ones with the integrated RAM tend to leave out the datasheet details that would allow easy comparisons between chips. :D The HT8955 is (was) an oddball in that it used SA converters, but then shifted the data serially into a 1-bit-wide DRAM, which has more address bits to compensate for fewer data bits and is cheap in quantity.

Luckily, the PT2395 is a currently manufactured chip that uses external DRAM and sigma-delta converters. Matsushita used to make the M50195P too. One feature of the chips that use sigma-delta is that they all seem to use an external comparator in the sigma-delta modulator, usually an LM311 or similar from what I've seen. I don't know what process limitation keeps that function from happening on the same chip. You'll also see this in digital delays and flangers produced by Maxon/Ibanez.


Enough of this rambling......

In short... sigma-delta chips can beat the pants off SA chips.

SA chips (less than, let's say, 18 bits wide) work very poorly for subtle information.

Narrow SA chips like 8 or 10 bit units are especially poor.

Sigma-Delta has wonderful performance where SA falls down.

Sigma-Delta is cheap and easy now that chips are fast and memory is cheap. This has not always been the case!

The sound of a BBD-based delay comes from multiple places: there is the processing inherent in the BBD, there are the LP filters before and after the delay, and there is the companding that most BBD designs use. The companding is often forgotten about, I assume because people will just say that it's a symmetrical process, but it only is in a perfect world. ;)
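One way to see why companding is only symmetrical "in a perfect world": with an idealized 2:1 compressor and 1:2 expander working in the dB domain (NE570-style behavior; the numbers here are purely illustrative), any gain error picked up between the two halves, from filters, BBD insertion loss, or drift, comes out DOUBLED in dB after the expander:

```python
def compress_db(level_db):
    # idealized 2:1 compressor: halves the level in dB
    return level_db / 2.0

def expand_db(level_db):
    # matching 1:2 expander: doubles the level in dB
    return level_db * 2.0

# A -20 dB signal through compress -> (chain with some gain error) -> expand.
# The expander magnifies whatever error crept in between the halves.
for err_db in (0.0, 1.0, 3.0):
    out = expand_db(compress_db(-20.0) + err_db)
    print(f"{err_db} dB chain error -> output off by {out - (-20.0):.0f} dB")
```

Since the error between the halves varies with frequency and level in a real BBD path, the "symmetry" never quite holds, and that asymmetry is part of the analog-delay envelope character.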

A dial to add a variable amount of BBD processing to a digital sigma-delta delay is a great idea.... especially since the BBD can be clocked at whatever rate is desired to limit bandwidth, and now that parameter is independent of the delay time :D!

Heterodyning isn’t an issue because the clocks are separated by a VERY wide range. :D

I think the ideal would be a sigma-delta digital delay followed by a short BBD with independent clocking. The digital delay in front allows for the bulk of the delay time as well as perfect clean echo, and you can also have digital feedback which gives perfect loops if you want that. If the feedback is switchable between digital-only, and analog taps both before and after the BBD, while the output is selectable between pre and post BBD, you get one heck of a delay unit.  8)

Adding a variable frequency lowpass to the feedback loop is also an idea.

How many people here have tried adding a little compression to the feedback loop of a digital echo? ;)

Now add modulation to the digital and analog delay clocks and sea-sickness here I come! :P

OK…. Now I’m out of coffee. :mrgreen:
Eschew paradigm obfuscation


Whew!!  There's a post that's going into my personal reference archives.  Great piece, Peter!

Word on the PT2395 - here's a link to some work a synth compatriot has been doing with it:

Worth checking out if you haven't seen it before.


Mark Hammer

Well that IS a great post Peter.  I'm glad I ticked you off enough to write it. :wink:   I stand corrected.

So many specs are similar between the Holtek and Princeton chips that many, myself included, assumed that Holtek had simply sold the dies to Princeton and Princeton was essentially making a "slightly improved" HT8955 in the form of the PT2399.  It comes as a complete (and very pleasant) surprise to me that the PT chips use delta-sigma.  (Incidentally, delta-sigma is not all THAT new.  Read the interview with Steven St. Croix concerning the Marshall Time Modulator in issue #2 of DEVICE... from 1979.)

Were my assumptions about quantization error in the PT2395/6/9 accurate generalizations from the HT8955, I still think many of my assertions would have been reasonable.  It is not due to the *fact* of digital that I recommended placing it second in line, but the *quality* of the specific exemplar.  Given that the quality is less of an issue than I thought, that changes things.

Still, just how would a BBD placed after a digital path provide softening, *apart from* the filtering, expanding, etc.?

I'm pleased to hear that the heterodyning of the two clocks is much less of a problem than I thought.  I gather the fact that neither clock is being modulated also helps.  The topic of heterodyning originally came up during the TZF frenzy of 2004, when it was made clear that sweeping the clock driving one MN3007 above and below the fixed clock of another MN3007 was going to invite trouble.  Since the two clocks in the proposed hybrid delay are miles apart and neither is expected to be going higher and lower, that helps out a lot.

Your point about "clean" loops is well taken.  The fact of the matter IS that  every time you pass the signal back through those same 4096 tiny FETs and caps, the signal is degraded a bit.  Were the longest loops forced to pass through the BBD on the way to the digital path, every iteration would have to be degraded in this manner.  Moreover, if it were the case that one would have to push the BBD to close to its limits before adding the digital delay time on top of that, the degradation would be even worse.  (Although clocking an MN3005 fast enough to yield a max delay of 200msec would yield a decent analog slapback, lots of bandwidth, and minimal degradation.  Tacking on 800msec of digital on top would still get you a full second.)

On the other hand, I guess the test of Roger Mayer's assertions is to try it both ways (D before A and A before D) and see if his claims about qualitative differences in the decay portion hold up.  Personally, I've never put it to the test myself, but it sounded plausible, and it's not like Mayer has no expertise himself.

P.S.: If you have expertise in an area never be afraid to use it or declare it.  So much in audio is based on hype, bullshit, and rumour, that ANY empirical basis for argumentation is welcomed like a breath of fresh air.


Excellent info, Peter.  What about the EM-5's M65831?

I always thought it was more of a PT style chip.



I don't know if I completely agree with the "smoothing out" of the digital zipper with an analog chip, but I do think the idea you have proposed is a good one and you should try it.  Don't take Mark's "Nah" seriously until you verify it.

I would highly recommend some sort of higher-end A/D and D/A converter, at least 16 bits, though a 32-bit converter would be sweet--at roughly 6 dB per bit that's about 192 dB of dynamic range, which practically takes quantization error out of the picture. You're lucky if your circuit's noise floor is below -90 dBV, so this means you could have good reproduction of signal (and the analog noise as well) from, say, -140 dBV up to +40 dBV. You're probably looking at a line signal level of about 0 dBu (~775 mV), or round up and say 0 dBV (1 V), reproduced accurately well below the noise floor.
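As a sanity check on those numbers, the ideal quantization dynamic range for a full-scale sine works out to 6.02N + 1.76 dB; the "~6 dB per bit" shorthand (which gives the 192 dB figure for 32 bits) drops the constant. A tiny sketch:

```python
def dynamic_range_db(bits):
    # Ideal quantization SNR for a full-scale sine wave:
    # 6.02 dB per bit plus a 1.76 dB constant.
    return 6.02 * bits + 1.76

for bits in (8, 16, 24, 32):
    print(f"{bits:>2} bits -> {dynamic_range_db(bits):.0f} dB")
```

In practice the analog noise floor, not the converter, is the limit well before 24 bits, which is the poster's point about reproducing the noise along with the signal.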

Using fast sampling rates and lots of RAM, I think you could very safely make an analog feedback loop to generate the delay, and add the BBD into it for the analog sound.  Just don't try to use the BBD to "repair" digital artifacts--it won't work.

I'm pretty sure you could buy a DSP for about $5 that would do what you need.  It may not be more cost-effective to do it right (with a high-end digital audio A/D, D/A, and processor), but it will be more mojo-effective, since the digital stage will have less influence on the analog sound.

If you use a regular cheap digital chip, then certainly use a compander scheme. You need to get as much amplitude out of the lower-level signals as possible for lower error.
tr.v. trans·mog·ri·fied, trans·mog·ri·fy·ing, trans·mog·ri·fies To change into a different shape or form, especially one that is fantastic or bizarre.