MN3005 Max Delay Time Discrepancies

Started by drummer4gc, August 07, 2016, 12:46:28 PM


drummer4gc

Hi, I'm re-calibrating a DOD FX90 and am curious about the max delay time I should set it up for. In various music forums and manuals for analog delays with this chip, I consistently see 300ms as the max delay suggested. However, the MN3005 data sheet specifies 204.8 ms as the max delay time for the chip. What gives? Is it just based on how much degradation of signal is acceptable to us, or does it have to do with additional filtering in our delay circuits that allows us to push it further than the data sheet suggests? Just trying to figure out where the discrepancy comes from so I can make a decision about how to set the clock in this FX90.

Thanks!

Mark Hammer

MN3005s can deliver as much as 400msec delay time, assuming one is willing to live with VERY modest bandwidth.  The tradeoff is that the more bandwidth you want, the shorter a max delay time you have to live with.

You can set any relevant trimmers for more delay time, but the unit has fixed lowpass filters, and there is a chance that their cutoff will end up too close to (or even above) the lowered clock frequency at max delay, resulting in an irritatingly audible clock whine.

My advice would be to set the delay-time pot for max, and adjust the clock trimmer for the point just below where the whine becomes audible.  That should get you the best tradeoff between delay time and bandwidth.

PRR

> 204.8 ms as the max delay time

That's at 10KHz clock.

Which means <5KHz audio filtering so it don't whine.

You can clock slower, but you must keep moving your high-cut filter down so the clock leakage doesn't melt tweeters or ears. At 400mS you are down to <2.5KHz audio bandwidth.

3KHz is poor telephone quality. Most users will want more. 5KHz is "good" for full-range reverberation.

But sure, clock it slower and longer and lower-KHz audio if you wish.
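
For reference, here is a minimal sketch of the arithmetic behind those figures, assuming the standard BBD relationships (delay = stages / (2 × clock rate), usable audio bandwidth below half the clock rate):

```python
# Minimal sketch of the BBD delay/bandwidth tradeoff for a 4096-stage chip,
# assuming delay = stages / (2 * clock) and bandwidth < clock / 2 (Nyquist,
# before any practical filter margin).

STAGES = 4096  # MN3005

def delay_ms(clock_hz):
    """Delay through the chip, in milliseconds, at a given clock rate."""
    return STAGES / (2 * clock_hz) * 1000

def max_bandwidth_khz(clock_hz):
    """Theoretical upper limit on audio bandwidth (half the clock), in kHz."""
    return clock_hz / 2 / 1000

for f_clk in (100_000, 10_000, 5_120):
    print(f"clock {f_clk / 1000:g} kHz -> delay {delay_ms(f_clk):.1f} ms, "
          f"bandwidth < {max_bandwidth_khz(f_clk):.2f} kHz")
# e.g. "clock 10 kHz -> delay 204.8 ms, bandwidth < 5.00 kHz"
#      "clock 5.12 kHz -> delay 400.0 ms, bandwidth < 2.56 kHz"
```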

Mark Hammer

If my memory of the late 70s is accurate, it was not uncommon to find that analog delay pedals would whine audibly and annoyingly at longest delays.  I don't know if that was simply a matter of poor calibration or trimmer drift, or if the manufacturers figured "Meh, if they don't like it they can always turn down the treble on the amp", but it happened.

Paul is correct about what sort of bandwidth might be considered reasonable.  However, the "warmth" of analog delays that some rave about is really a side-effect of the sharp lowpass filtering required when aiming to squeeze 330 or more msec out of a 4096-stage chip.  Once manufacturers started incorporating that sort of filtering into digital delays, people stopped complaining about their tone and lack of warmth.

So, a 2.5kHz rolloff is not such a terrible thing.  Depends what you're going for.

PRR

> some rave about is really a side-effect of the sharp lowpass

When I said "most users", present company is excepted.

Speech/singing and some other musical sounds fall flat if sibilants don't reverberate.

Guitar often works great with more high-cut filtering than speech usually wants.

The datasheet "assumes" general purpose reverbing. They didn't want to promise more delay and have users complain about dullness. (They also perhaps felt, if you need more, you can buy More Chips.)

You can pick your own compromises.

If you go WAY below 10KHz clocking, leakage effects reduce signal strength. There's a graph. At 10KHz it's insignificant. At 1KHz you could notice a real drop; of course the sharp 500Hz filter to reduce 1KHz whine would chop more than half of the audio away too.

Mark Hammer

Those teensy capacitors inside the BBD are leaky.  Yes, the switching FETs inside the chip prevent the caps from draining off, just like they do in a sample-and-hold circuit.  But S&H circuits get to pick the caps they use to hold onto the sample, and there's only one of them.  BBDs use whatever capacitor material can be built into a chip, and the assumption is that one will keep that sample moving along before it has a chance to drain off very much.  I also gather that the droop which occurs when a sample is held too long (the outcome of a low clock frequency), multiplied by the number of stages in the chip, and multiplied again by however many times a recirculated signal passes through the chip, is what results in the degraded audio quality of the 4th, 5th, and subsequent repeats at max delay.
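
To put very rough numbers on that compounding, here is a hypothetical sketch; the leakage time constant is made up purely for illustration, since real figures vary with the chip and temperature:

```python
# Rough, hypothetical model of the compounding described above: a tiny droop
# while each cap holds its sample, multiplied over 4096 stages and again over
# each recirculated pass.  TAU is invented purely for illustration.
import math

STAGES = 4096
TAU = 50.0  # hypothetical cap-leakage time constant, in seconds (illustrative)

def level_after(repeats, clock_hz):
    """Signal level remaining after `repeats` passes through the chip."""
    hold_time = 1 / clock_hz                  # seconds each cap holds a sample
    per_stage = math.exp(-hold_time / TAU)    # retention per stage
    return per_stage ** (STAGES * repeats)

for clock_hz in (10_000, 5_120):              # ~205 ms vs ~400 ms of delay
    levels = ", ".join(f"{level_after(n, clock_hz):.3f}" for n in range(1, 6))
    print(f"clock {clock_hz} Hz: level after repeats 1-5 = {levels}")
# The slower clock holds each sample longer, so the droop per pass is larger
# and it accumulates faster across repeats.
```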

drummer4gc

Many thanks to the two of you, that helps clear some things up. I appreciate the clarification that the data sheet specs are not necessarily taking into account that treble roll-off for guitar is going to be more tolerable/aesthetically pleasing than with vocals.

Couple other questions:

The FX90 manual states that the delayed signal bandwidth is 1.8kHz maximum, so fairly low compared to the figures both of you threw out. Does that correspond with the low pass filters in the circuit? Here it is (sorry for large pic):

[FX90 schematic image]

I see two repeated circuit blocks of 27k Rs with various Cs, but am not sure how to calculate the frequency corner.
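
For a single RC section the corner frequency is f = 1/(2πRC); here is a quick sketch using the 27k value with some illustrative capacitor values (not read off the FX90 schematic, and keep in mind that cascaded sections interact, so the overall rolloff of the real filter will differ):

```python
# Corner frequency of a single-pole RC lowpass: f_c = 1 / (2 * pi * R * C).
# The capacitor values below are examples only, not taken from the FX90.
import math

R = 27_000  # ohms, the 27k resistors mentioned above

for C in (1e-9, 2.2e-9, 3.3e-9, 4.7e-9):   # farads
    f_c = 1 / (2 * math.pi * R * C)
    print(f"27k with {C * 1e9:.1f} nF -> corner ~{f_c:.0f} Hz")
# e.g. 27k with 3.3 nF comes out around 1.8 kHz.
```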

Also, am I correct in understanding you both that there are two separate reasons for signal degradation of the wet signal - low pass filtering limiting bandwidth, and leakage within the chip that is exacerbated by longer delay times? Meaning, all things the same, a circuit clocked for longer delays would have poorer signal quality EVEN if the clock noise remains adequately filtered?

Thank you!

PRR

> bandwidth is 1.8kHz maximum

Seems fair.

A *Perfect* filter can be half the clock rate.

There are no perfect filters.

If they are pushing to ~5KHz clock(??), 2.5K perfect filtering, then 1.8KHz in a practical filter is a reasonable compromise between a little more bandwidth and a much more complicated fussy filter. Also the ringiness of sharp filters.

The leakage effects are probably unimportant within the audio band.

If you want a GOOD delay, today you use an ADC, RAM, and DAC to get CD-quality audio at any delay from milliseconds to decades. BBDs "should" be utterly obsolete. However there doesn't seem to be any single-chip no-frills no-programming audio delay as attractive as these old chips. And some of their flaws are marginally musical.

Mark Hammer

Thinking about delays has changed over time.  Initially, analog BBDs gave us delay with some limited degree of fidelity, and we were grateful for what we could get.  It was a whole lot easier to lug around, and certainly cheaper, than an Echoplex.  There was pretty well zero maintenance.

Gradually, as we began to think of various uses for them, the reduced bandwidth started to be noticeable.  Companding could address the noise and dynamic-range limitations to some extent, but we began to notice their limits.  Digital delay existed in concept but was way out of reach for a great many.  The "Build a digital delay" project in a 1980 issue of Polyphony I have was laughably complex.  Easily over 20 chips and requiring a 1 amp supply, and still not capable of doing what a 50-cent PT2399 can with as much fidelity.

Initially, when digital delay finally did become a reality for studios, the push was for maximum bandwidth and dynamic range.  But here's the thing.  In the real world, if one was to set up in a parking garage or the Taj Mahal or the Grand Canyon, none of the reflections would be full bandwidth.  It may be different in a hall made of polished marble, but in most reverberant spaces, the reflecting surfaces are imperfect such that reflections lose both bandwidth and energy.  What they don't get is more distorted.

As was mentioned earlier in this thread, one of the things that has tended to draw musicians back to analog delays is simply a byproduct of that technology's limitations.  The lowpass filtering required to reduce the stair-steppiness and aliasing, and keep audible clock whine out of the audio output, unintentionally had the effect of more closely mimicking what happens in the natural world when an echo is produced in a reverberant space - it's duller than the original.  In fact, it is the diminished high-frequency content that allows one to mentally/perceptually push the repeats to the perceptual background and place the real-time signal to the foreground.  The compensation for weaknesses in BBD technology turned out to be its strength.

Three weaknesses still remained, however: noise, distortion, and delay time.  The analog tone, with its receding high end, was desirable, but there were limits to how much delay time one could get, there was always a certain amount of hiss, and sound quality degraded after passing through 20,000 or so BBD stages (think 5 repeats through a single 4096-stage device).  The adaptation was to use the digital domain to achieve the noise, time, and signal-quality goals, but employ analog or digital filtering to achieve the more realistic bandwidth-trimming goals.  And, as one can see in the success of so many digital units (not the least of which is Strymon), the mimicking of the tonal qualities of analog, with the virtues of digital technology, has been very successful.

12Bass

I'm not sure how closely low passed BBD delays mimic the sound of natural echoes or reverberation.  Perhaps the roll-off from the LPF somewhat resembles what happens in real environments.  However, when I hear BBD delays, I don't think they sound much like the "real thing" (there are modern DSP units which can do a much better job at recreating the sound of real spaces).  Nor does it seem that musicians prefer them because they sound particularly realistic.  Rather, it seems to me that what makes them desirable for many is the euphonic warm/low fidelity sound of the repeats which blends nicely with the original signal.

That said, I tend to prefer the clarity of wider bandwidth reproduction.  A 2.5kHz LPF is too muffled sounding for my taste.  So, that means using more BBDs and a higher clock rate if longer delay times are desired.
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. - Carl Sagan

Mark Hammer

They obviously don't mimic natural reverberation very well, otherwise we would have little need for reverb units.  But the more limited bandwidth of the delayed signal and repeats allows them to be nudged "further back" in the perceptual field when the unit is set for more than a single repeat.  For me, it is comparable to the way in which a blurred background made the foreground characters easier to follow in the simulated "3-D" landscapes that arrived with the Nintendo 64.  In a sense, the dulled repeats produce a shallower "depth of field", such that the player finds it easier to allocate attention to what they're playing, without being distracted by peripheral events (i.e., the repeats).

That doesn't make it better than digital, since there is far more flexibility in current digital ambience effects.  I'm just trying to explicate why many players prefer analog delays even when other things are available and inexpensive.

I usually stick a single stage of additional LPF in the feedback loop such that repeats get progressively duller.  To my ears it sounds more natural, more diffuse, and more like "natural reverberation" (though admittedly NOT the same thing, just more in that direction).  It also counteracts the grittiness that otherwise tends to accumulate as each repeat succumbs to the leakage of yet another 4096 teensy caps.
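
For anyone curious, here is a minimal digital sketch of that idea: a delay line with a one-pole lowpass inside the feedback path, so each trip around the loop filters the repeat again. All the numbers are arbitrary illustration values, not taken from any particular pedal.

```python
# Minimal sketch of a delay with a one-pole lowpass in the feedback path,
# so every repeat is filtered again and comes back duller than the last.
# All constants are arbitrary illustration values.

SAMPLE_RATE = 48_000
DELAY_SAMPLES = int(0.3 * SAMPLE_RATE)   # ~300 ms
FEEDBACK = 0.5                           # repeat level
ALPHA = 0.2                              # one-pole lowpass coefficient (0..1)

def process(dry):
    buf = [0.0] * DELAY_SAMPLES          # circular delay buffer
    idx = 0
    lp = 0.0                             # one-pole lowpass state
    out = []
    for x in dry:
        delayed = buf[idx]               # sample written DELAY_SAMPLES ago
        lp += ALPHA * (delayed - lp)     # lowpass the repeat...
        buf[idx] = x + FEEDBACK * lp     # ...then feed it back into the line
        idx = (idx + 1) % DELAY_SAMPLES
        out.append(x + delayed)          # dry + wet
    return out
```

Because the lowpass sits inside the loop, the Nth repeat has been filtered N-1 times, which is what makes the tail progressively duller.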

12Bass

Obviously a bit of a tangential discussion.... 

However, my impression is that part of the charm of BBD delay effects is the loss of fidelity in the repeats, which includes rolling off the highs and increasing amounts of distortion as the signal decays.  My father (RIP) never could understand why guitarists wanted to run their electric guitars through distortion boxes.  I tried to explain that clipping changed the waveform into something more interesting, but he just saw it as some sort of abomination.  When used by Robert Fripp, distortion almost transforms a guitar signal into a violin tone.  The point is that distortion can be desirable and musically useful in some cases.  Most digital delays tend to be rather clean in comparison, providing more technically accurate reproduction, which leads some to call them "sterile" and "boring"....

BTW, not trying to argue per se... just offering another take.
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. - Carl Sagan

Mark Hammer

...and happily treated as such.

I've probably mentioned it many times before, but in the interview with Roger Mayer in the otherwise sub-par Guitar Effects Pedals book from Dave Hunter, Mayer makes the assertion (which is eminently debatable) that the distinction between analog and digital delays really lies in the decay.  His contention is that BBDs retain essentially infinite resolution at all points, while digital units effectively have fewer bits of resolution to code the signal with once it has dropped down to the last few wisps.

The strength of that assertion and its pertinence would depend on the type of encoding, and the starting resolution.  As I understand it, if one was using delta modulation, where the sample to sample change in signal amplitude is what is encoded, then resolution is not deleteriously affected during the tail of the decay.  Similarly, if one is using something like 24-bit (or higher) encoding, then even if the signal has trailed off considerably, one still has plenty of bits left to encode with.  My sense is that the Mayer interview may have been conducted when 16-bit resolution was more standard, and having maybe 10 bits to code the last note gasps posed a more audible problem.
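
One rough way to put numbers on that framing, purely for illustration: each bit is worth about 6 dB, so a repeat that has decayed N dB below full scale sits roughly N/6 bits closer to the quantization noise floor.

```python
# Illustrative only: how much headroom a decayed repeat has above the
# quantization noise floor, at ~6.02 dB per bit.

def bits_above_noise(bit_depth, db_below_full_scale):
    return bit_depth - db_below_full_scale / 6.02

for depth in (16, 24):
    for decay_db in (0, 20, 40, 60):
        print(f"{depth}-bit, signal {decay_db:2d} dB down: "
              f"~{bits_above_noise(depth, decay_db):.1f} bits above the noise floor")
# e.g. a repeat 40 dB down in a 16-bit system has roughly 9-10 bits of headroom,
# while the same repeat in a 24-bit system still has about 17 bits.
```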

12Bass

Quote from: Mark Hammer on August 10, 2016, 11:56:47 AM
...and happily treated as such.

I've probably mentioned it many times before, but in the interview with Roger Mayer in the otherwise sub-par Guitar Effects Pedals book from Dave Hunter, Mayer makes the assertion (which is eminently debatable) that the distinction between analog and digital delays really lies in the decay.  His contention is that BBDs retain essentially infinite resolution at all points, while digital units effectively have fewer bits of resolution to code the signal with once it has dropped down to the last few wisps.

The strength of that assertion and its pertinence would depend on the type of encoding, and the starting resolution.  As I understand it, if one was using delta modulation, where the sample to sample change in signal amplitude is what is encoded, then resolution is not deleteriously affected during the tail of the decay.  Similarly, if one is using something like 24-bit (or higher) encoding, then even if the signal has trailed off considerably, one still has plenty of bits left to encode with.  My sense is that the Mayer interview may have been conducted when 16-bit resolution was more standard, and having maybe 10 bits to code the last note gasps posed a more audible problem.

I'm not sure if that is quite an accurate explanation of the difference in performance between analog and digital delays.  It seems to trade on the mistaken notion that the limited number of quantization levels in a digital sampling system leads to a stair-stepped (limited resolution) analog output, and that higher bit depth decreases the size of the stair-steps and thus increases the resolution of the reconstructed waveform.  From what I gather, rather than greater resolution, a higher bit depth correlates with a larger signal-to-noise ratio and increased dynamic range; conversely, as bit depth decreases the noise floor increases.  Contrary to popular belief, it follows that as long as the signal is above the noise floor, 24-bit sampling does not provide a more accurate representation of an analog signal than 16-bit.  The 16-bit system can just as perfectly reproduce an analog sine wave as a 24-bit system, as long as the signal is above the noise floor of the 16-bit system (or even below it, if dither is used).  The same principle follows for lower bit depths. 

This video offers an excellent explanation of AD/DA and the myth of the stair-step: https://www.youtube.com/watch?v=cIQ9IXSUzuM

Further, in my experience with analog and digital delays, the generation loss in analog delays is much greater, with increasing amounts of noise and distortion on each repeat, with the result becoming almost unrecognizable after a handful of iterations.  This is contrary to Mayer's claim above, that analog BBD sampling offers superior resolution.  By contrast, with modern digital delays, hundreds of iterations are possible before audible degradation occurs.  Not sure how older digital sampling systems compare with BBDs, though BBD systems generally have signal-to-noise ratios which are well below 16-bit digital (~96 dB theoretical).  12-bit SNR (~72 dB) would be closer to BBD performance. 
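
As a rough yardstick for those figures, the theoretical dynamic range of an ideal N-bit converter is about 6.02 × N dB (real converters fall somewhat short); a small sketch comparing that with the ballpark BBD number mentioned above:

```python
# Theoretical dynamic range of an ideal N-bit converter: ~6.02 * N dB.
# Real-world converters come in somewhat below these figures.

def dynamic_range_db(bits):
    return 6.02 * bits

for bits in (12, 16, 24):
    print(f"{bits}-bit: ~{dynamic_range_db(bits):.0f} dB theoretical")
print("companded BBD delay: very roughly 70 dB (ballpark figure from this thread)")
```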
It is far better to grasp the universe as it really is than to persist in delusion, however satisfying and reassuring. - Carl Sagan