Delay rate: DSP v. non-DSP methods

Started by parmalee, October 14, 2023, 11:53:01 PM


parmalee

With tape, analog, and digital delays, the delay rate or time is varied by changing the playback clock speed.  Consequently, longer delay times mean more signal degradation, and of course the pitch shifts as the rate is adjusted.

With many DSP approaches to delay, I have noticed a tendency to achieve longer or shorter delays by lengthening or shortening the circular buffer.  Consequently, there will be no further signal degradation with longer delays, and there will be no pitch shift while varying the delay time, unless of course one implements a pitch shift via other means.
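
Something like this is what I have in mind, just as a rough sketch (assuming a fixed sample rate and an integer delay in samples; the names are mine, not from any particular library):

#include <stddef.h>

#define BUF_LEN 48000               /* 1 second max delay at 48 kHz */

static float buf[BUF_LEN];          /* circular delay buffer */
static size_t write_pos = 0;

/* Delay one sample: write the input, read back 'delay_samples' behind
 * the write position (delay_samples must be < BUF_LEN).  Changing
 * delay_samples changes the delay time without touching the sample
 * clock. */
float delay_process(float in, size_t delay_samples)
{
    buf[write_pos] = in;
    size_t read_pos = (write_pos + BUF_LEN - delay_samples) % BUF_LEN;
    float out = buf[read_pos];
    write_pos = (write_pos + 1) % BUF_LEN;
    return out;
}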

Is there a practical or logistical reason for employing this method with DSP?

cordobes

#1
You are confused: there is no such thing as digital delay versus DSP delay; both are digital (just as with tape vs. analog: tape delays and analog BBD-based delays both work in the analog domain).
What varies is the method of achieving the z^-n (n-sample) delay, and as you mentioned it can be done by varying the acquisition or synthesis frequency, by moving the write or read pointer, or by a combination of both.
Everything depends on the architecture it is implemented on and the sound you want to achieve: a low sampling frequency can cause artifacts that some people find pleasant.
However, from my point of view, today it makes no sense to vary the sampling or synthesis frequency: most ADC converters already work at sample rates and bit depths that are more than adequate, and most control devices, be it a uC, FPGA, etc., also run at high enough frequencies to allow implementing and adding features like wow & flutter, saturation, equalization, degradation, etc.
How those details are implemented is another matter, and there is a lot of debate about "where" it sounds best. For example, there are "real" modulated delays, and delays where the modulation is processed before or after the delay line.
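
Just to make the "real" modulated delay case concrete, a rough sketch (my own naming, fixed sample rate assumed): the LFO sweeps a fractional read position inside the buffer and the output is interpolated, so the modulation itself produces the pitch wobble, much like a varying tape speed would:

#include <math.h>
#include <stddef.h>

#define BUF_LEN     48000
#define SAMPLE_RATE 48000.0f
#define TWO_PI      6.28318530718f

static float buf[BUF_LEN];
static size_t write_pos = 0;
static float lfo_phase = 0.0f;

/* Modulated delay: the LFO sweeps a fractional read position around
 * base_delay (both in samples), and linear interpolation reads between
 * samples.  Sweeping the read pointer is what creates the pitch
 * modulation, even though the sample clock never changes.
 * Keep base_delay - depth >= 1 and base_delay + depth < BUF_LEN. */
float moddelay_process(float in, float base_delay, float depth, float lfo_hz)
{
    buf[write_pos] = in;

    float delay = base_delay + depth * sinf(TWO_PI * lfo_phase);
    lfo_phase += lfo_hz / SAMPLE_RATE;
    if (lfo_phase >= 1.0f) lfo_phase -= 1.0f;

    float rp = (float)write_pos - delay;      /* fractional read position */
    while (rp < 0.0f) rp += (float)BUF_LEN;

    size_t i0 = (size_t)rp;
    size_t i1 = (i0 + 1) % BUF_LEN;
    float frac = rp - (float)i0;
    float out = buf[i0] + frac * (buf[i1] - buf[i0]);   /* linear interp */

    write_pos = (write_pos + 1) % BUF_LEN;
    return out;
}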
And all our yesterdays have lighted fools the way to dusty death.
Out, out, brief candle! Life's but a walking shadow.

parmalee

Quote from: cordobes on October 15, 2023, 08:34:43 AM
You are confused: there is no such thing as digital delay versus DSP delay; both are digital (just as with tape vs. analog: tape delays and analog BBD-based delays both work in the analog domain).

Not really confused; it's more a matter of addressing this within the context of DIY, where "digital delay" generally implies something like the PT2399 or PT2395 chips, where you, the DIY-er, are not writing the code, as opposed to employing microcontrollers, where you are writing the code.


Quote
However, from my point of view, today it makes no sense to vary the sampling or synthesis frequency: most ADC converters already work at sample rates and bit depths that are more than adequate, and most control devices, be it a uC, FPGA, etc., also run at high enough frequencies to allow implementing and adding features like wow & flutter, saturation, equalization, degradation, etc.
How those details are implemented is another matter, and there is a lot of debate about "where" it sounds best. For example, there are "real" modulated delays, and delays where the modulation is processed before or after the delay line.

Yes, this is what I was getting at.  I suppose that by varying the acquisition/playback frequency as the means of varying the delay time, one limits oneself with respect to what one can do as far as the "where" goes (with respect to... well, what you listed), yes?

cordobes

DSP algorithms can be executed in software or implemented in hardware (ASIC), as is the case with the PT2399 or the famous unobtainium PT2395. Both are digital delays.
But I understand your point.
Varying the sampling or synthesis frequency may require knowing your hardware very well, and I don't think everyone starting out in the DIY world knows how to incorporate, or wants to deal with, the analog filtering or emphasis blocks from the get-go.
There are countless posts asking how to lower the quantization noise of the PT2399, which, despite using ΔΣ conversion, was not intended to be used as a delay. In my experience, the best I could get, many years ago, was chaining 3 or 4 chips, lowering the value of the integrator cap, using a current mirror to control all the PTs with a single digital potentiometer, and designing all the analog circuitry around the chips with care, including ground routing, etc. It is not something I really want to get into again, especially in an era where it is already possible to simulate the linear or non-linear characteristics of a system with great precision, and where there are complete solutions for both the audio codec and the processing of the sampled data.
But if we talk about flangers, one of the most used solutions (in the time domain) was (is?) the Doppler effect, where precise management of the synthesis frequencies is needed through modulation and crossfading of 2 or more read pointers.
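A bare-bones sketch of the two-pointer idea (my own naming and numbers, not from any particular product): each tap slides through the buffer at a rate offset from the write head, which gives the Doppler-style frequency change, and a triangular crossfade hides the point where each tap wraps:

#include <math.h>
#include <stddef.h>

#define BUF_LEN 4096
#define WINDOW  2048.0f             /* sweep window in samples */

static float buf[BUF_LEN];
static size_t write_pos = 0;
static float tap = 0.0f;            /* tap A position inside the window */

/* Two read taps, half a window apart, sliding relative to the write
 * head at a rate set by 'ratio' (1.0 = no shift, >1.0 = pitch up).
 * A triangular crossfade mutes each tap as it wraps.  A real design
 * would also interpolate the fractional tap positions. */
float doppler_process(float in, float ratio)
{
    buf[write_pos] = in;

    float tap_b = tap + WINDOW * 0.5f;
    if (tap_b >= WINDOW) tap_b -= WINDOW;

    size_t ia = (write_pos + BUF_LEN - (size_t)tap)   % BUF_LEN;
    size_t ib = (write_pos + BUF_LEN - (size_t)tap_b) % BUF_LEN;

    /* each tap fades to zero as it approaches a wrap point */
    float ga = 1.0f - fabsf(tap - WINDOW * 0.5f) / (WINDOW * 0.5f);
    float gb = 1.0f - ga;

    float out = ga * buf[ia] + gb * buf[ib];

    tap += (1.0f - ratio);          /* ratio > 1: taps gain on the write head */
    if (tap >= WINDOW) tap -= WINDOW;
    if (tap < 0.0f)    tap += WINDOW;

    write_pos = (write_pos + 1) % BUF_LEN;
    return out;
}

A linear crossfade and truncated read positions keep the sketch short; equal-power fades and interpolated reads are what you would actually want.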
The PT2399 is probably ready by now to enter that hall of fame where the SAD1024, CA3080, etc. are also found.
My two cents.
And all our yesterdays have lighted fools the way to dusty death.
Out, out, brief candle! Life's but a walking shadow.

potul

Quote from: parmalee on October 14, 2023, 11:53:01 PM
and there will be no pitch shift while varying the delay time, unless of course one implements a pitch shift via other means.

This is not necessarily true. If you move your read pointer continuously, you will get the same pitch-shifting artifacts. If you jump from one read position to another in a single step you won't, but you might get clicks and pops.
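One common workaround, just as a sketch (the names and crossfade length are my own choices): when the delay time changes, keep reading from the old position, open a second read pointer at the new position, and crossfade between them over a few milliseconds, so you get neither the glide-induced pitch sweep nor the click:

#include <stddef.h>

#define BUF_LEN   48000
#define XFADE_LEN 512               /* crossfade length in samples (~10 ms) */

static float buf[BUF_LEN];
static size_t write_pos = 0;
static size_t cur_delay = 12000;    /* current delay in samples */
static size_t new_delay = 12000;
static int    xfade_cnt = 0;        /* samples left in the crossfade */

/* Request a new delay time: rather than gliding the pointer (pitch
 * sweep) or jumping it (click), start a short crossfade to the new
 * read position. */
void delay_set_time(size_t delay_samples)
{
    if (delay_samples != cur_delay && xfade_cnt == 0) {
        new_delay = delay_samples;
        xfade_cnt = XFADE_LEN;
    }
}

float delay_process(float in)
{
    buf[write_pos] = in;

    size_t old_pos = (write_pos + BUF_LEN - cur_delay) % BUF_LEN;
    size_t new_pos = (write_pos + BUF_LEN - new_delay) % BUF_LEN;

    float out;
    if (xfade_cnt > 0) {
        float g = (float)xfade_cnt / (float)XFADE_LEN;   /* 1 -> 0 */
        out = g * buf[old_pos] + (1.0f - g) * buf[new_pos];
        if (--xfade_cnt == 0)
            cur_delay = new_delay;                       /* switch over */
    } else {
        out = buf[old_pos];
    }

    write_pos = (write_pos + 1) % BUF_LEN;
    return out;
}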
Regarding your question... I think it's a matter of what is practical to implement in each case. If you have the choice, you will get better quality by changing the length of the circular buffer or changing the read-pointer position. In a tape delay this can be achieved by physically moving the head, or by having multiple read heads and choosing between them, and it can be implemented easily when coding. But in a hardware chip-based implementation (BBD, PTXXX) this is harder, so you end up using the sampling-frequency trick.