What output voltage does an electric guitar produce, for use in developing pre-amps?

Started by peterv999, January 03, 2010, 11:58:42 AM

Previous topic - Next topic

peterv999

I know that you guys are developing pre-amps to bring the guitar output up to a level that's suitable for further processing. I've seen a wiki on this that reports 100mV, and sometimes even 1V, of signal. What value is being used by the people developing this part of the electronics?

Thanks,

-Peter

cloudscapes

Depends on what voltage reference you set on your ADC.

I set it at a little less than 5V, though I'm not sure if it's the "right" thing to do.
~~~~~~~~~~~~~~~~~~~~~~
{DIY blog}
{www.dronecloud.org}

PRR

Various pickups, players, styles, etc, peak 200mV to over 1V. Most of the classic inputs will overload at 1V-2V, and this has been acceptable.

You want enough gain to bring 20mV up to maximum clean power. This allows soft fingering to come out loud, and allows overdrive when desired.

Some mega-fuzz amps have input sensitivity below 1mV.
  • SUPPORTER

peterv999

Quote from: PRR on January 03, 2010, 07:42:13 PM
Various pickups, players, styles, etc, peak 200mV to over 1V. Most of the classic inputs will overload at 1V-2V, and this has been acceptable.

You want enough gain to bring 20mV up to maximum clean power. This allows soft fingering to come out loud, and allows overdrive when desired.

Some mega-fuzz amps have input sensitivity below 1mV.

Thanks!

PRR

I didn't really click that this was about ADC conversion.

I'm very behind the times; someone with recent knowledge should jump on my mistakes.

A classic ADC takes a 0V-5V input (though there are many variations).

Since guitar pickup level apparently never exceeds 1.5V peak (or it is acceptable to clip at 1.5V), i.e. 3V peak-to-peak, it would seem you just AC-couple and bias up to 2.5VDC, and the resulting 1V-4V swing fits nicely in the 0V-5V range. Setting the reference a bit lower, say 0V-3V, seems to buy several dB better resolution. But resolution is a tricky thing, and an ADC will clip more abruptly than a vacuum-tube input. I'd lean to the higher reference, just to be sure.

How deep? Well, guitar pickups evolved around 12AX7 tubes. At Fender-like bias the max input is about 1.5V on positive peaks; call it 1V RMS. The noise level is potentially as low as 2 microvolts RMS. Taking some computational liberties, this is a 114dB dynamic range; at 6dB per bit, we need at least 19 bits of conversion accuracy.
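That arithmetic can be checked in a couple of lines. A quick sketch in Python; the 1V RMS and 2uV figures are the post's assumptions, not measurements:

```python
import math

# Assumed figures from the post: ~1 V RMS max clean input from a
# 12AX7-style stage, ~2 uV RMS noise floor.
v_max = 1.0      # volts RMS
v_noise = 2e-6   # volts RMS

dyn_range_db = 20 * math.log10(v_max / v_noise)  # voltage ratio in dB
bits_needed = math.ceil(dyn_range_db / 6.02)     # ~6 dB per bit

print(f"{dyn_range_db:.0f} dB dynamic range -> {bits_needed} bits")
```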

We can buy 16-bit and "24-bit" converters.

Using 16-bit conversion, it appears we may lose something. In fact we won't require both 20mV and 1V sensitivity in a single passage. A 2uV-to-20mV range is only 80dB, or 14 bits, so a 16-bit converter serves well up to ~100mV, and an attenuator can turn down stronger inputs.

Using a "24-bit" converter really giving 20 usable bits (though I have seen "24-bit" hardware which simply filled the bottom 8 bits with zeros), we can scale so that we don't need a control before the ADC. However, soft strumming may come out blank in the top 4 or 5 bits and look weak in the digital representation. Natch, we can run a multiply over the samples and bring them up to "normal" level.

Reminder that even if a 0V-5V ADC is the right "size" for strong guitar, you usually won't run a guitar pickup right into an ADC pin. The ADC input is a switch and some capacitance, twitching at the sample rate. It needs a "solid" signal source, typically a strong fast opamp. It should have a low-pass (though we ain't likely to have much to alias against). And guitars are traditionally loaded with a very high impedance, higher than may be optimum for some fast opamps. Of course there may now be ADCs with suitable buffers built in.

Just looking at what an "audio processor" is today, I see the TAS3204. It has a 76-bit core with a bloomin' 8051 supervisor. (I remember when an 8051 was as big as your thumb and cost $19.) Input buffers and low-pass are built in! But the impedance is 20K, so you want some hi-Z buffer before it. It claims 100dB dynamic range below 2V RMS, so it must be doing more than 16-bit conversion, and the input "noise" is near 20uV. For a "perfect" result, I would want some gain, maybe variable, in front. Maybe X2 and X10, though some compromise might cover all practical situations.
  • SUPPORTER

ExpAnonColin

Interesting discussion, but some points:

1) I doubt that tube amps really have a 2 uV RMS noise level - that is insanely tiny; there must be more noise in the system.
2) Regardless, humans hardly have a hearing range better than 96 dB - so a 114 dB or bigger SNR is so big we can treat the noise as inaudible.  A 16-bit ADC will render its noise inaudible as well (6 dB per bit -> 96 dB).
3) The smartest thing is to "soft clip" the guitar input so that above 1.5 V peak it bends smoothly toward the 0-5V rails - that way you won't get ADC clipping, which is potentially ugly, AND you are using the full dynamic range of the ADC (e.g., if you clipped to 1-4V, you'd waste some of the range, and the noise would potentially be audible after the system was turned up to compensate).
4) You can't assume the 24-bit converter gives only 20 usable bits - yes, there will be noise in the system, but an LSB at 24 bits and a 5V range is about 0.3 uV, below even your 2 uV noise assumption for the amp, so the bottom bits are just digitizing noise. :)  I do agree, however, that 24 bits is overkill, and is generally used to make people erroneously think products are better.
5) Low-pass filtering at half the sampling rate should always be done, because any noise above that will get aliased back multiple times, and your system will get very noisy.  You should always assume noise of arbitrary frequency, because white noise (in the analog domain) has energy at all frequencies up to infinity!
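For what point 3 might look like, here is a minimal sketch, assuming a 0-5V ADC biased to mid-rail; the tanh curve is just one illustrative soft-clip shape, not a specific circuit:

```python
import math

V_REF = 5.0  # assumed ADC full scale: 0-5 V, biased to mid-rail (2.5 V)

def soft_clip(v_in):
    """Map a bipolar guitar signal (volts) onto the 0-5 V ADC range.
    Small signals pass nearly unchanged around the 2.5 V bias point;
    large peaks are bent over smoothly by tanh and can never reach the
    rails, so the ADC itself never hard-clips."""
    half = V_REF / 2.0
    return half + half * math.tanh(v_in / half)
```

Small inputs see roughly unity gain around 2.5V; a 10V spike still lands just under the 5V rail.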

Not trying to be a pain in the butt, just trying to contribute to this useful discussion re: how you should process the guitar in the analog domain before DSP.

-Colin

MoltenVoltage

Quote from: ExpAnonColin on June 19, 2010, 01:07:55 AM

3) The smartest thing is to "soft clip" the guitar input so that above 1.5 V peak, it clips to 0-5V - that way you won't get ADC clipping, which is potentially ugly, AND you are using the full dynamic range of the ADC (eg, if you clipped to 1-4V, you'd miss some of the dynamic range and the SNR would potentially be audible after the system was turned up the same amount).

You make a lot of good points, but it seems to me the "smartest" thing to do is dynamically compress the signal rather than clip it.

Unless I misunderstand what you mean, "soft" clipping still alters the signal, rather than limiting its dynamic range while keeping the fundamental waveform intact.

The signal could then be mathematically expanded as necessary inside the processor, particularly if you had feedback going from the compression circuit to the processor to alert it to the level of compression that was applied at a given moment.
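A toy, per-sample sketch of that compress-then-expand idea (hypothetical Python; a real analog compressor would track an envelope, and the returned gain list stands in for the feedback path to the processor):

```python
def compress(samples, threshold=0.25):
    """Toy feed-forward compressor: halve the gain until the sample fits
    under the threshold, and report the gain applied to every sample so
    the processor can undo it later (the 'feedback' idea above)."""
    out, gains = [], []
    for s in samples:
        g = 1.0
        while abs(s) * g > threshold:
            g /= 2.0            # gain steps restricted to powers of two
        out.append(s * g)
        gains.append(g)
    return out, gains

def expand(compressed, gains):
    """Digital expansion: divide each sample by the gain that was applied."""
    return [y / g for y, g in zip(compressed, gains)]
```

Because the gain at every instant is known exactly, the expansion is (in this idealized sketch) lossless.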
MoltenVoltage.com for PedalSync audio control chips - make programmable and MIDI-controlled analog pedals!

R.G.

Quote from: MoltenVoltage on July 07, 2010, 03:11:39 PM
You make a lot of good points, but it seems to me the "smartest" thing to do is dynamically compress the signal rather than clip it.
Compression and companding have been the way I've solved other problems, but I've had trouble getting a compressor not to cause a digital "splat" in front of an A-D: the attack time - even a very short one! - was too long to keep the compressor from letting through the initial attack and overloading the A-D. It seems to be really tricky to do this well while at the same time keeping the signal level out of the compressor up, to keep digital noise down.
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

ExpAnonColin

Indeed, you'd have to give the compressor (or really, the limiter!) a time delay to ensure you didn't get the "splat" RG talks about.  Furthermore, both compression and clipping are nonlinear operations, and IMO people sort of expect guitar units to overdrive when pushed too hard, as opposed to compress.

MoltenVoltage

RG was saying his compressor attack was not fast enough to prevent the initial part of the audio signal from getting past the compressor and causing the A/D to hit the ceiling.

All you need is a predictive compressor and the problem is solved...

But seriously, you could use the fastest possible attack, then digitally ignore the parts that hit the ceiling and interpolate the values on either side - again, using feedback from the compressor to calculate the level of compression applied to the signal.  Done correctly, this type of compression and expansion should provide a nearly linear result.

Admittedly it would fudge the signal slightly, but it would be far less destructive than the blunt-force clipping method Colin suggests.

While I agree guitarists generally expect distortion when they play hard, that's because it's what usually happens, not because it's what should happen.

With this method, the player shouldn't hear distortion or compression.
MoltenVoltage.com for PedalSync audio control chips - make programmable and MIDI-controlled analog pedals!

ExpAnonColin

I'm going to go out on a limb here, but I believe that most guitarists would prefer a soft clip (not "brute force clipping") to a compressor.  In order to implement lookahead you need delay anyway - and making the compressor digital sort of defeats the purpose - so you will get some kind of "pop" on the attack, which will be very noticeable and could potentially damage the logic circuitry.  You do NOT want more than V+ going into a logic chip.  All that being said, I do agree that your idea is smart, and probably ideal in terms of minimal noticeable change in the signal... but I think it's pretty complicated, and I can't think of a time I've seen something like that implemented!  It's always been a preamp with some degree of soft clipping (the "softer" the better :))

-Colin

R.G.

Quote from: MoltenVoltage on July 12, 2010, 09:34:41 AM
RG was saying his compressor attack was not fast enough to prevent the initial part of the audio signal from getting past the compressor and causing the A/D to hit the ceiling.

I was a little vague, I realize. What I should have said is "No compressor I've tried - and I've tried a lot of them - is fast enough with any tweaking I can figure out to suppress digital overload splatting distortion at the A-D if the signal level is not also turned way down into the A-D."

That obviously negates a lot of the advantage of compressing.  :icon_lol:

Quote
All you need is a predictive compressor and the problem is solved...
Yeah.  :icon_biggrin: I've been looking for one of those for quite a while now. Lemme know if you find one.  :icon_biggrin:
Quote
But seriously, you could use the fastest possible attack then digitally ignore the parts that hit the ceiling and interpolate the values on either side, again, using feedback from the compressor to calculate the level of compression applied to the signal.  Done correctly, this type of compression and expansion should provide a nearly linear result.
That would indeed be serious if you also wanted to make it unnoticeable. It can be done, all right, but I'm not sure it would fit into a real-time DSP that also fits into a pedal, in terms of cost.

I'm by no means suggesting that this can't be done - it obviously could, as could any number of other tricks to defeat the pick attack. Just that it's going to take some high speed DSP work, and some tricky programming, and both those start adding up dollars.
R.G.

In response to the questions in the forum - PCB Layout for Musical Effects is available from The Book Patch. Search "PCB Layout" and it ought to appear.

MoltenVoltage

A simpler programming solution that wouldn't require interpolation, and therefore might work better in real time (i.e., with microseconds of delay), would be to use a sample-playback routine when the limiter is hitting the ceiling.

Because:
1) The limit would usually get hit when the signal is not at a zero crossing; and
2) You never want to cut off a signal other than at a zero crossing,

If a short amount of the signal were continually digitally recorded, then while the limiter is hitting the ceiling you could play back the most recent signal backwards.  The reason to go backwards is that you know you are starting from the same voltage, so the transition would be seamless.  Once the limiter is no longer engaged, you wouldn't switch back to the "real" signal until the sample playback hit a zero crossing.

This obviously assumes the peaks are transient as a result of a very fast compressor attack, so the sample would appear very briefly.

It also seems like the math for reconstructing the signal based on the compressor would not be that complicated if the compression ratios were limited to powers of 2 and the compressor jumped between those values.
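A toy version of that reverse-playback scheme (hypothetical Python; it substitutes history immediately and rejoins as soon as the input drops below the ceiling, skipping the zero-crossing refinement described above):

```python
def patch_over_limit(samples, ceiling=1.0):
    """While the input exceeds the ceiling, play already-emitted output
    backwards; the splice starts from the most recent voltage, so there
    is no step in the waveform. Assumes the overload is brief."""
    out = []
    replay = None               # index into `out` we are replaying from
    for s in samples:
        if abs(s) < ceiling:
            out.append(s)
            replay = None       # back on the real signal
        else:
            if replay is None:
                replay = len(out) - 1   # start backwards playback
            # step backwards through history (0.0 if we run out)
            out.append(out[replay] if replay >= 0 else 0.0)
            replay -= 1
    return out
```

The two over-ceiling samples below are replaced by the two preceding samples in reverse order, so the output never exceeds the ceiling.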
MoltenVoltage.com for PedalSync audio control chips - make programmable and MIDI-controlled analog pedals!

PRR

>> All you need is a predictive compressor
> Lemme know if you find one.


Back when I was recording live audio (albeit with microphones) for spare change, I used the predictive comp/limiter a lot.

Source. Gain control set so the anticipated peaks will not slam the ADC. With 16 bits I did not like peaking 20dB down, but I didn't try to push much over -10dBFS either, because live music has surprises.

Just "capture sound unharmed".

With a minimum of complicated machinery to lock up or do the wrong thing. In live concerts there is no Take 2. In studio tracking you have Take 47, but if that is the best take you don't want it crapped up by mechanical contraptions.

Then in the computer/DAW, look-ahead compression is trivial. When a very wide-range performance had to be squished for web listening, I ran some very long look-ahead (7 seconds) to boost the soft passages yet ramp them down ahead of the loud parts, keeping some sense of dynamics when the LOUD part came in.
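Offline, look-ahead limiting really is trivial; a minimal sketch (the window length and per-sample gain are illustrative — a real limiter would also smooth the gain rather than change it sample by sample):

```python
def lookahead_limit(samples, ceiling=1.0, lookahead=4):
    """Minimal look-ahead limiter: scale each sample by the largest peak
    visible in the next `lookahead` samples, so the gain is already down
    *before* a peak arrives - no attack-time 'splat'. In real time this
    look-ahead is bought by delaying the output by the same amount."""
    out = []
    for i in range(len(samples)):
        window = samples[i:i + lookahead + 1]   # current + future samples
        peak = max(abs(s) for s in window)
        gain = min(1.0, ceiling / peak) if peak > 0 else 1.0
        out.append(samples[i] * gain)
    return out
```

Because the window includes the current sample, the output can never exceed the ceiling, and samples just before a peak are pre-ducked.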

> "No compressor I've tried .... is fast enough

Splat is not new. The earliest limiters were invented for optical (film) sound: W.E. patented a good light modulator, so RCA had to work around it with ribbons, which have advantages but clang badly at 101% modulation. The state of the art was still crude; even so, the goal was an attack time shorter than the ribbon's travel.

In disk-cutting they tended toward 15KC-20KC low-pass and 50uS attack times. And as soon as tape arrived, look-ahead (more for groove-space than for limiting, but everything interacts).

Many UK and Aus FM limiters used a ~50uS delay (a few coils and caps) with a 50uS attack for "splatt-proof" limiting.

It ain't too easy to get a 50uS attack. In hollow-state days it took a 10-Watt power amp to generate the control voltage.

Many-many-many limiters don't try for clean 50uS response. Many popular ones (including most US broadcast limiters) just clipped the first millisecond.

Also, ducking for every small artifact reduces average level. (Two-time-constant limiters do better.)

Many-many current market limiters are based on THAT Corp chips with RMS response. There's no "attack time" and you are sure to get a blip. The fancy ones include a clipper.

Pure clipping is not an awful thing, if short. If you use a ribbon valve or other technology which clangs far in excess of the actual offense, use a clipper. My audio CD recorders just flat-topped - no excitement. If short, it's quite inaudible.

But with over-16-bit ADCs available, there should not be any real problem. With 20 bits of actual ADC range, -20dBFS is still 100dB above quantization noise. While a really good tube can do better in analog, few musical situations (and I suspect no guitar situation) need the full dynamic range of a tube. Center your guitar's dynamic range within a 20-bit ADC's range, capture without injury, THEN do something with it.

Even in live monitoring, a smart digital processor can do 50uS look-ahead compression better than an analog pre-limiter.
  • SUPPORTER