Digital audio effects kit

Started by markseel, August 25, 2022, 06:20:44 PM

markseel

A digital audio effects kit is coming soon.  The PCB without a case should be around $125.  It's based on the XMOS XUF208 CPU/DSP.  The free SDK library (object code, not source) for the board implements all of the USB handling, audio flow, MIDI flow, pot and footswitch handling, etc.  The effects designer only needs to write the effect/algorithm in C.  The board is 65mm x 60mm and has two 1/4" TRS jacks for stereo audio, two 1/4" jacks for optical MIDI input and output, one 9V jack (150 mA), and one USB-C jack for 8-channel USB audio and USB MIDI. It also has a Bluetooth radio on board for effects control via phone/tablet/laptop. Effects can be controlled via analog MIDI, USB MIDI, and BLE/GATT (all implemented in the SDK). It has three pots and one footswitch for effects adjustment and enabling/disabling. Stereo input is 1 Vrms (2.8V pk-pk), 1 meg-ohm impedance, 113 dB dynamic range. Output is 1.2 Vrms, 1k ohm, 112 dB dynamic range. Analog and USB audio run at 48/96/192 kHz.  More details soon on www.3Degreesaudio.com and Instagram @3degreesaudio. 

Sweetalk

Great news! Looking forward to testing it!

niektb

Aah, the XUF208! Bit of a steep learning curve, but it has some really neat C extensions! I used it in a commercial HiFi DAC for control and USB audio. I do think you may have under-dimensioned your 9V supply a bit though :) I recall drawing 500~600mA from the lab PSU... though I only used LDOs (you know, HiFi ;)), so it may be alright  :D

Digital Larry

Quote from: markseel on August 25, 2022, 06:20:44 PM
The effects designer only needs to write the effect/algorithm in C.
I was afraid of that!  Nice to see anyway!

DL
Digital Larry
Want to quickly design your own effects patches for the Spin FV-1 DSP chip?
https://github.com/HolyCityAudio/SpinCAD-Designer

markseel

Power consumption: the 1.0V core supply is a switching DC/DC converter stepping 9V down to 1V at about 80% efficiency, so the current draw for the core (600mA at 1V) should work out to around 85mA at 9V.
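
For reference, here's that arithmetic written out (a quick sanity check using the ~80% efficiency figure above):

// Rough estimate of the 9V input current needed to feed the 1.0V core rail
// through the switching regulator at roughly 80% efficiency.
#include <stdio.h>

int main( void )
{
    double core_watts  = 1.0 * 0.600;       // 1.0 V * 600 mA = 0.6 W at the core
    double input_watts = core_watts / 0.80; // ~0.75 W pulled from the 9 V jack
    double input_amps  = input_watts / 9.0; // ~0.083 A, i.e. roughly 85 mA
    printf( "9V input current ~= %.0f mA\n", input_amps * 1000.0 );
    return 0;
}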

Learning curve: the effects developer won't have direct access to XMOS resources or to the infrastructure related to control and audio data flow, peripheral control, wireless, etc. Writing an effect only requires implementing the effect's DSP algorithm in plain 'C' (not XC) using the provided DSP library functions.
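
To give a feel for that, here's a minimal sketch of what an effect can look like (just a fixed gain on the analog ins/outs; it assumes the xio_process1 signature and the dsp_mul/FQ helpers used in the fuller examples later in this thread):

#include "xio.h"
#include "dsp.h"

// Minimal sketch: apply about -6 dB of gain to the left/right analog channels.
// Samples 0 and 1 are the incoming ADC / outgoing DAC stereo pair.
void xio_process1( int samples[32], const int property[6] )
{
    samples[0] = dsp_mul( samples[0], FQ(0.5) ); // Left channel
    samples[1] = dsp_mul( samples[1], FQ(0.5) ); // Right channel
}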

Sweetalk

Quote from: markseel on August 26, 2022, 05:26:13 PM
Power consumption: the 1.0V core supply is a switching DC/DC converter stepping 9V down to 1V at about 80% efficiency, so the current draw for the core (600mA at 1V) should work out to around 85mA at 9V.

Learning curve: the effects developer won't have direct access to XMOS resources or to the infrastructure related to control and audio data flow, peripheral control, wireless, etc. Writing an effect only requires implementing the effect's DSP algorithm in plain 'C' (not XC) using the provided DSP library functions.


Sounds really nice. I'm really interested and waiting for further details.

pruttelherrie

Nice! I assume this is a continuation/successor of the FlexFX platform?

Quote from: markseel on August 25, 2022, 06:20:44 PM
two 1/4" TRS jacks for stereo audio

Is there any possibility of getting quad analog outs?

markseel

Hi, yes, it's sort of a successor.  I don't think I can do quad outputs on this revision.

markseel

Here's a simple saturating preamp controlled by three 'knobs'.  The audio samples are up-sampled by 3 before applying saturation in order to help manage the aliasing artifacts caused by non-linear processing: the 3rd-order saturation creates harmonics at up to three times the input frequency, and at the raised internal sample rate those stay below Nyquist and can be removed by the low-pass filter before down-sampling.  The audio is up-sampled, high-pass filtered, limited, saturated, low-pass filtered, and finally down-sampled.  The filter coefficient creation as well as the DSP processing functions are all included in the DSP library that's part of the SDK.


#include "xio.h"
#include "dsp.h"

product_name_string   = "Preamp Example";

usb_audio_output_name = NULL; // Not using USB audio
usb_audio_input_name  = NULL; // Not using USB audio
usb_midi_output_name  = NULL; // Not using USB MIDI
usb_midi_input_name   = NULL; // Not using USB MIDI

audio_sample_rate = 48000; // Audio sampling frequency is 48 kHz
usb_output_chan_count = 0; // Not using USB audio
usb_input_chan_count  = 0; // Not using USB audio

// The control task is called at a rate of 1000 Hz and should be used to compute DSP algorithm
// property data from changes in the algorithm settings that are passed to this function.
// DSP properties can be sent to DSP threads (by setting the DSP property ID to non-zero) at any
// time. It's OK to use floating point math here since this thread is not a real-time audio thread.
// The settings array represents 32 'knobs' that can be controlled using external potentiometers
// or MIDI CC commands. A setting value ranges from 0 to 127.

void xio_control( int property[6], byte settings[32] )
{
    static int prop_id = 1;
    property[0] = prop_id; // Set the property ID

    if( prop_id == 1 ) // Low-cut filter set by knob #1
    {
        float FS = 3.0 * 48000.0; // Filter is applied to the upsampled data stream
        // Frequency ranges from 30 Hz to 330 Hz (mapping mirrors the high-cut knob below)
        float frequency = 30.0 + 300.0 * (settings[0] / 128.0);
        make_highpass( &property[1], FS, frequency/FS, 0.707 ); // Filter Q = 0.707
    }
    if( prop_id == 2 ) // Preamp gain set by knob #2
    {
        float gain = 1.0 + (settings[1] / 128.0) * 7.0; // Gain ranges from 1 to 8
        property[1] = FQ( gain ); // Send Q28 gain value to DSP audio processing
    }
    if( prop_id == 3 ) // High-cut filter set by knob #3
    {
        float FS = 3.0 * 48000.0; // Filter is applied to the upsampled data stream
        // Frequency ranges from 3000 Hz to 13 kHz
        float frequency = 3000.0 + 10000.0 * (settings[2] / 128.0);
        make_lowpass( &property[1], FS, frequency/FS, 0.707 ); // Filter Q = 0.707
    }
    if( ++prop_id > 3 ) prop_id = 1;
}

// Audio Processing Threads. These functions all run simultaneously (in parallel) and they are all
// executed once for each audio sample. A processing function can NOT share data with any other
// audio processing 'xio_process' function or with 'xio_control'!

// Sharing data between processing functions must occur using the 'samples' array that's passed
// from one processing function to another.
//
// All DSP processing must be performed using fixed point math - do not use floating point math
// since these are real-time audio processing functions and floating point operations will cause
// the audio subsystem to stall and disrupt audio flow to USB and the ADC/DAC.
//
// NOTE: IIR, FIR, and BiQuad coeff and state data *must* be declared as non-static globals!

// Audio processing thread #1: Process samples from the ADC and USB, send results to thread #2.
// Samples 0..7 are the incoming ADC and outgoing DAC audio channels.
// Samples 8..15 are the incoming and outgoing USB audio channels.
// Samples 16..31 are samples from thread 5 (wrapped around) and are suitable for persisting or
//   sharing sample data across all threads.

// Filter coefficients and state data

int locut_cc[5] = { FQ(1.0),0,0,0,0 }; // Default to no filtering
int locut_ss[5] = { 0,      0,0,0,0 }; // Initial sample history is all zeros
int hicut_cc[5] = { FQ(1.0),0,0,0,0 }; // Default to no filtering
int hicut_ss[5] = { 0,      0,0,0,0 }; // Initial sample history is all zeros

void xio_process1( int samples[32], const int property[6] )
{
    static int gain = FQ(1.0); // Q28 preamp gain, updated by property ID 2

    if( property[0] == 1 ) memcpy( locut_cc, &property[1], sizeof(locut_cc) );
    if( property[0] == 2 ) gain = property[1];
    if( property[0] == 3 ) memcpy( hicut_cc, &property[1], sizeof(hicut_cc) );

    int xx[3]; // Temporary storage for the upsampled audio for this audio cycle.

    dsp_ups( 3, xx, &samples[0] ); // Upsample the input by 3.

    // Apply high-pass filter to the upsampled sample stream.
    xx[0] = dsp_iir( xx[0], locut_cc, locut_ss );
    xx[1] = dsp_iir( xx[1], locut_cc, locut_ss );
    xx[2] = dsp_iir( xx[2], locut_cc, locut_ss );

    // Apply some gain and limit the samples to -1.0 ... +1.0.
    // Note: Limiting is required by the SAT function below.
    xx[0] = dsp_lim( dsp_mul( xx[0], gain ) );
    xx[1] = dsp_lim( dsp_mul( xx[1], gain ) );
    xx[2] = dsp_lim( dsp_mul( xx[2], gain ) );

    // Apply a gentle 3rd order polynomial saturation.
    // Note: Do this 3, 4, or 5 times in a row to saturate more aggressively.
    xx[0] = dsp_sat( xx[0] );
    xx[1] = dsp_sat( xx[1] );
    xx[2] = dsp_sat( xx[2] );

    // Apply low-pass filter to the upsampled sample stream.
    xx[0] = dsp_iir( xx[0], hicut_cc, hicut_ss );
    xx[1] = dsp_iir( xx[1], hicut_cc, hicut_ss );
    xx[2] = dsp_iir( xx[2], hicut_cc, hicut_ss );

    samples[0] = dsp_dns( 3, xx ); // Downsample by 3 and send to output.
}

vigilante397

Awesome, super excited for this!
"Some people love music the way other people love chocolate. Some of us love music the way other people love oxygen."

www.sushiboxfx.com

markseel

Here's an example of an eight-delay-line FDN (feedback delay network) reverb.  It's a fairly complicated example, but for an FDN reverb with modulation of each delay line (to increase the echo density, on up to creating moving/warbly reverberation) it's not too bad.  Normally you would feed this FDN with a diffused version of the input - I can add a diffusion example later.  Also, each result of the feedback matrix should be low-pass filtered to adjust the color of the reverberation, but that can be added easily enough (a rough sketch of that is shown after the code below).


// Eight delay lines for the FDN reverb.
int delays[8][4096] = { {0},{0},{0},{0},{0},{0},{0},{0} };

void xio_process2( int samples[32], const int property[6] )
{
    int inputL = samples[0], inputR = samples[1];

    static int blend = 0; if( property[0] == 4 ) blend = property[1];
    static int regen = 0; if( property[0] == 4 ) regen = property[2];
    static int damp  = 0; if( property[0] == 4 ) damp  = property[3];
    static int rate  = 0; if( property[0] == 4 ) rate  = property[4];
    static int depth = 0; if( property[0] == 4 ) depth = property[5];

    static int time[8] = { 0,0,0,0,0,0,0,0 }; // One LFO time variable for each delay line.
    static int lfo [8] = { 0,0,0,0,0,0,0,0 }; // One LFO result for each delay line.
   
    // FDN reverb delay line lengths (as fractions of the 4096-sample buffers),
    // staggered so the echoes don't pile up :-)
    static int length[8] = {
        FQ( 2283.0/4096.0 ), FQ( 2511.0/4096.0 ), FQ( 2777.0/4096.0 ), FQ( 2983.0/4096.0 ),
        FQ( 3299.0/4096.0 ), FQ( 3451.0/4096.0 ), FQ( 3875.0/4096.0 ), FQ( 4037.0/4096.0 ),
    };

    // Manage the eight LFO's that are used to modulate the delay lines.
   
    for( int ii = 0; ii < 8; ++ii )
    {
        time[ii] += rate; // Update LFO time variable.
        lfo[ii] = dsp_sin( time[ii] ); // Get sin(tt), ranges from -1 to +1.
        lfo[ii] = dsp_sca( lfo[ii], FQ(0.0), FQ(1.0) ); // Scale LFO to range from 0 to 1.
    }

    static int offset = 0; offset = (offset-1) & 4095;
    static int feedback[8] = { 0,0,0,0,0,0,0,0 }; // Feedback for each delay line.
    static int data[8] = { 0,0,0,0,0,0,0,0 }; // Samples from each delay line.

    // Manage the update of each delay line, obtain delayed sample from each
   
    for( int ii = 0; ii < 8; ++ii )
    {
        int xx = dsp_mul( feedback[ii], regen ); // Feedback of the previous delay output.
        xx += (inputL + inputR) / 2; // Sum the stereo ins for a single input.
        dsp_ins( delays[ii], 4096, offset, xx ); // Insert next audio sample.

        // The index into each delay ranges from 0.0 to 1.0 which covers the whole delay
        // line.  We want the actual index to be the delay line length plus a slight
        // variation that comes from modulating that index with this line's LFO.
        //
        // Example for delay line 0 (max modulation is 400 samples):
        // index = length[0] + (400/4096) * depth * lfo[0]
        int index = length[ii] + dsp_mul( lfo[ii], dsp_mul( depth, FQ(400.0/4096.0) ) );
        data[ii] = dsp_get( delays[ii], 4096, offset, index ); // Get the delayed sample.
    }
   
    // Combine all of the delay line samples using the Hadamard mixing matrix.
    // Scale the result to obtain unity gain.
    // Google FDN reverb for information on how these work
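    // (The 0.35 scale factor approximates 1/sqrt(8) ~= 0.354, so the 8x8 Hadamard
    //  mix stays close to unity gain.)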

    feedback[0] = dsp_mul( FQ(0.35), data[0]+data[1]+data[2]+data[3]+data[4]+data[5]+data[6]+data[7] );
    feedback[1] = dsp_mul( FQ(0.35), data[0]-data[1]+data[2]-data[3]+data[4]-data[5]+data[6]-data[7] );
    feedback[2] = dsp_mul( FQ(0.35), data[0]+data[1]-data[2]-data[3]+data[4]+data[5]-data[6]-data[7] );
    feedback[3] = dsp_mul( FQ(0.35), data[0]-data[1]-data[2]+data[3]+data[4]-data[5]-data[6]+data[7] );
    feedback[4] = dsp_mul( FQ(0.35), data[0]+data[1]+data[2]+data[3]-data[4]-data[5]-data[6]-data[7] );
    feedback[5] = dsp_mul( FQ(0.35), data[0]-data[1]+data[2]-data[3]-data[4]+data[5]-data[6]+data[7] );
    feedback[6] = dsp_mul( FQ(0.35), data[0]+data[1]-data[2]-data[3]-data[4]-data[5]+data[6]+data[7] );
    feedback[7] = dsp_mul( FQ(0.35), data[0]-data[1]-data[2]+data[3]-data[4]+data[5]+data[6]-data[7] );
   
    int outputL = feedback[0], outputR = feedback[4]; // Stereo output!
   
    samples[0] = dsp_lin( blend, inputL, outputL ); // Left output, blend dry with wet
    samples[1] = dsp_lin( blend, inputR, outputR ); // Right output, blend dry with wet
}
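
For the low-pass damping mentioned above, a rough sketch (not in the code above) would be to give each delay line its own filter state and run every feedback term through dsp_iir right after the Hadamard mix, reusing make_lowpass from the preamp example to build the coefficients in xio_control:

// Hypothetical damping filters: shared low-pass coefficients (built with
// make_lowpass() in xio_control) and one set of IIR state per delay line.
// As with the other filters, the coeff and state data are non-static globals.
int damp_cc[5]    = { FQ(1.0),0,0,0,0 }; // Default to no filtering
int damp_ss[8][5] = { {0} };             // Initial sample history is all zeros

// Inside xio_process2, right after the eight feedback[] values are computed:
for( int ii = 0; ii < 8; ++ii )
{
    feedback[ii] = dsp_iir( feedback[ii], damp_cc, damp_ss[ii] ); // Darken each return
}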