Digitizing multichannel CV with a MAX11300

I’m a bit green and finding my way around basic EE so please go easy on me, I am here to learn :slight_smile:
I have an idea for a project that converts many channels of CV from Euro sources into a realtime OSC data stream. From what I have found the MAX11300 looks pretty amazing; up to 20 ADC channels, SPI interface, and good native input ranges. I would like to use this with a Teensy 4.1 and use its onboard NIC to spit out OSC over the network.
Some questions:
-How many channels can I sample simultaneously with the Teensy? I don’t need a sample rate anywhere near audio, maybe 100-400Hz.
-It has switchable modes to handle bipolar and unipolar voltages. What would the best strategy be to switch between the two: multiplexed hardware switches on the panel, or setting it in software (manually or auto-detect)? I would use the 0-10V and +/-5V range options. I will also need some scaling to handle +/-10V in bipolar mode, and input protection that works for both.

Frankly, the MAX11300 doesn’t seem to do more than the ADC/DAC section built into any modern microcontroller + some op-amps (which you might need anyway).

The most important question: what resolution do you need?


  1. maybe the ADC built into the Teensy is good enough for what you want to do.
  2. if you don’t mind losing a bit or two of resolution, the bipolar/unipolar thing is not an issue – just put an op-amp with a gain of 0.5 (if you go with the MAX11300) or 0.15 (if you go with the Teensy’s built-in ADC), and a full +/- 10V is now acceptable.

The goal is to convert the incoming CV to an FP16 value, scaled and offset in software to a float between 0 and 1.

This is not the resolution I’m talking about… If the input voltage changes by steps of 1mV, should you be able to measure this? What kind of noise are you willing to tolerate?

Doing some quick research, I think I need a 0.153mV LSB for 16-bit resolution at a 10V input range.
Regarding noise, lower jitter and smoother quantization is better, I would guess.
The end OSC values will be used to control parameters in a real time rendered video environment such as XYZ spatial position and other things related to motion graphics running at 60 fps max if that helps any…

The MAX11300 has a 12-bit ADC, so it won’t match your precision requirements. 0.153mV is a fairly tiny voltage; I have never personally designed anything that precise. If we were talking about acquiring note CVs, it would mean being sensitive to a change of about 1/5 of a cent!

Another way to look at it:

  • If your CV is 1.24V (or 1.26V) and your device reads it in both cases at 1.25V, how bad will your application suffer?
  • What kind of SNR will you expect on your input signals? If it’s 70dB, 12 bits are good enough - all the remaining bits will be noise anyway. Hint: an actual Eurorack system is noisy.

It is possible I may be asking for too much precision then.
I think 1mV might be fine given your example. What is commonly used to read CV into Mutable modules for parameters that are not 1v/oct?
I am beginning to understand what you are asking about noise. The inputs are almost certainly all going to be from Eurorack. The idea is to send LFOs, sequenced CV, random voltages and triggers/gates out over OSC to give a more organic feel to animations by massaging their parameters, so it’s more about the character of the signal than absolute precision, if that makes sense.
An example would be sending the outputs of Marbles into a motion graphics app and have that data give life in real time to the flow speed, ripple amount and bubble triggers in an underwater visual effect instead of key-framing those parameters on a timeline.

Another way to look at it - You use your CV to animate the X, Y position of an object in a 4K image. Would it matter to get sub-pixel accuracy? If not, then you need less than 4000 values, so a 12-bit resolution is enough.

Many popular modules (Rings, Elements, Clouds…) use the noisy 12-bit ADC built into the microcontroller, whose 2 lowest bits are trash, so it’s effectively closer to 10-bit resolution. But the readouts are filtered, and further in the processing chain they are linearly interpolated.


Sub-pixel accuracy is not necessary, so let’s go with 12 bits.
That being the case, could I get away with using 12 onboard ADC channels sampling at 12 bits, doing software scaling and interpolation, and running a TCP/IP stack and basic web server (for OSC config) on a Teensy 4.1, with a high enough sample rate?
I think going along the lines of how you handle CV inputs might make the most sense.

I think so!

It doesn’t look like a very CPU-intensive task.

Your graphics run at 60 fps so there’s no need to know the values of the CV at a faster sample rate than 60 Hz. Obviously, you can sample at a higher rate (say 240 or 480 Hz), and average (or any kind of digital low-pass filter) several adjacent samples to eliminate noise and get a better resolution.

Well, that sounds a lot better then! I would use as high a sample rate as I can get away with while still being able to reduce noise with averaging, so 480Hz sounds nice, as my video FPS could be as high as 120 (for VR goggles, for example).
Now, to handle unipolar and bipolar Eurorack voltages going into a Teensy analog input, do I still need a physical switch that changes the bias and gain of the buffer op-amp, or can this be done in software by ‘auto-sensing’ the incoming voltage type?

From an electrical point of view, there’s no difference in the handling of a unipolar or bipolar voltage. You could design an input circuit that accepts, for example, a range of -10V to +10V, and configure things in software so that 0.0 is mapped to 0V (or -10V, or -5V) and 1.0 is mapped to 10V (or 5V).


Great, thank you for sharing your wisdom and experience. I especially appreciate your understanding of the video end of things that I am trying to bridge together; it really helps, as some folks I’ve been asking don’t quite get it.
I will take a further look into buffering input circuits and get back to you if I have any questions.
To confirm a few things:
-100k input impedance is proper for Eurorack.
-I need a 0.33 gain for the Teensy ADC input to handle a 10v range.
-Anything else? Other input protections I am not thinking of?

0.33 gain if your range is 10V (for example -5V to +5V, or 0V to 10V), or 0.166 gain if your range is 20V (-10V to 10V).

If your 100k input resistor goes to the virtual ground of an op-amp, a voltage in the kV range (ESD) will result in currents in the 10mA range; this is tiny and nothing will be damaged.

Once you decide on your sample rate, you can adjust the cutoff frequency of the filter in your input stage.

disclaimer: i have only an extremely basic understanding of electronics, and it feels blasphemous to try and offer a different opinion than someone who has obviously mastered all this.

that being said, i had pretty good experiences with the MAX11300 dev board. i hooked it up to an axoloti board (an STM32F4-based music MCU) and it worked like a charm. the big advantage compared to an MCU’s onboard ADC/DAC is that you get 20 pins that can be configured as either DAC or ADC via SPI, per pin. the voltage range can also be configured per pin, in convenient eurorack-compatible ranges: 0/5v, -5/+5v, 0/10v.

yes, the resolution is rather low at 12 bits, but it was good enough to get 1v/oct accurate enough to control a VCO. granted, i only tested this over a range of maybe 4-5 octaves, and only used my ears as a measuring tool. i did the maths at some point; iirc it came down to maybe 3ct per step/bit?
i don’t think i ever actually measured the noise, but i am pretty confident it will be better than your run-of-the-mill MCU ADC.

but yah. the big advantage here being that you actually don’t ned any amping/attenuator circuitry. there’s software implementation for most MCU worlds (ie axoloti, teensy) and using the dev board it is kind of fool proof to set up, not needing to design any PCBs or even basic circuits (unless you want input/reverse/overload protection, that is, hahaha)