Eurorack firmware code questions


#1

Hey,
When I try to dive into the firmware of one of the MI modules, there are always things that feel outside the realm of my current understanding, and I haven't really homed in on the C++/embedded firmware resources to get answers when those pop up, like I have with, say, JavaScript. So I have a hard time making progress with ideas I have for alternate firmwares.

So, I’m starting this thread as a place to get answers to two questions:

  1. How does a specific block of code work?
  2. How does someone who is part of the C++/embedded device community find answers on their own when they come across things they don't know or understand? Links to things in this forum and outside of it are welcome!

And a note, my personal goal with this particular thread is learning/understanding, not as much applying what is learned to write new code.

So first question is this block from Warps’ ui.cc:

if (modulator_->bypass()) {
    uint16_t red = system_clock.milliseconds() & 4095;
    uint16_t green = (system_clock.milliseconds() + 1333) & 4095;
    uint16_t blue = (system_clock.milliseconds() + 2667) & 4095;
    green = green < 2048 ? green : 4095 - green;
    red = red < 2048 ? red : 4095 - red;
    blue = blue < 2048 ? blue : 4095 - blue;
    leds_.set_osc(255, 255);
    leds_.set_main(red >> 3, green >> 3, blue >> 3);
}
  1. What's the bypass() conditional about? Is it saying, "don't do this LED stuff if the audio is being processed on this 'thread'"?

  2. How do clock milliseconds map to rgb colors of the LED? My assumption is something like PWM dimming of an led (or three separate channels of an led?).


#2
  1. Yes. The conditional is true if bypass() returns a value that is not equal to 0. That means: if the modulator is bypassed, the block of code is executed.
  2. system_clock.milliseconds() returns a counter that steadily increases each millisecond. The & 4095 masks it into the range 0 - 4095 (12 bit), like a modulo operation. Each colour is offset by a third of that range. The conditional stuff afterwards ((condition) ? yesCode : noCode) makes the colours ramp up and down in an 11-bit range. Finally the values are shifted down to 8 bits, which is the format expected by the function that sets the LED to the desired value. (See the sketch below.)
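
To make that concrete, here's a stand-alone sketch of the ramp-to-triangle fold (the function and variable names are mine, not the actual ui.cc code):

#include <cstdint>

// t is a free-running millisecond counter, phase_offset shifts each colour.
uint8_t TriangleBrightness(uint32_t t, uint32_t phase_offset) {
  uint16_t ramp = (t + phase_offset) & 4095;        // 0..4095 ramp (12 bit)
  uint16_t tri = ramp < 2048 ? ramp : 4095 - ramp;  // fold into 0..2047..0
  return tri >> 3;                                  // scale down to 0..255
}

So red would be TriangleBrightness(ms, 0), green TriangleBrightness(ms, 1333), blue TriangleBrightness(ms, 2667).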

Regarding things I don't understand: usually I simply Google my error codes and get a helpful answer. If you come from JavaScript, it's important to understand the build stages, especially the difference between a compiler error and a linker error. Also, on an embedded platform there is no such thing as a thread, at least not until you use some sort of operating system like ChibiOS or FreeRTOS. There is only a processor, running a single instruction at a time, interrupted by interrupt handlers (ISRs) on certain hardware events (e.g. new data arrived on a serial port, timer events, etc.)
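
A minimal illustration of that compiler-vs-linker distinction, using a hypothetical file: it compiles cleanly, because the compiler only needs the declaration, but the build fails at the link stage with an "undefined reference" error because no object file ever defines Init().

void Init();   // declaration only - enough for the compiler

int main() {
  Init();      // call to a function that is never defined anywhere
  return 0;
}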

Understanding someone else's code is usually a matter of digging in and finding human-readable variable names or comments that give you an idea of what's happening. That's not much different from any other code, really.

Things written in all capital letters are usually preprocessor definitions. In the world of embedded hardware, those will often contain fixed addresses that control hardware functions of the processor. E.g. there are certain addresses allocated to different hardware modules inside the processor (peripherals) like serial interfaces, timers, AD/DA converters, etc. Writing to those changes the behaviour of that peripheral or starts some sort of event like an AD conversion. Googling those names will usually give you many hints as to what they do. The best place to look is the programming reference manual of the processor. Those are often 1000+ pages long but very clearly structured, and they contain all the information you need to understand the functions tied to those specific addresses.
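
To give a flavour of what those all-caps definitions look like, here's a hedged sketch in the style of ARM/STM32 vendor headers (the addresses and names are illustrative, not copied from any real header):

#include <cstdint>

// Illustrative only: a memory-mapped GPIO register exposed through macros.
#define GPIOA_BASE  0x40020000UL
#define GPIOA_ODR   (*(volatile uint32_t*)(GPIOA_BASE + 0x14))

// Writing to it changes the hardware state of the pins, e.g.:
// GPIOA_ODR |= (1u << 9);   // drive pin 9 of port A high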


#3

Cool, that all makes a lot of sense, except why they need to oscillate. I need to dive in and understand the leds_ "object" (or whatever it is in C++) more, and I think that will probably become clearer.

Also, on an embedded platform there is no such thing as a thread, at least not until you use some sort of operating system like ChibiOS or FreeRTOS. There is only a processor, running a single instruction at a time, interrupted by interrupt handlers (ISRs) on certain hardware events (e.g. new data arrived on a serial port, timer events, etc.)

Yes, I think this makes sense.

Looking around, it looks like we have UI::doEvents() and UI::poll() as the two main "looping handlers" (doEvents for user interaction, poll for things that should happen at a regular interval and are not based on user input, regardless of what mode things are in and whatnot).

The other UI:: functions seem to be more about handling specific cases of things, or just things that need to be namespaced to the UI package for code structuring reasons.


#4

Very interesting, this makes complete sense but would have never been a place I’d go look without you telling me.


#5

Polling typically means to check for changes. You poll if you don't have an interrupt handler available for the thing you want to check, or if that thing is not so time critical that an ISR would be necessary. UI stuff doesn't have to be super tight in its timing (if you poll at a 1ms rate, that will still feel instantaneous to a user). Polling allows you to do those things without blocking other important tasks such as rendering chunks of audio or managing data transmissions.
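
A minimal sketch of what polling a single button at that 1ms rate could look like (the names are made up for illustration):

bool ReadButtonPin();          // hypothetical raw pin read, defined elsewhere
bool previous_state = false;

void PollButton() {            // called once per millisecond
  bool state = ReadButtonPin();
  if (state && !previous_state) {
    // Rising edge: the button was just pressed.
  }
  previous_state = state;
}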

Many processors will have priorities in the ISRs so that one ISR can be interrupted if there is an event for a more critical ISR. But why deal with the hassle if polling is enough? By polling the UI events you can be sure your audio rendering ISR will be the highest priority possible and gets the full attention when needed.


#6

Wow, that’s a lot of stuff to cover! Let’s start from the beginning…

bypass()

The bypass() conditional has nothing to do with threading.

modulator_ is the class in which all the DSP code (for example computing the ring modulation between two signals) is contained, along with the state of all the DSP related things (for example: the active internal oscillator waveform).

What we’re checking here is if the DSP engine is in a special “bypass” mode. In this hidden mode, the two audio inputs are passed as is to the outputs, for purposes of factory testing. Another “testy” thing happening in this mode: the big knob glows in red, green, blue successively (for checking that the 3 components of the LED are all working).

milliseconds()

milliseconds() returns the number of milliseconds elapsed since the module started. It is manually incremented by a call to system_clock.Tick() (thus, something should be responsible for calling system_clock.Tick() 1000 times a second; we'll soon see what it is). milliseconds() & 4095 is a value that will ramp from 0 to 4095, at the rate of 1 increment per millisecond.
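
A minimal sketch of the idea (not the actual stmlib implementation):

#include <cstdint>

class SystemClock {
 public:
  void Tick() { milliseconds_ = milliseconds_ + 1; }  // call this 1000x per second
  uint32_t milliseconds() const { return milliseconds_; }
 private:
  volatile uint32_t milliseconds_ = 0;  // written from an ISR in real firmware
};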

So the first 3 lines are generating ramps counting between 0 and 4095, with a 120° phase difference between them (well, 117°… sloppy me knew the value of 4000 / 3, not of 4096 / 3, so I just typed 1333 and 2667 :smiley: ).

These ramps are then folded into triangles, counting from 0 to 2047 then down to 0. Divide by 8 (shift right by 3) to get values between 0 and 255. Then you get the R, G, B components to write to the main tricolor LED. We also set the bicolor LED to full brightness on both the R and G component.

UI::Poll() vs UI::DoEvents()

Ui::Poll() contains all the code that is called on a regular basis to scan the state of buttons, encoders (and sometimes pots) and transmit information to the LEDs and displays.

Who calls Ui::Poll()? Either the interrupt handler for a 1kHz timer, called SysTick_Handler() and defined in the main module file (for example, check rings.cc : https://github.com/pichenettes/modules/blob/master/rings/rings.cc#L68). This low-level timer is a facility offered by all ARM CPUs - and is often used by RTOSes for context switching.
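
Simplified, that first arrangement looks like this (a sketch only - see the linked rings.cc for the real thing; ui is the global Ui instance from the module's main file):

extern "C" void SysTick_Handler(void) {  // ARM core timer, fires 1000x per second
  ui.Poll();   // scan buttons/pots, refresh LEDs, once per millisecond
}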

Or the caller of Ui::Poll() is the interrupt handler that is invoked whenever a buffer of samples is ready to be written to/read from the codec. That's the case in warps.cc; check the FillBuffer routine: https://github.com/pichenettes/modules/blob/master/warps/warps.cc#L63
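
Roughly the shape of that second arrangement (a heavily simplified sketch, not the actual warps.cc signature; ui, modulator and ShortFrame are the globals/types from the Warps sources):

void FillBuffer(ShortFrame* input, ShortFrame* output, size_t size) {
  ui.Poll();                               // UI scanned once per audio block
  modulator.Process(input, output, size);  // the actual DSP work
}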

Anecdote time: Warps’ buffer size used to be 96 samples, with a sample rate of 96kHz, so FillBuffer() was called once per millisecond, my preferred rate for UI polling. I later reduced the buffer size to 60 samples to improve latency a bit, which means that Ui::Poll is now invoked at a rate of 1.6kHz, so everything that relies on milliseconds() is actually running at 1.6x the expected speed :slight_smile: Doesn’t really matter, but it would have been more correct to move Ui::Poll() to a SysTick_Handler() to get the right rate.

Back to Ui::Poll()… Since it is called 1000 times a second, it shouldn’t attempt to do anything that takes more than 1ms. Otherwise… stack overflow (if it always takes more than its allotted time) or audio glitches (if it occasionally takes more than expected)!

So whenever we detect a button press in Ui::Poll(), we just shove an event in a message queue (the Event object stores everything we need to know: what kind of control? which control? for how long has it been pressed?) and we just get out. We don’t attempt to do the actual thing the button press should do (like changing the state of the module, saving things, sending a SysEx over MIDI…), because it might actually take more than 1ms to do this thing!
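
A minimal sketch of that split (illustrative names; the real firmware has its own small queue type, std::queue here is just a stand-in):

#include <cstdint>
#include <queue>

struct Event {
  uint8_t control_type;   // button? encoder? pot?
  uint8_t control_id;     // which one?
  uint32_t data;          // e.g. how long it was held
};

std::queue<Event> event_queue;   // stand-in for the firmware's queue

// Interrupt context (think Ui::Poll): record the event and get out quickly.
void OnButtonReleased(uint8_t id, uint32_t press_duration_ms) {
  event_queue.push(Event{0 /* switch */, id, press_duration_ms});
}

// Main loop context (think Ui::DoEvents): free to take as long as needed.
void ProcessPendingEvents() {
  while (!event_queue.empty()) {
    Event e = event_queue.front();
    event_queue.pop();
    // ... change module state, save settings, send a SysEx, etc. ...
  }
}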

When is it going to be handled?

Whenever the processor is done with FillBuffer() (which contains the bulk of the audio processing code), and more generally whenever there is no interrupt to service, it returns to the main loop. You can think of this loop as what the CPU is left to do whenever all the higher priority stuff is done (LED blinking? buttons checked? codec fed with audio? now let's check for UI stuff to do!). It is in this loop that we call ui.DoEvents(). It checks for unread messages in the event queue, and calls the required methods to handle button presses or other types of events. With this organization of the execution time, the code handling the button presses is free to take as long as it needs: if necessary, it might be interrupted multiple times whenever something more urgent needs to be done (like feeding audio to the codec).
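
Conceptually, the main loop is just this (simplified sketch):

int main() {
  // ... hardware and driver initialisation ...
  while (true) {
    ui.DoEvents();   // only runs when no interrupt (audio, timer) needs servicing
  }
}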

Debate: polling vs interrupts for UI stuff

This code organization is not the only possible approach. For example, it is possible to ask the MCU to trigger an interrupt whenever there’s a change on the voltage at the pin hooked to the button. So we don’t have to check the state of the button continuously - we only get a notification when its state changes. Sounds cool, why am I not using that?

  1. Easy debouncing. My preferred debouncing technique (shove the read bits into a shift register, wait for a full string of 00000000 or 11111111 to decide that the button has reached a stable state; see the sketch after this list) works better when the button is read continuously. Debouncing in ISRs… you need to start measuring intervals between calls to the ISR…
  2. Habit from desktop synth projects with many buttons, in which the buttons are all read by shift registers. In this case the state of all buttons is streamed bit by bit to a single CPU pin, and a voltage change at this pin doesn't mean anything about the state of a specific button.
  3. There are sometimes limitations on the number of interrupts one can subscribe to, or on the MCU pins that can receive interrupts. Not relying on interrupts gives me more freedom for deciding which signal is connected to what. Since buttons are not critical traces (low speed signals, no special function required), they tend to be assigned to whatever pins are left after all the important peripherals are hooked to the processor.
  4. When stuff happens in too many interrupt handlers, it gets hard to troubleshoot audio glitches/unregistered button presses. On all my modules, I can assume that Ui::Poll() runs in constant time and uses about 0.2% of the CPU. Then the only thing I have to check for is that FillBuffer() runs in less than 99% of its allotted time, and we’re left with a tiny bit of CPU for Ui::DoEvents(). You can often find something called PROFILE_INTERRUPT in my projects. When enabled, this toggles a processor pin at the beginning of FillBuffer() and at its end. I can monitor this pin with my scope, the period of the waveform is the latency (buffer duration), and the pulse width gives me the CPU time consumed by the audio processing code. I can zoom on the falling edge and see if there are combinations of settings that increase the CPU use. I’ve shown this to a couple of people coding plug-ins or audio software, they were all insanely jealous :slight_smile:
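
A minimal sketch of the debouncing technique from point 1, assuming the button is sampled once per Ui::Poll() call (the names are illustrative):

#include <cstdint>

bool ReadButtonPin();            // hypothetical raw pin read, defined elsewhere
uint8_t debounce_history = 0;    // last 8 raw reads, newest in bit 0

void PollButton() {
  debounce_history = (debounce_history << 1) | (ReadButtonPin() ? 1 : 0);
  if (debounce_history == 0xFF) {
    // Eight consecutive "high" reads: the button is stably pressed.
  } else if (debounce_history == 0x00) {
    // Eight consecutive "low" reads: the button is stably released.
  }
}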

#7

Ah! So the code block I originally posted is all about specific behavior for this testing mode. And the ramping/ms stuff is all about getting the three colors to show sequentially for testing purposes. (Note to self: next time I see thing->function(), I should go check out thing's definition and not hand-wave off what I think function() might mean…that would have gotten me closer to understanding).

Gotta look that up but makes sense.

UI::Poll vs UI::DoEvents…

All of that makes good sense and was along the lines of what I was thinking in terms of priority. Lots of details here that I had hand-waved over.

Use of event broadcasting & passing an object to do stuff is a common pattern in the AngularJS code I write/work with regularly. Note that, at least from personal experience on that side of things, this passing is done to handle the event in a more “appropriate”/structured place (contrived example: a specific JS service for receiving websocket events and broadcasting them to the right place, and some JS code specifically for a percentage bar’s ui is the one that handles the message), rather than for timing purposes…but I think the pattern is pretty much the same!

polling vs interrupts for UI stuff

Yep. I assume it is fairly easy to tell if the Poll code is overrunning its timing. It can be pretty much isolated, if you're not doing a lot of imperative stuff within the Poll loop and you set things to your DSP-bypassed mode (removing the presence of other processes/possible weirdnesses that could be going on?).

When enabled, this toggles a processor pin at the beginning of FillBuffer() and at its end. I can monitor this pin with my scope, the period of the waveform is the latency (buffer duration), and the pulse width gives me the CPU time consumed by the audio processing code. I can zoom on the falling edge and see if there are combinations of settings that increase the CPU use.

Very neat! Can this pin be accessed on a manufactured module somehow?


#8

Can this pin be accessed on a manufactured module somehow?

Let’s see where PROFILE_INTERRUPT is used in Warps’ code.

Let’s search for where TIC and TOC are defined…
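
Roughly, they are conditional macros that raise and lower a dedicated debug pin - a hedged sketch of the general shape (the helper calls here are illustrative; the real macros poke the GPIO hardware directly):

#ifdef PROFILE_INTERRUPT
  #define TIC DebugPinHigh();   // hypothetical helper: raise the debug pin
  #define TOC DebugPinLow();    // hypothetical helper: lower it again
#else
  #define TIC
  #define TOC
#endif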

Let’s identify which pin they affect:

So it’s on pin 9 of GPIO port A… Let’s check the schematics to see what this pin is connected to…

A signal named TX, mmm… what is connected to that? Where is it located on the board?

Found it!

Note that normally, this pin is used for another purpose: sending serial data to a computer during factory testing. The simple protocol used to communicate with the factory testing machine is implemented in Ui::HandleFactoryTestingRequest(). (The factory testing program is a Linux PyGTK GUI that prompts the operator to turn knobs, push buttons, or send signals to the inputs, and verifies that the module correctly registers all these actions…)


#9

Okay nice!

Let me take it one step further.

If I were able to solder up a 3.5mm TS jack -> female connectors for the TX pin (tip) and GND (sleeve), pop the 3.5mm jack into the audio in of my computer, and fire up some free oscilloscope software, I could power on my module and watch the pulse in the oscilloscope get larger as I, say, put audio-rate modulations into the algo/timbre jacks?

EDIT: provided I make PROFILE_INTERRUPT defined in the firmware running on my module


#10

This output is disabled on the production firmware. You need to compile the firmware with PROFILE_INTERRUPT defined - only then will it work!

The connections you describe will work. You’ll notice a longer pulse for some algorithms that are more computationally expensive to process. Within an algorithm, modulating TIMBRE has no effect on the CPU. Audio rate modulations don’t make things worse - but I can see your thinking behind it. In the past I’ve written “memoized” code that looks like this:

if (cv_has_changed) {
  // Recompute some complicated expression involving the CV value
} else {
  // Use the value from the previous call
}

It’s a bad idea for two reasons:

  • In the worst case, you are actually using more CPU than with the dumb code that recomputes the expression every time (a fraction of the CPU is spent just tracking whether the CV has changed).
  • It’s more difficult to monitor CPU usage during testing, because it becomes context dependent.

In the end, it doesn’t matter if the module uses 50% or 98% of the CPU when the CVs are idle.


#11

Wow, interesting! Thanks for sharing all this info, I learned a lot.


#12

@pichenettes I have done some looking into the FTDI adapter mentioned in the hacking resource section of warps, and I think it may also be able to handle grabbing the signal needed to see the profile interrupt pulse.

That being said, I'm not sure of the throughput of the FTDI -> USB connection, nor am I sure if the Arduino IDE will "recognize" Warps (so that I can use its serial plotter).

It also seems like the RX and TX pins on the FTDI adapter are switched relative to Warps' pins (screenshot above), so some sort of swap of those would need to be done (and not a direct connection of the adapter). Side note: it might be useful to update the hacking resources on the site so that this is clear to people going down this route.

^ Seems like setting this all up might be more trouble than it's worth to explore at this stage, but it's super interesting and would be useful should I get to the point where I need to test performance of the audio buffer (and at that point maybe I should buy/borrow a real oscilloscope!). Also, my 3.5mm TS cable solution presumed that there's a free software oscilloscope that reads macOS/iOS audio inputs, but I can't find one.


I’m thinking the best next steps in terms of learning would probably be to go the JTAG route so that I can flash the module with updated firmwares quickly to test out code changes and see what they do. Does that sound right?


#13

I would agree that a JTAG adapter is incredibly helpful if you plan to dive deeper. The Olimex ones are supported straight away. Take a look at the makefiles, where the default programming adapter is defined. That gives you a hint as to which ones will work without changes.


#14

I've never tried using an FTDI USB<->serial adapter to just read the state of a digital pin. A general-purpose I/O Swiss army knife is the Bus Pirate. I would be surprised if it doesn't have decent software support.

It's normal that something labelled TX on one side is labelled RX on the other (just like you plug the MIDI out of a sequencer into the MIDI in of a synth). This adapter is exactly the one I use when reading/writing serial data to Warps.


#15

Ah yeah, makes sense.


It seems like the Bus Pirate is pretty awesome. It can act as a JTAG interface, an FTDI interface, and you can get what looks like a useful probe set to hook to multimeter leads and test things. I think this is the direction I will go, thanks for the suggestion!


#16

Note that after some more research, the Bus Pirate 2x5 pinout may not match the JTAG pinout exactly… it isn't immediately clear from googling what exactly these things map to:


The Bus Pirate manual also mentions that the Bus Pirate is not meant to "replace" a proper JTAG debugger in terms of speed.

From some reading, it seems like the goal for tools like the Olimex JTAG debugger is so that you can "step through" code running on the embedded device using GDB. Do y'all do stuff like that?


#17

The connector on the bus pirate is not a JTAG connector. It’s a kind of multi-purpose thing… there must be adaptors to JTAG.

I sometimes use the "step through" feature allowed by my JTAG debugger… but quite rarely… DSP code is not really amenable to this kind of debugging technique.