Poly Chain Details?

Ok it seems I am the first one too curious…

The polychain is obviously coming along well (soundcloud), are there any beans to spill on this one?

In my mind these two things would be the ultimate:

  1. The slave units somehow send tweaks back to the master, so you can leave a different page selected on each unit and enjoy LCD-labelled knob-per-function luxury!
  2. Multi-timbrality could be a can of worms… But perhaps a simple ‘unchain’ mode where the master simply sends program changes and notes (with some note or velocity filter parameters) to the slaves? The Shruthi-1 hybrid drum machine would then be born :slight_smile:
  3. Unison. For huge-ness.

I don’t know if such things are possible. But a six-voice, 24-knob version would be a beast of a synth… Just imagine.

I think I would build mine with a passive summing mixer to combine them and a tasty DIY preamp for make-up gain and even more warmth. Yum.

1. I don’t see any obvious way to make it happen. The communication, for the moment, is from master to a chain of slaves; so each slave’s IN and OUT is taken. As for the master, its IN is connected to the master keyboard, and its OUT is connected to the first slave. So if you want a slave to report back to the master, you need a MIDI merge box that will merge the OUT of each slave with the master keyboard. And this wouldn’t solve the problem at all, since you’ll see knob change messages travelling around the loop. A solution to that would be to “watermark” MIDI messages so that a unit recognizes it’s receiving data it has sent and a loop is forming - but MIDI is pretty sparse/compact and I don’t see how to do this watermarking without a loss of resolution.
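To illustrate the resolution problem: a hypothetical watermark that steals two bits of a 7-bit MIDI data byte for a device id leaves only five bits for the value. All names here are made up for the sketch, not anything in the firmware:

```c
#include <stdint.h>

/* Hypothetical scheme: reserve the top 2 bits of a 7-bit MIDI data
   byte for a device id (0-3), leaving 5 bits for the value.
   128 parameter steps collapse to 32. */
uint8_t watermark_pack(uint8_t device_id, uint8_t value) {
  return (uint8_t)(((device_id & 0x03) << 5) | ((value & 0x7f) >> 2));
}

uint8_t watermark_device(uint8_t packed) {
  return packed >> 5;
}

uint8_t watermark_value(uint8_t packed) {
  /* Only 32 distinct values survive the round trip. */
  return (uint8_t)((packed & 0x1f) << 2);
}
```

Round-tripping a value of 100 works, but 101 also comes back as 100 - two bits of knob resolution are simply gone.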

2. Multitimbrality is trivial… Simply assign a different MIDI channel to each unit or chain of units.

3. Unison is trivial… Simply chain the units with the “full thru” mode and make them listen to the same channel. There are indeed 2 “thru” modes on the Shruthi - one which forwards everything, and one which forwards everything + adds parameter sync commands. The former is good for “dual” modes; the latter is good for unison.

Regarding 1, here’s another broken idea: Assemble 3 or 4 “control panel” boards without any voice board (only the UI and MIDI matter here), send them to a MIDI merger, then send that to a polychain. So you can keep each of the voice-less control panel boards on a different page, tweak them, and get them to send NRPNs for parameter updates. The drawback: when you change page on these control boards, you won’t see the updated parameters. But as an extra set of knobs it could work!

Regarding watermarking

@smrl: this won’t work. The solution described here is to make sure that the last device in the chain won’t forward parameter sync commands. This is not what we want if we want parameter sync between all devices. If device 2 in a 4-device chain is touched, we want this to be forwarded to device 3, device 4, and then back to 1. So we can’t simply say “nothing will go past device 4”. If you look at MIDIlink, there’s always a sense of directionality - information flows from left to right but never from right to left.

OK, I see. However, I don’t see why this idea can’t be extended… If they were connected in a loop, then it doesn’t matter which is 1st, which is 2nd, and which is 3rd, etc. They’d each uniquely watermark their data. You’d have multiple loops and each device would be aware of its position in the loop, and hence aware of which messages to forward and which not to (or it could simply determine that the parameters that it’s receiving originated from itself and discard). You’d also need a way to merge the MIDI data into the loop. This could apply, I believe, both to the note data and the device parameter data.

The numbering/interpreting schema could perhaps also be used to determine where in the polyphony stack the devices were located?

Make sense? Still way off base?

I’m not saying this is a great idea. Seems pretty inelegant to me, and easy to mis-configure, but I’m just throwing this in for the sake of discussion…

Ah, great ideas!

Do you think it would be practical to save the MIDI channel and polychain mode per preset rather than globally? That way they could be switched instantly live, how slick…

It wasn’t until I daydreamed about it today that I realised there are a number of things to consider with polychaining notes, much more than ‘play and filter the first note you get’.

Does the logic know how many are chained, with each unit counting to take its turn? For 6 in a chain I guess each unit would need to play one note then pass 5 on to make sure the oldest note is the one cut. Actually maybe it would be different to that in a non-loop… More daydreaming needed :)

The watermarking looks very advanced, if even feasible, to this newb, but I can understand non-chainers would not like the extra stuff in the stream… It might get weird if bytes get dropped from gushing MIDI.
I wonder, if they were looped (MIDI out of the last merged to MIDI in of the first), could you stop the NRPN loop by having each Shruthi kill NRPNs at the input for any parameter on the selected page? The idea is that each unit would be set on a different page anyway, so the change should go through them all and stop where it started…?

I’ll also keep mulling over some integrated note filtering to allow multitimbrality on a simple MIDI keyboard (even one in the same case - it would be a beast of a synth, remember). But I’m not sure if this would be a bit too complex for a simple rocking synth. No harm in mulling though :slight_smile:

The connected mainboards is a cool idea too, though for dual/unison/multi capabilities the master would probably need to be super super smart? It sounds like polychaining at all requires some decent smarts as it is…

Cheers for humouring my ideas guys.

The “do not forward if it matches the parameters I currently have” approach does not work. Devices can be in different states when the chain is formed; and just because a device already stores the right parameters doesn’t mean that other devices downstream do not need the update.

In the midibox case, since the information flows only in one direction, there’s only one mark to add - whether the message originates from the chain or from the outside. This can be done by inserting start/stop messages in between, which are rather cheap to send (1 byte). In the case of a loop of parameter updates, the information to add is more complex, since each NRPN must be tagged with the id of the device from which it originates. Doing so in the least significant bits of the parameter value causes a loss of resolution. The only way I see is to prefix each NRPN with a special CC message indicating from which device the NRPN that follows originates. Not only will this eat some of the MIDI bandwidth and add more latency (an extra 3-byte message before every parameter sync message will add a 1ms delay!), but the Shrut(h)i are really tight in terms of MIDI processing. Indeed, the major difference between my stuff and the Midibox devices is that they only handle MIDI, while I have 90% of the CPU (or probably more) eaten by waveform synthesis. So I’d rather keep the amount of MIDI information I emit and handle minimal.
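For reference, the 1ms figure follows directly from the MIDI baud rate: 31250 baud with a 10-bit frame per byte (start + 8 data + stop) gives 320µs per byte. A quick sketch of the arithmetic:

```c
/* MIDI runs at 31250 baud; each byte occupies 10 bits on the wire
   (1 start + 8 data + 1 stop), i.e. 320 us per byte. */
double midi_us(int n_bytes) {
  return n_bytes * 10.0 * 1000000.0 / 31250.0;
}
/* midi_us(3) gives 960.0 us: a 3-byte CC prefix on every parameter
   sync message costs just under 1 ms each time. */
```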

@Adam Jones.

Polychaining is not implemented with a “play one note out of n and forward the rest” the way you describe it. It’s the kind of thing that is easier to explain with pseudo-code than with words, but I’ll give it a try anyway…

The first unit in the chain (let’s say a chain of 6) has a 6-voice polyphonic voice allocator. It’s really the same gizmo as in a polysynth. It stores a linked list of 6 voices. Each voice has a numeric id; an active/inactive status; and the MIDI number of the note currently played. The voices in the list are sorted by least recent use. When you trigger a note, it searches for the least recently used inactive voice; and if none is available, steals the least recently used active voice. The active/inactive status is updated, and the list is reordered. If the id of the voice that was picked is 0, the note is triggered locally; otherwise, the MIDI note on message goes through.
When you release a key, the master searches for the id of the voice playing this note, marks the voice as inactive, and moves it to the tail of the list. If the id of the voice that played the note is 0, it also triggers the release phase of the envelopes. If it is not 0, the MIDI note off message is forwarded.

The second unit in the chain works with the same data structure, but with 5 voices…

The last unit in the chain directly obeys any note on/note off it receives. It doesn’t even go through the note stack (in normal monophonic mode, there’s a note stack so that if you hold C, play E, then release E, it goes back to C).

There’s one more complicated bit: the algorithm tries to reallocate a retriggered note to the same unit that played it previously - so it behaves more like a piano or marimba than a cello (where the same note can be played on different strings). There are heated debates about this voice allocation strategy, let’s not enter into that. People smart enough to rant about those kinds of details will surely know which 6 lines of code to comment to remove this behavior.

So basically, it’s really the same algorithm as in a polysynth, but recursive (if stuff should happen on voice 0, do it locally; if not, forward it to an (n-1)-voice polysynth). Still, because the first unit in the chain knows the state of the world for the 6 voices, it doesn’t look like a “distributed” thing. Indeed, if you think about it, the voices’ linked lists are really the same for each unit, with one voice removed at each step in the chain.
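The allocator described above can be sketched roughly like this. This is a minimal illustration, not the actual firmware code: it uses an array kept in least-recently-used order instead of a real linked list, omits the same-note reallocation refinement, and all names are mine:

```c
#include <stdint.h>

#define NUM_VOICES 6

typedef struct {
  uint8_t id;      /* position in the chain: 0 = play on this unit */
  uint8_t active;  /* currently holding a key */
  uint8_t note;    /* MIDI note number */
} Voice;

/* voices[0] is the least recently used, voices[NUM_VOICES-1] the most. */
static Voice voices[NUM_VOICES];

void init_voices(void) {
  for (uint8_t i = 0; i < NUM_VOICES; ++i) {
    voices[i].id = i;
    voices[i].active = 0;
    voices[i].note = 0;
  }
}

/* Move the entry at index i to the tail (most recently used slot). */
static void touch(uint8_t i) {
  Voice v = voices[i];
  for (; i + 1 < NUM_VOICES; ++i) voices[i] = voices[i + 1];
  voices[NUM_VOICES - 1] = v;
}

/* Returns the id of the voice assigned to this note.
   id 0 means "trigger locally"; anything else means "forward". */
uint8_t note_on(uint8_t note) {
  uint8_t i;
  /* Least recently released inactive voice, if any... */
  for (i = 0; i < NUM_VOICES; ++i) {
    if (!voices[i].active) break;
  }
  /* ...otherwise steal the least recently used active voice. */
  if (i == NUM_VOICES) i = 0;
  voices[i].active = 1;
  voices[i].note = note;
  uint8_t id = voices[i].id;
  touch(i);
  return id;
}

/* Returns the id of the voice that was playing the note, or 0xff. */
uint8_t note_off(uint8_t note) {
  for (uint8_t i = 0; i < NUM_VOICES; ++i) {
    if (voices[i].active && voices[i].note == note) {
      voices[i].active = 0;
      uint8_t id = voices[i].id;
      touch(i);
      return id;
    }
  }
  return 0xff;
}
```

Playing seven notes in a row makes the seventh steal the oldest voice (id 0), and a released voice is the first to be reused - which matches the behaviour described above.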

Thanks for explaining all that to a non-coder, your patience is appreciated!

I should clarify my nrpn idea above:
When you have a page selected on the Shruthi, only the 4 parameters shown on the screen are filtered out from the MIDI in before they are processed, just to stop the loop…?
Much simpler idea from a simple mind:)

I agree that tagging all NRPNs per unit would end up heavy on the processor, let’s not weigh it down with that stuff. The added latency might ruin an 8-voice anyway.

Also I for one am very glad about same note repeats not stealing voices. I can understand the arguments for selective stacking when playing, but in practice I find I want to be able to repeat notes without clicks and pops. Good one.

As I say, you’re very patient to take questions/comments as you go from the likes of me, many thanks!

Is the note filtered from the output by the last member of the chain?

The last unit in the chain doesn’t forward the notes it plays. However, all notes which are not played on the reception channel are still passed through the chain - if all units are set to receive on channel 1, the chain behaves like a thru chain for channels 2-16.
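That filtering rule can be sketched as a simple test on the status byte (channel messages are 0x80-0xEF, with the channel in the low nibble). Function and parameter names are mine, for illustration only:

```c
#include <stdint.h>

/* Returns 1 if a channel message should be passed through untouched
   because it is not on our reception channel. rx_channel is 0-based,
   so channel "1" on the front panel is rx_channel 0. */
int is_foreign_channel(uint8_t status, uint8_t rx_channel) {
  if (status < 0x80 || status >= 0xf0) return 0;  /* data or system byte */
  return (status & 0x0f) != rx_channel;
}
```

A note on on channel 2 (status 0x91) is foreign to a unit receiving on channel 1 and would be forwarded as-is.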

I see what you mean with the pages. However, this implementation requires a merger at the input of the chain - and I don’t want the default implementation of polychaining to work like that. What you suggest is indeed a solution to a different problem - having multiple units collaboratively working on the same patch without any master or ordering in the chain. I’d rather keep that for a different project (m control panel modules x n voice cards).

Yes, good points I must admit.

I didn’t think of the other MIDI channels. In fact any stray MIDI data that didn’t meet the filtering rules would end up spiralling out of control I suppose, not exactly ‘robust’!

And MIDI mergers, summing mixers, it starts to sound like a hack rather than a feature. Which is cool too, but if I want it then it should be up to me. Anyone have any tips on where to find a good C++ introduction? I know how an if statement works in Excel…? *ducks*

A master unit with voice cards does sound like the best way to do it properly, if that’s an option in the future:) That would be very special indeed.

In such a case perhaps it would be even cooler to have a supersized hackme section instead? My visualisation is 32 analog input pin headers on the mainboard (or an optional input module) so extra knobs, buttons, CVs and gate signals could be put to good use. 32 AD converters etc. would cost, but less than 7 extra mainboards I would guess.

A DIY MIDI board and MIDI merger would be a way to get close, to be fair. That will do me if nobody else is knob-obsessed enough to make it worthwhile.

But oh how easy it is to get ahead of oneself with ideas when one has the bliss of not really understanding the work needed to implement. No doubt even a simple C++ link will shut me right up. :slight_smile:

I should add that I think the existing interface is a very good design (by the looks of it). Knob per function is really a lavish luxury, but one that I have found really bonds me to a synth.

If you want to do embedded systems programming, you should probably look at C first. Most C++ teaching material is not suitable for embedded applications. In fact, almost everything that is in C++ and not C (OO, polymorphism, inheritance, the STL - all the things that teaching material focuses on) is likely to cause unwanted bloat on embedded systems. I use C++ for some specific features (namespaces, data visibility and constness discipline, templates for macros/code generation) but I really know what I am doing. “Textbook” C++ is a no-no on small embedded systems and is likely to come back to bite you. So you should probably learn C first. I have only read 3 programming books so I have no idea what I could recommend you. The classic C books are likely to be about Unix programming, so you’d be better off picking something about embedded systems.

You’ll also have to pick a target/platform, which means deciding on AVR vs PIC. The Arduino boards provide you with a cheap AVR development board and an easy-to-use development environment. I am not sure I would recommend this for someone who seriously wants to learn, though - there are so many “wrong” things with Arduino that I don’t want to start…

Regarding the “hackme” section, it was there to expose the unused I/O pins. The ATmega328p has only 1 ADC, which is multiplexed on 6 channels. If you want more analog inputs, you probably won’t need 32 ADCs - you can use 4 of them and multiplex 8 channels onto each. Still, it’ll take resources to talk to them. It would make more sense in an architecture in which there’s a processor handling the main interface I/O, and a processor on each voice card.
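As a rough sketch of that layout (all names hypothetical): 4 ADC channels, each fed by an 8-input analog multiplexer such as a 4051, give 32 logical inputs. The addressing is just a divide and a modulo:

```c
#include <stdint.h>

#define MUX_CHANNELS 8  /* e.g. one 4051 per ADC input */

typedef struct {
  uint8_t adc;      /* which ADC channel the mux output feeds, 0-3 */
  uint8_t channel;  /* which mux input to select, 0-7 */
} AnalogInput;

/* Map a logical input number (0-31) to its ADC/mux address.
   The firmware would set the mux select lines to r.channel,
   wait for the analog value to settle, then read ADC r.adc. */
AnalogInput locate_input(uint8_t input) {
  AnalogInput r;
  r.adc = input / MUX_CHANNELS;
  r.channel = input % MUX_CHANNELS;
  return r;
}
```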