I’m interested in this seventh T mode I’m seeing on GitHub:

I’m not a C++ or firmware person by any means but I’m assuming that this line means that users can’t access this mode without altering firmware in some way?

Not that I’m looking to do that. Just wondering if there was more to discover Just found out about the alternate modes the other day after having this module for 8 months or so which was fun!

You can only activate this by altering the code.

Thanks for confirming!

I just stumbled upon this as well and would love to know what exactly it does. Obviously it’s not activated for a reason.

As I see it, it is a 16-step Markov chain based on these rules:

Is it supposed to model something specific or were these rules just determined heuristically?

I’m sorry to revive this old thread with questions that don’t even apply to the released firmware; I’m just wondering why this feature was left out.

Yep, it’s just a bunch of heuristics to make “balanced” rhythmic patterns.

To implement a true Markov model, you’d need a large table storing the probability of playing a hit at clock tick t for every possible sequence of decisions taken in the past n ticks (aka “contexts”). You need long contexts if you want to learn large-scale structures well (like a whole bar), with the drawback that you’d need a lot of training data to estimate the probabilities, a lot of memory to store everything, and then you might overfit and learn some patterns by heart. Not so great.
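For concreteness, a table-based order-n model of the kind described above might look like this. This is a hypothetical sketch, not anything from the Marbles source; the 2^n-entry table is exactly the memory cost mentioned, and the need to see every context in training is the data cost:

```cpp
#include <array>
#include <cstdint>

// Hypothetical order-n binary Markov model for rhythms. The context is
// the last n hit/rest decisions packed into an integer, so the table
// needs 2^n probability entries -- this blows up fast as n grows.
template <int n>
struct RhythmMarkov {
  std::array<float, 1 << n> p_hit;  // P(hit | last n decisions)
  uint32_t context = 0;

  // Count transitions in a training pattern to estimate probabilities.
  void Train(const uint8_t* pattern, int length) {
    std::array<int, 1 << n> hits{}, total{};
    uint32_t ctx = 0;
    for (int t = 0; t < length; ++t) {
      ++total[ctx];
      if (pattern[t]) ++hits[ctx];
      ctx = ((ctx << 1) | pattern[t]) & ((1 << n) - 1);
    }
    for (int c = 0; c < (1 << n); ++c) {
      // Laplace smoothing so unseen contexts fall back to a neutral 0.5.
      p_hit[c] = (hits[c] + 1.0f) / (total[c] + 2.0f);
    }
  }

  float NextHitProbability() const { return p_hit[context]; }

  void Observe(bool hit) {
    context = ((context << 1) | (hit ? 1u : 0u)) & ((1 << n) - 1);
  }
};
```

Even at order n = 16 (one bar of sixteenth notes), that table already has 65,536 entries, which is why the feature-based shortcut below is attractive.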

A different idea to drastically reduce the number of parameters of the model is to state that the probability of playing a hit at clock tick t is only influenced by several “features” extracted from the context – the code and comments are clear about what these features are.

The weight of each of these features is empirically chosen, and is influenced by the BIAS knob. I didn’t spend a lot of time tweaking these weights. One could actually train this properly; it’s just a little logistic regression. But you’d still have to introduce some very arbitrary choices in how the weights are modulated by the BIAS knob.
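As a hypothetical illustration of that idea (the feature names, weights, and bias mapping here are invented for the example, not taken from the Marbles source), a logistic-regression-style hit probability with a bias control could look like:

```cpp
#include <cmath>

// Invented example features extracted from the recent context.
struct Features {
  float steps_since_last_hit;  // encourages long gaps to get filled
  float on_strong_beat;        // 1.0 if tick t falls on a downbeat
  float density_so_far;        // fraction of hits in the current bar
};

// Probability of a hit as a logistic function of weighted features,
// with each weight shifted by a bias control in an arbitrary,
// hand-tuned way -- the kind of choice described above.
float HitProbability(const Features& f, float bias) {
  // Base weights chosen by hand; bias in [-1, 1] modulates them.
  float w_gap = 0.5f + 0.5f * bias;
  float w_beat = 1.0f;
  float w_density = -1.5f * (1.0f - bias);  // penalize busy bars at low bias
  float z = w_gap * f.steps_since_last_hit
          + w_beat * f.on_strong_beat
          + w_density * f.density_so_far
          - 1.0f;                       // intercept
  return 1.0f / (1.0f + std::exp(-z));  // squash to (0, 1)
}
```

Training this “properly” would just mean fitting the weights by logistic regression on a rhythm dataset; the arbitrary part is deciding how BIAS should warp them afterwards.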

Good luck writing the manual

Okay, that clears it up for me, thank you.

I think it’s always interesting to see how structures like this are implemented in the real world, as compared to lectures that explain them but don’t go in depth on implementation, and how one might be inspired by them to implement a different algorithm, as in this case.

And in this case, if a true Markov chain were trained on, say, a drum-beat dataset, the results would probably not be much different from the drum mode of Marbles or the map-based implementation of Grids, right? Although training it, or even this model, on a set of melodies could be interesting.

To test it out, could I just swap item #7 T_GENERATOR_MODEL_MARKOV with one of the others?

```cpp
enum TGeneratorModel {
  T_GENERATOR_MODEL_COMPLEMENTARY_BERNOULLI,
  T_GENERATOR_MODEL_CLUSTERS,
  T_GENERATOR_MODEL_DRUMS,
  T_GENERATOR_MODEL_INDEPENDENT_BERNOULLI,
  T_GENERATOR_MODEL_DIVIDER,
  // bgFMI – Markov in, and Three States has to sit this one out.
  T_GENERATOR_MODEL_MARKOV,  // Long Red Mode
  T_GENERATOR_MODEL_THREE_STATES,
};
```

Yes.
