Want to make an alternative ambika voice structure

So, I kind of like the ambika voice-structure, but I want to try making my own. I know this would require a lot of work. I see some py-scripts around the repo. Is there some way to simplify making a new patch-architecture that I can make use of?

The answer is almost surely “no”.

May I know which changes you’d like to make? And which python scripts are you referring to, by the way?

Sometimes things are laid out in a format for a particular reason [tm].

They may look ugly or not very uniform when reading it but make sense for performance reasons.

Intel, for example, took the ugly mess that is x86 and made the lovely clean Itanium, but was it as good? Nope :slight_smile:

I was referring to these:


The resource generation seems like it could save me a couple of hours :slight_smile:

I want to build an architecture centered around osc1 being a modulator for osc2, with osc2 having a lot of features, such as wavetable interpolation, and osc1 not so many. Kind of like a Buchla-esque thing. I also want osc3 to crossfade between osc1 and osc2, and other such things. Crazier sounds in general. Perhaps I’ll try to add these features to the existing voice structure somehow, add more operators, etc.
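Roughly this kind of signal flow, as a conceptual Python sketch (not Ambika code; the function and parameter names here are mine, purely for illustration):

```python
import math

def render_voice(n, f1=110.0, f2=220.0, f3=0.5, mod_depth=2.0, sr=48000):
    """Conceptual sketch: osc1 phase-modulates osc2, and osc3 (used as a
    slow LFO here) crossfades between osc1 and the modulated osc2.
    Illustrative only, not actual firmware code."""
    out = []
    for i in range(n):
        t = i / sr
        o1 = math.sin(2 * math.pi * f1 * t)                   # modulator
        o2 = math.sin(2 * math.pi * f2 * t + mod_depth * o1)  # carrier (PM)
        x = 0.5 * (1.0 + math.sin(2 * math.pi * f3 * t))      # crossfade 0..1
        out.append((1.0 - x) * o1 + x * o2)
    return out
```

Since the crossfade is a convex mix of two signals in [-1, 1], the output stays in that range too.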

Hmm, I don’t want to offend you, but if you think such changes can be done by modifying some python scripts (which, btw, get generated automatically from your C++ code!), you should not even think about starting to work on it.
Sorry, but you would have to rewrite huge parts of the OS to implement your changes.
Of course this is possible (to a certain extent, as there are still a few hardware limitations), but looking at your python note I doubt that you will be able to start working on it.
As I said: I don’t want to offend you. I just wanted to make you aware of a certain complexity here.

The “resource” python scripts generate all the lookup table data: things like envelope rates and shapes, LFO waveforms, band-limited oscillator data… They also create a big blob with all the strings used in the user interface.
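For a flavor of what that means, here is a toy table generator in that spirit (a hedged sketch only; the names `make_sine_table`, `to_c_array`, and `lut_res_sine` are illustrative, not the actual resource script code):

```python
import math

def make_sine_table(size=256, amplitude=127):
    """Generate a signed 8-bit sine lookup table, roughly the kind of
    data a resource script bakes into the firmware (illustrative)."""
    return [int(round(amplitude * math.sin(2 * math.pi * i / size)))
            for i in range(size)]

def to_c_array(name, values):
    """Emit the table as a C source snippet (illustrative formatting)."""
    body = ', '.join(str(v) for v in values)
    return 'const int8_t %s[%d] = { %s };' % (name, len(values), body)

print(to_c_array('lut_res_sine', make_sine_table(8)))
# → const int8_t lut_res_sine[8] = { 0, 90, 127, 90, 0, -90, -127, -90 };
```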

You’ll have to use them eventually, but this is certainly not where to begin…

Step 1 would be to understand the existing codebase. Get familiar with everything involved in the life of a parameter (where is it stored, how does it get displayed in the UI, how does MIDI mapping work for it, how does it get transmitted to the voicecard, how is it used on the voicecard). Befriend the ParameterManager.

Step 2 would be to strip all the things you don’t need.

Step 3 would be to modify the patch data structure and the UI to suit your needs.

Step 4 would be to rework the voicecard code.

It’s tough and almost as complicated as writing a new synth from scratch.

In fact, writing a new firmware from scratch looks easier…

Personally, I’d start on the Shruthi-1 first; it has a simpler architecture.

I know that only the “strings” py-file is generated, but I did not want to go into those details :wink:

@Veqtor: It’s like Olivier said: similar to writing a new synth from scratch.
My advice: get familiar with what you’ve already got and make good use of it. If you definitely want to start changing the code, start with small things like implementing a new waveform, modifying the speed range of an LFO, extending the modmatrix, etc. Even this will take a good portion of your time.

Oh right, that guy is my project manager at work. :wink:

pichenettes Thanks for the information! It’s roughly what I expected. I know it’s a lot of work. But still, with the great stuff that’s already there, it saves me a lot of effort. I had been toying with the idea of developing something roughly similar to Ambika, but given that so much is already in place, the hardware is there, etc., I feel it would be much faster to base it upon your code.

Mr_Roboto don’t worry, I have a lot of experience writing synths! :slight_smile:
check this out: https://play.google.com/store/apps/details?id=com.yellofier
Yes, it’s a PITA making a synth work well on Android. Doing something like this is leisure, especially with all the boring parts (MIDI, storage, UI, etc.) already implemented.

If this can bring comfort to your soul, at least you’ll get a very predictable 1ms latency.