Aliasing and noise

Hi there,

I finished the Ambika kit a few days ago. Everything seems to work quite well; I had no big trouble during assembly.
Now I've noticed that there is quite a bit of audible aliasing and noise in all the waveforms. I looked at it with a spectrum analyzer, and the aliasing and noise reach up to -40dB (for the “analog” waveforms). With an open filter, this can produce audible beating with the actual overtones.

Is this normal? I read somewhere that the “analog” waveforms are antialiased.

I did some quick calculations: the dynamic range you can get with 8 bits is only about 48dB (2^8 = 256 levels), so an amplitude of 256 means 0dB, and an amplitude of 3 is at about -39dB. So I guess you can’t expect much more with 8-bit resolution?
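The back-of-envelope numbers above can be checked in a couple of lines of Python (the only assumption is the usual 20·log10 convention for amplitude ratios in dB):

```python
import math

def amp_to_db(amplitude, full_scale=256):
    """Express an amplitude in dB relative to full scale (20*log10)."""
    return 20 * math.log10(amplitude / full_scale)

print(amp_to_db(256))           # 0 dB: full scale
print(amp_to_db(3))             # about -38.6 dB, i.e. roughly -39 dB
print(20 * math.log10(2 ** 8))  # dynamic range of 8 bits: about 48.2 dB
```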


Please post audio samples of the recordings you used for your analyses, illustrating the different problems.

The internal precision for all audio rendering is 8-bit, so there will always be quantization noise at -48dB.


If I play a pure saw with an open filter from C3 upwards, there is audible beating which changes frequency from one semitone to the next.
It’s also very audible on C5. From C8 on you can hear the aliasing and noise very clearly (with headphones).

Another thing which seems strange: if I play e.g. C4, frequencies between 14kHz and 26kHz are attenuated by about 15dB. If I play E4, the frequencies between 17kHz and 22kHz are attenuated.

I made an audio file (96kHz) where I play a pure saw from C3 to C9.
It’s too big to attach, but you can find it here:

I mean, don’t get me wrong, I was just wondering if this is normal, and if maybe I expected too much from a tiny 8-bit chip without a real DAC.


Ambika voice cards do have a DAC.

Hello Jojosito,
I think the problem is the band-limited waves in the Shruthi synth. You can download the Shruthi raw wave files from this website: and import the raw data into the audio program “Audacity”. There you can see the amplitude of the waves.

Greetings, Rolf (sorry for my bad English).

You are clearly expecting too much.

Doing proper band-limited synthesis (MinBLEP & co.) is out of reach for the kind of cheap MCU used in Ambika. To generate a band-limited sawtooth or square, Ambika/Shruthi use wavetables: the higher the note you play, the simpler the waveform used (above a certain threshold, a sine wave table is used directly). Because we don’t have enough ROM for one waveform per note, we use a simpler wavetable with 6 waveforms having different levels of harmonics (from sawtooth to sine), and crossfading is used to interpolate between them (a bit similar to “zones” on a sampler). It’s a classic trick, used for example on the Korg DW series. On the Shruthi/Ambika, a zone is 16 notes wide.

The main implication is that a trade-off has to be found for notes near the crossfade point. If you tune things conservatively so that no aliasing occurs at the crossfade point, you lose 30% of the higher harmonics at the non-crossfaded points. If you tune things aggressively so that the non-crossfaded points are maximally bright, you get very audible aliasing at the crossfade point. The trade-off I have decided on is closer to the aggressive solution.

@rolfdegen: I don’t think there’s a problem with the band-limited waveforms themselves. Some aliasing might occur at the crossfade point. To partly get rid of it, try adding an offset to the note number in this line.

Hello pichenettes,

Ah… OK. Now I understand the problem.

I wish you a happy new year. :slight_smile:

Personally, I think all this aliasing, noise, hiss, whatsoever adds to the magic Mutable Instruments sound. Imagine removing all the digital dirt from a PPG Wave: the ’80s wouldn’t sound like the ’80s anymore…

Thanks a lot for the detailed explanations. I already suspected that the computational possibilities of the Atmel are quite limited.

Have you ever thought about using a DAC with a variable clock rate?


Yes, I have thought about variable clock rates, and here is why it won’t work:

  • The variable clock-rate approach is suitable for plain wavetable synthesis; but some of the algorithms available on the Shruthi/Anushri (FM, phase distortion, vowel - anything that is built with secondary oscillators) would not work well within the variable clock framework.
  • Assuming a maximum frequency of 4kHz and 256 steps per wave, we are left with 10 CPU cycles per sample on a 20 MHz chip running 2 oscillators.
  • Keep in mind that with the current hardware design I need to output the signal of 2 (3 counting the sub) oscillators on a single DAC. Variable clock rate works well for one single pitched signal, not for a sum of several signals!
  • Assuming we had 3 DACs (one for each oscillator and the sub), we would still need 3 high-resolution timers. On the ATMega328p, there is only one 16-bit timer; the two other timers are 8-bit, which makes them unsuitable for accurate variable-clock-style pitch control.
  • Assuming we had 3 DACs and 3 16-bit timers, we’d still run into the problem of timer interrupts stepping on each other and causing jitter. The only workaround I see would be to use 3 DMA channels, with the timers set as triggers, but then there would still be collisions when writing to the DACs unless each DAC is on its own SPI port. And then there would still be the following problem to solve: on the MCUs I know (AVR XMega, STM32F), writing to a SPI DAC through a DMA channel is close to impossible, because you still have to take care of lowering/raising the CS line between each word yourself (DMA+SPI is good for writing to devices where the transaction is long… SD card, OLED display, LED driver…). Clocking-wise, the most reliable way of talking to a DAC for audio applications is to use I2S, which is… fixed clock rate!
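The cycle-budget figure in the second bullet can be reproduced with a quick back-of-envelope calculation; all the numbers here are taken from the list above:

```python
CPU_HZ = 20_000_000   # 20 MHz AVR clock
F_MAX = 4_000         # assumed maximum oscillator frequency, Hz
STEPS = 256           # steps per wave cycle
NUM_OSC = 2           # oscillators rendered by the CPU

sample_rate = F_MAX * STEPS               # 1,024,000 samples/s per oscillator
cycles = CPU_HZ / (sample_rate * NUM_OSC) # about 9.8, i.e. roughly 10 cycles/sample
print(cycles)
```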

My take on this: variable clock rate works better with dedicated hardware, be it a bunch of 8253s, EEPROMs, and old-school DACs, or an FPGA; not with a general-purpose microcontroller.

OK, I think I got the main points, even if I don’t understand all the details. I just remembered that some synths used that approach; of course it won’t work for the Shruthi/Ambika.
And I’ve never used old digital synths, only VST stuff, and I have an analog monosynth. That’s of course a different world than Ambika. So I might still fall in love with the MI sound; I just have to play around some more.


After being so critical at first, I feel like I have to make some more comments. I finished my other 5 voice cards today, and after a first jam I really like the sound now. It’s a totally different experience than playing it monophonically!