Anyone used a Shruthi-1 as a vocoder?

Hi all

I don't have a microphone to test this, but has anyone used the SMR4 Shruthi as a vocoder?

Thanks

It’s not possible.

Dang, that settles that then. Should I delete this thread?

BTW I’m disappointed it took you 1 minute to reply :slight_smile:

Yup, there’s no vocoder mode. A vocoder splits the input signal into bands, for example with an FFT (quite CPU intensive), and then uses the band envelopes to modulate another signal (usually an internal oscillator). There isn’t enough CPU grunt to do both things at the same time.

Although recently a faster way to do FFT was discovered:

A vocoder filter board might be possible though. But there’s lots of vocoders around.
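For what it’s worth, here is a rough, purely illustrative NumPy sketch of the FFT-style approach described above. The frame size, band count and equal-width bin grouping are my own assumptions, and obviously nothing like this would fit on the Shruthi’s AVR:

```python
import numpy as np

def fft_vocoder_frame(modulator, carrier, num_bands=16):
    """Impose the modulator's per-band energy onto one frame of the carrier."""
    n = len(modulator)
    window = np.hanning(n)
    mod_spec = np.fft.rfft(modulator * window)
    car_spec = np.fft.rfft(carrier * window)

    # Group the linearly spaced FFT bins into num_bands equal chunks.
    edges = np.linspace(0, len(mod_spec), num_bands + 1, dtype=int)
    gains = np.ones(len(car_spec))
    for lo, hi in zip(edges[:-1], edges[1:]):
        mod_level = np.sqrt(np.mean(np.abs(mod_spec[lo:hi]) ** 2) + 1e-12)
        car_level = np.sqrt(np.mean(np.abs(car_spec[lo:hi]) ** 2) + 1e-12)
        gains[lo:hi] = mod_level / car_level  # flatten carrier, apply voice envelope

    return np.fft.irfft(car_spec * gains, n)
```

You’d run this frame by frame with overlap-add to get a continuous output.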

I’ve used it as the carrier source for a vocoder with one of these. I’d like to get the CV ins working with the Shruthi-1’s CV outs, BUT TIME, MAN.

Maybe someday fcd72 will have so many Shruthis that he can use the ones with BP filters to build a huge vocoder. It could work if some gates were used.

One band would be something like: BP filter on the analysis signal > sidechain input of the gate; BP filter in the carrier signal chain > gate (a quick software sketch of one such band is below).
An Ambika could be used as the synth signal :slight_smile:
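Here’s what I mean by one band, as a quick software analogue just to make the routing concrete. The band edges, threshold and the crude one-pole sidechain detector are arbitrary choices of mine, nothing Shruthi-specific:

```python
import numpy as np
from scipy.signal import butter, lfilter

def gated_band(analysis, carrier, low_hz, high_hz, sr=44100,
               threshold=0.02, smooth=0.999):
    """One vocoder band: the analysis band's level opens a gate on the carrier band."""
    b, a = butter(2, [low_hz / (sr / 2), high_hz / (sr / 2)], btype='band')
    analysis_band = lfilter(b, a, analysis)
    carrier_band = lfilter(b, a, carrier)

    # Crude sidechain detector: rectify and smooth the analysis band.
    out = np.zeros_like(carrier_band)
    level = 0.0
    for i, x in enumerate(np.abs(analysis_band)):
        level = smooth * level + (1.0 - smooth) * x
        out[i] = carrier_band[i] if level > threshold else 0.0  # hard gate

    return out  # sum the outputs of all the bands for the full vocoder
```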

An FFT would be a poor choice for realizing a vocoder. You only need a handful of channels with log spacing (a constant-Q representation). With too many channels, your analysis is fine enough to capture pitch details (you don’t want that, just the spectral envelope); with too few channels, or oddly spaced channels, you can’t pick out formants. So either you use a large FFT size and have to aggregate all the channels for the upper frequency bands (which is wasteful; what’s the point of decomposing a signal into 1024 bands if you’re going to sum 256 of them), or you use a small FFT size and your vocoder sounds “wrong” because the channels are linearly spaced.

I think using a bank of bandpass filters (just like the analog equivalent), implemented with one or two biquad sections each, would be more meaningful and more computationally efficient. It’s not particularly computationally intensive (I got one with 9 channels to run at 44 kHz on a 100 MHz ARM CPU), but it’s certainly out of reach for the Shruthi-1 or a filter board. Not to mention that correctly generating the carrier (a handful of band-limited pulses, to get fat chords!) would take up a significant chunk of the CPU.
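For the curious, a minimal sketch of that kind of filter-bank vocoder is below. The band-pass biquad uses the standard RBJ cookbook coefficients; the band count, 200 Hz–5 kHz range, Q and envelope smoothing are my guesses, not the values from the 9-channel ARM implementation mentioned above:

```python
import numpy as np

def bandpass_biquad(x, f0, q, sr):
    """Constant-Q band-pass biquad (RBJ cookbook coefficients), direct form I."""
    x = np.asarray(x, dtype=float)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / (2.0 * q)
    b0, b1, b2 = alpha, 0.0, -alpha
    a0, a1, a2 = 1.0 + alpha, -2.0 * np.cos(w0), 1.0 - alpha
    b0, b1, b2, a1, a2 = b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0

    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for i, xn in enumerate(x):
        y[i] = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x1, x2 = xn, x1
        y1, y2 = y[i], y1
    return y

def channel_vocoder(modulator, carrier, sr=44100, bands=9,
                    lo=200.0, hi=5000.0, q=4.0, smooth=0.999):
    """Log-spaced band centres; each band's modulator envelope scales the carrier band."""
    carrier = np.asarray(carrier, dtype=float)
    out = np.zeros_like(carrier)
    for f0 in np.geomspace(lo, hi, bands):
        mod_band = bandpass_biquad(modulator, f0, q, sr)
        car_band = bandpass_biquad(carrier, f0, q, sr)
        # One-pole smoothing of the rectified modulator band = per-band envelope.
        level, env = 0.0, np.zeros_like(mod_band)
        for i, xn in enumerate(np.abs(mod_band)):
            level = smooth * level + (1.0 - smooth) * xn
            env[i] = level
        out += car_band * env
    return out
```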

Too bad there’s no voice -> MIDI CC converter :slight_smile:

I could attempt using the voice modulator on my JP-8080 for that, but it’s low-res and already has a decent vocoder built-in…

AFAIK the FFT-based vocoders are called phase vocoders (at least that’s what they were called in Csound); a classic vocoder works with a bank of filters.

A phase-vocoder has nothing to do with what electronic musicians know as vocoders :slight_smile:

Well, apart from time stretching and pitch shifting, we can “thank” the phase vocoder for things like Autotune, Lil Wayne and Kanye West. Gay fish y’all!

Isn’t auto-tune time-domain?

Oh, I forgot T-Pain! Snooping around, the claim is that it’s autocorrelation plus a phase vocoder. Anywho, South Park really nailed how Kanye West would act.

Well, I wasn’t looking for the super-autotuned T-Pain sound, more like the intro to this: http://www.youtube.com/watch?v=tR6Z6Sratvg

Ooh, old school vocoder. It’s reminiscent of 1000 Knives by Sakamoto and its vocoder intro.

It cannot be achieved with a Shruthi anyway.

Jojjelito, exactly.

The South Park guys said they had to deliberately sing badly out of tune to get the Kanye autotune effect; apparently Trey Parker’s vocal pitch was originally too good to trigger it :slight_smile: