Simulating tape/analogue delay in DSP


Edit: I scratched my head looking at the integral in the first answer on the music dsp list, which I didn’t understand, and now I realize it’s the same as mine with a variable substitution. I changed my notation to make it more obvious.


Edit 2: I refined my original post a lot.


This is great, I’m (we’re?) learning a lot. Thanks a bunch guys.

@TheSlowGrowth: what confused me in your model is that your two parameters are read and write speed, whereas in my “experiment” they were read/write speed and buffer length. It’s a minor difference that’s indeed fixed in Vadim Z’s answer. What you’re right about is that it’s much clearer to think about the problem in terms of analog tape, rather than BBD (I somehow missed this simple fact).

@pichenettes: lucid, as usual. Extra complication is introduced because of the discrete nature of BBDs clocks; I’ll rewrite your development with tape instead, as an exercise, and post it here for posterity.

Now I’m gonna have to implement it! Damn…


TheSlowGrowth> The question really is: What do you want to achieve?

Sorry if it was unclear from the beginning (it was for me too). What I was looking for, in the end, was exactly what all three of you provided, with minor differences: 1/ a relationship between delay time, buffer length and tape speed; 2/ an explanation of how to derive it, so as to be able to reconstruct it in a similar setting. I guess I’m fulfilled now :slight_smile:


What I described should work for discretized tape - in short, you manage an array which stores, for each finite element of tape, the timestamp of the data recorded on it. You look at the timestamp stored at the tape element currently present at your read head, and this gives you the delay time. If the tape cannot travel backwards, this is exactly the same as for the BBD example I gave.

More generally, you can always simulate the delay device not by feeding audio to it, but by feeding it timestamps (a ramp function); these timestamps directly give you the delay time to use with a more conventional variable-read-head delay implementation. Whether doing this is more efficient than processing audio is debatable of course, but my point is that feeding timestamps to the delay device is a good mental model for figuring out what’s happening - and it gives a good intuition about why finding the delay requires evaluating the inverse of the primitive of the rate function.
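To make the timestamp trick concrete, here is a minimal Python sketch. The buffer size, head spacing and integer-only tape speeds are my own simplifications for illustration, not taken from any real implementation:

```python
# Sketch of the "feed timestamps instead of audio" idea: each tape cell
# stores the time at which it passed under the write head, so the value
# found under the read head directly gives the current delay time.

TAPE_LENGTH = 100      # number of cells on the tape loop (arbitrary)
HEAD_SPACING = 25      # read head trails the write head by 25 cells

def simulate(steps, tape_speed=1.0):
    tape = [0.0] * TAPE_LENGTH
    write_head = 0
    now = 0.0
    delay = 0.0
    for _ in range(steps):
        tape[write_head] = now                        # "record" a timestamp
        read_head = (write_head - HEAD_SPACING) % TAPE_LENGTH
        delay = now - tape[read_head]                 # delay, in time units
        # Integer speeds only, to keep the sketch short.
        write_head = (write_head + int(tape_speed)) % TAPE_LENGTH
        now += 1.0
    return delay

print(simulate(500))       # unit speed: delay equals the head spacing, 25.0
print(simulate(500, 5.0))  # five times faster: delay shrinks to 5.0
```

At unit speed the delay equals the head spacing; run the tape five times faster and the delay shrinks to a fifth, i.e. spacing divided by speed, as the derivations above predict.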


Got it! Thanks.


For the hungry reader, here’s an article I’ve been pointed to. It reconstructs the integral discussed above informally, starting from a different problem: determine the LFO shape needed to have a sinusoidal pitch shift modulation.


Hi there, reviving this old thread to announce that I’ve (finally) implemented this in Warps parasite v1.0, up online today. It was substantially harder than I first thought above :slight_smile: You can check out the implementation on my GitHub (even though that part of the code is pretty messy); don’t hesitate to ask if you want explanations of the algorithm.

Thanks all of you for the nice discussion, especially @SirPrimalform. It was a good opportunity to learn and have fun.


I had a quick look at the code. I don’t see anything shocking or pointlessly inefficient - writes at a reduced fractional sample rate, interpolated reads to get back to the full sample rate. All good.


Even with the Hermite interpolation, the sample rate reduction in the while (write_position < 1.0f) loop is going to alias. For example, if sample_rate is equal to 0.125f, you’ll be keeping exactly one out of every eight input samples - irrespective of the chosen interpolation.
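To illustrate the folding numerically, here is a small Python sketch (the input frequency is an arbitrary choice of mine): a 0.45 cycles-per-sample sine, decimated by 8, becomes exactly a 0.4 cycles-per-sample tone at the reduced rate, no matter how the kept samples were interpolated.

```python
import math

N = 1024
f_in = 0.45                      # cycles per input sample, above 0.125 / 2
x = [math.sin(2 * math.pi * f_in * n) for n in range(N)]

# Keep one sample out of eight, as the loop does when sample_rate == 0.125f.
decimated = [x[8 * k] for k in range(N // 8)]

# 8 * 0.45 = 3.6 cycles per kept sample; 3.6 = 4 - 0.4, so the kept samples
# are exactly -sin(2*pi*0.4*k): the high tone has folded down to 0.4.
alias = [-math.sin(2 * math.pi * 0.4 * k) for k in range(N // 8)]
err = max(abs(a - b) for a, b in zip(decimated, alias))
print(err < 1e-9)  # True: interpolation quality is irrelevant here
```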

Maybe not taking care of this was a deliberate decision, but if you want to address it, a quick fix would be to apply an LPF whose cutoff tracks sample_rate to the mix.l and mix.r samples. Implementing a variable-frequency brickwall filter is quite some work, but the stmlib SVF with a resonance of 0.5, cutting at 2 x sample_rate, is a quick and dirty starting point that will prevent some of the highs from aliasing…
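As a sketch of what such a tracking pre-filter buys you: below, a textbook trapezoidal ("Zavalishin-style") state-variable lowpass stands in for the stmlib SVF, with my own Q mapping (the real class and its resonance parameter may well differ), applied before the decimation of the previous example.

```python
import math

def svf_lowpass(signal, cutoff, q=0.5):
    """2nd-order state-variable lowpass (trapezoidal topology).

    cutoff is in cycles per sample; stable for any cutoff below 0.5.
    """
    g = math.tan(math.pi * cutoff)
    k = 1.0 / q
    a1 = 1.0 / (1.0 + g * (g + k))
    a2 = g * a1
    ic1 = ic2 = 0.0
    out = []
    for x in signal:
        v3 = x - ic2
        v1 = a1 * ic1 + a2 * v3
        v2 = ic2 + g * v1              # lowpass output
        ic1 = 2.0 * v1 - ic1
        ic2 = 2.0 * v2 - ic2
        out.append(v2)
    return out

N = 4096
sample_rate = 0.125
f_in = 0.45                            # a high partial that would fold badly
x = [math.sin(2 * math.pi * f_in * n) for n in range(N)]

raw = [x[8 * k] for k in range(N // 8)]                 # folds at full level
pre = svf_lowpass(x, cutoff=2.0 * sample_rate)          # cutoff tracks rate
cut = [pre[8 * k] for k in range(N // 8)]

print(max(abs(v) for v in raw))  # close to 1: the alias is untouched
print(max(abs(v) for v in cut))  # far smaller: highs tamed before folding
```

Even this cheap 12 dB/oct slope knocks the folded partial down by an order of magnitude; a steeper filter would do correspondingly better.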


Would an ideal implementation use enough memory for the maximum delay time at the full sample rate, upsample on the write side, and downsample on the read side instead?


If you have CPU to waste, yes, you can do that :slight_smile: But then you lose the bandwidth reduction (with or without aliasing) that gives variable-speed delays their charm…


Ah, but the original purpose of this discussion was about emulating the modulation behaviour of an analogue delay, not the sound. Lots of people have done digital delays that model the sound of tape/BBD. But yes, I suppose the down-then-upsampling approach inherently emulates a BBD kind of effect if you allow a little bit of aliasing on the longer delay times.

I wonder if lack of attention to the modulation behaviour is one of the reasons digital chorus/ensemble sounds different from analogue. The pitch changes behave differently for the same modulation waveform/rate. In a digital (phase modulated) delay, a triangle wave modulation would pull the note sharp by a fixed amount on the ascent and then flat by the same amount on the descent. An analogue (frequency modulated) delay has a more complex relationship since the modulation is also affecting the ‘record rate’. Complicated!
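The phase-modulated case is easy to write down (the depth and period below are made-up figures): the output of a modulated read head is x(t - d(t)), so its instantaneous frequency is the input frequency scaled by 1 - d'(t). A triangle LFO has a piecewise-constant slope, hence the fixed sharp/flat offsets.

```python
import math

def pitch_ratio(delay_slope):
    # Output/input frequency ratio for y(t) = x(t - d(t)).
    return 1.0 - delay_slope

def cents(ratio):
    return 1200.0 * math.log2(ratio)

# Hypothetical triangle LFO: +/-5 ms depth, 0.5 s period.
# |slope| = 4 * depth / period (a rise of 2 * depth over half a period).
depth, period = 0.005, 0.5
slope = 4.0 * depth / period       # 0.04 seconds of delay per second

print(cents(pitch_ratio(+slope)))  # delay growing: about -71 cents (flat)
print(cents(pitch_ratio(-slope)))  # delay shrinking: about +68 cents (sharp)
```

In the analogue (frequency-modulated) case, d(t) is instead the solution of an integral equation involving the clock/tape rate, which is why the pitch trajectory is more complex for the same LFO shape.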


> a quick fix would be […]

Thanks for the code review, I’ll keep this in mind (not for the Parasite though which is “done”, maybe another project). It was not a deliberate choice, more of an oversight… but I like it this way now :slight_smile:

> […] SVF with a resonance of 0.5 cutting at 2 x sample_rate

Did you mean 0.5 x sample_rate?

> I wonder if lack of attention to the modulation behaviour is one of the reasons digital chorus/ensemble sounds different from analogue

Yes, I wonder too. Did you read the interesting article I linked above (from Tom Wiltshire)? It talks about this modulation issue. As far as delay modulation goes, triangle waves indeed sound awful because their derivative is discontinuous. A similar problem occurs when using Peaks-like random waves (constantly clocked, ramping up/down to a random value). My reflex now is to use a sine-shaped random oscillator with a constant slope magnitude, i.e. one that is not constantly clocked (e.g. in Clouds Parasite, random_oscillator.h).
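For illustration, here is a crude Python sketch of the constant-slope idea (this is not the actual random_oscillator.h, which uses sine-shaped segments; the rate and ranges are arbitrary): each segment heads towards a fresh random target at a fixed rate, so segment durations vary but the slope magnitude never jumps.

```python
import random

def constant_slope_random(n_samples, rate=0.01, seed=1):
    """Random waveform moving at a fixed |slope| towards random targets."""
    rng = random.Random(seed)
    value = 0.0
    target = rng.uniform(-1.0, 1.0)
    out = []
    for _ in range(n_samples):
        if abs(target - value) <= rate:
            value = target                     # reached: pick the next goal
            target = rng.uniform(-1.0, 1.0)
        elif target > value:
            value += rate
        else:
            value -= rate
        out.append(value)
    return out

wave = constant_slope_random(1000)
# No sample-to-sample step ever exceeds the fixed rate:
print(max(abs(b - a) for a, b in zip(wave, wave[1:])) <= 0.01 + 1e-9)
```

Unlike a clocked random ramp, the modulation depth is traded for segment duration here, so the derivative (and hence the pitch offset it induces in a delay) stays bounded.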


> Did you mean 0.5 x sample_rate?

Your call. It’s a trade-off: cutting at 2 x sample_rate makes sure the pass-band is not affected at all, at the cost of more aliasing, while 0.5 x sample_rate seriously tackles the aliasing above Nyquist but also attenuates highs in the pass-band.