Dead Man's Catch modified firmware for Peaks


> […] by any random-looking deterministic function of a CV.

… like the random wave in Peaks and Batumi :slight_smile: they both rely on the simple PRNG in stmlib which is entirely deterministic and uses one word of state (the seed). So if you want to loop a portion of terrain, all you have to do is save the seed at the beginning of the portion, and restore it at the end of the portion!
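A minimal sketch of the seed save/restore trick described above. The multiplier and increment are the ones used in stmlib; the `Lcg` and `FillLoop` names are purely illustrative, not from any actual firmware:

```cpp
#include <cstdint>

// Stand-in for stmlib's Random: a 32-bit LCG whose entire
// state is one word (the "seed").
struct Lcg {
  uint32_t state;
  void Seed(uint32_t s) { state = s; }
  uint32_t GetWord() {
    state = state * 1664525uL + 1013904223uL;
    return state;
  }
};

// Loop an n-step portion of "terrain": save the seed at the start
// of the portion, restore it at the end, and the exact same values
// come out again on every pass.
void FillLoop(Lcg& rng, uint32_t* out, int n, int repeats) {
  uint32_t loop_start = rng.state;  // save seed at the loop point
  for (int r = 0; r < repeats; ++r) {
    rng.Seed(loop_start);           // restore: the segment repeats
    for (int i = 0; i < n; ++i) {
      out[r * n + i] = rng.GetWord();
    }
  }
}
```

Because the generator's full state is a single word, "rewinding" the terrain costs nothing more than one assignment.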


But can you make it go backwards?


We’d have to look carefully at your PRNG. Usually the very purpose of these functions is to be difficult to invert (difficult in the cryptographic, one-way sense), but since you’re using a simple one that just “looks” random it might be easy.


The whole point of Perlin Noise is that it ISN’T random.

It has the appearance of randomness, but is entirely repeatable: the same input values always give the same output.

Interpolated random values aren’t Perlin Noise.



I thought the essence of Perlin Noise is that it’s fractal - it results in natural-looking textures in 2D because textures of natural materials have this fractal property. So I’d argue that a sum of interpolated random values giving a scale-invariant result does qualify as Perlin Noise. Whether it’s repeatable is an implementation detail, and as mqtthiqs and I have explained, repeatability is something that can be added to any random generation algorithm (Perlin or not) by suitable hacks of the PRNG.
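A sketch of this octave-summing idea, using an integer hash in place of a real PRNG so that the same coordinate always yields the same value. All names and constants here are illustrative, not from any particular codebase:

```cpp
#include <cstdint>
#include <cmath>

// Deterministic "random" value for an integer coordinate: an integer
// hash, so the same x always yields the same value in [-1, 1).
static float HashNoise(int32_t x) {
  uint32_t h = static_cast<uint32_t>(x) * 2654435761u;
  h ^= h >> 16;
  h *= 2246822519u;
  h ^= h >> 13;
  return static_cast<int32_t>(h) / 2147483648.0f;
}

// One octave: linearly interpolated random lattice values.
static float SmoothNoise(float x) {
  int32_t i = static_cast<int32_t>(std::floor(x));
  float f = x - i;
  return HashNoise(i) * (1.0f - f) + HashNoise(i + 1) * f;
}

// Fractal sum: each octave doubles the frequency and halves the
// amplitude, giving the scale-invariant structure discussed above.
float FractalNoise(float x, int octaves) {
  float sum = 0.0f, amp = 0.5f, freq = 1.0f;
  for (int o = 0; o < octaves; ++o) {
    sum += amp * SmoothNoise(x * freq);
    freq *= 2.0f;
    amp *= 0.5f;
  }
  return sum;
}
```

Because everything is derived from the coordinate by a hash, the function is fully repeatable: evaluate it twice at the same x and you get the same value.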


@toneburst, do you have literature references for what you mean by Perlin noise? All I can find is what pichenettes and I are describing. For instance, would you say that the description in the link posted by Olivier is not Perlin noise?


I found this helpful - it makes clear how Perlin noise is spatially correlated. My suspicion is that simple stacked smooth random values generated at different frequencies, as Olivier describes, are indistinguishable from true Perlin noise in 1D - in other words, from a Perlin line (and the Sonic Potions module seems to be limited to 1D fractal noise?) - when used for modulation or audio signal purposes.

An open question is how one might use a Perlin map or terrain musically, or sonically - that isn’t clear. You can readily calculate (on the fly) a 2D map of Perlin values, but if you then read values from that map along a vector (or rather, calculate values along a vector) or a curve, the resulting series of values looks a lot like 1D Perlin noise (I tried it using R). Reading two or more values along parallel or related vectors or curves seems more promising - but that presumes two or more channels of output.


Haven’t had a chance to follow the link (am out and about).

Perlin Noise was developed to allow the creation of repeatable, apparently random textures and landscapes. It’s used extensively in computer games, where it’s vital that the same landscape can be recreated, without the need for gigantic meshes.



I think the real question is whether Perlin noise sounds any good, or interesting - either at audio rates, or in the time domain at modulation rates. Plenty of waveforms that look good end up sounding terrible, and vice versa. Perlin noise was developed to look good, or interesting, in a natural-feature sort of way. That doesn’t mean it will sound good, or even be distinguishable by the ear from other smoothed random or correlated random waveforms. Probably best used for modulation, I suspect. Some experimentation is needed.


@BennelongBicyclist I don’t think it would be that interesting at audio rate. I think of it more as a controllable, repeatable source of interesting modulation curves or musical patterns.



I don’t see any point traversing a Perlin Noise field in anything other than a straight line along either the X, Y or (possibly) Z axes. This would just introduce extra calculation (of the vector/curve) without having any appreciable effect on the output.

Incidentally, Perlin Noise only ever produces a single value, based on 1 or more coordinate values (for 1, 2, 3-dimensional forms).



@toneburst:

> Incidentally, Perlin Noise only ever produces a single value, based on 1 or more coordinate values (for 1, 2, 3-dimensional forms).

Yes, that was my point, or the source of my open question: a vector traversing a 2D or 3D Perlin map or volume produces a series of single values. Is the resulting series of values different from that produced by a 1D Perlin line? Mathematically, if the vector across the map or through the space is at an angle or curved, then no, they aren’t the same, but in practice they are indistinguishable to human eyes and ears.

Hence my question: how to use higher-dimensional Perlin noise (or any form of spatially correlated noise) in the modular synth context? The only thing I’ve come up with is two or more vectors traversing the same Perlin map, so you get two or more series of values which are not just auto-correlated but also serially cross-correlated with each other. The degree of cross-correlation depends on the gap between the vectors, and will itself vary as they traverse the map (at some times, the parallel vectors will both be climbing the same mountain, but at other times one will be climbing a mountain while the other next to it will be descending the adjacent valley).

Note that when you “read” a vector on the map, you are converting spatial correlation to auto-correlation of a time series.

OK, I’ve convinced myself that that is worth trying: create semi-random noise that is both auto-correlated as well as serially cross-correlated with its fellow series read from the same map. When I say “read from the map”, in fact, because it is deterministic, we can just calculate the values at any map co-ordinate on-the-fly. The distance between the vectors and the angle between them determine the degree of cross-correlation. Oh, and the Perlin maps need to wrap - so they are global maps - so that the vectors can go round and round. There are ways of doing that. Although a flat-Earth Perlin terrain that extends off into infinity in both directions is also possible (or at least until the phase accumulator overflows back to zero).
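A sketch of the parallel-vectors idea, using bilinearly interpolated value noise as a stand-in for a true Perlin map (gradients omitted for brevity; all names and constants are illustrative). Two horizontal lines a small distance apart produce closely tracking series, because within one lattice cell the value can only change gradually along Y:

```cpp
#include <cstdint>
#include <cmath>

// Hash an integer lattice point (x, y) to a deterministic
// value in [-1, 1).
static float Hash2(int32_t x, int32_t y) {
  uint32_t h = static_cast<uint32_t>(x) * 374761393u +
               static_cast<uint32_t>(y) * 668265263u;
  h = (h ^ (h >> 13)) * 1274126177u;
  return static_cast<int32_t>(h ^ (h >> 16)) / 2147483648.0f;
}

// Bilinearly interpolated 2D value noise (a simplified "map").
float Noise2(float x, float y) {
  int32_t xi = static_cast<int32_t>(std::floor(x));
  int32_t yi = static_cast<int32_t>(std::floor(y));
  float fx = x - xi, fy = y - yi;
  float bottom = Hash2(xi, yi) * (1 - fx) + Hash2(xi + 1, yi) * fx;
  float top = Hash2(xi, yi + 1) * (1 - fx) + Hash2(xi + 1, yi + 1) * fx;
  return bottom * (1 - fy) + top * fy;
}

// Read two parallel horizontal "vectors" across the map, dy apart.
// Nearby lines give closely cross-correlated series; distant lines
// give (effectively) independent ones.
void ReadParallel(float y0, float dy, int n, float* a, float* b) {
  for (int i = 0; i < n; ++i) {
    float x = i * 0.25f;  // step size along the vector
    a[i] = Noise2(x, y0);
    b[i] = Noise2(x, y0 + dy);
  }
}
```

Everything is computed on the fly from the coordinates, so no map is ever stored, and wrapping the coordinates (modulo some period) would make the map "global" as described.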


You can have any number of vectors scanning across the noise field in different directions.

I’ve attached an example I knocked up in GIMP.

As you can see, there’s no real advantage to moving across the field at an angle, or in a curving path. You might get some slight skewing caused by the underlying grid, but I don’t think that would be especially interesting, musically.

If each line scans across the (2D, in this case) noise-field at the same rate the blue line will create a relatively complex curve, whereas the green line will output a smoother curve.

To create a looping output, you’d just substitute the value at the beginning of the vector for the value at the end.



You’d definitely evaluate the noise value in realtime.

You could just let the coordinates keep on incrementing indefinitely. On the other hand, I’m not sure there’s much mileage in very long, but repeatable CV curves, in a musical context. Short, looped traversals over the course of a few bars or less would be much more useful, I’d have thought.

One thing that might need to be worked around is discontinuities when changing the scale of the noise (or, to look at it another way, the distance travelled across the noise field).



Ok, I understand the source of confusion a little better @toneburst. Let me try to make the points from above again.

1/ Why generate a 2D map and then project it on one dimension? Generation of the 2D map requires computing and interpolating gradients and a whole lot of complications emerge as described in the article linked by BB… and then you’re ignoring almost all of this information, and focusing on only one dimension (the short color segment). On the other hand, the algorithm Olivier and I described is strictly equivalent (it produces exactly the same result), but is much simpler and computationally lightweight. I’d strongly recommend reading this article if you haven’t, which was also cited as a reference by the guy from Sonic Potions.

2/ About repeatability: it seems to be important to you to have loopable segments, and you don’t seem to be happy with “random” values. Look at stmlib/utils/random.h , which is how Olivier generates “random” values. It keeps a current integer state (the “seed”), and each time you ask for a new random value, the state is simply multiplied by and added to two carefully chosen integers:

static inline uint32_t GetWord() {
  rng_state_ = rng_state_ * 1664525L + 1013904223L;
  return rng_state_;
}
This operation is completely deterministic: from the same initial state, you’ll always get the same series of values each time you call it; the integers are just chosen so that the result “looks random” (it describes a series that is hard to predict and repeats only after a long period). So if you set the seed to a value of your choice every n calls to the function GetWord, you’ll repeatedly get the same sequence of n values. The function to set the seed is called Seed [*].

3/ I really don’t think you can call this Perlin noise (or fractal noise) if you’re not superposing several “octaves” of this pseudo-random noise. “Fractal” refers precisely to this ever-finer superposition of the same structure, scaled down.

[*] Careful: I just noticed a small bug in this function: it should take a uint32_t as argument, not a uint16_t as Olivier wrote. The loop probably won’t work as you expect until you fix this, because of integer truncation.
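As an aside on the earlier “can you make it go backwards?” question: an LCG like this one is exactly invertible, because multiplication by an odd constant is a bijection modulo 2^32. A minimal sketch (the constants are the ones quoted above; the function names are mine):

```cpp
#include <cstdint>

// The LCG step: state' = state * 1664525 + 1013904223 (mod 2^32).
uint32_t Next(uint32_t state) {
  return state * 1664525u + 1013904223u;
}

// Multiplicative inverse of an odd 32-bit constant, via Newton's
// method: each iteration doubles the number of correct low bits.
uint32_t ModInverse(uint32_t a) {
  uint32_t inv = a;  // already correct to 3 bits for any odd a
  for (int i = 0; i < 5; ++i) {
    inv *= 2u - a * inv;
  }
  return inv;
}

// Step the generator backwards: subtract the increment, then
// multiply by the inverse of the multiplier.
uint32_t Prev(uint32_t state) {
  return (state - 1013904223u) * ModInverse(1664525u);
}
```

So the "random" wave can be run in reverse for the cost of one extra multiply (and in practice the inverse constant would be precomputed).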


As I said, the reason to use a 2D Perlin noise map is if you want more than one channel of output, where each channel is both auto-correlated (at different scales, since it is fractal and is a mixture of octaves) and serially correlated (or not) with the other channels. Two parallel lines close together, read from such a map, will have a high cross-correlation; further apart, the cross-correlation will be lower. If one channel is a straight-line vector across the map, and the other is a sine along the axis of the first vector, then the cross-correlation will vary over time. There are lots of possible variations on that theme. And “read” is a metaphor - the values can simply be calculated for each point on the map as needed. Of course there are other ways of creating value series that are both auto-correlated at various scales and cross-correlated - they probably end up being mathematically similar to “reading” from vectors traversing a Perlin map.


@mqtthiqs the difference between a true 2D PN method and the one you describe is that moving the ‘scan-line’ (as above) a small distance along the opposite axis to the one being scanned across (in my example, the X axis) results in a kind of morph of the output pattern to a new set of values. The further you move from the initial position on the Y axis, the more the values diverge.

In the method you describe, a new seed value would result in a new completely unrelated set of output values.

The other advantage of the PN approach is that you can move freely across the noise field at any time. With your approach, you would have to recalculate the new current value iteratively if you wanted to jump forwards or backwards through the series, which would be potentially very expensive for large offsets. With the 2D Perlin approach, the computational cost is the same for every lookup.
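A sketch of this trade-off: a stateful LCG needs O(k) work to reach step k from a seed, while a stateless coordinate hash (the usual building block of Perlin-style noise) costs the same for any position, so you can jump around freely. Function names are illustrative:

```cpp
#include <cstdint>

// Stateful approach: to get the value at step k you must iterate
// the LCG k times from the seed -- O(k) per random access.
uint32_t LcgAt(uint32_t seed, uint32_t k) {
  uint32_t s = seed;
  for (uint32_t i = 0; i < k; ++i) {
    s = s * 1664525u + 1013904223u;
  }
  return s;
}

// Stateless approach: hash the coordinate directly, so any position
// costs one call -- O(1), forwards or backwards, near or far.
uint32_t HashAt(uint32_t k) {
  uint32_t h = k * 2654435761u;
  h ^= h >> 15;
  h *= 2246822519u;
  h ^= h >> 13;
  return h;
}
```

(As noted elsewhere in the thread, this particular LCG is also invertible, which softens the comparison somewhat: you can step it backwards one step at a time, but arbitrary jumps still cost O(k) without extra machinery.)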



On terminology: my (albeit limited) understanding is that the pattern-generation method I describe IS single-octave Perlin Noise.

Perlin Noise is a particular kind of deterministic Gradient Noise. What you describe is gradient noise, rather than Perlin noise. It’s the method of obtaining the pseudo-random values to be interpolated that determines if it’s one or the other.

Multi-octave Perlin Noise is sometimes called Fractal Noise, because it has the property of having a degree of self-similarity on multiple scales, since each octave is a scaled-up version of a similar pattern.



In a musical context, you can think of what I have in mind like this:

Imagine you ‘scan’ across the noise-field in time-quantised steps (let’s say on every 16th-note trigger pulse), driven by a phase-accumulator.

Every n pulses (say, 8), the position is reset to an initial coordinate, creating a looped sequence of values.

Pass the output through a note quantiser to a VCO.

Now, tweaking the X-offset control allows you to offset the generated notes. Offset by a small amount, and you will hear a part of the original pattern. A larger X-offset will produce an entirely new pattern.

Tweaking the Y-offset a small amount will morph the pattern (look at the attached image, and imagine moving each line of numbers down or up a small amount and you’ll see what I mean). This is one of the keys to the system producing musically useful results, and it depends on using 2D Perlin noise.
Larger offsets will obviously again produce an apparently unrelated set of new note values.

Varying the phase increment of the phase-accumulator driving the traversal of the noise-field will have the effect of scaling the pattern on the X-axis.
Low phase-increment values will result in a sequence of notes that cascade up and down in a smooth way.
Higher values will produce a ‘spikier’ pattern, with larger and less predictable intervals between notes.
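A sketch of the stepped, loopable scanner mechanics described above, using a plain integer-lattice hash in place of real 2D Perlin noise (all names are illustrative). Note that with a bare hash, a small Y-offset produces an unrelated pattern; the smooth "morphing" behaviour described above specifically needs interpolated 2D Perlin noise underneath:

```cpp
#include <cstdint>

// Deterministic note lookup on an integer 2D lattice.
static uint32_t Hash2(uint32_t x, uint32_t y) {
  uint32_t h = x * 374761393u + y * 668265263u;
  h = (h ^ (h >> 13)) * 1274126177u;
  return h ^ (h >> 16);
}

struct PatternScanner {
  uint32_t x_offset;     // shifts which part of the field is played
  uint32_t y_offset;     // selects a different "row" of the field
  uint32_t loop_length;  // reset the position every n pulses
  uint32_t step;         // current position along the scan-line

  // Called on each clock pulse (e.g. every 16th note): look up a
  // note from the field, then advance and wrap at loop_length so
  // the same sequence of values loops indefinitely.
  uint32_t Tick() {
    uint32_t note = Hash2(x_offset + step, y_offset) % 12u;  // semitone
    step = (step + 1u) % loop_length;
    return note;
  }
};
```

Because the field is a pure function of the coordinates, resetting the offsets restores the original pattern exactly, at whatever position the accumulator has reached, which is the resettability property described below.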

The second aspect of the 2D PN system, which distinguishes it from the interpolated seeded random sequence generator you have in mind, is that at any point you could reset the 3 controls to their original values, and the original sequence of notes will continue playing from the position it would have reached if you’d left the controls as they were.

You can think of it as a very large set of patterns, all playing at the same time and in sync, and the 3 controls allow you to select any pattern at any time.

The sequences of numbers represent perlin noise-field lookups driven by a counter incremented by a clock pulse.
The number in white represents the current step-index.
The value under each highlighted number would be output depending on the settings of the 3 controls.

I hope this makes sense.

I’m neither a programmer nor a mathematician, so it’s difficult for me to explain what I’m getting at, sometimes, but I have a very clear idea in my head of how this could work.



ok @toneburst, I think I got it this time! Indeed very different from what we had in mind (and from what I’ve read), but it seems musically interesting. I hope that the time-quantization you talk about will be optional though, so that we can use it as a smooth LFO by default. I have no idea, off the top of my head, how I would implement this without maintaining a map in memory.