What's the future of synthesis?

Hi guys, just wanted to share this with you:

Stanford University is currently working on a new FMHD synthesis protocol with improved sound generation properties, plus a real-time property-modelling editor that lets the user encapsulate Karplus-Strong and other physical-modelling techniques with relative ease.
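
For anyone who hasn’t met Karplus-Strong: it’s just a noise burst recirculating through a delay line with a gentle lowpass in the feedback path. Here’s a minimal Python sketch of the classic algorithm (my own toy illustration, nothing to do with the Stanford work mentioned above):

```python
import numpy as np

def karplus_strong(freq, duration, sample_rate=44100, damping=0.996):
    """Minimal Karplus-Strong plucked string: a noise burst recirculating
    through a delay line, averaged (lowpassed) a little on every pass."""
    period = int(sample_rate / freq)           # delay-line length sets the pitch
    buf = np.random.uniform(-1, 1, period)     # initial excitation: white noise
    out = np.empty(int(sample_rate * duration))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # feedback: slightly damped average of the two oldest samples
        buf[i % period] = damping * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

# e.g. a one-second 110 Hz pluck: pluck = karplus_strong(110.0, 1.0)
```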

That said, what’s the future of synthesis? FMHD? Granular? Better virtual analog modelling? …??

Are there sonic possibilities not yet explored by the technologies that are accessible nowadays?

What do you think?

Personally, my future of synthesis has an analog filter, which I suspect will be designed by some French guy…
As for the oscillators, I don’t really care as long as they produce something musically useful (at least to me).

It’s all about the interface. I’ve managed to crank out more sounds in seconds with the XT thing than I could have without it.

Not that the existing interface is bad. I guess once you know that more knobs and sliders are possible, it spoils your perception.

Analog versus digital doesn’t really matter to me. The problem with most commercial hardware boxes is that they’re trying to release something at minimal cost. If their output DAC and code were rendering at 24-bit/96 kHz or so, then perhaps aliasing wouldn’t be so bad.
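
To make the aliasing point concrete, here’s a rough Python sketch (not how any particular box actually does it): render a naive sawtooth at 8x the target rate, then lowpass-filter and decimate back down. Most of the folded-back harmonics that plague the version rendered directly at 44.1 kHz get filtered out before the downsampling.

```python
import numpy as np
from scipy.signal import decimate

def naive_saw(freq, num_samples, sample_rate):
    """Naive sawtooth: every harmonic above sample_rate/2 folds back as aliasing."""
    t = np.arange(num_samples) / sample_rate
    return 2.0 * ((t * freq) % 1.0) - 1.0

sr = 44100
dirty = naive_saw(1871.0, sr, sr)        # rendered directly at 44.1 kHz
hi = naive_saw(1871.0, sr * 8, sr * 8)   # rendered at 8x (352.8 kHz)
clean = decimate(hi, 8)                  # lowpass, then downsample back to 44.1 kHz
# An FFT of `dirty` vs `clean` shows the inharmonic alias partials mostly gone.
```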

I’m assuming we’re not talking about the future of Shruthi synthesis but the future of synthesis. I agree with 6581punk that it’s about the interface. I don’t really care about analog or digital, I only care about getting sounds that are useful. I have a Kurzweil K2600XS, built in 1999, that always amazes me. Considering its age, this synth is easily more capable than anything I have tried since. I prefer it over the Korg Oasys, Yamaha Motifs or anything else I’ve tried lately. But, the VAST synthesis architecture is tricky to learn.

What I would like in an interface is increasing levels of complexity. So, level 1 is “idiot-proof”, where I could specify some aspect of the sound and the interface would adjust multiple parameters. Then we could move into more complex models and eventually end up at something like the Nord G2 interface, where there is a plethora of virtual modules waiting to be inter-connected.

The other idea I had considered was to use a bunch of iPods, each running a single module. So, instead of buying various modules for a modular synth, I’d end up with 20 iPods (or something like that, maybe something less expensive), each running a module app and each connected to the others somehow by virtual cables.

Just some random ideas really. I think that interfaces should be software, it’s less expensive and inherently more agile than hardware. Sound generation could be anything, I don’t really care.

Randy

I agree about the interface as well. I think it shouldn’t get in the way of manipulating the sound generation, so maybe there’s a limit to how complex the sound generation can be while still being easy to control.
A customizable interface would probably be a good thing, but if you like hardware it could get tricky. Not enough knobs, etc. :slight_smile:

One cool project you could check out is Din.
It uses Bézier curves to define the waveforms. Certainly something different.
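
I don’t know Din’s internals, but the general idea is easy to sketch. Here’s a toy Python version that turns four control-point heights into a single wavetable cycle (Din proper lets you drag the control points around in real time; for brevity I use the curve parameter directly as the time axis, which a real implementation wouldn’t):

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier at parameter t (scalar or array) in [0, 1]."""
    u = 1.0 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

def bezier_cycle(control_y, n=2048):
    """One wavetable cycle defined by four control-point heights."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    return cubic_bezier(*control_y, t)

table = bezier_cycle((0.0, 1.5, -1.5, 0.0))  # a lopsided, vaguely saw-ish cycle
```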

I have so many devices where so much potential is unrealised due to poor interfaces. Software editors can help a little, but even a mouse and GUI is a limited interface; it isn’t tactile. I like to feel controls in my hand.

It’s why I find soft synths less than gratifying.

I only partly agree with the interface problem. The MicroWave, for example, is a pain in the ass to program, yet so rewarding (to me) that I don’t care about the interface. In the end, it’s all about the sound. What’s a good interface worth if the sound the synth produces is shitty?

On the other hand, I must admit that FM, for instance, would get a kick in the ass if only there were a new UI for it.

But as long as the big ones try to emulate, with zillions of transistors and many, many layers of software, things that could easily be done with a few op-amps (presumably because the knowledge has been lost there?), I don’t expect much. In fact, looking at today’s range of synths from the “big ones”, there is nothing I would like to buy (OK, I need more Shruthis…), neither for the sound nor for the interface. I don’t see much future coming from there…

It depends on the synth. When I talk about unrealised potential, I’m talking Korg Wavestation. Its wave sequencing is tedious to program, and I have the SR as well, which has fewer controls on it than an old FM synth.

Back in the day I had the time to program sounds with such minimal interfaces and I had limited funds to get anything better (although minimal interface rack mounts were all the rage).

Controls are not just about editing either, they’re performance aids.

+1 for the beautiful yet, UI-wise, totally fucked-up Wavestation… I use it as my master keyboard because I got so used to its keyboard…

Then you have a great UI attached to an average sound engine: the JD-800.

Subtractive synthesis is pretty easy to create a UI for, but FM less so. Still, the PreenFM is pretty good for something so small. The demos don’t really do it justice.

  • Another +1 for the Wavestation, I really miss it. A resonant filter would’ve made it a totally killer synth. The Wavestation is one of the reasons I was so dissatisfied with the Yamaha S90. How could such a much newer synth have such a terrible UI compared to the older Wavestation?

I also agree that there isn’t much out there from the big guys that’s particularly thrilling. I’m really anxious to see what Kurzweil releases one of these days.

My ideal thing, I think, would be an 8-voice Shruthi: eight separate Shruthi engines controlled via USB from a software panel. I may spend some time sampling the Shruthi with the Kurzweil so I can play it polyphonically. Of course, it won’t be the same.

Here are my 2012-2030 predictions. Some of these might happen in 5 years, others only by 2030.

  • New Vintage. Fundamental theorem: people want their synths to make the sounds that were on the radio the year they got laid for the first time. This causes quite some inertia. Corollary: there’ll be something totally “vintage” and desirable about cheesy VAs.
  • Hyper-resolution. It took time for VAs to sound decent because anti-aliased waveform synthesis is tricky, and because discretizing analog filters is also tricky. We’re smarter at that now, but I feel it’s the wrong battle and brute force will work better. I predict that increases in CPU power, the use of new computing architectures (GPU-like, without the latency problems of GPUs), or the use of dedicated hardware (FPGAs) will cause designers to build synth engines working at very high sample rates (say 10 MHz), or on stretchable sample grids (like Spice does), to get rid of aliasing problems. I predict at least one innovation in this domain will come from Finite Element Methods. This will finally bring all the weird modular/analog stuff (audio-rate modulation, feedback…) to VAs. A lot of things will sound better because of that.
  • Component-level emulations. So far the process of programming VAs is very empirical: one pinch of sampling, one pinch of theoretical analysis of schematics to derive transfer functions, one pinch of measurements for response tables. I think this is going to be made obsolete by systems with enough computing power to run transistor-level simulations of the original circuits. This is insane brute force, but it’ll happen (a toy circuit-solver sketch follows this list).
  • Parameter estimation techniques from audio signals. We’ll be able to record a sound and automatically get a synth preset that approximates it. The same way we have envelope followers, we’ll have “followers” for every musically relevant parameter, converting our samples into a bunch of automation curves for driving synths (a minimal envelope follower is sketched after this list). The dream of the Fairlight & Synclavier, analysis & resynthesis, will happen, but with much better representations than sums of sine waves. Virtual Minimoogs will be the new sine waves, and this will blur the boundary between sampling and synthesis. People will sample a cello loop and alter the notes and the phrasing, without even knowing if this is achieved by messing with the original signal à la Melodyne, or by having recreated the sample with a synth model and just tweaking the parameters.
  • A synthesis-technique-independent parameter space. You were all talking about UI, but you were barking up the wrong tree. No matter how many knobs and touch panels you put on some synths, they’ll still suck, because their parameter set is too vast and does not relate well to concepts understandable by a human. A Shruthi with 2 pots would be easier to edit than a DX7 with 20. There’ll be some effort to develop a kind of general-purpose parameter space for sounds (with 30…40 dimensions) with the following properties: 1/ any combination of settings yields a musically interesting sound (so “random presets” are never static or chipmunks squeezed in a fax machine). 2/ a non-musically-trained person can roughly explain with everyday words what each parameter does. 3/ there exists a methodology to automatically learn mappings between the parameter space of a synth and this parameter space. This might become the biggest standard in music after MIDI, and will allow portability of synth sounds from one synthesis engine to another.
  • Physics in modulation sources. We’ll be able to use physical processes (a ball bouncing on the floor, the impact times of a stack of 1000 boxes falling on the floor, the velocity of an object thrown from a cliff) as modulation sources. The most common physical processes in music instruments (oscillation… exponential damping) have corresponding modules in synthesizers (VCO… envelope generator), inherited from the days of analog computers. We’ll move one step beyond this (see the bouncing-ball sketch after this list). One signature sound (and maybe one music genre derived from it) will come out of the use of a physical simulation as a mod source in a synthesis process.
  • In 2030 we’ll have 10 PB storage devices, which means that all music ever recorded by mankind will fit on them, so we’ll carry around our own copies of the celestial jukebox. We’ll “retrieve” music instead of “sampling” it. This might lead to new approaches to synthesis based on indexing. Say you program a drum pattern on a drum machine with the celestial jukebox loaded into it. The drum machine searches it, notices that this is a groove from a rare 70s Thai funk track, and suggests you play a sample from that track instead. Or at least, that you sample the groove template and apply it to your pattern.
  • Physical synthesis itself will be big, but not for musical applications. The biggest drive will come from video games and Hollywood. If we want a video of a spaceship that blows into pieces, we build a 3D model of it and run a physics simulation of the explosion, rather than building a cardboard model and blowing it up. If we want the sound of the Titanic hitting the iceberg, we’ll also get it through simulation, rather than by getting a guy to hit a saucepan with some ski boots into Kyma. Similarly, the SFX of video games will be computed straight from the 3D models and the physics engine. For movies, we’ll do the CGI and the sound design with the same toolchain; the same way there are 3D artists and texture artists, there’ll be a new kind of guy on the team setting up mechanical properties for physical simulation and audio generation. We’ll have a new name for those pieces of software simulating reality on both the audio and visual levels.
  • Analog synthesis will still be around as a niche. All new products will be fully discrete, because the LM13700 will no longer be manufactured, and THAT/CoolAudio and the like will be out of business. Think of class-D amps, switching supplies, software radios… Many things done today with analog functions will be done in the digital domain.
  • One widely popular, culturally significant instrument (the kind of instrument for which there’ll be a classical repertoire) will originate from a smartphone app.
  • Vocal synthesis will still be stuck in the uncanny valley, but led by the generation who grew up with Lady Gaga and Auto-Tune, our acceptance of the uncanny will be greater than ever.
  • Audio source separation still won’t be solved, and we’ll look at our efforts in this domain with the same “what the hell were we thinking?” air with which we look at 70s MIT-style AI trying to solve language processing.
  • MIDI will still be around.
  • One of the basic assumptions made to derive these predictions, such as the availability of energy allowing continuous technological advances and the free use of electricity for musical purposes, a state of peace and prosperity in the world allowing the pursuit of such futile matters, the existence of human civilization on this planet… and others I can’t name… won’t be satisfied, making all these predictions irrelevant.
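
On the component-level emulation point: the heart of such simulators is solving the circuit’s nonlinear equations at every sample. Here’s a toy Python illustration of the principle, using a simple diode clipper instead of a full transistor netlist: Newton iteration on the node equation (v_in − v_out)/R = 2·Is·sinh(v_out/Vt). The component values are arbitrary.

```python
import numpy as np

R, IS, VT = 2200.0, 1e-15, 0.026  # series resistor, diode saturation current, thermal voltage

def diode_clipper(v_in, iterations=20):
    """Per-sample Newton solve of (x - v)/R = 2*IS*sinh(v/VT) for the output v.
    Warm-starting from the previous sample keeps the iteration well-behaved
    for signals that start near zero."""
    out = np.empty_like(v_in)
    v = 0.0
    for n, x in enumerate(v_in):
        for _ in range(iterations):
            f = (x - v) / R - 2.0 * IS * np.sinh(v / VT)
            df = -1.0 / R - (2.0 * IS / VT) * np.cosh(v / VT)
            v -= f / df
        out[n] = v
    return out

# e.g. clipping a loud 100 Hz sine:
# y = diode_clipper(3.0 * np.sin(2 * np.pi * 100 * np.arange(44100) / 44100))
```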
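
And since envelope followers are the seed of the parameter-estimation idea above, here’s the classic one-pole attack/release follower in Python, the simplest possible “follower”, extracting just loudness as an automation curve:

```python
import numpy as np

def envelope_follower(signal, sample_rate=44100, attack_ms=5.0, release_ms=100.0):
    """Track a signal's amplitude: fast one-pole smoothing when the level
    rises, slow when it falls. The output is a loudness automation curve."""
    a_coef = np.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    r_coef = np.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    env = np.zeros(len(signal))
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coef = a_coef if x > level else r_coef
        level = coef * level + (1.0 - coef) * x
        env[i] = level
    return env
```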
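
The bouncing-ball modulation source is also easy to prototype today. A throwaway Python sketch: simulate a ball dropped under gravity, losing energy at each bounce, and return its height as a control signal (the names and the cutoff mapping in the comment are mine, purely illustrative):

```python
import numpy as np

def bouncing_ball_mod(duration=4.0, control_rate=1000, h0=1.0,
                      g=9.81, restitution=0.8):
    """Height of a ball dropped from h0, losing energy at every bounce,
    returned as a 0..1 control signal sampled at control_rate Hz."""
    dt = 1.0 / control_rate
    h, v = h0, 0.0
    out = np.empty(int(duration * control_rate))
    for i in range(len(out)):
        v -= g * dt                 # gravity accelerates the ball downwards
        h += v * dt                 # integrate position
        if h < 0.0:                 # floor hit: reflect velocity, lose energy
            h = 0.0
            v = -v * restitution
        out[i] = h
    return out / h0

# e.g. map it to a filter: cutoff_hz = 200.0 + 5000.0 * bouncing_ball_mod()
```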

I think pichenettes may have ended this thread :slight_smile: I came here to explain all my future predictions, but all of the stuff I thought about is just a subset of Olivier’s ideas anyway. Specifically, I’m most expecting to see/hear “Hyper-resolution” (although I think this would see more popularity in combination with granular synthesis techniques than with VAs), “Parameter estimation techniques from audio signals”, “Physics in modulation sources”, and “Physical synthesis itself will be big…”

And yeah, MIDI will still be around, although I imagine some kind of MIDI 2.0 might be commonplace, carrying the additional information utilised by more powerful techniques in synths, sequencers and other multimedia systems. It’ll be totally backwards-compatible with current MIDI standards, though.

The more realistic the technology gets, the more people seem to yearn for something more artificial-sounding. This is at cross purposes with technology. I don’t think it’s going away anytime soon.

I think the user interface is huge, maybe as much as 50% of the whole equation. I think it’s one of the things that makes the Minimoog enduring. It’s just so intuitive and human. Deceptively simple, and difficult to recreate with new instruments. I’m looking forward to getting my XT/Programmer working!

One of the big points of an analog thing’s UI (say a Minimoog or an ARP, or whatever vintage instrument or machine you can think of) is the very physical relationship between your fingers and the sound. It’s a purely tactile experience. I’ve often felt a strong difference between twiddling a knob that directly, physically modifies the parameter that I want, and a knob that modifies some software-driven parameter (and the Shruthi is unfortunately part of the second category). And please, don’t even mention touch screens or a mouse! I don’t know if it’s a matter of precision, lag or feedback (the physical resistance is part of it), but there’s really a difference there that gives you a different feeling about the instrument. Playing a guitar, a real piano, a Rhodes or a violin, or singing, is a purely physical experience. Playing a Minimoog is too. Playing a software-driven synth is generally different because of the not-so-tactile feel (due to quantization, lag or feedback, I don’t know).

I also remember a video of one famous producer explaining that the things he hates about Pro Tools are its lack of tactile feel compared to his big analog console, and the fact that you actually need to read the track names and the parameter names to modify them: reading makes a big difference, since it involves another region of your brain than the one used when listening to music, and to sound in general. That totally makes sense, in my opinion. And it’s far more valuable than the “analog sounds better” cliché, especially when talking about mixing.

So, in my opinion, here lies the UI question:

  • you need something that is simple enough that you don’t have to think about where you’re putting your fingers, or whatever you use to tweak the sound. You only want to do it absolutely instinctively, and you really don’t want to read (or count) anything to achieve your goal.
  • you need something that makes the relationship between the software / hardware / whatever you use and your ears totally unconscious, or subconscious.

So, I believe that ergonomics really should play an important part in the future of synthesis, and of music in general. That’s definitely one of the reasons why analog synths are still so popular. The thing is not about re-creating those classic sounds, or re-creating existing sounds. The Minimoog sound was revolutionary at the time, and still became an instant classic. The thing is about the way you use your body to generate new shades of sound.

Well, the UI is one aspect I thought was important to mention :wink:

But imagine if you could play a sound to a synth and it would mimic its tonal qualities. Not sample it, but produce a tone as similar as possible. Now that would be a good interface to a synth :slight_smile:
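
That’s basically the parameter-estimation idea from pichenettes’ list. A very crude first step in Python, guessing just two numbers from a recording (everything here, including the patch parameter names, is hypothetical; a real system would estimate far more than pitch and brightness):

```python
import numpy as np

def match_tone(recording, sample_rate=44100):
    """Crude 'tone matching': guess pitch and brightness from a recording
    and map them to settings for a hypothetical saw + lowpass patch."""
    windowed = recording * np.hanning(len(recording))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(recording), 1.0 / sample_rate)
    pitch = freqs[np.argmax(spectrum[1:]) + 1]  # strongest partial: a naive pitch guess
    brightness = np.sum(freqs * spectrum) / np.sum(spectrum)  # spectral centroid
    return {"osc_freq": pitch, "filter_cutoff": brightness}
```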

Well, my point is that trying to recreate the exact sound of something real only has certain applications and, frankly, only goes so far. People spend their whole careers recording orchestral instruments, brass, cymbals, etc. to get every freaking nuance out of them. So you have a multi-sampled instrument using umpteen different mic’ed sound sources, all MIDI-mapped, and… so what? You sound a little like a cymbal or a piano. Meh. When I need this stuff I want it, and I have software that provides it. It just doesn’t get me out of bed in the morning. If this is the future of synthesis, to make better synthetic versions of real acoustic instruments, well, that’s boring. To me.

What has always excited me and lots of other people is creating sounds that aren’t natural. They are a function of their electronics and their design. They don’t happen anywhere else. Whether digital or analog, that is still the thing that makes me happy and makes me want to spend money. New noisemakers making new noises. That’s what it’s all about.

How well you do that and how well the UI is executed is the difference between cool and classic. There are lots of the former and only a few of the latter.

Software that will do an accurate and quick analysis of a recording of a symphony, or of birds in a forest, and recreate it, with you making selections to vary individual instruments or groups of instruments. Real real-time morphing under $500 with a physically playable interface. Real-time morphing of scale, tuning and timbre. Better integration of video with audio. Real choirs made synthetically.

@MicMicMan: Great post! Totally agree. I’m in the middle of building a modular synth, and a massive part of the appeal for me is the tactile interface: knowing that if I turn that knob there, it will directly influence the sound in such-and-such a way, because that is its only purpose in life. I’m not a luddite; computers are great and I’m a child of the computer age, but there’s just something that doesn’t fit with using a computer to make sound, for me. Using it to mix and record sounds is a different thing.