2016-08-07

The history of music and sound synthesis languages can be traced back to the Music N languages starting in the 1950s. You can trace the threads from there to a variety of later languages, including Csound, ChucK and SuperCollider. Csound could be seen as the last of the "traditional" Music N languages, which focus mainly on sound synthesis, whereas ChucK and SuperCollider add flexible tools for composition as well. (There are also graphical point-and-click languages such as Max/MSP and Pure Data that descend from the Music N paradigm, but I'm interested only in text-based languages for the sake of this question.)

SuperCollider was always my tool of choice. It provides a great variety of unit generators, or opcodes (low-level signal processing modules), and many ways to patch them together on the fly, trigger events algorithmically, and respond interactively to external signals.

However, SuperCollider is now kind of old technology. The first version came out in 1996 and version 3 was open-sourced in 2002. While new features have been added since then, the core of the language and synthesis system is unchanged and remains optimised for an early-2000s machine. In particular, it is firmly rooted in a single-processor paradigm: support for multiple CPU cores has since been added, but it can't take advantage of the parallelism provided by modern GPUs. There are also some features of its architecture that would probably be rethought if it were being redesigned now. (An example is the need to run the synthesis server as a separate application from the language itself, which makes sample-accurate timing very difficult to achieve, among other things.)
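To make concrete the kind of GPU parallelism I have in mind, here is a rough CUDA sketch that renders one block of samples for many sine voices at once, one thread per voice. This is purely illustrative; it reflects nothing about how SuperCollider's server actually works, and all the names are invented.

```cuda
// Hypothetical sketch: render one audio block with one GPU thread per voice.
// Not based on any existing system; it only illustrates the kind of per-voice
// parallelism a GPU-based synthesis server could exploit.
#include <cuda_runtime.h>
#include <math.h>

#define BLOCK_SIZE 64            // samples per control block
#define SAMPLE_RATE 48000.0f

// Each thread synthesises BLOCK_SIZE samples of a single sine voice,
// then adds them into a shared mix buffer.
__global__ void renderVoices(const float* freqs,   // per-voice frequency (Hz)
                             const float* amps,    // per-voice amplitude
                             float* phases,        // per-voice phase, carried across blocks
                             float* mixBuffer,     // BLOCK_SIZE output samples
                             int numVoices)
{
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= numVoices) return;

    float phase = phases[v];
    float inc   = 2.0f * 3.14159265f * freqs[v] / SAMPLE_RATE;

    for (int i = 0; i < BLOCK_SIZE; ++i) {
        float s = amps[v] * sinf(phase);
        atomicAdd(&mixBuffer[i], s);   // naive mixdown; a real engine would reduce more cleverly
        phase += inc;
    }
    phases[v] = fmodf(phase, 2.0f * 3.14159265f);
}
```

Whether the cost of shuttling audio to and from the GPU can be hidden at realistic block sizes is exactly the kind of question I'd hope a newer system has explored.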

So I'm wondering whether there are any successors to SuperCollider and its cousins from that era, either in existence already or on the horizon, that go beyond what can be achieved with the tools listed above. The possibilities for GPU parallelism seem immense, and there have also been advances in programming language design since 2002 that could result in an even more awesome and flexible tool. In particular, virtual machines can now be almost as efficient as bare C code, which means that DSP code could be just-in-time compiled, removing the limitation of sticking to a pre-programmed set of opcodes.
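To illustrate what removing that limitation could mean: instead of chaining precompiled opcodes through intermediate buffers, a system with a JIT back end could take the user's patch (say, a sine oscillator into a one-pole lowpass into a gain) and emit a single fused inner loop for it at run time. Here is a toy, hand-written version of the kind of loop such a system might generate; the names and structure are invented purely for illustration.

```cuda
// Toy sketch of "generated" code: a whole patch (SinOsc -> one-pole lowpass -> gain)
// fused into one inner loop, the way a JIT back end could emit it at run time,
// rather than calling three separate precompiled opcodes and passing buffers between them.
// All names are invented for illustration.
#include <math.h>

struct PatchState {
    float phase;   // oscillator phase
    float z1;      // one-pole filter memory
};

extern "C" void patch_process(PatchState* st, float* out, int n,
                              float freq, float cutoffCoef, float gain,
                              float sampleRate)
{
    float phase = st->phase;
    float z1    = st->z1;
    const float inc = 2.0f * 3.14159265f * freq / sampleRate;

    for (int i = 0; i < n; ++i) {
        float osc = sinf(phase);          // oscillator stage
        z1 += cutoffCoef * (osc - z1);    // one-pole lowpass stage
        out[i] = gain * z1;               // output gain stage
        phase += inc;
    }
    st->phase = fmodf(phase, 2.0f * 3.14159265f);
    st->z1    = z1;
}
```

The point is that the shape of this loop depends entirely on the patch the user wrote, so it can't come from a fixed opcode library; it has to be generated, and modern JIT infrastructure makes generating and compiling it at run time realistic.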

Is anyone aware of any development or research in this direction? I'm starting to get back into music composition and DSP programming after a long period of being busy with other things, and it would be really awesome to have a new and exciting tool to learn, with features that go beyond what I've used before. As mentioned above, I'm talking about text-based languages for DSP programming and algorithmic composition, rather than visual patch-based systems.

To summarise, my main interest is in finding out whether there are projects that focus on cutting-edge synthesis techniques, using new technology that wasn't available in the early 2000s. (But answers listing other types of package are useful too.)
