2013-10-15

Sascha, are you a musician yourself or do you have some other sort of musical background? And how did you get started developing your very own audio DSP effects?

I started learning to play bass guitar in early 1988, when I was 16. Bass is still my main instrument. I also play a tiny bit of 6-string, but I’d say I suck at that.

The people I played with in a band in my youth were mostly close friends I grew up with, and most of us kept on making music together after we finished school a couple of years later. I still consider that period (the mid-nineties) my personal heyday, music-wise. It’s when you think you’re doing brilliant things but the world doesn’t take notice. Anyway. Although we all started out doing Metal, we eventually did Alternative and a bit of Brit-influenced Wave Rock back then.

That was also the time when more and more affordable electronic gear came out, so apart from the usual rock-band lineup, we also experimented with samplers, DATs, click tracks and PCs as recording devices. While that in fact made the ‘band’ context more complex – imagine loading a dozen disks into the E-MU at the start of every rehearsal until we equipped it with an MO drive – we soon found ourselves moving away from writing songs through jamming and towards actually “assembling” them with a mouse pointer. In hindsight, that was really challenging. Today, the DAW world and the whole process of creating music is so much simpler and more intuitive, I think.

My first “DAW” was a PC running at 233 MHz, and we used PowerTracks Pro and Micro Logic – a stripped-down version of Logic – although the latter never clicked with me. In 1996 or 97 – can’t remember – I purchased Cubase and must have ordered right within a grace period, as I soon got a letter from Steinberg saying they had now finished the long-awaited VST version and I could have it for free if I wanted. WTF? I had no idea what they were talking about. But Virtual Studio Technology, that sounded like I was being given the opportunity to upgrade myself to “professional”. How flattering, you clever marketing guys. Yes, gimme the damn thing, hehe.

When VST arrived, I was blown away. I had a TSR-8 reel machine, a DA-88 and a large Allen & Heath desk within reach, and had mainly used the computer as a MIDI sequencer. And now I could do it all inside that thing. Unbelievable. Well, the biggest challenge then was finding an affordable audio card, and I bought myself one that only had S/PDIF inputs and outputs, was developed by a German electronics magazine and sold in small quantities exclusively through a big retail store in Cologne. 500 Deutschmarks for 16 bits on an ISA card. Wow.

The first plugin I bought was Waves AudioTrack, sort of a channel strip, which was a cross-promotion offer from Steinberg back then, in 1997, I guess. I can still recall its serial number by heart.

Soon the plugin scene took off, and I collected everything I could, like the early mda stuff, NorthPole and other classics. As our regular band came to nothing, we gathered our gear and ran a small project studio where we recorded other bands and musicians, using the PC as the main recording device. I upgraded the audio hardware to an Echo Darla card, but one of my mates soon brought in a Layla rack unit, so we had plenty of physical ins and outs.

You really couldn’t foresee where the audio industry would go, at least I couldn’t. I got along fine with this “hybrid” setup for quite a long time and did lots of recording and editing back then, but I wasn’t thinking of programming audio software myself at all. I had done a few semesters of EE studies, but without really committing myself much.

Then the internet came along. In 1998, I made a clean break and started taking classes in computer science. After finishing in 2000, I moved far away from West Germany to Berlin and had my first “real” job at one of those “new economy” companies, doing web-based programming and SQL. That filled the fridge and was somehow fun to do, but it wasn’t really challenging. As my classes had included C, C++ and also Assembler, and I still had a copy of Microsoft’s Visual Studio, I signed up for the VST SDK one day. At first, I probably did pretty much the same thing as everybody: compile the “gain” and “delay” plugin examples and learn how it all fits together. VST was still at version 1 at that time, so there were no instruments yet, but I wasn’t much interested in those anyway, and I couldn’t really imagine writing myself a synthesizer.

What I was more interested in was how to manipulate audio so that it could sound like a compressor or a tube device. I was really keen on dynamics processing at that time, perhaps because I always had too few of those units. I had plenty available when I was working part-time as a live-sound engineer, but back in my home studio, a cheap Alesis, dbx or Behringer was all I could afford. So why not try to program one? I basically knew how to read schematics, I knew how to solder, and I thought I knew how things should sound, so I just started hacking things together. Probably in the most ignorant and naive way, from today’s perspective. I had no real clue, and no serious tool set, apart from an old student’s copy of Maple and my beloved Corel 7. But there were helpful people on the internet and a growing community devoted to audio software, and that was perhaps the most important factor. You just weren’t alone.
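For readers who never opened that SDK: the “gain” starter example boils down to little more than a per-sample multiply. A from-memory sketch in plain C++, with an invented function name and signature rather than the SDK’s actual code:

```cpp
// Roughly what the classic "gain" starter plugin does: scale every
// sample of every channel in the current block by a gain factor.
// (Sketched from memory; not the VST SDK source.)
void processGain(float** inputs, float** outputs,
                 int numChannels, int numFrames, float gain)
{
    for (int ch = 0; ch < numChannels; ++ch)
        for (int n = 0; n < numFrames; ++n)
            outputs[ch][n] = gain * inputs[ch][n];
}
```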

For your “digitalfishphones” series you already developed two dynamics processors: a compressor and a transient shaper. How do such devices relate to one another, and what sets them apart from each other (meaning both their application and the technical design underneath)?

Of course, I have to look at it from today’s perspective, which is very different from the golden “digitalfishphones” days, a whole decade ago.

In hindsight, it was all a happy accident. I had only a coarse idea of how stuff worked. I knew a bit of electronics – I had been doing DIY projects since childhood – but I was missing a lot of fundamentals at the time I wrote plugins like endorphin and dominion, especially the math to do proper circuit modeling. Things went better with the fish fillets, but they’re also largely based on empirical programming, so to speak. If there’s one thing that has guided me throughout all these years, it’s my sense of hearing, my listening experience and my intuition. In fact, these senses – and “skills”, if you want – still serve as my main guides.

Back then, almost 80 to 90 percent of the time on a plugin was spent listening and tweaking, apart from just trying things out, even if they looked wrong on paper. Perhaps that’s the “magic” of the plugins: their interiors are far from optimised, they’re not streamlined processors. They don’t just address one main issue, but rather do lots of stuff at once, sometimes perhaps unnecessary things. For example, there is this saturation control on Blockfish. A similar thing is part of endorphin’s interior. The way I did saturation during that period was common to all dfp plugins: take a low-shelf filter, feed its output to an asymmetrical clipping stage that uses a dynamic DC signal, and let the output of that clipper feed a second low-shelf filter. Both filters are mirror images of one another. Sometimes this setup gets enriched by global feedback or a complete compression stage as a “nested” element. Everything that added up to some “signature sound” seemed valid to me. I wanted things a bit more complex and unpredictable. I grew up with analog audio gear; I know how that sounds. And I had a feeling that there has to be more to a sound than just an algorithmic solution to a problem like “when going over the threshold, lower the audio by one-third”. I still think the same way today, although I’ve acquired a lot more technical background in the meantime and have probably found the origin of “soul” in some of the real-life processes that interested me most. But, sure, the learning goes on.
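As a rough illustration, here is what that shelf–clipper–shelf chain could look like. This is a minimal sketch of the idea as described, not code from any dfp plugin; the filter settings, the tanh clipper and the envelope constants are all invented for the example:

```cpp
#include <cmath>

// One-pole low shelf: gain > 1 boosts the lows, gain < 1 cuts them.
struct LowShelf {
    float a, gain, lp = 0.0f;
    LowShelf(float fcHz, float fsHz, float g)
        : a(std::exp(-2.0f * 3.14159265f * fcHz / fsHz)), gain(g) {}
    float process(float x) {
        lp = (1.0f - a) * x + a * lp;        // isolate the low band
        return x + (gain - 1.0f) * lp;       // add it back in, scaled
    }
};

// Shelf -> asymmetric clipper -> mirrored shelf.
struct ShelfClipperShelf {
    LowShelf pre  {150.0f, 44100.0f, 2.0f};  // boost lows before clipping
    LowShelf post {150.0f, 44100.0f, 0.5f};  // mirror image: cut them again
    float env = 0.0f;                        // slow level envelope

    float process(float x) {
        float y = pre.process(x);
        env += 0.0005f * (std::fabs(y) - env);        // envelope follows level
        float dc = 0.3f * env;                        // dynamic DC offset
        float clipped = std::tanh(y + dc) - std::tanh(dc); // asymmetric clip
        return post.process(clipped);                 // undo the pre-emphasis
    }
};
```

Subtracting tanh(dc) keeps the output centred while the level-dependent offset keeps the transfer curve asymmetric; that is one plausible reading of “a dynamic DC signal”, though the original plugins may do it quite differently.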

What did you learn during your time developing audio effects for Magix/Samplitude? How have DSP designs changed over the years, and what is the challenge today?

The time at Magix and the projects I was involved in were perhaps a bit unusual, mainly because of the constellation, at least I always felt that way. I was originally hired to do web-based development, but when they heard that the dfp plugins had caused a bit of a stir, they offered me a job on the Samplitude team, doing DSP exclusively for the Magix products. It was quite clear that my contribution was, and would always be, a bit different from their existing portfolio. The main DSP team consisted of “real” engineers who were into some serious stuff, while I was more of a code punk and tweak head, trying to make the best of it. While these two approaches were reconciled over the years, I was very aware that I needed to improve my own skills, mature and become a better team player.

What I definitely learned was how to acquire new methods, maintain work discipline, organize myself and cope with increasing demands from the market and the target audience. I soon found out that making audio software on a commercial scale is quite different from running a freeware show, as I had formerly just built a bunch of plugins out of curiosity and released them into the wild, without thinking much about what happens next. Suddenly, the learning process included a great deal of responsibility, reliability and facing your own mistakes. Such things always come back to you when you least expect them, for instance when someone inherits your code five years later and wants to do a platform port.

And a feeling of responsibility might even remain after you’ve left a company. A challenge, yes, but perhaps a personal one…

Of course, DSP designs did change massively throughout the last decade. While things were mostly about doing basic processing jobs in the earlier years, we now have hundreds of tools that specialize in one job, tools that accurately emulate a certain behaviour or signature sound, which goes far beyond basic processing. In a nutshell, I’d say recent DSP work often has “soul”, in the most positive way. Not only have computers become faster, it’s obviously also the developers’ knowledge that has grown. For instance, I did a tape simulation in 2003, for Samplitude. But it wasn’t possible to do it really faithfully, considering the DSP power of a typical PC at that time, and also considering the time schedule for developing a full-blown emulation of all the processes involved. At least I couldn’t, at that time. It always bothered me somehow.

At u-he, I picked up a couple of ideas that had been on my mind all these years but had just never materialized. I basically knew how tape machines worked, but it soon turned out I had to invest almost a whole year of research and dive deeper into the depths of magnetic recording than I had first thought. Considering the modeling we felt we should do in the Satin plugin, it is great that current CPUs allow for more complex algorithms, process data in parallel as vectors, and run stuff at many times the original sampling rate. You can, for instance, implement a high-frequency bias oscillator within a virtual tape machine model to linearize the hysteresis curve of a record-head model, including various dynamic side effects, instead of just using a polynomial and doing nonlinear waveshaping.
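The bias trick can be sketched in a few lines: run the nonlinearity at a high internal rate, add an ultrasonic bias tone so the audio rides on the steeper, more linear part of the transfer curve, then filter the bias back out. The following is only a toy illustration of that principle, with naive resampling, a tanh stand-in for the hysteresis model, and invented constants throughout; it is not how Satin works internally:

```cpp
#include <cmath>
#include <vector>

std::vector<float> biasSaturate(const std::vector<float>& in, float fs)
{
    const float pi      = 3.14159265f;
    const int   os      = 8;            // 8x internal oversampling (assumed)
    const float fsHi    = fs * os;
    const float biasHz  = 60000.0f;     // bias tone well above the audio band
    const float biasAmp = 2.0f;         // bias larger than the program signal
    // crude one-pole to strip the bias again; a real design needs far
    // steeper filtering and proper band-limited resampling
    const float a = std::exp(-2.0f * pi * 18000.0f / fsHi);

    float phase = 0.0f, lp = 0.0f;
    std::vector<float> out(in.size());
    for (size_t n = 0; n < in.size(); ++n) {
        for (int k = 0; k < os; ++k) {  // zero-order-hold upsampling (naive)
            phase += biasHz / fsHi;
            if (phase >= 1.0f) phase -= 1.0f;
            float bias = biasAmp * std::sin(2.0f * pi * phase);
            float y = std::tanh(in[n] + bias);  // stand-in for hysteresis
            lp = (1.0f - a) * y + a * lp;       // average the bias away
        }
        out[n] = lp;                    // naive decimation: keep last sample
    }
    return out;
}
```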

So, as technology allows us to invent or apply more sophisticated algorithms, I believe one challenge is putting a definite end to the hardware-vs.-software debate, since we’ve all come a long way since the beginning of computer-based recording and native signal processing, and we’re approaching a point where it’s often only a matter of workflow and ergonomics rather than sound.

Another – and perhaps equally important – challenge is the product idea itself. You can do everything in software, and you have low cost structures compared to hardware. So product designers and developers are tempted to put lots of stuff into their software. “We do it because we can”, you know. I see a big challenge in making a product that a) produces excellent sonic results, b) allows for sufficient “artistic freedom”, and c) is still easy to use and intuitive right from the start. With hardware, the parts directly available to the user are usually critical cost factors: knobs, switches, displays and the front-plate space available for a given form factor. So by the time a device turns out easy to use, its design has already passed through economic decisions. That filter is usually non-existent in audio-software interface design. We tend to put in what we like, or what our customers want us to put in. Depending on the maker and their target audience, this might work out fine, but sometimes it won’t, and then you have these feature-bloated monsters with cluttered UIs and a parameter set that requires reading 50 manual pages and watching 10 YouTube tutorials. Keeping it all under control, and following a consistent set of rules, THAT is a challenge!

I agree so much with your excursion on product idea and design! There is such a difference between using a “one knob, one job” interface and twiddling through the countless menu options of a “jack of all trades” approach. Talking about modeling: how important is it from your point of view, and how does it compare to more empirical approaches?

I’m pretty sure there are a lot of products on the market that claim to use modeling techniques but rather follow an empirical approach under the hood. People might find that dubious, but I think it can still be valid, at least up to the point where only the sonic results matter, and when the product is supposed to offer a relatively narrow parameter range and deliver a very specific sound. One is often tempted to think circuit or physical modeling is the more precise way and far superior.

But quite often the contrary is true, since one has to weigh the computational cost of solving an immense number of equations in realtime against a much smarter process that is perhaps less ‘correct’, but lighter on the CPU and probably just spot-on and musical right away. Physical or circuit modeling would of course be appropriate when you’re aiming at a more generic solution, for instance when designing a product like a guitar tube amplifier capable of tones ranging from clean twang to ultra high-gain. Or when you’re trying to mimic what happens across a snare drum head by implementing waveguide-mesh techniques or following the Huygens–Fresnel principle.

But chances are your model gets so huge and complicated, and includes so many nonlinearities, that realtime calculation becomes impossible. Even industry-standard SPICE models have difficulties accurately modeling nonlinear circuit parts and processes. Then comes the point where you have to strip things down and replace them with black boxes or general high-level approximations. Personally, I’m quite happy with a mixed approach: some solid modeling underneath, while I still decide on an artistic scale and let my ears, my intuition and personal experience be the final judges.

Analog vs. digital, considering the CPU power available today: are we there yet, or does the available technology still restrict us when designing digital audio effect processors? And in which regards is the digital domain already superior?

My assumption is that as long as computers keep getting faster, DSP algorithms will also increase in quality. Today, everybody knows about frequency warping in digital IIR filter designs, which wasn’t a ‘problem’ at all a decade ago. People then were just glad that computers could run filter algorithms at all within a complex DAW song project. And now a typical DAW channel strip has to implement filters emulating the finest analogue studio mixing desks ever made, including nonlinearities, high bandwidth and zero-delay-feedback topologies. Customer demand has grown, which is basically a good thing. With current software technology, chances are a typical DAW production sounds way better than 10 years ago.
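For anyone who hasn’t met the warping issue: the bilinear transform squeezes the entire analog frequency axis into the range from 0 to Nyquist, so a naively placed cutoff lands below its target. Prewarping the analog prototype compensates exactly at the one frequency you care about. A small self-contained check of the standard textbook math, nothing vendor-specific:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    const double pi = 3.141592653589793;
    const double fs = 44100.0;          // sampling rate
    const double fc = 10000.0;          // desired cutoff

    double wNaive   = 2.0 * pi * fc;                      // unwarped analog cutoff
    double wPrewarp = 2.0 * fs * std::tan(pi * fc / fs);  // prewarped cutoff

    // where an analog cutoff actually lands after the bilinear transform:
    auto landsAt = [&](double w) { return fs / pi * std::atan(w / (2.0 * fs)); };

    std::printf("naive:     %.0f Hz\n", landsAt(wNaive));   // ~8687 Hz, too low
    std::printf("prewarped: %.0f Hz\n", landsAt(wPrewarp)); // 10000 Hz, on target
    return 0;
}
```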

In my opinion, the gap in terms of audio quality is closing, if it hasn’t already closed. I sold my last big studio desk 8 years ago, and frankly, I’m not looking back. My mixes have become significantly better, although they were actually worse when I first went DAW-only. Although, I have to admit, I’m not always sure whether that’s only because of software quality or because my mixing abilities adapted and improved…

Digital tools are surely superior in fields that don’t build on analogue counterparts. This especially includes techniques like FIR-based filters or processes based on the Fourier transform. Analog circuits can’t do linear-phase filtering, nor can they do high-quality time or pitch correction. Basically, the entire field of audio restoration, sample-accurate manipulation and of course forensics is unthinkable without modern digital tools.
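A quick illustration of the linear-phase point: an FIR filter whose coefficients are symmetric delays every frequency by exactly (N−1)/2 samples, something no causal analog network can do. A minimal, unoptimized convolution sketch with an invented example kernel:

```cpp
#include <vector>

// Direct-form FIR convolution. If h is symmetric (h[k] == h[N-1-k]),
// the filter has exactly linear phase: a pure (N-1)/2-sample delay.
std::vector<float> firLinearPhase(const std::vector<float>& x,
                                  const std::vector<float>& h)
{
    std::vector<float> y(x.size(), 0.0f);
    for (size_t n = 0; n < x.size(); ++n)
        for (size_t k = 0; k < h.size() && k <= n; ++k)
            y[n] += h[k] * x[n - k];
    return y;
}
// e.g. the 5-tap smoother {0.1f, 0.2f, 0.4f, 0.2f, 0.1f} is symmetric,
// so every frequency it passes is delayed by exactly 2 samples.
```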

Restrictions in the digital realm… well, I see those more in terms of ergonomics and haptics. Since I grew up with analog gear – large, bulky devices, long-throw faders and heavy knobs – I sometimes miss that extra touch, the space and the general overview. DAW controllers still don’t give me that feeling, since the mouse and the keyboard are still demanding my attention.

You’ve already announced two brand-new dynamics processors: a tape emulator and a compressor. What makes them different, and what lets them stand out from the crowd?

The compressor (currently named Presswerk) wasn’t initially meant as a complete product. We wanted to modernize and generalize the dynamics tools in the u-he code framework, so that we could easily take something ‘off the shelf’ for a particular product, like a synth, for instance. Compressors generally consist of many interconnected building blocks, so we suddenly had numerous modules and sub-modules, which we arranged in an Über-Compressor fashion, out of curiosity. It sounded really good, although we didn’t do any rocket science; we just implemented reasonably good ingredients. We then handed an early alpha on to our testers and were overwhelmed by the response: they said it sounded huge, fat, had balls, whatever. I wasn’t sure what it was, but it could have been those extra things like the saturation and warmth controls acting directly on the gain-reduction process. Presswerk was then destined to become an actual product and suddenly needed a definite parameter set, a concrete workflow, a proper GUI, all that stuff. This still goes on, we’re still in the early stages of this product, so it might take some time until we know what it’s going to be in the end.
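To make the “building blocks” remark concrete, here is the textbook feed-forward layout those modules typically plug into: detector, gain computer, attack/release smoothing, gain stage. This is emphatically not Presswerk’s interior, just a generic skeleton with invented constants:

```cpp
#include <cmath>

// Generic feed-forward compressor skeleton: detector -> gain computer
// -> attack/release smoothing -> gain stage.
struct Compressor {
    float thresholdDb = -18.0f;
    float ratio       = 4.0f;
    float attCoef, relCoef;
    float grDb = 0.0f;                   // smoothed gain reduction in dB

    Compressor(float fs, float attMs = 5.0f, float relMs = 100.0f)
        : attCoef(std::exp(-1.0f / (0.001f * attMs * fs))),
          relCoef(std::exp(-1.0f / (0.001f * relMs * fs))) {}

    float process(float x) {
        // detector: instantaneous level in dB (a real unit would use RMS
        // or a dedicated sidechain here)
        float levelDb = 20.0f * std::log10(std::fabs(x) + 1e-9f);
        // gain computer: reduction above threshold, scaled by the ratio
        float overDb = levelDb - thresholdDb;
        float target = overDb > 0.0f ? overDb * (1.0f - 1.0f / ratio) : 0.0f;
        // smoothing: fast attack when reduction rises, slow release otherwise
        float c = target > grDb ? attCoef : relCoef;
        grDb = target + c * (grDb - target);
        // gain stage: apply the reduction
        return x * std::pow(10.0f, -grDb / 20.0f);
    }
};
```

Extra colorations like the saturation and warmth controls mentioned above would be further modules grafted onto this chain, for instance acting on the gain-reduction signal itself.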

Satin, our tape machine, was also a bit of a happy accident. I had been carrying ideas for a generic tape device around for almost 10 years, and one day I was fiddling with virtual hysteresis and high-frequency bias signals to linearize an operating curve, when suddenly my tiny model of a recording head’s voice coil and magnetic tape came to life. Still scratchy and somewhat misaligned, but the more I got it ‘right’, the better it could cope with real-world data and parametrization.

Generic models of tape-recorder ingredients are the heart of Satin. We never wanted to simulate one specific device, so everything should be fully customizable. We have continuous speed ranging from domestic hi-fi to pro studio, we have exchangeable industry-standard equalization curves, and we can even change the gap width of our virtual heads, which has a great impact on the frequency response and the typical peaks, dips and resonances along the spectrum, head bump for instance. That’s why we communicate it as a tape construction kit. You can tweak the model until it becomes your personal custom tape device. We’re engineers to the core, but we also love to make way for the artist in us.
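As an aside on gap width: the classic textbook relation says a playback head with effective gap length g, reading a recorded wavelength λ = v/f, rolls off following a sinc shape, with a complete null where the wavelength equals the gap. This is the standard formula from the magnetic-recording literature, not necessarily what Satin computes internally:

```cpp
#include <cmath>

// Textbook playback gap loss: sin(pi*g/lambda) / (pi*g/lambda), in dB.
double gapLossDb(double freqHz, double tapeSpeedMps, double gapMeters)
{
    const double pi = 3.141592653589793;
    double x = pi * gapMeters * freqHz / tapeSpeedMps;  // pi * g / lambda
    double sinc = x == 0.0 ? 1.0 : std::sin(x) / x;
    return 20.0 * std::log10(std::fabs(sinc) + 1e-12);
}
// Example: a 3 um gap at 38 cm/s (15 ips) has its first null near 127 kHz
// and loses well under 1 dB at 20 kHz; halve the speed or widen the gap
// and the roll-off creeps into the top octave.
```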

Related Links

u-he.com

compressor aficionados (1) – Fabien from TDR

compressor aficionados (2) – Nico from BigTone

compressor aficionados (3) – Tony from Klanghelm

compressor aficionados (4) – Bob Olhsson

compressor aficionados (5) – Dave Hill

compressor aficionados (6) – Christopher Dion

compressor aficionados (7) – Dave Gamble
