Real-time programming is one of the most difficult challenges for audio application developers.
At ADC'19, seasoned developers Dave Rowland and Fabian Renn-Giles are presenting Real-time 101, a 2-hour talk covering common problems in this space and the means to solve them.
We spoke with Fabian about their upcoming talk. (Interview by Joshua Hodge)
To be honest, I’m a bit egoistic when it comes to choosing a topic for a talk: sure, it will be something that I feel I have some experience with, but often this experience hasn’t really solidified yet. Giving a talk about a topic really forces you to sort your thoughts and experiences, and you learn a lot about the topic in the process. As they say: “Learning by teaching!”. I think you have had a similar approach with your channel.
Another reason why this topic is particularly close to my heart is the observation that understanding real-time programming has become an almost necessary skill for any advanced/senior audio programming role. It’s almost like a secret gateway which divides people into “professionals” and “enthusiasts/hobbyists”: if you know your std::atomic from your std::mutex then you are part of the “in-crowd”. It’s regularly asked about in interviews, yet it’s completely absent from most university curricula. There is so little information on this out there. I’m hoping that Dave and I can fix some of this by providing at least some reference material that people can turn to.
Yes, that’s true. A while back, I was talking to Timur Doumler - a mutual friend of Dave’s and mine - about my talk, when he mentioned that Dave was submitting an almost identical talk. So we got together and compared notes. We had very similar ideas - and if we had given the talks independently, there would have been a lot of duplication. So it made sense to combine them into one two-part talk.
Luckily our talks did have a slightly different emphasis: I think Dave is more interested in the low-level building blocks which are used for thread synchronisation, whereas, over the years, I have collected a bunch of design patterns and utility classes which I find myself using over and over again. I will talk a bit more about these design patterns and tools in the second part of the talk. But ultimately, I think Dave and I will still share the stage for both parts of the talk - so don’t expect part I to be exclusively Dave and part II to be exclusively me.
Things get interesting in domain-specific languages (DSLs) like SOUL. It’s not trying to be one-size-fits-all like C++ or Java - it’s specifically for real-time audio - so I think you can get closer to having your cake and eating it too. From what I’ve seen of SOUL, I’m convinced that a lot of the complexity of real-time audio programming will vanish while remaining performant by default.
I recently watched a great talk on Nvidia’s CUDA at CppCon 2019. As you know, CUDA is a DSL for graphics programming - I think this is something SOUL would like to emulate. Although CUDA was created specifically for graphics programming and so, in theory, should do a good job of abstracting away the difficulties of the underlying hardware, the talk went into the nitty-gritty of real-time programming with CUDA, and all these low-level real-time concepts like atomics and memory ordering popped up. So, I guess, even with a DSL like SOUL, if you want absolutely optimal performance, you will still need to understand low-level real-time programming.