Using JUCE ValueTrees and modern C++ to build large-scale applications
JUCE ValueTrees are a tree-based data structure capable of holding free-form data. They provide a callback interface for being notified of changes to data members or to the tree structure, and have undo capability built in. Think XML on steroids! These features make them an ideal candidate for the data model of many applications. Building on the ideas presented at last year's ADC talk "Using Modern C++ with JUCE to Improve Code Clarity", this talk aims to unwrap the complexities involved in dealing with ValueTrees and expose the elegance contained within them, utilising them to build large-scale graphical and audio applications quickly. The talk gives an in-depth explanation of ValueTree best practices and how to apply them to many coding situations using clear, concise, modern C++. Throughout, there will be special emphasis on the reference-counted nature of these objects, the synchronous/asynchronous nature of the callback events generated, and how these relate to performance, thread safety and debugging. Common application tasks such as serialisation, copy/paste and undo/redo are also explored, along with how simply they can be implemented with ValueTrees. Additionally, the talk will look at ways to build your own utility classes on top of ValueTrees where it is sensible to do so. Custom-developed classes such as the ValueTreeObjectList are explored, which allow type-safe, concrete object creation managed by ValueTree state. This example-led talk aims to ensure attendees of all experience levels leave with a solid understanding of how to quickly build C++ JUCE applications utilising ValueTrees in a modern, safe and fun way.
The future is wide: SIMD, vector classes and branchless algorithms for audio synthesis
Having reached the practical limits of clock speed and multiple cores, CPU architectures are increasingly going wide: using vector processing to provide vast amounts of number crunching capability in a power efficient way. This talk looks at how to exploit all that power for virtual instrument / software synth applications, leveraging polyphonic synthesisers' inherent voice-level parallelism to make full use of available CPU hardware resources. Topics under discussion include vector unit architecture, C++ template polymorphism, compiler optimisation, and fine tuning DSP algorithms to run on SIMD parallel floating-point architectures such as Intel AVX / AVX-512 and ARM NEON and Scalable Vector Extensions.
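The branchless idea the abstract mentions can be illustrated with a hard clipper, a common per-sample operation in a synth voice. This is a hedged sketch rather than code from the talk; the function names are invented for illustration.

```cpp
#include <algorithm>
#include <cassert>

// Branch-based hard clipper: the per-sample branches depend on the
// signal, so they are unpredictable and can block auto-vectorisation.
inline float clipBranchy (float x)
{
    if (x > 1.0f)  return 1.0f;
    if (x < -1.0f) return -1.0f;
    return x;
}

// Branchless equivalent: min/max map directly onto single SIMD-friendly
// instructions (e.g. MINPS/MAXPS with Intel AVX, FMIN/FMAX with ARM NEON),
// letting the compiler vectorise the enclosing loop across samples or
// across synth voices.
inline float clipBranchless (float x)
{
    return std::min (1.0f, std::max (-1.0f, x));
}
```

The two functions compute the same result; the payoff of the second form only appears once the compiler can fuse it into a wide vector loop.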
Varun Nair, Hans Fugal
Spatial Audio At Facebook
The Audio360 team at Facebook develops the spatial audio technology that powers the Facebook and Oculus apps, as well as other major apps across the VR industry. Spatial Workstation, a suite of tools developed by the team, is widely used by post-production studios across the world to create content for new immersive media. Spatial Workstation is written with JUCE and an audio library that packs the feature set of a lightweight game engine optimised for spatial audio design and rendering — including spatial audio building blocks for ambisonics and binaural rendering, advanced geometry analysis for sound propagation, spherical video rendering and VR headset support. The challenge is in exposing such a powerful (and complex) system through simple plugin interfaces in a DAW, allowing sound designers to create content quickly. This session will cover basic spatial audio concepts relevant to VR/MR, followed by a deep dive into the technical lessons we've learned in building up a complex authoring system that includes VR control, video playback, audio processing, multi-process communication and cross-platform compatibility.
VST3 history, advantages and best practice
This session will give you an overview of the history of VST as a plug-in format, starting with VST1 up to VST3. We will talk about the advantages of developing VST3 based plug-ins (cross-platform, silent flag, side-chain, note expression, 3D audio, better host integration, ...) including demos with Cubase, and we will explain some best practices concerning VST3 (moving from VST2 to VST3, 64 bit processing, licensing and testing).
Exploring time-frequency space with the Gaborator
This talk introduces the Gaborator, a C++ library that generates zoomable, pixelation-free constant-Q spectrograms for visualizing audio signals, supports an accurate inverse transformation of the spectrogram data back into audio for spectral effects and editing, and runs tens to hundreds of times faster than real time. The talk includes a live demonstration using an experimental web based user interface based on online map technology.
The development of Ableton Live
In my talk, I will take you on a tour of the development and evolution of Ableton Live from its beginnings in 1999 to today. While focusing on the software engineering challenges and how we tackled them, I will also discuss the evolution of our design approaches and development processes.
Glenn Kasten, Sanna Wager
Learning the pulse: statistical and ML analysis of real-time audio performance logs
Real-time audio is usually processed in periodic equal-length bursts of a few milliseconds or less. Handling each burst on time and within CPU budget is critical to glitch-free audio. In this session we will show how to instrument your time-critical code with non-blocking performance logs and collect the data you need without significant interference. We conduct statistical analysis on the logs and use ML to help you understand and improve your code's performance under various conditions. We use Python and the open-source Android platform in the live coding demo and examples, but the techniques are applicable to developers on any platform.
Physical modelling of guitar strings
Martin will show how to implement physical modelling of guitar strings. He will do this by explaining the current state of his guitar model, starting with a very basic model and refining it step by step until a fairly realistic guitar sound is produced. Martin will mainly be talking about guitar strings, but the same techniques can be applied to many other types of stringed instrument, such as the harp, banjo or piano.
Tools from the C++ ecosystem to save a leg
C++ gives you enough rope to shoot your leg off. Readable (and thus easy to maintain, easy to support) and error-free code in C++ is often hard to achieve. And while modern C++ standards bring lots of fantastic opportunities and improvements to the language, sometimes they make the task of writing high quality code even harder. Or can’t we just cook them right? Can the tools help?
David Zicarelli, CEO, Cycling’74
This talk will demonstrate a simple source code generation scheme that translates a user interface in Max/MSP into a Littlefoot program downloaded onto a Lightpad BLOCK. Once programmed with this generated code, the BLOCK can be used as a standalone MIDI controller.
Harmonisation in modern rhythmic music using Hidden Markov Models
A method for harmonising rhythmic music, implemented in Haskell, will be presented. It uses a hidden Markov model to learn harmonisations of different artists in different genres and allows new chord sequences to be generated with respect to a given melody. The main focus of this talk is on how to perform the feature extraction for the chords and the melody lines necessary for the method to perform well. Nikolas will show the results of the harmoniser by discussing three different harmonisations using a simple music-theoretical analysis of the results.
The amazing usefulness of band limited impulse trains, shown for oscillator banks, additive synthesis, and modeling old stuff
Band-limited impulse trains, known as BLITs, are well known to be useful for creating waveforms such as sawtooth and square waves. Less well known are their other uses, so some unusual applications of BLITs are shown and explained: a fully polyphonic oscillator bank, its additive synthesis capabilities, and the recreation of the sound of a speech synthesizer from the 80s. This presentation assumes some basic understanding of digital signal processing and C/C++ programming. However, as it comes with graphics and sound examples for explanation, it might be entertaining for beginners as well.
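As a taste of the underlying maths, here is a minimal sketch (not code from the talk) of one closed-form sample of a band-limited impulse train, following the classic Stilson & Smith formulation; the function name and parameter choices are illustrative assumptions.

```cpp
#include <cmath>

// One sample of a band-limited impulse train via the Dirichlet kernel:
//   y(n) = sin(pi*M*n/P) / (P * sin(pi*n/P))
// where P is the period in samples and M the number of harmonics
// (odd, M <= P). Summed over one period the samples add up to 1,
// and the peak value at each pulse centre is M/P.
inline double blitSample (double n, double P, int M)
{
    const double pi    = 3.14159265358979323846;
    const double x     = pi * n / P;
    const double denom = std::sin (x);

    // At the pulse centres both sine terms vanish; take the limit
    // (L'Hopital's rule) instead of dividing 0/0.
    if (std::abs (denom) < 1.0e-12)
        return (M / P) * std::cos (M * x) / std::cos (x);

    return std::sin (M * x) / (P * denom);
}
```

Subtracting consecutive BLIT samples and leaky-integrating yields the familiar alias-free sawtooth; the polyphonic and additive uses the talk covers build on this same kernel.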
Techniques for debugging realtime audio issues
Debugging time-constrained audio issues can be tricky for various reasons. Since the code has to execute in a time-constrained environment, placing breakpoints and stepping through code is often not an option. Visualising audio from raw data, even when it is shown as floating-point or integer values, is difficult. Traditional tracing and logging approaches also don't work, as audio callbacks are not allowed to make system or library calls which may block. In addition, some issues only appear after a long period of time, so capturing and analysing this information is not straightforward. This session will provide techniques that can be used to overcome these limitations, and to gather useful data to solve these often difficult-to-analyse problems.
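One standard technique for the logging problem described above is a single-producer/single-consumer ring buffer: the audio thread writes fixed-size events without locking, and a background thread drains and prints them. This is an illustrative sketch, not the presenter's code; the class name, event layout and sizes are assumptions.

```cpp
#include <array>
#include <atomic>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <optional>

// Fixed-size event: no strings, no allocation on the audio thread.
struct LogEvent { std::uint32_t id; double value; };

template <std::size_t Capacity>            // Capacity must be a power of two
class LogRing
{
public:
    // Audio thread only. Wait-free: no locks, no allocation, no system
    // calls. When the buffer is full the event is dropped, never blocked on.
    bool write (const LogEvent& e)
    {
        const auto w = writePos.load (std::memory_order_relaxed);
        const auto r = readPos.load (std::memory_order_acquire);
        if (w - r == Capacity) return false;
        slots[w & (Capacity - 1)] = e;
        writePos.store (w + 1, std::memory_order_release);
        return true;
    }

    // Logger thread only: drain events and do the slow printing/saving here.
    std::optional<LogEvent> read()
    {
        const auto r = readPos.load (std::memory_order_relaxed);
        if (r == writePos.load (std::memory_order_acquire)) return {};
        LogEvent e = slots[r & (Capacity - 1)];
        readPos.store (r + 1, std::memory_order_release);
        return e;
    }

private:
    std::array<LogEvent, Capacity> slots {};
    std::atomic<std::size_t> writePos { 0 }, readPos { 0 };
};
```

Dropping events on overflow (and counting the drops) is the usual trade-off: losing a log line is acceptable, blocking the audio callback is not.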
Some interesting phenomena in nonlinear oscillators
The world of sound and music depends crucially on systems governed by strongly nonlinear interactions. However, little is known about nonlinear phenomena in the audio and music community, despite the fact that research on them has been going on for over a century. In this presentation I will give a quick introduction to the field of nonlinear dynamics. I will explain two nonlinear phenomena which are ubiquitous in nature, and thus in the realm of sound as well.
Panel: Everything you want to know about the law but were afraid to ask
Moderator: Heather D. Rafter.
Panellists: Mike Warriner, Iris Brandis
We all know that coding is far more exciting than legal compliance, but legal guidance will be required by all coders and developers at one time or another. This session will feature lawyers from some of the leading audio companies and their outside counsel. Our panel of experts will provide practical advice on how to take your program from conception through launch and all the legal checkpoints in between. Topics for discussion will include:
- Corporate formation: types of collaboration and how you should organize your company. Do you need to incorporate? If so, how?
- Protecting your ideas: trademarks, copyrights, patents, non-disclosure agreements, assignment of inventions, co-development and licensing agreements
- Licensing and open source models
- Websites, data collection and online policies: dos and don’ts
- Distribution models and agreements
- Marketing, COPPA and other pitfalls for the unwary
- Product launches, warranties and everything in between
- Other legal strategies for monetizing your product
The new C++17, and why it is good for you
The new C++17 standard is out and already enjoys a good level of support by popular C++ compilers. In this talk, I will present the new standard and show code examples that demonstrate how switching to C++17 will help audio developers to write faster and better code in daily practice. This talk is not an exhaustive recital of all the new features in the standard (this is widely available elsewhere). Instead, it focuses on a practical selection of features that are most useful for developers interested in low latency and real-time performance, cross-platform development, and cleaner code. This includes the new built-in support for over-aligned memory allocation, a portable way to access the cache line size, structured binding declarations, initialisers in if statements, guaranteed copy elision, useful standard library additions such as std::variant, std::optional, std::any, and std::byte, file system support, and the new compile-time if constexpr. We will also discuss the level of support provided for these by all popular compilers, and caveats when switching from Boost libraries to their new C++17 equivalents.
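A few of the features listed above can be shown in a short sketch. This is not code from the talk, only a hedged illustration; the function names and the config-lookup scenario are invented.

```cpp
#include <cassert>
#include <cstddef>
#include <map>
#include <optional>
#include <string>
#include <type_traits>

// Init-statement in `if` plus a structured binding: look up a value and
// unpack the iterator's key/value pair in one expression. std::optional
// models "no result" without sentinel values.
std::optional<int> findPort (const std::map<std::string, int>& cfg,
                             const std::string& key)
{
    if (auto it = cfg.find (key); it != cfg.end())
    {
        const auto& [name, port] = *it;   // structured binding
        (void) name;                      // only the port is needed here
        return port;
    }
    return std::nullopt;
}

// `if constexpr` discards the untaken branch at compile time, so this
// compiles even for types T that have no .size() member.
template <typename T>
std::size_t lengthOf (const T& value)
{
    if constexpr (std::is_arithmetic_v<T>)
        return 1;
    else
        return value.size();
}
```

`if constexpr` in particular replaces a lot of SFINAE and tag-dispatch boilerplate in DSP template code.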
Introduction to voice applications for Amazon Alexa and Google Assistant
2017 was all about voice assistants: The launch of Amazon Alexa and Google Assistant in the UK opened great possibilities for developers to build for an emerging market. In this session, Jan König will introduce the topic and give practical advice for the development of voice applications (called Alexa Skills and Google Actions).
Fifty shades of distortion
‘Distortion’ is a word we hear a lot in audio and DSP. Historically, it is associated with "nonlinear distortion" (NLD), and nowadays we talk a lot about the "saturation" of high-gain guitar amplifiers, "fuzz" and "overdrive" pedals, and audio effects such as dynamics compressors, exciters, and simulations of tape recorders and transformers... But we also use that word for the phase response of some textbook IIR filters, or for "spatial distortion", meaning changes related to multi-channel audio streams. Distortion exists in many contexts with different meanings and origins: bias distortion, crossover and granular distortion, group delay distortion, bitcrushing, hysteresis, chaos, aliasing, frequency warping, clipping, slew rate, glitches, inter-peak clipping and even programming bugs! In this talk, you will listen to a song which has been designed to exhibit around fifty different kinds of distortion, and we will study most of them to better understand why some of these algorithms feature the so-called "analog warmth". You will learn some basic principles of analog modelling and how to bring some life into your classic waveshapers using the features of the new DSP module, and you will discover how to code original audio effects based on previously unexplored kinds of distortion.
Assessing the suitability of audio code for real-time, low-latency embedded processing with (or without) Xenomai
Writing code suitable for real-time performance comes with a specific set of well-known recommendations and best practices. Strictly adhering to these is not always easy or convenient, and in a number of cases, especially when developing desktop applications, workarounds can give "good enough" performance without fully meeting the requirements. When porting a piece of code to an embedded device with low-latency, real-time design requirements, we find that the limitations are stricter than in the desktop world and that "best practices" become an "absolute must". Using Xenomai to deliver real-time, low-latency performance on embedded devices forces the programmer to adhere strictly to the best practices and helps with debugging non-real-time code paths. Code that successfully passes the "Xenomai test" can then be re-used in non-Xenomai environments with improved real-time performance.
Back to the future. On hardware emulations and beyond
A few decades ago, access to large, expensive studios was essential for recording music and processing audio, putting these tasks within reach of only a select few. With today's wide availability of computers, DAWs, and high-quality plug-ins and software tools for audio processing and generation, a producer can conveniently perform the same tasks on a laptop in their living room for a fraction of the cost. With the arrival of digital technology, many of the artifacts and limitations of analog circuit components could be bypassed. Yet ever since digital technology became cheaper and computers went mainstream, significant engineering effort has gone into emulating the behaviour of older studio equipment. This talk will cover some techniques and challenges in capturing the spirit of analog hardware in software, and furthermore aim to answer the questions of what we seek in these devices, and why.
Machine Learning & Embodied Interface: latest developments to the RAPID-MIX API
This lecture will present the latest version of the RAPID-MIX API, a comprehensive, easy to use toolkit and JUCE Library that brings together different software elements necessary to integrate a whole range of novel sensor technologies into products, prototypes and performances. API users have access to advanced machine learning, feature extraction, and signal processing algorithms that can transform masses of sensor data into expressive gestures that can be used for music or gaming. A powerful but lightweight audio library provides easy to use tools for complex sound synthesis.
Modeling and optimizing a distortion circuit
Modeling an analog circuit is not an easy task, making it fast is even worse. In this talk, from the schematics of a SD1/TS9/Tube Screamer pedal, we derive a crude model that we will improve by different techniques, from algorithmic changes to implementation tuning.
The use of std::variant in realtime DSP
C++17 introduces std::variant, a type-safe union class. A variant's value represents one of a fixed set of possible types, with C++'s type system ensuring that correct code paths are executed for the active type. This talk will explore the pros and cons of working with variants, with a special focus on DSP. Variants allow for well defined interfaces and minimal memory footprints, but what are they like to use in practice, and are they performant enough for realtime use?
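A minimal sketch of the pattern in question (the processor types are invented, not the speaker's code):

```cpp
#include <cassert>
#include <variant>
#include <vector>

// Two tiny processors sharing the same per-sample call interface.
struct Gain   { float amount;  float operator() (float x) const { return x * amount; } };
struct Bypass {                float operator() (float x) const { return x; } };

// The active processor lives inline in the variant: no heap allocation
// and no virtual dispatch, both of which matter on the audio thread.
using Processor = std::variant<Gain, Bypass>;

void process (Processor& proc, std::vector<float>& block)
{
    // Hoist the visit outside the sample loop so the active type is
    // resolved once per block, not once per sample.
    std::visit ([&] (auto& p)
    {
        for (auto& sample : block)
            sample = p (sample);
    }, proc);
}
```

Placement of the `std::visit` is the kind of performance question the talk weighs: per-sample visitation is convenient but pays the dispatch cost on every sample.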
Christof Mathies, Nico Becherer
Opening the Box - Whitebox Testing of Audio Software
Ensuring the quality of audio software is a complex challenge. The sheer number of features and the different combinations of input signals and parameters result in a combinatorial explosion of scenarios that have to be tested. This begs for automated testing. Over the course of several years, Adobe's audio team built up a dedicated automation framework ensuring the quality of a vast variety of features in Adobe Audition CC, ranging from simple editing steps through audio encoding and decoding up to DSP code. This session will start by covering some general definitions and guidelines of software testing. After covering pitfalls and special characteristics of test automation for audio software, the session will dive deep into Adobe Audition CC's SDK, which allows audio effects to be tested automatically. You will learn simple steps for testing your own plugin!
Brecht De Man
Audio Effects 2.0: Rethinking the music production workflow
Digital music production tools have come a long way since the first DAW, offering ever higher track counts and computing power. Yet, audio plugin manufacturers focus largely on traditional processors and skeuomorphic designs, catering only to those who desire an emulation of a 1980s recording studio - and fuelling the notion that there are no alternatives. Cautious attempts at more abstract and intuitive interfaces - including less esoteric knob labels - have emerged in recent years, though a more radical rethinking of the mixing workflow has yet to occur. This talk presents the opportunities that cross-adaptive signal processing, semantic control, and artificial intelligence can afford, with examples from the state of the art of academic research. Implementation strategies are introduced, obstacles are identified, and a generalised taxonomy of audio effect types helps structure the possibilities.
Reactive extensions (Rx) in JUCE
Audio applications change asynchronously, all the time. The user presses a button, automates parameters or plays MIDI notes. The app reacts by changing the GUI and the audio output. Reactive extensions (Rx) express this flow in an elegant way, using a few simple concepts to handle all kinds of events – from network requests to plugin parameter changes. Updates happen automatically without repetitive glue code. Think of JUCE's Value and Value::Listener, but more composable and with less code. Rx is in widespread use, for example at Microsoft, Netflix and Soundcloud. The varx JUCE module brings Rx to JUCE, integrating it nicely with existing classes such as Label, Slider and Value.
Build a synth for Android
With 2 billion users, Android is the world's most popular operating system, and it can be a great platform for musical creativity. In this session Don Turner will build a synthesizer app from scratch* on Android. He'll demonstrate methods for obtaining the best performance from the widest range of devices, and how to take advantage of the new breed of low latency Android "pro audio" devices. The app will be written in C and C++ using the Android NDK APIs. *Some DSP code may be copy/pasted
Test-driven development for audio plugins
As audio plugin developers, we often face problems which have no single "correct" solution. Whether we are coding a new audio effect or even just implementing a biquad filter, sometimes we feel that the only way to check whether our code is running correctly is if it "just sounds right". Code developed this way usually takes on a mystique where no one is confident in making changes, for fear of wrecking the sound. But there's a better way. Automated testing, unit testing, and test-driven development (TDD) are all concepts that have become an important part of software development in other industries. Can these development tools and techniques be effectively applied to audio development? Can they even apply to code that works with real-time audio? In this talk, we'll discuss why a test-oriented mindset is essential when dealing with audio plugins and DSP. Thinking from the test-driven perspective will give you more confidence in your code, allow you to make changes with much more ease, and result in code that is more modular and reusable. We will go over real-world examples in C++ and C to demonstrate these techniques, to help you think about how you could apply these strategies to your own codebase.
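To make this concrete, here is a hedged example (not taken from the talk) of testing DSP by asserting a measurable property instead of listening; the filter and test names are invented.

```cpp
#include <cassert>
#include <cmath>

// A one-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
struct OnePole
{
    explicit OnePole (float coeff) : a (coeff) {}
    float process (float x) { y += a * (x - y); return y; }
    float a, y = 0.0f;
};

// Instead of judging the filter by ear, assert a property the maths
// guarantees: a lowpass must pass DC unchanged (unity gain at 0 Hz),
// so feeding a constant input must converge to that constant.
bool passesDcGainTest()
{
    OnePole filter (0.1f);
    float out = 0.0f;
    for (int n = 0; n < 1000; ++n)
        out = filter.process (1.0f);
    return std::abs (out - 1.0f) < 1.0e-3f;
}
```

Other properties make equally good assertions: output silence for silent input, bounded output for bounded input, or matching an impulse response computed offline.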
Designing and implementing embedded synthesizer UIs with JUCE
JUCE has become an ideal platform to develop embedded UI applications. Moog engineering discusses C++ JUCE front-end application design on top of streamlined Linux distributions. This talk focuses on practical solutions with code examples, including: maintainable user-interface and user-experience design, code and application architecture, unit and functional testing, efficient message handling and dispatch, domain-specific interfaces, APIs promoting consistency and correctness, patch storage and retrieval, and application-specific scripting.
Present and future improvements to MIDI
Ben Supper, Amos Gaynes
Why and how to build a real-time collaborative musical application
Real-time, online collaboration is a key asset, whether you want to increase user interaction, achieve seamless device continuity or radically improve the learning curve of your app. Flip is a technology that was born out of the first fully collaborative DAW, Ohm Studio. Flip makes real-time collaboration available to music applications, and this talk will present use cases and the challenges we solved along the way.
Panel: Music making on mobile
Moderator: Matthias Krebs, Appmusik
Panellists: Jeannie Yang, Product Leader & Innovator, Smule, Matt Derbyshire, Head of Product, Ampify (part of Focusrite/Novation)
Smartphones and apps - new ways to make music? The use of mobile apps characterises new forms of productivity and digital collaboration, including offerings for professional use. For musicians and music technologists, apps provide powerful tools for making music, such as mixer controllers and polyrhythmic metronomes, a diverse set of instruments, mobile recording studios with multi-track recording and effects, and web platforms for producing music with musicians across the world. Beyond that, there is a vast range of offerings that let novice musicians get in touch with sounds and music making. The current trends will be discussed in this panel with high-profile industry experts. Guiding questions are:
- What is the significance of mobile music making?
- What is the status of music applications in the pro audio area?
- Where do music tech companies see their main stakeholder group?
- What are the challenges for hardware manufacturers in terms of apps? What role do they have in the product placement and development?
- In what contexts (on stage, studio production, amateurs, on the road, education) does mobile music making have its strengths? What new contexts can be musically conquered with apps?
Daniel Jones, Chief Science Officer, Chirp
Sound and Signalling: A Whistle-Stop Tour
Since the earliest civilisations, sound has played a vital role in communicating messages. Conch shells evolved into military bugles, telegraph wires paved the way for Morse code, and the telephone modem heralded the rise of consumer internet access. Today, we see a blooming diversity of ways to encode information in audio, from sonification and steganography to multi-way ultrasonic messaging protocols. Dan will present a whirlwind tour of the cultural and technical history of sound and signalling, revealing hidden messages in music from Bach to Aphex Twin.
Decoding Law: All that legal stuff demystified
Moderator: Heather D. Rafter, Attorney, RafterMarsh US
Panellists: Mike Warriner, General Counsel, Focusrite, Iris Brandis, Legal Counsel, Ableton, Jemilla Olufeko, Legal Counsel, ROLI
We all know that coding is far more exciting than legal compliance, but legal guidance will be required by all coders and developers at one time or another. This session will feature lawyers from some of the leading audio companies and their outside counsel. Our panel of experts will provide practical advice on how to take your program from conception through launch and all the legal checkpoints in between. Topics for discussion include:
- Companies - do you really need one?
- You’ve got a really great idea...now what?
- The Beauty and the Beast of open source
- Contracts - are they worth the paper they are written on?
- Offer ends soon – navigating the marketing labyrinth
- Selling your ideas without selling out
- Anything else? Just ask
Béla Balázs, Software Engineer, Apple, Doug Wyatt, Software Engineer, Apple, John Danty, Senior Product Manager, GarageBand
Modern audio development on Apple platforms
Apple provides a broad range of audio frameworks and technologies that empower developers to create professional audio applications, taking full advantage of the hardware. This talk will cover some of the APIs and best practices that allow you to make your audio application ideas a reality, both on desktop and mobile devices. We will cover a range of topics including Audio Unit Extensions, advances in AVAudioEngine, and MIDI capabilities. Join us to find new ways to make app development easier and provide the same high-standard user experience across Apple platforms.
David Saracino, Software Engineer, Apple
AirPlay audio, latency, and AirPlay 2
Apple’s AirPlay technology allows users to stream various media content types from their Apple devices to Apple TVs and Speakers. Explore these use cases, how latency and jitter influence design, and recent advancements made in AirPlay 2.
Tim Adnitt, Product Manager, Native Instruments, Carl Bussey, Software Developer, Native Instruments
Making computer music creation accessible to a wider audience
When creating computer music, visually impaired musicians and producers often find it difficult to interact with music software, which typically relies on presenting information visually. At Native Instruments, we have developed a hardware and software solution that enables visually impaired musicians to interact with virtual instruments. Any virtual instrument adopting the Native Kontrol Standard (NKS) is made accessible using our growing line of KOMPLETE KONTROL S-Series keyboard controllers. During this talk, we will present how this was achieved, how users responded and how the technology can be used to make your virtual instruments accessible to a wider audience.
SKoT McDonald, Lead Software Engineer & Head of Sound R&D, ROLI
BFDLAC: A fast lossless audio compression - An Update: ARM Port, Middle Out, and more...
BFDLAC is a lossless audio compression algorithm designed to prioritise decode speed (via SIMD instructions) over size reduction, yet it still achieves "near-FLAC" compression ratios for monophonic sounds. It is robust and market-proven, forming part of the core of the BFD disk-streaming audio engine. Since its introduction at last year's ADC, BFDLAC has been ported to the ARM NEON SIMD architecture, gained a JUCE API frontend, had additional sub-algorithms added, and been made capable of handling monolithic sound bank files containing hundreds of internal samples. This makes BFDLAC an intriguing audio codec choice for RAM-, SSD- and CPU-constrained mobile/surface applications, and it is now being licensed by third parties. We also revisit 2016's late-breaking suggestion of a possible ‘middle-out’ decode optimisation for bonus Silicon Valley pop culture reference points.
Julian Storer, Principal Software Engineer, Tom Poole, Senior Software Engineer,
Fabian Renn-Giles, Lead Engineer, Ed Davies, Projucer Developer,
Lukasz Kozakiewicz, Senior Software Engineer, Noah Dayan, Software Engineer Intern
This session is an open surgery where you can discuss all things JUCE with the team of developers who drive its evolution. Bring your questions and challenges, ideas and comments to tackle in a one to one environment.