ADC 2018 Spotlight - Devendra Parakh

We're excited to announce that Devendra Parakh will be speaking at ADC 2018, presenting a talk called "Techniques for Debugging in the Wild (Mac OS)," in which he will show how to debug problems that may not be reproducible on your development system.

Devendra Parakh is currently the Vice President of Software Development at Waves, and has had a fascinating career path which has taken him through a wide array of domains in development, including medical diagnosis, communications, security, and more. Devendra was kind enough to speak to us a little about this path, as well as how he sees the future of audio development and more on his talk for ADC. (Interview by Joshua Hodge)

Can you tell me a little about how you got into development?

When I was in high school, I was living in a very small town called Bikaner in India, and until I was in 12th grade (about 17 years old), in the early 1980s, I had not heard about computers at all. My family wanted me to become a doctor. I wasn’t too much into that, but that’s how it was in India- the family decides. I did try to get into medical school, and while I was studying for the medical entrance exams, my father travelled to America. When he returned, he had two computers with him. One was a Televideo portable PC and the other was more like a home computer (a TI 99/4A). It hooked up to a TV and had a version of the BASIC (programming language) interpreter on it.

While I was preparing for medical school, I was using this computer more like a toy. Unfortunately, all the documentation got shipped in a different parcel which never made it to India! Luckily, I had found a book of games for this toy, which had these one or two page listings. You could just enter the instructions in the interpreter and hit enter, and something would happen. That intrigued me, and I started playing with it. So that’s what got me into programming- I started trying to figure out what happened when I tried to type certain things from these listings.

That’s an amazing story! How did you convince your family to let you pursue programming, rather than medical school?

It did take some convincing! Like I was saying before, I lived in a very small town in India, and my Dad only had the PCs because of the accounting software for his business. Computer Science wasn’t even taught as a major back then. I did end up doing a year in college pursuing a science degree, but I dropped out of that program to pursue electronics and programming. I did meet a teacher, Mr. Chapman, who was very knowledgeable and a great teacher, and luckily he let me work with him. It was there that I started learning more. Soon enough, I convinced my dad to start up a side business, and we started writing software for some local businesses.

Then another opportunity came along. My uncle, who lived in Michigan (USA) at the time, offered to have me come to the US and to help pay for my studies. So I took this opportunity. We didn’t have a lot of money, so I couldn’t get into any of the major schools, but I did get into a community college in 1987. I didn’t have the opportunity to take any programming courses at the college, but they did have a lot of resources and books, the likes of which I hadn’t seen before! So I started reading and learning on my own.

My uncle had a food distribution business, so I started writing software for him. I wrote software designed to take lunch orders from schools, consolidate them, and help the delivery drivers plan their routes. It took me about two years of college to get dual Associate degrees in Engineering and Humanities, but by that time my Dad needed more help with the business, so I returned to India. We did contract software development - primarily accounting and inventory management (all database management related) software for local businesses.

One of the first things I noticed when I was reading your bio was the wide variety of companies and applications you have developed for- everything from audio to medical technology.

Well, after about a year of doing the same type of work for my Dad, I found it wasn’t very exciting, so I started looking for other things to do. I had met a couple of doctors and started working with them on some software.

One of them, a Homeopath, had a system where they would have to take a patient’s symptoms and go through a process of Repertorization, to come up with medicines that would fit, and it would usually take them hours. There was software for this but it was too slow and expensive, so I started working on a version of this for him, which was primarily database driven. I did similar software for another doctor for performing Differential Diagnosis.

The same doctor bought an EEG machine. He said there were more advanced versions of this machine available which could analyse the brain signals and indicate possible long-term problems by doing frequency analysis. I told him, “we can also do that!”

So we bought this A/D converter ISA card - it wasn’t like a commercial sound card; these were designed for industrial use. The problem was that you couldn’t really write your own software for the card - you could only use the software that they gave you. That didn’t provide for what we wanted to do, so we talked to the manufacturer, and they gave us the technical specs (I/O registers, DMA, IRQ) needed to record the signals. This was around the time when I started learning more about Fourier Analysis and FFTs.
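
To give a flavour of the kind of frequency analysis described here, below is a minimal C++ sketch (not the original EEG software) that computes the magnitude spectrum of a block of sampled data with a naive DFT; the sample rate, block size, and test signal are assumptions chosen purely for illustration.

```cpp
// Illustrative sketch of FFT-style frequency analysis on a block of samples.
// A naive DFT is used for clarity; a real FFT gives the same result faster.
#include <cmath>
#include <complex>
#include <cstdio>
#include <vector>

static const double pi = 3.14159265358979323846;

// Magnitude spectrum of a real-valued block, bins 0..n/2.
std::vector<double> magnitudeSpectrum (const std::vector<double>& x)
{
    const size_t n = x.size();
    std::vector<double> mags (n / 2 + 1);

    for (size_t k = 0; k <= n / 2; ++k)
    {
        std::complex<double> sum (0.0, 0.0);

        for (size_t i = 0; i < n; ++i)
        {
            const double phase = -2.0 * pi * double (k) * double (i) / double (n);
            sum += x[i] * std::complex<double> (std::cos (phase), std::sin (phase));
        }

        mags[k] = std::abs (sum) / double (n);
    }

    return mags;
}

int main()
{
    const double sampleRate = 256.0;   // assumed A/D sampling rate
    const size_t blockSize  = 256;     // one second of samples

    // Test signal: a 10 Hz sine (roughly the EEG "alpha" band) plus some 50 Hz hum.
    std::vector<double> block (blockSize);
    for (size_t i = 0; i < blockSize; ++i)
        block[i] = std::sin (2.0 * pi * 10.0 * i / sampleRate)
                 + 0.2 * std::sin (2.0 * pi * 50.0 * i / sampleRate);

    const std::vector<double> mags = magnitudeSpectrum (block);

    // Bin k corresponds to k * sampleRate / blockSize Hz.
    for (size_t k = 0; k < mags.size(); ++k)
        if (mags[k] > 0.05)
            std::printf ("%.1f Hz : %.3f\n", k * sampleRate / blockSize, mags[k]);
}
```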

The software was more like an embedded device driver for one specific application. This started getting me more into device drivers and interfacing with hardware. Around the same time I was also working for an instrument manufacturer in India. They wanted some design software to print dials for analog meters. That was written in Turbo Pascal for Windows and directly generated PostScript based on instrument parameters. So I had to learn PostScript, and that led to other interesting Desktop Publishing software projects for me.

It sounds like you taught yourself through experience, rather than taking any formal course to learn programming and Computer Science…

I did study some general Engineering, Math, and Science in College, but for Programming and Computer Science I had to teach myself. It was not easy because we didn’t have the Internet or Stack Overflow! So I mainly learned through books and magazines and a lot of patience.

While I was working in India I subscribed to a service called BIX (Byte Information Exchange). This was 1994 or so; the Internet was not widespread back then. BIX was more like a bulletin board service for programmers.

Can you tell us how you got from designing medical software into audio?

Through BIX I had met this person who was running a company called Singing Electrons. He was doing software related to audio, and he wanted help. So I started helping him and writing audio applications and device drivers. We would communicate through BIX. That’s really how I got to working in audio.

Did you spend most of this time writing in C++?

Well, it varied widely. In the beginning I was working primarily in BASIC, Pascal, and FoxBASE, and then when I got into writing device drivers it was primarily Assembly. Early on, you couldn’t write device drivers in C++…it had to be Assembly and C. Later on, frameworks were introduced that did allow you to use C++ in the kernel.

I started using C++ in ’98/’99 and then more when Mac OS X came out, as the IOKit drivers are in C++, but until I started working with Waves I was using a combination of C++ for the core back ends and Delphi for GUIs.

How did you get to working with Waves?

Unfortunately Singing Electrons made some bad business decisions and eventually shut down their development centres. This is what led me to working with Waves. I had been doing some work for Waves as a consultant since 1999, but once Singing Electrons shut down I began working with Waves full time. I set up a development centre for Waves in India, and I’ve been working with them primarily on device drivers and some applications as well (almost exclusively in C++), and of course running Waves’ India Development Centre.

One thing I’m curious about is whether you could see where audio computing was going to go, having worked on it at such an early stage.

I think the turning point for me was when Intel came out with the MMX technology in the late 90s (which provided SIMD instructions). Until that point, I didn’t think there was anything that was going to revolutionize or change things. Before the MMX came out, it took a lot of CPU time to do even the most simple signal processing tasks. Once MMX came out I realized that this was going to take creative possibilities in new directions because MMX allowed you to do multiple calculations in parallel – almost like a proper DSP chip.

Another revolution was the 486 processor from Intel. Before the 486, you weren’t able to do floating point calculations directly in hardware – unless you installed a separate Math Coprocessor. With the 486 (and later chips), Intel integrated the Math Coprocessor into the processor. So those two things changed the possibilities for computing- SIMD instructions (starting from the MMX technology) and the integrated Math Coprocessor (which allowed much, much faster floating point calculations).
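
As a rough illustration of the SIMD idea described above (several calculations carried out by a single instruction), here is a minimal C++ sketch for an x86 machine, using the later SSE floating-point intrinsics rather than the original integer-only MMX; the function names and buffer contents are made up for the example.

```cpp
// Minimal sketch of SIMD: process several samples per instruction.
// Uses SSE floating-point intrinsics (MMX, its integer-only precursor,
// introduced the same idea on x86).
#include <xmmintrin.h>   // SSE intrinsics
#include <cstdio>

// Scalar version: one multiply per loop iteration.
void applyGainScalar (float* buffer, int numSamples, float gain)
{
    for (int i = 0; i < numSamples; ++i)
        buffer[i] *= gain;
}

// SIMD version: four multiplies per loop iteration.
void applyGainSSE (float* buffer, int numSamples, float gain)
{
    const __m128 g = _mm_set1_ps (gain);          // broadcast gain into all 4 lanes
    int i = 0;

    for (; i + 4 <= numSamples; i += 4)
    {
        __m128 x = _mm_loadu_ps (buffer + i);     // load 4 samples
        _mm_storeu_ps (buffer + i, _mm_mul_ps (x, g));
    }

    for (; i < numSamples; ++i)                   // scalar tail for any leftovers
        buffer[i] *= gain;
}

int main()
{
    float samples[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };
    applyGainSSE (samples, 8, 0.5f);

    for (float s : samples)
        std::printf ("%.1f ", s);
    std::printf ("\n");
}
```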

I think the future for audio development is huge, because we have a lot of work to do in the field of spatialization for augmented and virtual reality, and also with audio starting to play a larger role in the home (with Google Home, Amazon Echo/Alexa, etc.). I’m curious to hear where you see the next frontier for audio?

I think AR/VR definitely, because audio has such a big role to play in that experience. Spatialization has been there for a long time- one piece of software that I did in 1995 was designed for 3-D spatialization, but it was more of an embellishment. Now the audio in AR/VR is such an important part of the experience. If you don’t have audio spatialization within your AR/VR app, then the experience is not complete. Of course, there is still a lot to do, and people are coming up with various ways to enhance this experience. For example, my company, Waves, has Waves Nx, which provides technology for realistic 3D sound. Nx can be integrated with an IMU (inertial measurement unit) attached to or embedded in your headphones, which gives cues to the spatialization software on how to adjust the audio based on your head position. We have also incorporated Nx on DSP and helped embed it in multiple headphones, such as the Audeze Mobius. Nx is also used in VR games such as Racket Nx.

Waves is well known for their analog emulations, and I wanted to ask (if you are able to say) about the approach that Waves uses, i.e. a white box approach (modelling the software to mimic the circuitry of its analog counterpart) vs a black box approach (modelling the software based on how the output audio signal changes relative to the input)?

One disclaimer I would make is that I’m not too involved in the analog modelling software, but I do have more of a general idea. What we do is use a combination of black box and white box approaches.

When we model something, we do see what the device’s responses are- what signal comes out vs what goes in, but we also take a look inside the box to see what sort of circuitry has been used- whether they’re using tubes or transistors, or what kind of networks they’re using internally. But we always use a combination, even if we have full schematics of the hardware we are modelling. We also work to go beyond mimicking the hardware, because we want to give the user the flexibility to create something new.

At Waves, do you use an in-house framework, or is there another framework that you use?

We use a combination of frameworks: we have ones that are in-house, and we also use JUCE for some of our applications. We also use Qt and several in-house and open source libraries. Personally, I like JUCE a lot! If someone asked me to start a new project, I would say let’s start it in JUCE. I like the overall design and style of it and the way it is coded, and the fact that it’s lightweight.

You’re speaking at ADC again this year, and I really enjoyed your talk last year on a very practical problem which all audio developers face, which is that you can’t print values in your audio callback, as that causes blocking that could interrupt your audio. This year you’re speaking about another practical issue, which is how to debug software on multiple platforms, some of which you may not have in your workplace.

This is something that I personally run into a lot, because most of the time I’m working remotely. For example, I was in the India development centre while the main development centre is in Israel, and problems would be reported on their systems that weren’t happening on my development machines.

The other issue, especially when we are doing audio development, is that there is such a variety of hardware. So it’s not possible to have everything everywhere. Our QA department has access to, say, more than 20 different audio interfaces, and I have access to just 3 or 4.

So I ran into this a lot, where I had to debug remotely, and sometimes it’s with customers. Over time I’ve developed a set of techniques that I use when such issues come up. On both Apple and Windows platforms, there are tools provided that aren’t as well described. I know about them mainly because I’ve been around a long time- on Macs since the OS 9 days, and on PCs since the PC DOS 2.x days. These tools, which are installed on most machines, are useful for profiling, for finding memory leaks, and so on. There are also tools for doing tracing. One particular tool that I like is called DTrace, which has been available on the Mac for a long time and actually forms the basis of Xcode Instruments, but I see very few developers using DTrace. It’s an extremely powerful tool, and I’m going to give an introduction to it. You can build your own profilers, and you can add logging to code at runtime- logging that was not present in the code to begin with.
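
To give a taste of the kind of thing DTrace can do, here are two illustrative one-liners (not excerpts from the talk) that you can run from a Mac terminal; the process name and PID are placeholders, and note that System Integrity Protection may restrict DTrace on recent versions of macOS.

```sh
# Count the system calls made by a process named "MyAudioApp" (placeholder
# name); the aggregation is printed when you stop the trace with Ctrl-C.
sudo dtrace -n 'syscall:::entry /execname == "MyAudioApp"/ { @[probefunc] = count(); }'

# A poor man's profiler: sample the user-land stacks of a running process
# (replace <pid> with its process ID) at 97 Hz and count the hottest stacks.
sudo dtrace -n 'profile-97 /pid == $target/ { @[ustack()] = count(); }' -p <pid>
```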

The other thing that I subtly want to encourage people to do is use the command line a little more. Newer developers especially want to do everything from the GUI. There’s nothing wrong with that- sometimes you can see what’s going on better with a GUI, but many times there are limitations, and the command line can help you overcome them.

Thank you for all your time! One more question I’d like to ask is where you see plugin development heading in the future?

One thing that Waves is moving towards is cloud-centric workflows, where it’s possible to share more between artists, users, and setups. For example, Waves has been able to get more involved in live shows with SoundGrid technology, where we’ve been getting integrated with a lot of consoles, so if you have a favourite plugin chain or preset that you’ve been using in the studio, you’ll now be able to import that directly into a live environment and vice versa.

The other thing is that we are now getting more into instruments; we will be able to bring out synthesizers and sample-based instruments that build upon the technologies that we already have.

In addition, we have opened up Waves SoundGrid as a plugin platform for third-party developers – this allows their plugins to run on dedicated DSP servers with sub-millisecond latencies, while the GUI runs inside a DAW or a console.
