Mitch from Accurate Sound

Would you? Could you? In a car? From 705.6kHz sample rates to running 23 channels of convolution on a Raspi to helping his customers win car audio competitions, Mitch Barnett is in the vanguard of a rapidly evolving audiophile industry.

You have this great career path. You started out live sound mixing. You then mixed in the studio for 10 years. You taught yourself to program. Then went back to university for software engineering management, became a consultant and at some point settled down and got into DSP. Is that right?

That's right. I was a corporate programming stooge for a long time. I worked for some large companies like Microsoft and Fujitsu and then a whole bunch of small companies, including my own startups.

I started up a software company in 2001. Grew that company to 25 people and sold out and moved on and got a couple corporate gigs. In 2011, I wanted to get back into audio again. I started doing investigations on digital room correction and DSP in general. And wrote a bunch of articles for Computer Audiophile, which is now Audiophile Style.

How did you get into room correction?

In the studio, all we had was analog gear. So we were doing so-called room corrections with third-octave EQs. Compared to what we can do now in DSP, it's just mind-blowing, right? I was going to build one myself, start getting back into the audio business, and then I came across a couple of programs that were already kind of doing what I wanted to do. So I got into that and kind of did that as a side hobby until about 2019. And then I just quit the corporate work and decided to open up my own little shop and learn DSP.

I was a C# developer for way too many years and wanted to get back into C++. I watched a bunch of Josh from the Audio Programmer. I actually contacted him and said "Hey, I need some assistance." I was looking for some tutoring, not only on the JUCE classes but also just in C++ because it's been 30 years since I looked at that. So he hooked me up with José Díaz Rohena, who is now working at Ableton.

Amazing! I met José at the last ADC in London. We worked together a bit because I publish some open source, he's used some of it and we collaborated a bit. It's really nice to hear these connection points!

I should have done it earlier, to be honest. That's the thing I would highly recommend anybody getting into it the first time, just get a tutor. It saved me so much effort. I was struggling hard on numerous aspects. So that really helped me out, get me on the path. [Related editor's note: submissions for participating in the ADC mentorship program are still open as of this interview's publication date!]

The reason why I built the [Hang Loose] Convolver in 2019 was because now I'd gotten into digital room correction, I was offering that as a service to folks remotely. One of the questions that always comes up is "Which filter do I like better? How do I compare?"

Not having level-matched comparison, as you know, is really difficult. Your ears get fooled easily so that if one is a little bit louder than the other you'll pick the one that's louder. So that was the impetus for designing the Convolver. An in-house tool to compare corrections.

After José helped me out, I got it released on Mac and Windows. I got numerous calls to make it multi-channel, for surround systems and for bi-amp, tri-amp, four way active systems and Dolby Atmos systems.

Charles from Matkat Music helped me out on the multi-channel version. I was searching around for more advanced C++ courses, came across his channel, asked some questions and he kind of helped me out and got me going. And again it's a wonderful thing to have someone that is way smarter than me in the community — that can not only show me the ropes but communicate "why are you designing it this way." It's been great from that perspective, the community has been awesome.

Your website lists services which seem high-touch. You have calls with your clients and supply them with custom filters?

Yes. I'll get them to measure their system with a calibrated measurement mic in their room with however many speakers that they have. Getting people to measure their system that haven't done it before is kind of fraught with all sorts of issues. So I've got a bunch of videos. It's a high-touch thing.

Once the measurement is over with, I develop some filters, send them back, they load them up into the Convolver, give a listen. And with their feedback, I iterate again. After three or four iterations, folks are pretty happy with the way it's dialed in.

An example of before (top) and after (bottom) Mitch's room correction

And then I produce a video that walks through the steps of how I did it. I teach them how to replicate what I did. I point out areas in the DSP tool that they're using, areas of experimentation. "Oh you know, you can tilt the target up or down a bit if you need a little more top or bottom. You can expand the window so you get more of the room coming into the microphone."

I hand them off once they're able to create a filter and can AB and run the Convolver and they know they can do it. But I still get a lot of repeat customers. Because it's taken me 10 years and over 300 systems now from around the world. Each one's different. It's just like any kind of custom consulting, which you know about.

So you teach them how to fish, but they still want your fish.

[laughs] That's right.

So you offer lots of things, but the mainline is the high-touch, teach-you-how-to-fish pathway. And the convolver is subservient to that?

Yeah, that's right and it's taken off. It's probably a 50/50 mix of studios vs. audiophiles. I have a lot of studios that have engaged in the service. And a small percentage are high-end videophiles.

And then I have a line of headphone filters. I've got a special binaural equalized mic and all that nonsense. The headphone filtering thing has been wild. Headphones are so far off from neutral. And a lot of these are ridiculously priced headphones. I've had $3500 headphones and the response was so bad that I said "even with all the DSP in the world, I can't correct them."

How old were you when you started learning DSP? I was 39 and have been feeling some FOMO about all these people who got to learn it at University programs like CCRMA, etc.

I was 62, man.

That is amazing.

Yeah, it's great. I love it. I wish I did it sooner, but better late than never. Totally happy that I ditched the corporate job and got into this. I'm having the most fun of my career right now, aside from startups which were initially fun and then didn't seem fun after a while.

There are a couple things I'm intrigued by. One is that I noticed you are selling JUCE's AudioPluginHost for 20 bucks — which is genius! Did you modify it outside of theming? Or is it just repackaged, code signed, made into an installer?

Yeah, it's actually stripped down considerably. I'm trying to make it easy for audiophiles to get into plugins. I threw $20 on it just to cover the cost of all the modifications.

Maybe you can add in-app purchases to change the little connectors to be gold connectors! Could be a great idea for your target market!

[laughs] You want your logo on there, no problem, that's another $200!

Logo could be a recurring charge! [laughs] The other thing I wanted to ask you about was "Hang Loose." Where does the name come from?

Total West Coast surfer kind of stuff. If I had my way, I'd be on the beach surfing every day. But you know, I can't. So at least I can add some branding to the product.

The other part is that, trying to set up and level match and all of this stuff... If you really want to legitimately try to A/B or ABX something, it does take some real effort to double check it. I wanted to make it easy. So people can just hang loose and listen to the music and push a couple of buttons so that they can hear the differences and make their choices.

Is there a kind of reluctance towards digital tooling in the audiophile world? There's so much emphasis on analog purity...

That's a great question. Over the last decade, there's been a real shift in audiophiles going from the straight analog two-channel stuff to room correction stuff. Everyone's got it and now the market's really picked up on it. We have many producers of DSP for room correction and filtering and all sorts of tools available.

A lot of the so-called audiophile music-player software, for example, doesn't host VST3 or AU plugins. So there needs to be a way for people to insert those into the chain.

There's a music player called Roon. It's very popular amongst the music crowd because of its recommendations and the metadata that it sucks in. People can read and view while they're listening to the music. But they don't have a plugin model. What's really popular in the audiophile community are a couple things with cross-talk cancellation — not only for headphones but also for speakers — so there's a bunch of cross-talk VST3 plugins. And there's no way for those guys to incorporate that. So they're looking for a way to plug stuff in.

That's why I packaged up that host. That's kind of a missing piece. So if your audio interface supports hardware loopback, you can take the output of Roon and feed it through and put it into the standalone host and from there it's game over. Because you can plug in any VST3 or PEQ [Parametric EQ] or cross-talk or Convolver or what have you.

What kind of device is the host running on, a computer?

It's typically on the computer that they're running the music program on.

Ah, so Roon is an app on their computer

Yeah, on the computer. Through BlackHole, I can take the output of Roon—

Oh, you're using BlackHole!

Yeah, and Virtual Audio Cable on Windows, etc. That's how they are able to do it. But you know, this gets into the other aspect where people don't want it on their computer or have bought a proprietary music server or a music streamer that has its own proprietary OS — so there's no way to load anything on that. I'm working with another company on developing a hardware version of this Convolver. And that's where it gets into the Raspberry Pi aspect of it.

People are looking for a little standalone box with various inputs and digital inputs and outputs that they can just drop on the table and hook up their music streamer, whether it's AES or SPDIF or TOSLINK output into the device. And then the output of that device goes into that person's DAC. And then all of a sudden, they've got access to any plugin on the planet. Well, presuming it runs on Linux.

I've already worked with a couple of the plugin manufacturers that audiophiles would be interested in and said "hey, can you target Linux?" If you can target Linux, you can target Raspberry Pi.

Ah, right, you are running JUCE on the Raspberry Pi! That's pretty exciting. And you said it was like six or eight channels and only using 40% of CPU or something?

23 channels, actually! The guy's got a 7.1.4 Atmos system, which equates to 12 channels. But each of those, like the mains and surrounds are also doing bass offloading into a digital crossover that's included in the convolution, so there's another 11 channels of bass offloading to the subs. And then all of that's time aligned right down to a 1 sample difference. So all of the direct sound from all of those speakers arrive at your ears at precisely the same time. From an audio experience it's like, wow, hard to believe out of a little 80 dollar Raspberry Pi, right?


That's the Pi 4 by the way. I've got a Pi 5 here as well, which is about twice as fast... I was really impressed by how efficiently JUCE runs on the Raspberry Pi with that type of convolution. And it isn't just small filter taps. These are full-on 65,000-tap filters per channel.

That's amazing. Tom [JUCE's director] said you are using the JUCE DSP classes, is that true? The convolution classes?

Yes. Absolutely. Convolution classes. The digital delay lines. Smoothing. All of that stuff was working great. It was Tom that I first contacted in 2019, JUCE 5 point something, and I was looking at the convolution class and at that point it wasn't real-time thread safe. So you couldn't load a filter without it glitching. So he mentioned to me that JUCE 6 made it real-time safe and that's when I took the plunge.

The other aspect was that it was also a zero-millisecond latency convolver. That was really important because half my audience is using convolution for movies, YouTube, Netflix, that kind of thing. And so with those long-tap FIR filters there's probably about three-quarters of a second of latency. But if you convert the filters to minimum phase rather than linear phase, of course there is no delay. With the combination of zero milliseconds of delay in the filter and the Convolver, people can have really good low-frequency bass correction with the high-tap filters with no lip-sync issues.

Did you have to make any modifications or was it all happy out of the box?

I mentioned to Tom that in the convolution class, we're using the ResamplingAudioSource class, which is unfortunately not very well tuned towards an audiophile. Great for convolution reverb, but not for high-resolution FIR filtering. Some of these guys are crazy, right? They are running at 705.6kHz.

Wow, that's a thing? I thought it topped out at 192kHz. I think I've seen 384kHz, but wow.

It's a thing, man. These kind of scenarios crop up. It gets used in the craziest places.

I've got a guy that just won a car competition. He's using the convolver in there. He's running convolution and digital crossovers in his car.

He's running JUCE in his car?!!

Yeah, that's right! He was really happy. He said "I can't believe it man, this was the magic piece that I was missing in my setup," because he was just using you know, old school parametric equalizers and stuff like that. But nothing like full-on FIR filtering.

A lot of these DSP programs are taking out the reflections in the car. So you can tune the frequency dependent window so that those low-frequency reflections are getting tuned out. And so what you're left with is this incredible sounding bass. That was a moment of clarity for this guy.

Yeah, so the only thing I'd say about the JUCE convolution class is that it would make it easier if there was an interface that allowed you to plug in something other than the default resampler. I did plug in r8brain, the resampler source I got from Aleksey, the Voxengo guy.

To me, that's the beauty of C++. Not only was I able to take that source and integrate into the JUCE framework, it compiled on all platforms as-is.


And you know, when I was saying earlier that it was "better late than never," I realized that if I was trying to do this earlier, none of these programming tools and classes were available at the time. So there's nothing I could have done anyways!

We showed up at a good point, I think!

That's right.

Mitch was interviewed by Sudara from

Have comments on this interview? Ideas for who we should talk to next? Let us know on the JUCE forum

More "Made With JUCE"
