This post will guide you through the steps required to build a Unity native audio plug-in using JUCE. For more details on how to create an Audio Plug-in project in JUCE, see this tutorial.
Setting up the Project
To add a Unity target to your plug-in project, simply enable the "Unity" option in the Plugin Formats section of your project settings and re-save the project:
Then open the project in your IDE of choice and build it (if you don't know how to do this then check out the Projucer tutorial).
The build product will be a .bundle on macOS, a .dll on Windows, or a .so on Linux. On Windows and Linux, a C# script for the plug-in GUI will also be generated alongside this file (on macOS the script is embedded in the .bundle). Once you have copied the plug-in, and on Windows and Linux the C# script, into your Unity project folder, the plug-in will be usable within the Unity editor.
Using the Plug-in in Unity
For Unity to detect it correctly, the plug-in file(s) should be placed in a folder called "Plugins" directly inside the "Assets" folder at the root of your Unity project (for reference, see the Special folder names section of the Unity manual):
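For example, assuming a hypothetical project and plug-in name (the actual file names will depend on your Projucer settings and platform), the layout might look like this:

```
MyUnityProject/
└── Assets/
    └── Plugins/
        ├── audioplugin_MyPlugin.dll   (the built plug-in, Windows)
        └── MyPlugin.cs                (generated GUI script, Windows/Linux)
```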
The plug-in can then be added to an Audio Mixer in the Unity editor and used in your signal flow.
The GUI shown in the editor window for your plug-in depends on the return value of your plug-in's implementation of AudioProcessor::hasEditor(). If this returns true, the full plug-in GUI will be drawn to the screen; if it returns false, Unity's default sliders will be used for any AudioProcessorParameters that your plug-in exposes (a helpful tutorial on plug-in parameters in JUCE can be found here). Using the default Unity sliders allows you to expose your parameters to Unity scripts by right-clicking the parameter name and selecting "Expose yourParameterName to script" (this is not supported when using the plug-in's GUI):
And that's it! Please post any feedback for this feature to the JUCE forum.