Sliders

I’ve spent the last two weeks banging my head against the proverbial wall, re-learning class and function declarations in C#, the programming language that underpins the Unity Engine (Unity), trying to get the PinchSlider assets from Microsoft’s Mixed Reality Toolkit (MRTK) to talk to the AudioMixer class that’s native to Unity.

With some help from a few professional programmers in my network, as well as tutorials and manuals developed by both Unity and Microsoft, as of this weekend I have a functional skeleton for the first ‘movement’ of Touching Light, which involves independent audio faders that adjust the volume of loop-based original music.

This will function as an augmentable backing track alongside which the performer can improvise freely, or engage with some pre-written melodic and harmonic material presented in traditional staff notation.

Final testing will occur in the next day or two to ensure that device deployment works as intended, after which I’ll take a break from app development for the remainder of the week to shore up chapters 2 and 3 before sending them along to my research advisor for preliminary comments.

Included below is a technical overview of the Unity assets and scripts involved at this juncture, which are subject to change as I optimize:

In-Engine Render

Here you’re seeing a collection of 3D objects adapted from the PinchSlider pre-fabricated (prefab) assets provided in the MRTK. Each fader is interactable with the HoloLens 2 (HoloLens) “pinch” gesture (hence, “PinchSlider”), and the ‘thumb’ (the knob that slides) moves vertically along its track, returning a value between 0 and 1 depending on where it sits.

The context menu near the bottom is a profiler asset that lets me track the CPU usage of different interactions in real time, keeping an eye on whether things are in danger of causing lag and freezing or crashing the program; so far, we’re in the green.

PinchSlider Inspector

The connection between the PinchSlider asset and my own scripts lives in the ‘Events’ section: whenever the value of the slider (the number between 0 and 1) changes, the PinchSlider broadcasts that value, which can then be collected by other scripts (programs) and used to alter things like the volume of specific sounds, loops, etc.

I wrote the MixLevels.cs script (referenced in the On Value Updated event) to take that slider value and apply it to the volume for the appropriate track.
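For anyone who prefers to see that connection expressed in code rather than in the Inspector, here is a minimal sketch of the same wiring done at runtime. The SliderHookup name and its fields are mine; the OnValueUpdated event and SliderEventData type are part of the MRTK’s PinchSlider itself:

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

public class SliderHookup : MonoBehaviour
{
    [SerializeField] private PinchSlider slider;  // the main component on the MRTK slider prefab
    [SerializeField] private MixLevels mixLevels; // the script shown below

    private void OnEnable()
    {
        // equivalent to adding an entry under the 'Events' section in the Inspector
        slider.OnValueUpdated.AddListener(mixLevels.SetMusicLvl);
    }

    private void OnDisable()
    {
        slider.OnValueUpdated.RemoveListener(mixLevels.SetMusicLvl);
    }
}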

MixLevels.cs

using UnityEngine;
using UnityEngine.Audio;
using Microsoft.MixedReality.Toolkit.UI;

public class MixLevels : MonoBehaviour
{
    public AudioMixer masterMixer; // create a new object to reference the in-engine audio mixer

    private float dB; // create a new variable to hold the slider's value as it is converted to decibels
    private string param;

    [SerializeField] // ask for input in the Inspector
    private string exposedParam = null;

    public void SetMusicLvl(SliderEventData eventData) // take an input of SliderEventData type
    {
        param = exposedParam;
        dB = eventData.NewValue;
        if (dB != 0) // the logarithmic conversion below breaks with an input of '0,' so check for it first
        {
            // change the slider value to something that works for a volume fader;
            // essentially map the 0-1 range to a -80 to 0 range
            masterMixer.SetFloat(param, Mathf.Log10(dB) * 20);

            Debug.Log(dB); // print dB value in order to confirm that it is being changed
        }
        else
        {
            masterMixer.SetFloat(param, -80); // if input is 0, set the fader to -80

            Debug.Log(dB); // print dB value in order to confirm that it is being changed
        }
    }
}
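To make that logarithmic mapping concrete, here is a quick throwaway sketch, mine rather than part of the app, that prints what a few sample slider positions convert to (0.5 works out to roughly -6 dB, 0.1 to exactly -20 dB, and 0.0001 to exactly -80 dB):

using UnityEngine;

public class MappingDemo : MonoBehaviour
{
    private void Start()
    {
        // sample points along the slider's 0-1 range, converted the same way MixLevels.cs does
        foreach (float v in new[] { 1.0f, 0.5f, 0.1f, 0.0001f })
        {
            Debug.Log($"slider {v} -> {Mathf.Log10(v) * 20f} dB");
        }
    }
}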

A fairly ‘simple’ script, relative to what is possible on the grand scale, this program allows the user to specify which fader’s volume (an “exposed parameter,” essentially meaning that it is visible and available for other programs to edit) this specific copy of the MixLevels.cs script should control.

I’ve color-coded the code above; anything in sage green is a ‘comment’: a note within the program, written in English for other programmers, that the computer ignores when it runs the program. Commenting is an important way to communicate with others who will look at your code, helping them understand what your program is doing, and how.

Master Fader

While there is likely a more efficient and elegant way to create these connections (I sketch one possibility at the end of this post), this is the final piece of the puzzle: the place where the user designates which audio track the slider should control. You can see here that in the ‘Exposed Param’ field (which I’ve marked with [SerializeField] in the script code above, thus prompting for an input) I’ve designated ‘masterVol’ which, as you might guess, is the reference to the volume of the Master Fader (the one that controls the total overall volume of all of the tracks).
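One caveat: because the ‘Exposed Param’ field is a plain string, a typo there fails silently. A small sanity check along these lines, my own addition built on Unity’s stock AudioMixer.GetFloat, can confirm that the name actually matches an exposed parameter:

using UnityEngine;
using UnityEngine.Audio;

public class MixerParamCheck : MonoBehaviour
{
    public AudioMixer masterMixer;
    [SerializeField] private string exposedParam = "masterVol";

    private void Start()
    {
        // GetFloat returns false if the name was never exposed or is misspelled
        float current;
        if (masterMixer.GetFloat(exposedParam, out current))
        {
            Debug.Log($"{exposedParam} is currently {current} dB");
        }
        else
        {
            Debug.LogWarning($"{exposedParam} is not an exposed parameter on this mixer");
        }
    }
}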

By multiplying this setup, I’ve then generated the 11 faders needed (10 for the individual tracks, plus the master fader) to control the complete original music.

Screen recording while rendering 3D and audio in real time is a bit taxing
(hence the red on the profiler)
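As promised above, here is one possible shape for that ‘more efficient and elegant way’: a single shared mixer script plus a thin per-slider component, rather than eleven copies of MixLevels.cs. This is only a sketch under my own naming; MultiTrackMixer and TrackFader are hypothetical and not part of the current app:

using UnityEngine;
using UnityEngine.Audio;
using Microsoft.MixedReality.Toolkit.UI;

// one shared script that knows how to set any exposed parameter
public class MultiTrackMixer : MonoBehaviour
{
    public AudioMixer masterMixer;

    public void SetLevel(string exposedParam, float sliderValue)
    {
        // same conversion as MixLevels.cs, with the 0 case folded in
        float dB = (sliderValue != 0) ? Mathf.Log10(sliderValue) * 20f : -80f;
        masterMixer.SetFloat(exposedParam, dB);
    }
}

// one thin component per slider, holding only that track's parameter name
public class TrackFader : MonoBehaviour
{
    public MultiTrackMixer mixer;
    public string exposedParam;

    // wired to the slider's On Value Updated event, exactly as before
    public void OnSliderUpdated(SliderEventData eventData)
    {
        mixer.SetLevel(exposedParam, eventData.NewValue);
    }
}

The idea is simply to move the duplicated conversion logic into one place, so each slider only has to carry the name of the exposed parameter it controls.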