renders

Exciting times! Lots of things going on at WVU, both related to my dissertation, and otherwise. With midterms having just concluded, we enter the final stages of planning for our virtual percussion ensemble concert, student juries, and subsequently my final recital… luckily, I seem to have some apps that work!

The past month has been a lot of keeping my nose to the grindstone and iterating on the development of the final movement of Touching Light, ‘Synecdoche’ (or ‘the one with cubes on’ as it has been called). I’ll include a video at the end of this post that shows the app for the movement in action.

The artistic goal for this movement was to explore musical interactions that were unique to MR; while movements 1 and 2 engage MR to broaden the possibilities of live performance, ultimately both the holo-mixer and the carousel could be achieved via other means. The weightless projections and interactions of the holographic objects in movement 3 are a different story.

There were three main things that I wanted to do with Synecdoche. The first was to make ‘primitives’ (in this case, cubes) somehow the ‘star of the show.’ I started my 3D modeling experience with Blender, and so the ‘meme’ about deleting the default cube may have directly inspired this in more ways than one. So, the first step was to figure out what I could do to make the cube(s) interesting.

As I’ve shown before, I colored the cubes so that they corresponded to the three principal colors in the RGB color profile, with some edge-lighting to make them a bit more abstract, and then made them weightless.

For the final version, I’ve spent a lot of time on the HandMenu, building out specific controls for each cube individually, as well as some global controls.

The Mute All command forcibly mutes the Audio Sources on all of the cubes by toggling the Mute boolean on each Audio Source.
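
In sketch form, that command boils down to something like the following (the class and field names here are illustrative rather than the project’s actual ones, with the Audio Sources assigned in the inspector and the method wired to the button’s click event):

using UnityEngine;

// Illustrative sketch of a Mute All handler.
public class MuteAllCommand : MonoBehaviour
{
    public AudioSource[] cubeSources; // Audio Sources on the Red, Blue, and Green cubes

    public void MuteAll()
    {
        foreach (AudioSource source in cubeSources)
        {
            source.mute = true; // the same Mute boolean exposed on the Audio Source component
        }
    }
}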

Reset All resets all of the cubes to the origin, without removing any inertia that they may be carrying, using a custom script called SetToOrigin:

using UnityEngine;

public class SetToOrigin : MonoBehaviour
{
    // Target position and rotation, set in the inspector (the origin, in this case).
    public Vector3 pos;
    public Quaternion rot;

    // Runs for a single frame when the component is enabled (e.g., by the Reset All button),
    // teleports the cube, and then disables itself so the cube is free to move again.
    // Velocity is untouched, so any inertia the cube is carrying remains.
    void Update()
    {
        transform.SetPositionAndRotation(pos, rot);
        this.enabled = false;
    }
}

Boundaries generates the cage around the performer, which helps keep the cubes within a reasonable performance space.

CREATE

The Create command enables the Mesh Renderer on each of the cubes (which is disabled by default).
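
A minimal sketch of that toggle, again with illustrative names and the renderers assigned in the inspector:

using UnityEngine;

// Illustrative sketch of the Create command: reveal a cube by
// enabling the MeshRenderer that starts out disabled.
public class CreateCommand : MonoBehaviour
{
    public MeshRenderer[] cubeRenderers; // renderers on the Red, Blue, and Green cubes

    public void CreateAll()
    {
        foreach (MeshRenderer cubeRenderer in cubeRenderers)
        {
            cubeRenderer.enabled = true;
        }
    }
}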

RELEASE

The Release commands activate a few different functions, unique to each cube.

Release RED
When released, the Red Cube activates the ObjectPuller custom script, which pulls the Red Cube toward a designated object (the Blue Cube) whenever it is within range:

using UnityEngine;

public class ObjectPuller : MonoBehaviour
{
    public GameObject attractedTo;              // the object this cube is pulled toward (the Blue Cube)
    public float strengthOfAttraction = 0.01f;  // force multiplier
    public float radiusOfAttraction = 1.0f;     // only attract within this distance (meters)
    float distance;

    void FixedUpdate()
    {
        distance = Vector3.Distance(transform.position, attractedTo.transform.position);
        if (distance < radiusOfAttraction)
        {
            // Apply a force along the vector pointing from this cube toward the target.
            Vector3 direction = attractedTo.transform.position - transform.position;
            GetComponent<Rigidbody>().AddForce(strengthOfAttraction * direction);
        }
    }
}

Release BLUE
When released, the Blue Cube activates the ObjectPusher custom script, which pushes the Blue Cube away from a designated object (the Green Cube) whenever it is within range:

using UnityEngine;

public class ObjectPusher : MonoBehaviour
{
    public GameObject attractedTo;              // the object this cube is pushed away from (the Green Cube)
    public float strengthOfRepulsion = 0.01f;   // force multiplier
    public float radiusOfRepulsion = 1.0f;      // only repel within this distance (meters)
    float distance;

    void FixedUpdate()
    {
        distance = Vector3.Distance(transform.position, attractedTo.transform.position);
        if (distance < radiusOfRepulsion)
        {
            // Apply a force along the vector pointing away from the other object
            // (note the reversed subtraction compared to ObjectPuller).
            Vector3 direction = transform.position - attractedTo.transform.position;
            GetComponent<Rigidbody>().AddForce(strengthOfRepulsion * direction);
        }
    }
}

Release GREEN
When released, the Green Cube is affected by gravity, causing it to accelerate downwards at approximately the same rate as an object on the Earth’s Moon. No custom scripts are required for this interaction; instead, the button toggles ‘Use Gravity’ on the cube’s Rigidbody.
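
Since Unity’s default gravity is Earth-strength (about -9.81 m/s² on the Y axis), a moon-like fall implies the project-wide gravity has been dialed down. A minimal sketch of that idea in code, purely for illustration (the app itself just toggles ‘Use Gravity’ in the inspector, as noted above):

using UnityEngine;

// Sketch: approximate lunar gravity (~1.62 m/s^2) project-wide,
// then let the hand menu decide whether the Green Cube feels it.
public class LunarGravity : MonoBehaviour
{
    public Rigidbody greenCube; // illustrative reference, assigned in the inspector

    void Start()
    {
        Physics.gravity = new Vector3(0f, -1.62f, 0f); // replaces the default -9.81 m/s^2
    }

    // Wired to the Release GREEN button.
    public void ReleaseGreen()
    {
        greenCube.useGravity = true;
    }
}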

SING

Each of the cubes has a different function when ‘Singing,’ but RED and GREEN are functionally the same: both cubes’ Audio Sources are unmuted (they begin muted).

This allows the tracks that they play to remain in sync, as they play on load.
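
In Unity terms, this works because a muted Audio Source keeps advancing silently, so unmuting it later drops back in at the correct point in the track. A minimal sketch of the pattern, with illustrative field names:

using UnityEngine;

// Sketch: start both tracks on load but muted, so they stay locked together;
// the SING button simply unmutes them at whatever point they have reached.
public class SingCommand : MonoBehaviour
{
    public AudioSource redSource;
    public AudioSource greenSource;

    void Start()
    {
        redSource.mute = true;
        greenSource.mute = true;
        redSource.Play();   // both begin playing (silently) at the same moment
        greenSource.Play();
    }

    public void SingRedAndGreen()
    {
        redSource.mute = false;
        greenSource.mute = false;
    }
}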

For the BLUE cube, the ‘SING’ button activates the ImpactTrigger custom script, which plays a randomized diatonic note whenever the cube detects a collision:

using UnityEngine;

public class ImpactTrigger : MonoBehaviour
{
    // Seven audio sources, one per diatonic note, attached as children of Blue_Cube
    // and assigned in the inspector.
    public AudioSource source0;
    public AudioSource source1;
    public AudioSource source2;
    public AudioSource source3;
    public AudioSource source4;
    public AudioSource source5;
    public AudioSource source6;

    AudioSource[] notes;

    void Awake()
    {
        notes = new AudioSource[] { source0, source1, source2, source3, source4, source5, source6 };
    }

    // Whenever the cube collides with something, play one of the notes at random.
    private void OnCollisionEnter(Collision collision)
    {
        notes[Random.Range(0, notes.Length)].Play();
    }
}

Those sources are defined through the Impact Trigger (Script) component, and are attached as separate audio sources and children of Blue_Cube.

SILENCE

The Silence function is essentially the reverse of the Sing function, muting the various Audio Sources, and disabling ImpactTrigger.

FREEZE

The Freeze function is essentially the reverse of the Release function, putting the Rigidbody components on the cubes to sleep and disabling the ObjectPuller, ObjectPusher, and gravity interactions.
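
A sketch of what such a handler could look like, assuming direct references to the cubes’ Rigidbodies and to the Release scripts shown above (field names are illustrative):

using UnityEngine;

// Illustrative sketch of the Freeze command: stop all motion,
// put the Rigidbodies to sleep, and disable the Release behaviours.
public class FreezeCommand : MonoBehaviour
{
    public Rigidbody[] cubeBodies;
    public ObjectPuller redPuller;   // script shown earlier in this post
    public ObjectPusher bluePusher;  // script shown earlier in this post
    public Rigidbody greenCube;

    public void FreezeAll()
    {
        foreach (Rigidbody body in cubeBodies)
        {
            body.velocity = Vector3.zero;
            body.angularVelocity = Vector3.zero;
            body.Sleep(); // let the physics engine stop simulating the body until it is disturbed
        }

        redPuller.enabled = false;
        bluePusher.enabled = false;
        greenCube.useGravity = false; // undo the Release GREEN gravity toggle
    }
}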

FIND

The Find function generates a floating orb (with no Rigidbody or collider) that always points toward its corresponding cube, disappearing once the cube is in view.
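
One way to sketch the pointing-and-hiding behavior with plain Unity calls (the actual implementation may differ):

using UnityEngine;

// Sketch of a Find orb: always face the target cube, and hide
// the orb's renderer once the cube is inside the camera's view.
public class FindOrb : MonoBehaviour
{
    public Transform targetCube;       // the cube this orb points to
    public MeshRenderer orbRenderer;   // the orb's own renderer

    void Update()
    {
        // Orient the orb so its forward axis points at the cube.
        transform.LookAt(targetCube);

        // Treat the cube as "in view" when it projects inside the viewport
        // and sits in front of the camera.
        Vector3 vp = Camera.main.WorldToViewportPoint(targetCube.position);
        bool inView = vp.z > 0f && vp.x > 0f && vp.x < 1f && vp.y > 0f && vp.y < 1f;

        orbRenderer.enabled = !inView;
    }
}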

PING-PONG

Finally, and this was more a happy accident than an intentional design, because of the way that I built the Hand Menus, they can be used like paddles to hit the cubes and send them floating off.

Overall, I’m pleased with the way that things are shaping up… recital is on May 1 at 10 AM EST!

gravity

With the spring semester at WVU in full force, finding the right balance between writing chapters for the dissertation, working on the Touching Light application, and teaching and coaching various lessons and ensembles has been tenuous, but so far, the project is still on track! With the first two ‘movements’ in a stable place and successfully ported to the HoloLens, the last piece of the puzzle is the development of the third and final movement of the application, which involves both the most interactivity and the most custom coding and design.

That said, the initial versions of the third movement have been picking up steam.

While the first movement aims to create a virtual representation of a physical object (a sound mixer), and the second movement creates an abstracted virtual representation of physical objects (the photosphere), the third movement explores interactions and functionality that are unique to the MR space by employing what I’ve referred to in my research as an ‘esoteric liminality’ (still not sold on this title, but it’s what I’m working with at the moment). In English, this is simply a virtual environment that manifests and takes advantage of interactions that are unique to the MR platform, including interacting with abstract shapes (in our case, colored cubes), audio spatialization in an MR volume, and MR collisions of virtual and physical objects (this bit is proving to be a smidge tricky).

Overall, the progress of development is on track, but as always, I’ll need to remember that this is intended to be a proof of concept, not necessarily a polished, marketable experience (but it would be nice if it was, right?).

Details below about specific application features and how they’ve been achieved.

The Barrier

One of the biggest downsides to working in the Unity Editor (which would be the same for any development that is not done natively on the XR platform) is the inability of the editor to map the physical environment in any meaningful way during development. To address this issue, I’ve simulated a physical room using 3D planes arranged in a vaguely room-sized cube surrounding the initial instantiation of the MRTK camera, which allows any sort of collision interaction to be explored, even if it doesn’t solve the issue of needing to track collisions with the physical volume – this will come soon.
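
As an aside, a stand-in room like this could also be generated procedurally instead of arranging the planes by hand; a minimal sketch of that idea, with made-up dimensions:

using UnityEngine;

// Sketch: build a rough room-sized box of six thin walls around the origin
// so rigidbodies have something to collide with while testing in the editor.
public class SimulatedRoom : MonoBehaviour
{
    public Vector3 roomSize = new Vector3(4f, 3f, 4f); // rough width, height, depth in meters
    public float wallThickness = 0.05f;

    void Start()
    {
        BuildWall(new Vector3(0, -roomSize.y / 2, 0), new Vector3(roomSize.x, wallThickness, roomSize.z)); // floor
        BuildWall(new Vector3(0, roomSize.y / 2, 0), new Vector3(roomSize.x, wallThickness, roomSize.z));  // ceiling
        BuildWall(new Vector3(-roomSize.x / 2, 0, 0), new Vector3(wallThickness, roomSize.y, roomSize.z)); // left
        BuildWall(new Vector3(roomSize.x / 2, 0, 0), new Vector3(wallThickness, roomSize.y, roomSize.z));  // right
        BuildWall(new Vector3(0, 0, -roomSize.z / 2), new Vector3(roomSize.x, roomSize.y, wallThickness)); // back
        BuildWall(new Vector3(0, 0, roomSize.z / 2), new Vector3(roomSize.x, roomSize.y, wallThickness));  // front
    }

    void BuildWall(Vector3 center, Vector3 size)
    {
        GameObject wall = GameObject.CreatePrimitive(PrimitiveType.Cube); // comes with a BoxCollider
        wall.transform.parent = transform;
        wall.transform.localPosition = center;
        wall.transform.localScale = size;
    }
}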

The Cüb

The cube represents the simulated physical volume and acts as a barrier for virtual objects to collide with, as they would in a physical space with walls, floor, and ceiling. The coloration is simply for ease of identifying the edges of the barrier and will not be present in the final application.

A lot of Cubes

With the barrier in place, I’ve devised three simple ‘esoteric’ objects to serve as prototypes for interactability, represented by three smallish cubes: one red, one blue, and one green.

RGB cubes

At the moment, the cubes have three primary components: a spatialized audio source, interactability using hand rays and articulated hands, and, new for this movement, rigidbody physics simulation.

RedCube Inspector

Grabbable

Interactability was fairly straightforward and simply involved recreating the combination of components that I’ve used in the prior two movements; both the mixer and the photosphere have Object Manipulator and NearInteractionGrabbable scripts tied to them.

Using the simulated hand articulations

Spatialized

Secondarily, each cube serves as an audio source for one part of the three-part pre-recorded track (which I am still in the process of writing). For the moment, the bass, kick, and lead from movement one are assigned for testing purposes. Unique to these audio source components, however, is their spatialization. This means that the panning and volume of the individual tracks are determined by the user’s orientation and distance to the object; the closer the user is to the cube, the louder the volume of the track. Similarly, if the user places the cube to their right or left, the audio will be sent only to the appropriate side of the stereo mix.

The audio spatializer
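
For reference, a minimal sketch of how settings like these could be driven from code, assuming a plain Unity Audio Source (the project may also rely on a spatializer plugin selected in the audio settings, which this sketch does not cover):

using UnityEngine;

// Sketch: the Audio Source settings that make a track behave as a point
// source in space, so volume and panning follow the listener's position.
public class SpatializedCube : MonoBehaviour
{
    public AudioSource source;

    void Start()
    {
        source.spatialBlend = 1.0f;  // 0 = pure 2D stereo, 1 = fully 3D positioned
        source.spatialize = true;    // hand the source to the project's spatializer plugin, if one is configured
        source.rolloffMode = AudioRolloffMode.Logarithmic; // volume falls off with distance
        source.minDistance = 0.3f;   // full volume inside this radius (illustrative value)
        source.maxDistance = 5f;     // distance at which attenuation levels off (illustrative value)
        source.loop = true;
        source.Play();
    }
}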

Weightless

Spatialization by itself is a very interesting tool to play with, and I’ve had a lot of fun placing the three cubes at different distances and orientations to create unique mixes, both in terms of volume and panning. In some ways, this concept is similar to the traditional style of mixer present in the first movement, but speaking from experience, the interactable experience is, for lack of a better word, much more esoteric in this model.

Compounding the creative interest of the audio spatialization is perhaps the most engaging element of the movement in its current iteration, namely the rigidbody physics simulation. By identifying the cubes as ‘rigidbodies’ (a term in visual effects and 3D modeling that means that the physics simulation treats the object as a contiguous, immutable whole), it is a fairly straightforward process to get Unity (and by extension the HoloLens) to apply a desired physics simulation to the objects.

In this case, I wanted the cubes to move through the virtual volume as if they were in outer space, without gravity and with near-perfect conservation of momentum. A few changes to the physics settings later, and we suddenly have space cubes!

Example of rigidbody physics and spatialization in action
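
The ‘few changes’ presumably amount to turning off gravity and damping on each Rigidbody; a sketch of that kind of setup, together with the sort of elastic, frictionless Physic Material discussed below (values are illustrative, not the project’s actual settings):

using UnityEngine;

// Sketch: configure a cube to drift as if in space and to keep
// (most of) its momentum when it bounces off the barrier.
public class SpaceCube : MonoBehaviour
{
    void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.useGravity = false;   // no falling
        body.drag = 0f;            // no air resistance
        body.angularDrag = 0f;     // no spin damping

        // A perfectly elastic, frictionless material on the cube's collider.
        PhysicMaterial elastic = new PhysicMaterial("SpaceCube")
        {
            bounciness = 1f,
            dynamicFriction = 0f,
            staticFriction = 0f,
            frictionCombine = PhysicMaterialCombine.Minimum,
            bounceCombine = PhysicMaterialCombine.Maximum // use the bouncier of the two colliders
        };
        GetComponent<Collider>().material = elastic;
    }
}

One setting that may matter for the momentum loss described below is the bounce combine mode: Unity averages the two colliders’ bounciness by default, so a perfectly bouncy cube hitting a non-bouncy barrier still sheds velocity on every hit.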

I’m not sure yet why the cubes seem to lose so much momentum when they strike a barrier; I’ve tried playing around with Physic Materials to add elasticity to the interactions, etc., but haven’t had great luck so far. I’m sure there’s some relationship between the angular momentum and the linear momentum that I’m missing, but even with that in mind, there still seems to be an inordinate amount of velocity being shed after every collision. Ideally, the cubes will continue to ricochet indefinitely, perpetually altering the spatialization of the track, but that’s a story for another day.

Using Hand Menus

Apart from the volume and the cubes themselves, I’ve also implemented a contextual ‘hand menu,’ which is simply a menu whose appearance is determined by the gaze tracking on the HoloLens platform; if you are looking at the back of your right hand, the ‘hand menu’ should appear in the air toward the inside of the hand.

After ‘losing’ the cubes a few times, I thought it might be a good idea to have a menu that would be able to call simple commands like ‘freeze,’ ‘mute,’ ‘reset,’ etc. in the event that the cube(s) is for some reason unreachable. So far, the freeze and mute functions are working, as they can be rigged to simple OnClick events. The ‘Reset All’ command, as well as an as-yet uncreated ‘Generate’ command which will create cubes of certain colors, will require some specific coding that I’m still getting a handle on (no pun intended).

It’s been difficult to get the ‘hand menu’ to show up without hands…

Stretch Goals

Depending on how long it takes to get the hand menu and the rigidbody interactions working the way that I’m intending, I also have some other ideas to map various other volumetric parameters to specific audio/mixer effects (e.g., size to effect pitch shifting, XYZ orientation to effect EQ, etc.), but that will take a back-seat to getting the initial interactions working smoothly.

sliders

I’ve spent the last two weeks banging my head against the proverbial wall, re-learning class and function declarations in C#, the programming language that underpins the Unity Engine (Unity), trying to get the Pinch Slider assets from Microsoft’s Mixed Reality Toolkit (MRTK) to talk to the AudioMixer class that’s native to Unity.

With some help from a few professional programmers in my network, as well as tutorials and manuals that have been developed by both Unity and Microsoft, as of this weekend I have a functional skeleton for the first ‘movement’ of Touching Light, which involves independent audio faders that will adjust the volume of loop-based original music.

This will function as an augmentable backing track alongside which the performer can improvise freely, or engage with some pre-written melodic and harmonic material that will be presented with traditional staff-notation.

Final testing will occur in the next day or two to ensure that device deployment works as I am intending, after which I’ll take a break from app development for the remainder of the week to shore up chapters 2 and 3 before sending them along to my research advisor for preliminary comments.

Included below is a technical overview of the Unity assets and scripts involved at this juncture, which are subject to change as I optimize:

In-Engine Render

Here you’re seeing a collection of 3D objects that have been adapted from the PinchSlider pre-fabricated (prefab) assets provided in the MRTK; each fader is interactable with the HoloLens 2 (HoloLens) “pinch” gesture (hence, “PinchSlider”), and the ‘thumb’ (the knob that slides) moves vertically along its track, returning a value between 0 and 1 depending on where it is located.

The context menu near the bottom is a profiler asset that allows me to track the CPU usage of different interactions in real time, keeping an eye on whether or not things are in danger of causing lag or freezing/crashing the program; so far, we’re in the green.

PinchSlider Inspector

The connection between the PinchSlider asset and the rest of the application exists within the ‘Events’ section: whenever the value of the slider (the number between 0 and 1) changes, the PinchSlider returns that value, which can then be collected by other scripts (programs) and used to alter things like the volume of specific sounds, loops, etc.

I wrote the MixLevels.cs script (referenced in the On Value Updated event) to take that slider value and apply it to the volume for the appropriate track.

MixLevels.cs

using Microsoft.MixedReality.Toolkit.UI; // SliderEventData (MRTK)
using UnityEngine;
using UnityEngine.Audio;                 // AudioMixer

public class MixLevels : MonoBehaviour
{
    public AudioMixer masterMixer; // reference to the in-engine audio mixer

    private float dB;              // holds the slider's 0-1 value before it is converted to a fader level
    private string param;

    [SerializeField] // ask for input in the inspector
    private string exposedParam = null;

    public void SetMusicLvl(SliderEventData eventData) // take an input of slider event type
    {
        param = exposedParam;
        dB = eventData.NewValue;
        if (!(dB == 0)) // the logarithmic conversion below breaks with an input of '0', so check for that case first
        {
            // Change the slider value to something that works for a volume fader;
            // essentially map the 0-1 range to a -80 to 0 dB range.
            masterMixer.SetFloat(param, Mathf.Log10(dB) * 20);

            Debug.Log(dB); // print the value in order to confirm that it is being changed
        }
        else
        {
            masterMixer.SetFloat(param, -80); // if the input is 0, set the fader to -80 dB

            Debug.Log(dB); // print the value in order to confirm that it is being changed
        }
    }
}

A fairly ‘simple’ script, as far as what is possible on the grand scale, this program allows the user to specify which fader’s volume (an “exposed parameter,” essentially meaning that it is visible and available for other programs to edit) this specific copy of the MixLevels.cs script should be attached to.

I’ve color-coded the code above; anything in sage green is a ‘comment’: notes within the program, written in English for other programmers, that the computer ignores when it runs the program. Commenting is an important way to communicate with others who will look at your code, helping them understand what your program is doing, and how.
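
As an aside, the connection that the ‘On Value Updated’ event makes in the inspector could, in principle, also be wired up from code by subscribing to the slider’s event; a sketch assuming the MRTK 2 PinchSlider and the MixLevels script above (class and field names here are illustrative):

using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Illustrative sketch: subscribe MixLevels to a PinchSlider from code
// instead of wiring the event in the inspector.
public class SliderWiring : MonoBehaviour
{
    public PinchSlider slider;   // the MRTK PinchSlider instance
    public MixLevels mixLevels;  // the script shown above

    void Start()
    {
        // Whenever the slider value (0-1) changes, pass the event data on to the mixer script.
        slider.OnValueUpdated.AddListener(mixLevels.SetMusicLvl);
    }
}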

Master Fader

While there is likely a more efficient and elegant way to create these connections, this is the final piece of the puzzle: the place where the user can designate which audio track the slider should be in control of. You can see here that in the ‘Exposed Param’ field (which I’ve shown is a ‘Serialized Field’ in the script code above, thus prompting for an input) I’ve designated ‘masterVol,’ which, as you might guess, is the reference to the volume of the Master Fader (the one that controls the total overall volume of all of the tracks).

By multiplying this five or ten times over, I’ve then generated the necessary faders to control all 11 (10, plus the master fader) of the individual tracks that make up the complete original music.

Screen recording while rendering 3D and audio in real-time is a bit taxing
(hence the red on the profiler)

foundations

Approval for the project prospectus has come through from my DMA committee, and so it is time to properly begin this project. Today’s task was to connect and deploy a Unity scene onto the Microsoft HoloLens 2 (“Lens”) using Microsoft Visual Studio (“VS”).

It took some time and a bit of fiddling to get everything connected, but following some Microsoft (“MSFT”) tutorials from their HoloLens documentation eventually proved successful.

The “foundations” scene that was successfully deployed is very basic: it is a gradient image that evokes a sense of a horizon line, resting approximately twelve inches away from the headset and locked in orientation to always be directly in front of the Lens. The camera tracking allowed the image to maintain relative position (in relation to the Lens) while changing its absolute position (in relation to its coordinates in 3D space) and absolute orientation in response to the motion of the Lens, and by extension, my head.

The image was not interactable, and I eventually ended the deployment through VS on my laptop, as I had not given the Lens a way to quit out of the application once it had launched, though there may be something built into the MSFT Mixed Reality Toolkit (“MRTK”) that I am not aware of.

The next step will be to create a stationary scene that allows the camera to be tracked to the Lens, as well as to determine how to run an application natively on the Lens without needing to launch and support it from VS.