Successfully deployed a stable build of Mixer Test 01 to the HoloLens 2; everything seemed to run smoothly, and the audio was in sync with itself and adjustable using the programmed faders. The console itself is a bit awkward to move about, as I didn’t include a way to view its bounding box, but with minimal trial and error I was able to place the mixer in a set location and successfully use the ‘pin’ function to lock it in place.

I’m continuing to stress test and clean up the code for the mixer app, but as it stands, with a successful deployment, that portion of the project is officially in alpha!

Chapters 1-3 of the dissertation have been submitted to my research advisor for a first pass of suggested edits and revisions. I’ve also begun reaching out to some folks I’ve had the opportunity to cross paths with, both to line up interviews about their experience working with percussion and live technology and to solicit their thoughts on specific elements of the project.

The second bit of programming will center around the development of an adaptive photo-sphere that will rotate around the performer over a set duration, serving as a sort of inspirational score for improvisatory and/or pre-composed musical material.
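A minimal sketch of that rotation behavior, assuming the panels are parented under a single GameObject centered on the performer (the class name and duration value are placeholders, not the project’s actual scripts):

```csharp
using UnityEngine;

// Hypothetical sketch: rotates the photo-sphere parent one full turn
// around the vertical axis over a configurable duration.
public class SphereRotator : MonoBehaviour
{
    [Tooltip("Seconds for one full 360-degree rotation.")]
    public float rotationDuration = 300f;

    void Update()
    {
        // Degrees per second needed for one revolution over rotationDuration.
        float degreesPerSecond = 360f / rotationDuration;
        transform.Rotate(Vector3.up, degreesPerSecond * Time.deltaTime, Space.World);
    }
}
```

Attached to the sphere’s parent object, this would let the “set duration” be tuned per piece from the Inspector.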

A basic render of the current iteration in action below:

At the moment, the images exist as grossly extruded 3D cubes, with little regard for the original dimensions of the images mapped onto them. This element will likely remain fairly similar (in terms of being size-agnostic) in the final build, but there will definitely be some tweaking of the overall size of the images, taking into account that at their current scale they would likely clip through the floor of the physical environment, etc.

Ideally, I’ll also get them to curve…!

Images are mapped directly as the albedo map for the 3D objects, bypassing the material creation structure native to the Unity editor; this might not be the right option, but for the moment, it lets things be visible when I need them to be. Currently the textures are approximately 50% translucent (using the RGBA alpha key mapped to the object opacity) to allow the performer to remain aware of the physical environment beyond the ‘sphere’ (e.g. the audience!), but the current clipping of the geometries combined with the texture wrapping creates odd visuals like the double-sunset (which might be fixed by setting the Cull Mode to “Back” instead of “Off”).
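The texture and opacity setup could be scripted rather than done by hand. A sketch, assuming a material whose shader exposes a _CullMode property (the MRTK Standard shader does; the built-in Standard shader does not), with the class and field names as placeholders:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Hypothetical sketch: assigns an image as the albedo texture and sets
// ~50% opacity on a panel's material at startup.
public class PanelMaterialSetup : MonoBehaviour
{
    public Texture2D image;

    void Start()
    {
        // .material gives this renderer its own instance, so each panel
        // can carry a different image.
        Material mat = GetComponent<Renderer>().material;
        mat.mainTexture = image;   // albedo slot

        Color c = mat.color;
        c.a = 0.5f;                // ~50% translucent via the color's alpha
        mat.color = c;

        // Cull back faces so only one side of each panel renders,
        // which should remove the double-sunset artifact.
        mat.SetFloat("_CullMode", (float)CullMode.Back);
    }
}
```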

Currently the panels are being drawn arbitrarily using a GridObjectCollection script, then tweaked individually to present the image-facing side toward the ‘origin’ (in this case, the center of the ‘sphere’).
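The per-panel tweaking could eventually be automated. A sketch of one approach, assuming the panels are children of a common parent and that each panel’s textured face looks along its local forward axis (both assumptions, and all names are placeholders):

```csharp
using UnityEngine;

// Hypothetical sketch: after layout, rotate each child panel so its
// textured face points toward the sphere's center.
public class FacePanelsInward : MonoBehaviour
{
    public Transform sphereCenter; // the 'origin' of the sphere

    void Start()
    {
        foreach (Transform panel in transform)
        {
            // Aim the panel's forward axis away from the center, so the
            // opposite (image-bearing) face looks inward at the performer.
            Vector3 outward = panel.position - sphereCenter.position;
            panel.rotation = Quaternion.LookRotation(outward);
        }
    }
}
```

Which face ends up inward depends on how the texture is mapped onto the cube, so the outward vector may need to be flipped in practice.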

On execution, a HeadPositionOffset script also forces the ‘sphere’ to de-center from the perspective and facing of the user.


Approval for the project prospectus has come through from my DMA committee, and so it is time to properly begin this project. Today’s task was to connect and deploy a Unity scene onto the Microsoft HoloLens 2 (“Lens”) using Microsoft Visual Studio (“VS”).

It took some time and a bit of fiddling to get everything connected, but following some Microsoft (“MSFT”) tutorials from their HoloLens documentation eventually proved successful.

The “foundations” scene that was successfully deployed is very basic: a gradient image that evokes a sense of a horizon line, resting approximately twelve inches from the headset and locked in orientation so that it is always directly in front of the Lens. The camera tracking allowed the image to maintain its relative position (in relation to the Lens) while changing its absolute position (its coordinates in 3D space) and absolute orientation in response to the motion of the Lens and, by extension, my head.
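The head-locked behavior described above can be sketched in a few lines; this is a generic follow script, not the project’s actual implementation, and the class name and distance value are placeholders:

```csharp
using UnityEngine;

// Hypothetical sketch: keeps a quad ~twelve inches (~0.3 m) in front of
// the main camera, matching its orientation every frame.
public class HeadLockedImage : MonoBehaviour
{
    public float distance = 0.3f; // ~12 inches, in meters

    void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        // Reposition along the camera's forward vector and copy its
        // rotation, so the image stays directly in front of the user.
        transform.position = cam.position + cam.forward * distance;
        transform.rotation = cam.rotation;
    }
}
```

MRTK also ships solver components for this kind of tag-along behavior, which would likely be the more idiomatic route than a hand-rolled script.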

The image was not interactable, and I eventually ended the deployment through VS on my laptop, as I had not given the Lens a way to quit the application once it had launched. (There may be something built into the MSFT Mixed Reality Toolkit (“MRTK”) for this that I am not aware of.)

The next step will be to create a stationary scene that allows the camera to be tracked to the Lens, as well as determining how to run an application natively on the Lens without needing to launch and support it from VS.