gravity

With the spring semester at WVU in full force, striking the right balance between writing chapters for the dissertation, working on the Touching Light application, and teaching and coaching various lessons and ensembles has been tricky, but so far the project is still on track! With the first two ‘movements’ in a stable place and successfully ported to the HoloLens, the last piece of the puzzle is the development of the third and final movement of the application, which involves both the most interactivity and the most custom coding and design.

That said, the initial versions of the third movement have been picking up steam.

While the first movement aims to create a virtual representation of a physical object (a sound mixer), and the second movement creates an abstracted virtual representation of physical objects (the photosphere), the third movement explores interactions and functionality that are unique to the MR space by employing what I’ve referred to in my research as an ‘esoteric liminality’ (still not sold on this title, but it’s what I’m working with at the moment). In plain English, this is simply a virtual environment that manifests and takes advantage of interactions that are unique to the MR platform, including interacting with abstract shapes (in our case, colored cubes), audio spatialization in an MR volume, and MR collisions of virtual and physical objects (this bit is proving to be a smidge tricky).

Overall, the progress of development is on track, but as always, I’ll need to remember that this is intended to be a proof of concept, not necessarily a polished, marketable experience (but it would be nice if it was, right?).

Details below about specific application features and how they’ve been achieved.

The Barrier

One of the biggest downsides to working in the Unity Editor (which would be the same for any development that is not done natively on the XR platform) is the inability of the IDE to map the physical environment in any meaningful way during development. To address this issue, I’ve simulated a physical room by arranging 3D planes into a vaguely room-sized cube surrounding the initial instantiation of the MRTK camera, which allows collision interactions to be explored, even if it doesn’t solve the issue of needing to track collisions with the physical volume – this will come soon.
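For what it’s worth, I placed the planes by hand in the Unity Editor, but the same stand-in room could just as easily be built procedurally. The sketch below is illustrative rather than my actual setup: the dimensions and names are placeholders, and it uses thin cubes instead of planes so that each wall gets a two-sided box collider for the rigidbodies to ricochet off.

```csharp
using UnityEngine;

// Editor-only stand-in for the physical room: six thin cubes arranged as a
// roughly room-sized box around the scene origin, where the MRTK camera starts.
public class SimulatedRoom : MonoBehaviour
{
    [SerializeField] private Vector3 roomSize = new Vector3(4f, 3f, 4f); // width, height, depth in meters
    [SerializeField] private float wallThickness = 0.05f;

    private void Start()
    {
        BuildWall("Wall +X", new Vector3( roomSize.x / 2f, 0f, 0f), new Vector3(wallThickness, roomSize.y, roomSize.z));
        BuildWall("Wall -X", new Vector3(-roomSize.x / 2f, 0f, 0f), new Vector3(wallThickness, roomSize.y, roomSize.z));
        BuildWall("Ceiling", new Vector3(0f,  roomSize.y / 2f, 0f), new Vector3(roomSize.x, wallThickness, roomSize.z));
        BuildWall("Floor",   new Vector3(0f, -roomSize.y / 2f, 0f), new Vector3(roomSize.x, wallThickness, roomSize.z));
        BuildWall("Wall +Z", new Vector3(0f, 0f,  roomSize.z / 2f), new Vector3(roomSize.x, roomSize.y, wallThickness));
        BuildWall("Wall -Z", new Vector3(0f, 0f, -roomSize.z / 2f), new Vector3(roomSize.x, roomSize.y, wallThickness));
    }

    private void BuildWall(string name, Vector3 localPosition, Vector3 scale)
    {
        // A scaled cube primitive comes with a BoxCollider, which is all the
        // bouncing cubes need to collide with.
        GameObject wall = GameObject.CreatePrimitive(PrimitiveType.Cube);
        wall.name = name;
        wall.transform.SetParent(transform, false);
        wall.transform.localPosition = localPosition;
        wall.transform.localScale = scale;
    }
}
```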

The Cüb

The cube represents the simulated physical volume and acts as a barrier for virtual objects to collide with, as they would in a physical space with walls, a floor, and a ceiling. The coloration is simply for ease of identifying the edges of the barrier and will not be present in the final application.

A lot of Cubes

With the barrier in place, I’ve devised three simple ‘esoteric’ objects to serve as prototypes for interactability, represented by three smallish cubes: one red, one blue, and one green.

RGB cubes

At the moment, the cubes have three primary components: a spatialized audio source, interactability using hand rays and articulated hands, and, new for this movement, rigidbody physics simulation.

RedCube Inspector

Grabbable

Interactability was fairly straightforward and simply involved recreating the combination of components that I’ve used in the prior two movements; both the mixer and the photosphere have Object Manipulator and NearInteractionGrabbable scripts tied to them.
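Since this same component pairing has come up in every movement, here is a minimal sketch of the setup done in code rather than the inspector (assuming the MRTK 2.x namespaces); in practice I simply add the components by hand and tweak them there.

```csharp
using Microsoft.MixedReality.Toolkit.Input;
using Microsoft.MixedReality.Toolkit.UI;
using UnityEngine;

// Minimal grab setup for one of the cubes: a collider for hand rays to hit,
// an ObjectManipulator for far and near manipulation, and a
// NearInteractionGrabbable for articulated-hand grabs.
[RequireComponent(typeof(BoxCollider))]
public class GrabbableCube : MonoBehaviour
{
    private void Awake()
    {
        if (GetComponent<ObjectManipulator>() == null)
        {
            gameObject.AddComponent<ObjectManipulator>();
        }

        if (GetComponent<NearInteractionGrabbable>() == null)
        {
            gameObject.AddComponent<NearInteractionGrabbable>();
        }
    }
}
```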

Using the simulated hand articulations

Spatialized

Secondarily, each cube serves as an audio source for one part of the three-part pre-recorded track (which I am still in the process of writing). For the moment, the bass, kick, and lead from movement one are assigned for testing purposes. Unique to these audio source components, however, is their spatialization. This means that the panning and volume of the individual tracks are determined by the user’s orientation and distance relative to the object; the closer the user is to the cube, the louder the track. Similarly, if the user places the cube to their right or left, the audio will be panned toward the appropriate side of the stereo mix.
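In the project these settings live on each cube’s AudioSource in the inspector, but the gist of the configuration fits in a few lines. The sketch below is an approximation: the distance values are placeholders, and the actual spatialization is handled by whichever spatializer plugin the project is configured with.

```csharp
using UnityEngine;

// Rough spatialized-audio setup for one cube's stem: fully 3D spatial blend,
// the spatializer plugin enabled, and a distance rolloff so the stem gets
// quieter as the user backs away from the cube.
[RequireComponent(typeof(AudioSource))]
public class SpatializedStem : MonoBehaviour
{
    [SerializeField] private AudioClip stem; // e.g., the bass, kick, or lead part

    private void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.clip = stem;
        source.loop = true;
        source.spatialize = true;   // hand playback off to the configured spatializer plugin
        source.spatialBlend = 1f;   // fully 3D: panning follows head position and orientation
        source.rolloffMode = AudioRolloffMode.Logarithmic;
        source.minDistance = 0.3f;  // full volume within roughly 30 cm
        source.maxDistance = 10f;   // fades out across the room
        source.Play();
    }
}
```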

The audio spatializer

Weightless

Spatialization by itself is a very interesting tool to play with, and I’ve had a lot of fun placing the three cubes at different distances and orientations to create unique mixes, both in terms of volume and panning. In some ways, this concept is similar to the traditional style of mixer present in the first movement, but speaking from experience, the interactable experience is, for lack of a better word, much more esoteric in this model.

Compounding the creative interest of the audio spatialization is perhaps the most engaging element of the movement in its current iteration, namely the rigidbody physics simulation. Once the cubes are identified as ‘rigidbodies’ (a term in visual effects and 3D modeling meaning that the physics simulation treats the object as a contiguous, non-deforming whole), it is a fairly straightforward process to get Unity (and by extension the HoloLens) to apply a desired physics simulation to them.

In this case, I wanted the cubes to move through the virtual volume as if they were in outer space, without gravity and with near-perfect conservation of momentum. A few changes to the physics settings later, and we suddenly have space cubes!
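The relevant rigidbody settings boil down to a handful of lines; this is a sketch of the ‘space cube’ configuration rather than my exact component values.

```csharp
using UnityEngine;

// 'Space cube' physics: no gravity and no damping, so momentum carries between
// collisions. Continuous collision detection keeps a fast-moving cube from
// tunneling straight through the thin barrier walls.
[RequireComponent(typeof(Rigidbody))]
public class SpaceCube : MonoBehaviour
{
    private void Start()
    {
        Rigidbody body = GetComponent<Rigidbody>();
        body.useGravity = false;   // weightless
        body.drag = 0f;            // no linear damping
        body.angularDrag = 0f;     // no rotational damping
        body.collisionDetectionMode = CollisionDetectionMode.ContinuousDynamic;
    }
}
```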

Example of rigidbody physics and spatialization in action

I’m not sure yet why the cubes seem to lose so much momentum when they strike a barrier; I’ve tried playing around with physic materials to add elasticity to the collisions, but haven’t had great luck so far. I’m sure there is some relationship between angular momentum and linear momentum that I’m missing, but even with that in mind, there still seems to be an inordinate amount of velocity being shed after every collision. Ideally, the cubes would continue to ricochet indefinitely, perpetually altering the spatialization of the track, but that’s a story for another day.
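For reference, this is roughly the kind of physic material I’ve been experimenting with; treat it as a sketch rather than settings I’m committed to. One other suspect worth checking is the project-wide Bounce Threshold in Unity’s physics settings, which suppresses bounces below a given relative velocity and can read as the cubes mysteriously shedding speed.

```csharp
using UnityEngine;

// One attempt at near-perfectly elastic collisions: a frictionless, fully
// bouncy physic material applied to the cube's collider at runtime. With the
// Maximum bounce combine, the wall colliders don't need their own material.
public class ElasticCollider : MonoBehaviour
{
    private void Start()
    {
        PhysicMaterial elastic = new PhysicMaterial("Elastic")
        {
            bounciness = 1f,
            dynamicFriction = 0f,
            staticFriction = 0f,
            bounceCombine = PhysicMaterialCombine.Maximum,
            frictionCombine = PhysicMaterialCombine.Minimum
        };

        GetComponent<Collider>().material = elastic;
    }
}
```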

Using Hand Menus

Apart from the volume and the cubes themselves, I’ve also implemented a contextual ‘hand menu,’ which is simply a menu whose appearance is determined by the gaze tracking on the HoloLens platform; if you are looking at the back of your right hand, the ‘hand menu’ should appear in the air toward the inside of the hand.

After ‘losing’ the cubes a few times, I thought it might be a good idea to have a menu that would be able to call simple commands like ‘freeze,’ ‘mute,’ ‘reset,’ etc. in the event that the cube(s) are for some reason unreachable. So far, the freeze and mute functions are working, as they can be rigged to simple OnClick event functions. The ‘Reset All’ command, as well as an as-yet uncreated ‘Generate’ command that will create cubes of certain colors, will require some specific coding that I’m still getting a handle on (no pun intended).
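For the commands that are already working, the wiring is simple: each public method below is hooked up to a hand-menu button’s OnClick event in the inspector. The field names and toggle behavior here are illustrative rather than my exact implementation, and note that un-freezing a rigidbody this way does not restore its previous velocity.

```csharp
using UnityEngine;

// Simple hand-menu commands. Each public method is wired to a button's
// OnClick event; the cube rigidbodies and audio sources are assigned by hand.
public class HandMenuCommands : MonoBehaviour
{
    [SerializeField] private Rigidbody[] cubes;
    [SerializeField] private AudioSource[] stems;

    private bool frozen;
    private bool muted;

    // Freeze: toggle the physics simulation on all cubes.
    public void ToggleFreeze()
    {
        frozen = !frozen;
        foreach (Rigidbody cube in cubes)
        {
            cube.isKinematic = frozen; // kinematic bodies stop responding to forces and collisions
        }
    }

    // Mute: silence all three stems at once.
    public void ToggleMute()
    {
        muted = !muted;
        foreach (AudioSource stem in stems)
        {
            stem.mute = muted;
        }
    }
}
```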

It’s been difficult to get the ‘hand menu’ to show up without hands…

Stretch Goals

Depending on how long it takes to get the hand menu and the rigidbody interactions working the way I’m intending, I also have some ideas for mapping various other volumetric parameters to specific audio/mixer effects (e.g., size to drive pitch shifting, XYZ orientation to drive EQ, etc.), but that will take a back seat to getting the initial interactions working smoothly.
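Purely as a thought experiment (none of this exists in the project yet), one of those mappings could be as simple as letting a cube’s scale drive the pitch of its stem; the reference scale and clamp range below are arbitrary.

```csharp
using UnityEngine;

// Hypothetical volumetric mapping: the cube's uniform scale drives the
// playback pitch of its stem. Bigger cube, lower pitch; smaller cube, higher pitch.
[RequireComponent(typeof(AudioSource))]
public class ScaleToPitch : MonoBehaviour
{
    [SerializeField] private float referenceScale = 0.1f; // scale at which pitch is unchanged

    private AudioSource source;

    private void Start()
    {
        source = GetComponent<AudioSource>();
    }

    private void Update()
    {
        float ratio = transform.localScale.x / referenceScale;
        source.pitch = Mathf.Clamp(1f / ratio, 0.5f, 2f);
    }
}
```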

foundations

Approval for the project prospectus has come through from my DMA committee, and so it is time to properly begin this project. Today’s task was to connect and deploy a Unity scene onto the Microsoft HoloLens 2 (“Lens”) using Microsoft Visual Studio (“VS”).

It took some time and a bit of fiddling to get everything connected, but following some Microsoft (“MSFT”) tutorials from their HoloLens documentation eventually proved successful.

The “foundations” scene that was successfully deployed is very basic: it is a gradient image that evokes a sense of a horizon line, resting approximately twelve inches away from the headset and locked in orientation so that it is always directly in front of the Lens. The camera tracking allowed the image to maintain its relative position (in relation to the Lens) while changing its absolute position (in relation to its coordinates in 3D space) and absolute orientation in response to the motion of the Lens, and by extension, my head.
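For anyone curious what that behavior looks like in script form, a minimal head-locked follow is sketched below; the deployed scene may well have relied on an MRTK solver or simple camera-parenting instead, so treat this as an approximation of the effect rather than the actual implementation.

```csharp
using UnityEngine;

// Keeps the gradient quad head-locked: roughly twelve inches (0.3 m) in front
// of the camera and always facing it, regardless of head motion.
public class HeadLockedImage : MonoBehaviour
{
    [SerializeField] private float distance = 0.3f; // ~12 inches

    private void LateUpdate()
    {
        Transform cam = Camera.main.transform;
        transform.position = cam.position + cam.forward * distance;
        transform.rotation = Quaternion.LookRotation(cam.forward, cam.up);
    }
}
```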

The image was not interactable, and I eventually ended the deployment through VS on my laptop, as I had not given the Lens a way to quit out of the application once it had launched, though there may be something built into the MSFT Mixed Reality Toolkit (“MRTK”) that I am not aware of.

The next step will be to create a stationary scene that allows the camera to be tracked to the Lens, as well as to determine how to run an application natively on the Lens without needing to launch and support it from VS.