Successfully deployed a stable build of Mixer Test 01 to the HoloLens 2; everything ran smoothly, and the audio stayed in sync with itself and was adjustable using the programmed faders. The console itself is a bit awkward to move around since I didn’t include a way to view the bounding box, but with minimal trial and error I was able to place the mixer in a set location and use the ‘pin’ function to lock it in place.
I’m continuing to stress-test and clean up the code for the mixer, but as it stands, with a successful deployment, that portion of the project is officially in alpha!
Chapters 1–3 of the dissertation have been submitted to my research advisor for a first pass of suggested edits and revisions. I’ve also begun reaching out to some folks I’ve had the opportunity to cross paths with to line up interviews about their experience working with percussion and live technology, and to solicit their thoughts on specific elements of the project.
The second bit of programming centers on the development of an adaptive photo-sphere that will rotate around the performer over a set duration, serving as a sort of inspirational score for improvised and/or pre-composed musical material.
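The “set duration” behavior boils down to mapping elapsed time onto a rotation angle. A minimal sketch of that mapping (plain Python, not the actual Unity script; the function name and a full 360° revolution per cycle are my assumptions):

```python
def sphere_rotation_degrees(elapsed_s: float, duration_s: float) -> float:
    """Map elapsed time to a rotation angle, completing one full
    360-degree revolution of the photo-sphere over the set duration.

    Hypothetical helper -- in Unity this would more likely live in an
    Update() loop rotating the sphere's parent transform each frame.
    """
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return (elapsed_s / duration_s * 360.0) % 360.0
```

For example, a 10-minute (600 s) cycle puts the sphere a quarter-turn around at the 150-second mark.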
A basic render of the current iteration in action below:
At the moment, the images exist as grossly extruded 3D cubes, with little regard for the original dimensions of the images mapped onto them. This element will likely remain fairly similar (in terms of being size-agnostic) in the final build, but there will definitely be some tweaking of the overall image size, taking into account that at their current scale the panels would likely clip through the floor of the physical environment.
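The floor-clipping constraint is simple geometry: the bottom edge of a panel sits half its scaled height below its center, so there’s a largest uniform scale that keeps it off the floor. A quick sketch of that bound (numbers and function name are hypothetical, purely for illustration):

```python
def max_floor_safe_scale(center_height_m: float, panel_height_m: float) -> float:
    """Largest uniform scale at which a panel whose center sits
    center_height_m above the floor keeps its bottom edge at or
    above floor level. The bottom edge hangs
    (scale * panel_height / 2) below the center.
    """
    if panel_height_m <= 0:
        raise ValueError("panel height must be positive")
    return 2.0 * center_height_m / panel_height_m
```

So a 2 m-tall panel centered at eye height (~1.5 m) can only be scaled up to 1.5× before it intersects the floor.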
Ideally, I’ll also get them to curve…!
Images are mapped directly as the albedo map for the 3D objects, bypassing the material creation structure native to the Unity editor; this might not be the right option, but for the moment, it lets things be visible when I need them to be. Currently the textures are approx. 50% translucent (using the RGBA alpha key mapped to the object opacity) to allow the performer to still be aware of the physical environment beyond the ‘sphere’ (e.g. the audience!), but the current clipping of the geometries combined with the texture wrapping creates odd visuals like the double-sunset (which might be fixed by adjusting the Cull Mode to “Back” instead of “Off”).
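The 50%-translucency effect is standard “over” alpha compositing: each pixel the performer sees is a weighted mix of the panel color and whatever is behind it. A minimal sketch of the blend (plain Python for illustration; Unity’s transparent render path does this per-pixel on the GPU):

```python
def over(src_rgb, src_alpha, dst_rgb):
    """Standard 'over' alpha blend: composite a translucent source
    color (the panel texture) onto the color behind it (the room).
    At alpha = 0.5, the result is an even mix of the two.
    """
    return tuple(src_alpha * s + (1.0 - src_alpha) * d
                 for s, d in zip(src_rgb, dst_rgb))
```

At 50% alpha a pure-red panel over a black background reads as half-intensity red, which matches the see-through look described above.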
Currently the panels are being drawn arbitrarily using a GridObjectCollection script, then tweaked individually to present the image-facing side toward the ‘origin’ (in this case, the center of the ‘sphere’).
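The per-panel tweak is just an inward-facing rotation, which can be computed rather than hand-set. A sketch of the underlying math for a single horizontal ring of panels (plain Python, not the MRTK script; the function name, ring layout, and yaw convention are my assumptions):

```python
import math

def ring_panel_transforms(radius_m: float, count: int):
    """Place `count` panels evenly on a horizontal ring of the given
    radius and return (position, yaw_degrees) pairs, with each panel's
    yaw chosen so its front face points back at the ring's center
    (the 'origin' of the photo-sphere).
    """
    transforms = []
    for i in range(count):
        theta = 2.0 * math.pi * i / count
        x = radius_m * math.cos(theta)
        z = radius_m * math.sin(theta)
        # Yaw that aims the panel's forward axis at the origin:
        # the direction from panel to center is (-x, -z).
        yaw = math.degrees(math.atan2(-x, -z))
        transforms.append(((x, 0.0, z), yaw))
    return transforms
```

A full sphere would add stacked rings with a pitch component, but the facing logic is the same.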
On execution, a HeadPositionOffset script also forces the ‘sphere’ to de-center from the perspective and facing of the user.
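The de-centering itself is a small vector operation: push the sphere’s center some fixed distance along the user’s facing direction. A minimal sketch of that idea (plain Python, not the actual HeadPositionOffset script; the function name and behavior are my assumptions about what the offset amounts to):

```python
def offset_from_head(head_pos, head_forward, distance_m):
    """Re-center an object at a fixed distance along the user's
    facing direction, so the 'sphere' ends up displaced relative to
    both the head's position and its orientation.

    head_forward is assumed to be a unit vector.
    """
    return tuple(p + distance_m * f for p, f in zip(head_pos, head_forward))
```

With the head at (0, 1.6, 0) looking down +z, a 2 m offset centers the sphere at (0, 1.6, 2).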