Glowbox later continued this research by developing a stage for volumetrically capturing performers with three Azure Kinects. Most of my prior research had focused on static point clouds, and shifting to dynamic point clouds introduced new opportunities and new challenges. One challenge was the gap in fidelity between the Azure Kinect's time-of-flight data, which was produced in real time, and photogrammetry, which could take hours to process. We were also challenged to render people in a way that allowed abstraction but still represented high-frequency details like facial features.

I approached this issue by looking into techniques for filling the holes between points, and presented the research to the rest of the team as a deck explaining the different approaches.

The first approach I tried was jump flooding to build a Voronoi-based coordinate field from the point clouds.
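As a rough illustration of the idea (the real version ran as screen-space GPU passes; this CPU/NumPy version and its function name are just for the sketch), jump flooding seeds the grid at the pixels the points project to, then repeatedly propagates the nearest seed at halving step sizes:

```python
import numpy as np

def jump_flood_voronoi(seed_coords, width, height):
    """Build a Voronoi-style nearest-seed field with the Jump Flooding Algorithm.

    seed_coords: (N, 2) integer pixel coordinates (x, y) where projected points
    landed. Returns an (H, W, 2) array holding, for every pixel, the coordinates
    of its (approximately) nearest seed.
    """
    nearest = np.full((height, width, 2), -1, dtype=np.int32)   # best seed so far, (-1, -1) = none
    best_d2 = np.full((height, width), np.iinfo(np.int64).max, dtype=np.int64)

    xs, ys = seed_coords[:, 0], seed_coords[:, 1]
    nearest[ys, xs] = seed_coords
    best_d2[ys, xs] = 0

    yy, xx = np.mgrid[0:height, 0:width]

    step = max(width, height) // 2
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                if dx == 0 and dy == 0:
                    continue
                # Pull the candidate seed stored at the neighbour offset by (dx, dy).
                # (Updating in place instead of double-buffering is fine for a sketch.)
                cand = np.roll(nearest, shift=(dy, dx), axis=(0, 1))
                valid = cand[..., 0] >= 0
                d2 = (xx - cand[..., 0]) ** 2 + (yy - cand[..., 1]) ** 2
                better = valid & (d2 < best_d2)
                nearest[better] = cand[better]
                best_d2[better] = d2[better]
        step //= 2
    return nearest
```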

The simplest approach to filling in missing data ended up being splats made of camera-facing cones, which produce Voronoi-like results when depth sorted.
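A minimal CPU sketch of what the depth test is doing (the actual splats were drawn on the GPU; the function and parameter names here are only illustrative): each point is rasterized as a cone whose depth grows with distance from the point, so wherever cones overlap the nearest point wins and the pixels partition into a Voronoi-like pattern.

```python
import numpy as np

def splat_cones(points_px, colors, width, height, radius=8, slope=1.0):
    """Depth-tested cone splatting on a small CPU raster.

    points_px: (N, 3) array of (x, y, depth) in pixel/camera space.
    colors:    (N, 3) array of per-point colors.
    """
    image = np.zeros((height, width, 3), dtype=np.float32)
    zbuf = np.full((height, width), np.inf, dtype=np.float32)

    for (px, py, pz), color in zip(points_px, colors):
        x0, x1 = int(max(px - radius, 0)), int(min(px + radius + 1, width))
        y0, y1 = int(max(py - radius, 0)), int(min(py + radius + 1, height))
        if x0 >= x1 or y0 >= y1:
            continue
        yy, xx = np.mgrid[y0:y1, x0:x1]
        dist = np.hypot(xx - px, yy - py)
        cone_depth = pz + slope * dist        # apex at the point, sides slope away from camera
        closer = (dist <= radius) & (cone_depth < zbuf[y0:y1, x0:x1])
        zbuf[y0:y1, x0:x1][closer] = cone_depth[closer]
        image[y0:y1, x0:x1][closer] = color
    return image
```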

Working with streamed data also let me bring in interesting approaches to rendering motion borrowed from video art.

stream_ghost_001.mp4

Volumetric ghosting effect
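The ghosting effect amounts to keeping a short history of incoming frames and drawing the older ones with reduced weight. A sketch of that buffering, assuming each frame arrives as position and color arrays (the class and parameter names are illustrative, not from our tooling):

```python
from collections import deque
import numpy as np

class PointCloudGhoster:
    """Hold the last few frames of a streamed point cloud and emit them together,
    with older frames faded, to get a motion-trail / ghosting look."""

    def __init__(self, trail_length=8, fade=0.75):
        self.frames = deque(maxlen=trail_length)
        self.fade = fade

    def push(self, positions, colors):
        self.frames.append((positions, colors))

    def ghosted(self):
        out_pos, out_col = [], []
        # Newest frame at full strength, each older frame dimmed by `fade`.
        for age, (pos, col) in enumerate(reversed(self.frames)):
            out_pos.append(pos)
            out_col.append(col * (self.fade ** age))
        return np.concatenate(out_pos), np.concatenate(out_col)
```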

vfx_graph_slitscan.mp4

Volumetric slit scan effect
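The slit scan extends the classic video technique into the volume: each spatial band of the output cloud is sampled from a different moment in a ring buffer of past frames. A rough NumPy sketch of that resampling, assuming a list of per-frame position arrays (the names and band count are placeholders; the real effect ran in VFX Graph):

```python
import numpy as np

def volumetric_slit_scan(frames, axis=1, num_bands=32):
    """frames: list of (N_i, 3) position arrays, oldest first.
    Returns one point cloud whose bands along `axis` come from different times."""
    lo = min(f[:, axis].min() for f in frames)
    hi = max(f[:, axis].max() for f in frames)
    edges = np.linspace(lo, hi, num_bands + 1)

    bands = []
    for b in range(num_bands):
        # Band b pulls its points from an older frame the further it is along the axis.
        t = int(round(b / max(num_bands - 1, 1) * (len(frames) - 1)))
        src = frames[-(1 + t)]
        mask = (src[:, axis] >= edges[b]) & (src[:, axis] < edges[b + 1])
        bands.append(src[mask])
    return np.concatenate(bands)
```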

We were also interested in ways to expand tooling for artists. In Les Boréades we used a couple of in-house tools: Thomas Wester developed a camera pathing tool that allowed Brad Johnson to author camera paths in VR, and a sequencer that let Cat change VFX and cameras during the performance in response to cues in the music. To expand on this we looked into ideas ranging from MIDI-controlled effects to attribute painting and baking for VR.

audioreactive_rearrange_grid.mp4

MIDI-controlled building effect
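As one example of the MIDI-controlled route, an effect parameter can simply follow a CC knob. This sketch uses the mido library; the port name, CC number, and callback are assumptions for illustration, not our actual setup:

```python
import mido

EFFECT_CC = 21  # hypothetical CC number for the effect-strength knob

def listen(port_name=None, on_change=print):
    """Forward a normalized 0..1 value to `on_change` whenever the knob moves."""
    with mido.open_input(port_name) as port:      # None opens the default input
        for msg in port:
            if msg.type == 'control_change' and msg.control == EFFECT_CC:
                on_change(msg.value / 127.0)      # MIDI CC values are 0-127
```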