https://www.youtube.com/watch?v=ED5R_LXF3kQ

Suga developed out of the Volumetric Performance Toolbox in the context of Eyebeam's Rapid Response program. The Volumetric Performance Toolbox set out to provide accessible hardware and tools for performers to project themselves into shared virtual spaces in Mozilla Hubs.

My involvement in Suga included creating meshes and point clouds for the piece and contributing code to the Volumetric Performance Toolbox.

Most of the time I spent on Suga went into processing a massive 20 GB LiDAR scan of the Annaberg Plantation, captured by CyArk, into a mesh or point cloud with a file size small enough to be viewed on a range of devices over the web. The biggest challenge was finding a toolset that could do this without running out of memory. For point cloud reduction tasks I would usually reach for a tool like MeshLab, but with a dataset this size the process was prone to crashes. Ultimately, working with PDAL (Point Data Abstraction Library) and Python, I was able to reduce the point cloud in a way that didn't overwhelm my hardware.
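
Below is a minimal sketch of the kind of PDAL pipeline this involved; the filenames, sample radius, and output format are placeholders rather than the values used for the piece.

```python
import json
import pdal

# Sketch of a PDAL reduction pipeline: read the full-resolution scan, thin it
# with Poisson sampling (at most one point per `radius`), and write a much
# smaller file for the web. Filenames and parameters are placeholders.
pipeline_def = {
    "pipeline": [
        "annaberg_full.laz",                         # hypothetical input scan
        {"type": "filters.sample", "radius": 0.05},  # thin to roughly 5 cm point spacing
        {"type": "writers.ply", "filename": "annaberg_reduced.ply"},
    ]
}

pipeline = pdal.Pipeline(json.dumps(pipeline_def))
count = pipeline.execute()  # returns the number of points that survived
print(f"kept {count} points")
```

Swapping `filters.sample` for `filters.decimation` (keep every Nth point) is a cheaper alternative when spatially even coverage matters less.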

From there I could make aesthetic decisions about abstraction and meshing in Blender and MeshLab. Since I was still working with a large amount of data in Blender, I also built baking workflows to transfer detail from the high-density meshes onto simplified, textured models.
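
As a rough sketch of that kind of high-to-low bake in Blender's Python API: the object names, image size, and cage extrusion below are assumptions, and the same setup can just as well be done through the UI.

```python
import bpy

# Assumes a scene with a dense "HighPoly" mesh and a UV-unwrapped "LowPoly"
# target already set up; the names here are hypothetical.
scene = bpy.context.scene
scene.render.engine = 'CYCLES'   # baking requires Cycles
scene.cycles.samples = 16        # low samples are fine for a diffuse-color bake

high = bpy.data.objects['HighPoly']
low = bpy.data.objects['LowPoly']

# The bake writes into the image referenced by the active Image Texture node
# in the low-poly object's material, so create and assign one first.
img = bpy.data.images.new('baked_diffuse', width=4096, height=4096)
mat = low.active_material
mat.use_nodes = True
tex_node = mat.node_tree.nodes.new('ShaderNodeTexImage')
tex_node.image = img
mat.node_tree.nodes.active = tex_node

# Select the high-poly source, make the low-poly target active, and bake.
bpy.ops.object.select_all(action='DESELECT')
high.select_set(True)
low.select_set(True)
bpy.context.view_layer.objects.active = low

bpy.ops.object.bake(
    type='DIFFUSE',
    pass_filter={'COLOR'},        # skip lighting, bake albedo only
    use_selected_to_active=True,  # project detail from high-poly onto low-poly
    cage_extrusion=0.05,          # ray offset; scene-dependent guess
)

img.filepath_raw = '//baked_diffuse.png'
img.file_format = 'PNG'
img.save()
```

The point of the bake is that the simplified mesh carries the look of the dense scan in a texture, so the web version stays light.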

Here are some of those models in the context of the piece, with scenes designed by Marin Vesely.

Valencia inside the baked-mesh version of the sugar mill from the LiDAR scan

Point cloud version of the sugar mill from the LiDAR scan

Sugar mill in the landscape, made from an aerial image dataset

Forest of Commemoration point cloud from the LiDAR scan