https://s3-us-west-2.amazonaws.com/secure.notion-static.com/7aba98fd-11a6-42a5-a46b-8cacf9a334d0/2021_tad_laruta_websiteheroimage_1450x550_gtw1.jpg

Isaac Gómez’s La Ruta tells the heartbreaking story of a border town, a bus route, and the women of Ciudad Juárez. Inspired by real testimonies from women affected by the ongoing femicides along the border, La Ruta weaves together beautiful storytelling and music in a celebration of the resilience of Mexican women in the wake of tremendous loss. Read more about this stage play and its production here:

La Ruta | Department of Theatre and Dance - The University of Texas at Austin

I was brought into the production of La Ruta after the decision to pivot to a remote production. The media designer’s role in this play, which had previously been traditional media creation and stage projection, quickly grew into a massive, tangled job of systems engineering, media design, direction, and audio engineering. John Erickson, the media designer, brought me on as his assistant. In this role, I took over solving the technical aspects of the performance, and early on we decided that our remote cueing/compositing system would live primarily in TouchDesigner.

Our baseline goal for La Ruta was a live-streamed table read. You can imagine this as a Zoom stream of the actors rehearsing their parts from their individual homes, with less emphasis on a polished performance. As the production developed, the idea evolved into a Zoom/Skype-style call between all the actors, with customizable window placements, dynamic media backgrounds, and programmable cues. From this point, we drew up a list of technical challenges: working with remote performers in multiple locations, compositing and arranging their streams in real time, giving John and our operator a familiar cueing system, and routing audio individually for each actor. Throughout the planning process, John’s design choices shaped our technical needs, and my technical research shaped the end design.
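To make “programmable cues” and “customizable window placements” a little more concrete, here is a minimal, hypothetical sketch of the kind of data a cue might recall inside TouchDesigner. The actor names, cue numbers, and operator paths are all illustrative placeholders, not the actual show file.

```python
# Hypothetical cue data: each cue maps an actor feed to a normalized
# position and scale in the composite. Names and values are
# illustrative only, not from the actual La Ruta show file.
CUES = {
    "1.0": {  # e.g. a single full-frame actor
        "actor_1": {"x": 0.5, "y": 0.5, "scale": 1.0},
    },
    "2.5": {  # e.g. a two-person scene, side by side
        "actor_1": {"x": 0.25, "y": 0.5, "scale": 0.5},
        "actor_2": {"x": 0.75, "y": 0.5, "scale": 0.5},
    },
}

def recall_cue(cue_number: str) -> None:
    """Apply a cue's layout by setting each actor's Transform TOP."""
    layout = CUES.get(cue_number, {})
    for actor, p in layout.items():
        # Assumes one Transform TOP per actor feed, named like
        # 'actor_1_transform', with translate in fraction units.
        t = op(f"/project1/{actor}_transform")
        t.par.tx = p["x"] - 0.5   # 0 = centered in TouchDesigner
        t.par.ty = p["y"] - 0.5
        t.par.sx = t.par.sy = p["scale"]
```

Storing layouts as plain data like this is what lets an operator fire the same cue every night and get an identical arrangement.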

Our technical objective, as it came into focus, was to ingest nine different live feeds from actors, manipulate them in 2D space, and recall each unique layout based on cues sent from a separate computer running QLab. We also wanted the actor feeds to have soft, blurred edges rather than sharp rectangular windows. Finally, the media designer (John) wanted to be able to composite the actor feeds on top of content sent from QLab, and have that final video and audio be streamed to Vimeo.
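As a rough illustration of how a QLab cue could trigger one of those layouts, here is a sketch of a TouchDesigner OSC In DAT callback. QLab’s network cues can send arbitrary OSC messages; the `/laruta/cue` address scheme and the `cue_layouts` helper DAT are assumptions for the example, not the production’s actual setup.

```python
# Standard TouchDesigner OSC In DAT callback. Assumes QLab is sending a
# network cue like "/laruta/cue 2.5" to this machine; that address
# pattern is an illustrative choice, not the show's real one.

def onReceiveOSC(dat, rowIndex, message, bytes, timeStamp, address, args, peer):
    if address == '/laruta/cue' and args:
        cue_number = str(args[0])
        # Hand off to a layout-recall helper (e.g. a Text DAT holding a
        # function like the recall_cue() sketched above) -- hypothetical path.
        op('/project1/cue_layouts').module.recall_cue(cue_number)
    return
```

The soft-edged windows themselves were a compositing question rather than a cueing one; in TouchDesigner that kind of look is typically built by multiplying each feed’s alpha by a blurred rectangular matte before layering it over the background content.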

Here’s a sneak peek of what we would end up designing with this in mind:

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/5df0f571-5f09-478c-949d-52dd010dc98c/Untitled.png

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/533b3fb4-1a57-4337-9eda-02a2448e17f5/Untitled.png

So, with the goal outlined, let’s take a look at the final system diagram:

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/d3aa330c-cd64-44e0-93eb-684129332ad2/LRTA-SystemsDiagram01_02.png

This is a little dense, so let’s walk through the software flow. The actor feeds are each sent individually to the main PC via OBS.Ninja.

https://s3-us-west-2.amazonaws.com/secure.notion-static.com/44e59fe9-3ff7-4e84-9417-00e7e9446d86/Untitled.png
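One way to pull those OBS.Ninja feeds into TouchDesigner is a Web Render TOP per performer, each pointed at that actor’s view link. The sketch below assumes placeholder URLs and a `/project1/feeds` container; it isn’t the production’s exact network, just the general shape of it.

```python
# Minimal sketch: one Web Render TOP per actor feed.
# The view links are placeholders; real OBS.Ninja links carry a
# per-guest stream ID generated by the director's room.
ACTOR_VIEW_LINKS = {
    'actor_1': 'https://obs.ninja/?view=EXAMPLE1',
    'actor_2': 'https://obs.ninja/?view=EXAMPLE2',
    # ...one entry per performer, nine in total for this show
}

feeds = op('/project1/feeds')  # assumed container COMP for the feeds

for i, (name, url) in enumerate(ACTOR_VIEW_LINKS.items()):
    web = feeds.create(webrenderTOP, name)   # browser-based capture of the feed
    web.par.address = url                    # point the TOP at the view link
    web.nodeY = -200 * i                     # just tidies the network layout
```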