Woah, That Was Crazy


I love showing people their first VR experience. The headset handoff is always awkward. The new user consistently fiddles with the controller, holding it upside down or simply dropping it. I always hesitate to tighten the strap for fear of pulling hair or dislodging glasses: "Does that feel ok? Is it playing?" I'm a broken record. Just a few moments into their experience, even one as passive as a 360 video, the exclamations of shock and awe begin. I listen for the oooohs and ahhhhs as the new user spins in their swivel chair. And the moment they take the headset off, after having encountered a great white shark or knocked down blue and red cubes in Beat Saber, consistently makes me smile. The red imprint of the headset encircles their eyes, and they blink extra fast to regain the moisture they lost while staring at a screen mere inches from their retinas. Often users come away with the images stuck in their minds, still waving imaginary digital wands at colorful cubes that have, much to their chagrin, stopped coming. Even seasoned VR/AR veterans, those who tested the first Oculus Development Kit (DK1) or played with the HoloLens demos, feel the difference. Immersion is not a party trick. Time and again, I hear the refrain: "Woah, that was crazy."

No matter the hype and spectacle surrounding XR technology, these moments point to a genuine newness and qualitative difference in contemporary XR experiences. This newness begins with the medium's technical affordances: the design characteristics that indicate to users how the technology should be used [1]. For example, a VR headset lends itself to tracking in a space, a fully immersive field of view, and 3D visuals. These affordances contribute to XR's distinct media logic: the assumptions and processes for structuring messages within the medium [2]. I remix this framework from David L. Altheide and Robert P. Snow's first book, Media Logic (1979): as the writers have refined the concept over the decades, they have consistently analyzed journalism's adaptation to the storytelling processes inherent to each successive medium [3]. Through analyzing XR journalism pieces and studying theoretical papers and scientific articles, I have settled on three major characteristics that I think structure the messages XR can uniquely communicate in the news.

  1. Presence, with virtual reality.
  2. Context, with augmented reality.
  3. Spatial Storytelling, with virtual and augmented reality.

These three factors of XR's media logic are indicative of the niche that immersive journalism is actively carving out in public life. I will showcase case studies from creators ranging from the New York Times to Time that leverage these characteristics and show off what only extended reality can do. Hundreds of cases have appeared since the first immersive journalism experiences in 2012; my curation is based on experiences that were groundbreaking, that have been celebrated by festivals, or that I discovered in my years of researching immersive journalism. As a reminder from previous chapters, the scope of this analysis is limited to American journalism: a mix of experiences published by news organizations and documentaries or games made by independent developers, museums, companies, or filmmakers. All the experiences discussed below are publicly available online, for free. For some, the only prerequisite is a VR headset.

Presence in Virtual Reality


Leave it to the godmother of immersive journalism, Nonny de la Peña, to label the most salient element of virtual reality. De la Peña and PBS Frontline producer Raney Aronson-Rath created a best-practices guide for VR journalism. The two pioneers write that presence is "the single defining characteristic of virtual reality; the way in which, thanks to a certain combination of sensory input, your mind can trick your body into feeling as though it is somewhere else" [4]. The term has since become commonplace in conversations about immersive media: the hallmark of a good VR experience is its ability to maintain the viewer's sensation of presence throughout. Creators are warned of technical errors that break "being there," like a nausea-inducing low frame rate or insufficiently spatialized audio. For extra evidence, remember that the historian of immersion from Chapter 1, Oliver Grau, tracks the constitution of presence as the common goal laced throughout the history of immersive media. As you slide along the spectrum of immersion, the capacity for presence compounds. Augmented reality, with its pass-through displays and window to the real world, does not transport you elsewhere so much as it enhances your presence in your current space. Consequently, in this section I will focus on the VR pieces that most effectively produce a sense of remote spatial presence in viewers.

Affordances


First: a word on the technical nuances of immersion. Passive VR experiences, like 360 videos, afford only 3 Degrees of Freedom (3DoF): the yaw, pitch, and roll of your head. The user can't move their body through virtual space, which limits the sensation of presence. But the more complex the VR experience gets (enabled mostly by computing power), the more degrees of freedom it affords. 6 Degrees of Freedom (6DoF) experiences add forward/backward, lateral, and up/down bodily movement while in the headset.

3 Degrees of Freedom: yaw, pitch, and roll. 6 Degrees of Freedom: add forward/backward, lateral, and up/down. Photo from VentureBeat.

Today, headsets like the Oculus Quest are equipped with computer vision and inside-out tracking, so you can move through a limitless expanse of virtual space, as long as you set up a big enough play area to do so. The Quest is a 6DoF device: the headset tracks your movement and adjusts the visuals accordingly. Clearly you would have at least six degrees of freedom if you were visiting a place sans headset; to transport you there most convincingly, the VR experience needs to give you every bit of that agency it can.
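To make the distinction concrete, here is a minimal sketch in Python of the data an experience can respond to under each mode. It is purely illustrative, not drawn from any real headset SDK, and every name in it (Orientation, Position, Pose3DoF, Pose6DoF) is hypothetical: a 3DoF experience receives only head rotation, while a 6DoF experience also receives the position that inside-out tracking supplies.

```python
# Illustrative sketch only; not taken from any headset SDK.
# All class and function names here are hypothetical.
from dataclasses import dataclass


@dataclass
class Orientation:
    """Head rotation: the three degrees of freedom every headset tracks."""
    yaw: float    # turning left or right
    pitch: float  # looking up or down
    roll: float   # tilting ear toward shoulder


@dataclass
class Position:
    """Translation through space: the three degrees a 6DoF headset adds."""
    x: float  # lateral (left/right)
    y: float  # up/down
    z: float  # forward/backward


@dataclass
class Pose3DoF:
    """What a 360 video can respond to: where you look, not where you move."""
    orientation: Orientation


@dataclass
class Pose6DoF:
    """What inside-out tracking supplies: where you look and where you move."""
    orientation: Orientation
    position: Position


def degrees_of_freedom(pose) -> int:
    """Count the independent movements the experience can react to."""
    return 6 if isinstance(pose, Pose6DoF) else 3
```

In a real runtime, the full 6DoF pose is fed to the renderer every frame so the virtual scene shifts in step with your body, which is precisely the extra agency described above.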

As de la Peña and Aronson-Rath note in their guide, "A number of clinical studies, as well as a large body of anecdotal evidence, shows that viewers have a stronger emotional response to a scene witnessed in VR than they do to one watched on a 2D screen" [5]. In effect, the heightened emotional response can be chalked up to the high sense of presence in the remote virtual scenes. A series of these clinical studies was conducted by Jeremy Bailenson's Virtual Human Interaction Lab (VHIL) at Stanford. A 2015 study by Bailenson and James Cummings, "How Immersive Is Enough? A Meta-Analysis of the Effect of Immersive Technology on User Presence," belongs to this academic discourse [6]. The study investigates the simplistic notion that a higher degree of technical immersion directly correlates with a heightened sense of presence in a space (understood as separate from social co-presence).

The meta-analysis sought to find the technical sweet spot of immersion that elicits the sensation of being in a remote location. Bailenson and Cummings analyzed 83 previous studies and found tracking level, stereoscopic visuals, and field of view to be the technical affordances that facilitated the most user presence, more so than higher-fidelity visuals or audio stimuli [7]. In other words, users feel present in a virtual environment when they can move freely, see with three-dimensional depth, and look around widely. These three technical affordances belong only to virtual reality and are critical contributors to the sensation of presence.