

Julie Mora-Blanco remembers the day, in the summer of 2006, when the reality of her new job sank in. A recent grad of California State University, Chico, Mora-Blanco had majored in art, minored in women’s studies, and spent much of her free time making sculptures from found objects and blown glass. Struggling to make rent and working a post-production job at Current TV, she’d jumped at the chance to work at an internet startup called YouTube. Maybe, she figured, she could pull in enough money to pursue her lifelong dream: to become a hair stylist.

It was a warm, sunny morning, and she was sitting at her desk in the company’s office, located above a pizza shop in San Mateo, an idyllic and affluent suburb of San Francisco. Mora-Blanco was one of 60-odd twenty-somethings who’d come to work at the still-unprofitable website.
Mora-Blanco’s team — 10 people in total — was dubbed The SQUAD (Safety, Quality, and User Advocacy Department). They worked in teams of four to six, some doing day shifts and some night, reviewing videos around the clock. Their job? To protect YouTube’s fledgling brand by scrubbing the site of offensive or malicious content that had been flagged by users, or, as Mora-Blanco puts it, "to keep us from becoming a shock site." The founders wanted YouTube to be something new, something better — "a place for everyone" — and not another eBaum’s World, which had already become a repository for explicit pornography and gratuitous violence.
Mora-Blanco sat next to Misty Ewing-Davis, who, having been on the job a few months, counted as an old hand. On the table before them was a single piece of paper, folded in half to show a bullet-point list of instructions: Remove videos of animal abuse. Remove videos showing blood. Remove visible nudity. Remove pornography. Mora-Blanco recalls her teammates were a "mish-mash" of men and women; gay and straight; slightly tipped toward white, but also Indian, African-American, and Filipino. Most of them were friends, friends of friends, or family. They talked and made jokes, trying to make sense of the rules. "You have to find humor," she remembers. "Otherwise it’s just painful."
Videos arrived on their screens in a never-ending queue. After watching a couple of seconds apiece, SQUAD members clicked one of four buttons that appeared in the upper right-hand corner of their screens: "Approve" — let the video stand; "Racy" — mark video as 18-plus; "Reject" — remove video without penalty; "Strike" — remove video with a penalty to the account. Click, click, click. But that day Mora-Blanco came across something that stopped her in her tracks.
"Oh, God," she said.
Mora-Blanco won’t describe what she saw that morning. For everyone’s sake, she says, she won’t conjure the staggeringly violent images which, she recalls, involved a toddler and a dimly lit hotel room.
Ewing-Davis calmly walked Mora-Blanco through her next steps: hit "Strike," suspend the user, and forward the person’s account details and the video to the SQUAD team’s supervisor. From there, the information would travel to the CyberTipline, a reporting system launched by the National Center for Missing and Exploited Children (NCMEC) in 1998. Footage of child exploitation was the only black-and-white zone of the job, with protocols outlined and explicitly enforced by law since the late 1990s.
The video disappeared from Mora-Blanco’s screen. The next one appeared.
Ewing-Davis said, "Let’s go for a walk."
Okay. This is what you’re doing, Mora-Blanco remembers thinking as they paced up and down the street. You’re going to be seeing bad stuff.
Almost a decade later, the video and the child in it still haunt her. "In the back of my head, of all the images, I still see that one," she said when we spoke recently. "I really didn’t have a job description to review or a full understanding of what I’d be doing. I was a young 25-year-old and just excited to be getting paid more money. I got to bring a computer home!" Mora-Blanco’s voice caught as she paused to collect herself. "I haven’t talked about this in a long time."
Mora-Blanco is one of more than a dozen current and former employees and contractors of major internet platforms from YouTube to Facebook who spoke to us candidly about the dawn of content moderation. Many of these individuals are going public with their experiences for the first time. Their stories reveal how the boundaries of free speech were drawn during a period of explosive growth for a high-stakes public domain, one that did not exist for most of human history. As law professor Jeffrey Rosen observed years ago of Facebook, these platforms have "more power in determining who can speak and who can be heard around the globe than any Supreme Court justice, any king or any president."
Launched in 2005, YouTube was the brainchild of Chad Hurley, Steve Chen, and Jawed Karim—three men in their 20s who were frustrated that there was no easy way to share two particularly compelling videos: clips of the 2004 tsunami that had devastated Southeast Asia, and Janet Jackson’s Super Bowl "wardrobe malfunction." In April of 2005, they tested their first upload. By October, they had posted their first one million-view hit: Brazilian soccer phenom Ronaldinho trying out a pair of gold cleats. By mid-2006, YouTube viewers were watching more than 100 million videos a day, and a year after that first hit, Google paid an unprecedented $1.65 billion to buy the site. Mora-Blanco got a title: content policy strategist, or in her words, "middle man." Sitting between the front lines and content policy, she handled all escalations from the front-line moderators, coordinating with YouTube’s policy analyst.
In its earliest days, YouTube attracted a small group of people who mostly shared videos of family and friends. But as volume on the site exploded, so did the range of content: clips of commercial films and music videos were being uploaded, as well as huge volumes of amateur and professional pornography. (Even today, the latter eclipses every other category of violating content.) Videos of child abuse, beatings, and animal cruelty followed. By late 2007, YouTube had codified its commitment to respecting copyright law through the creation of a Content Verification Program. But screening malicious content would prove to be far more complex, and required intensive human labor.
The moderators followed a guiding-light question: "Can I share this video with my family?"