Kaye, David. Speech Police: The Global Struggle to Govern the Internet. Columbia Global Reports. Kindle Edition.

Link to book - Last read on 05-30-2020

My thoughts and beliefs after reading the book:

  1. We unavoidably live in a world where an oligopoly of social media companies control the primary channels through which we all communicate and organize, and as a result have the power to both enable and suppress various forms of discourse
    1. These companies are in effect public utilities at this point
    2. Competition could in theory help reduce the outsized influence of any one company on how we all communicate, but given the historical precedent of the past ~10-20 years, I'm doubtful
  2. Incentives are messed up such that these social media companies inadvertently amplify harmful content in their quest to maximize profit through virality-fueled ad revenue
    1. Legal liability for failing to censor harmful content, without sufficient specificity on what is/isn't harmful, incentivizes social media corporations to over-censor so as to avoid penalties
    2. I wonder if you could financially sustain a social media business on a no-advertising freemium model, with billions of users having free access and wealthier users paying for premium features... my guess is that'd never successfully compete with Facebook et al
  3. Seriously negative outcomes have already come from insufficient content moderation, eg: exacerbating genocide in Myanmar, hate crimes / murder in America, Russian election interference, and more
    1. That said, there's a long-term, insidious risk in over-pivoting toward censorship, which could lead to its own negative outcomes. If we're not careful, we may centralize too much control over freedom of expression (or the lack thereof) in the leadership of Facebook / Twitter / YouTube / et al, rather than following standard democratic lawmaking processes
  4. My current thinking for what content moderation should look like is:
    1. Social media companies should abide by codified international human rights law around violence, manipulation, and freedom of expression
    2. Governments should adopt content moderation laws for their jurisdictions through the normal legislative process that is subject to judicial/independent challenge.
      1. Social media companies should respect those laws so long as they don't violate human rights laws
    3. Beyond that, where there is discretion, social media companies should lean towards strictly following the law around access to content (whether you can view it at all), while exercising their discretion around amplification of content (whether it's easy to find or pushed/recommended to you)
    4. We should suppress and maybe even delete content that has a high likelihood of directly violating human rights law (ie: causing death) where the resultant crime has not yet happened
      1. If the crime has already happened, records should be kept as evidence. Whether it should be visible to the public (and if so, how viral it can go) is a more nuanced question
    5. Wherever content isn't going to lead to imminent violence or manipulation that'd cause "serious harm" to the recipient, companies should never censor it and should instead focus on maximizing overt transparency into the quality of the content and associated links
      1. No company should be the arbiter of truth
      2. Maximizing users' visibility into the origins of the content they are consuming, though, could be beneficial (eg: origin of poster, quality of link as determined by objective metrics like traffic/number of reports, etc.)
    6. Social media companies should be as transparent as possible about what their content moderation policy is, when and how they act on what types of content, and how to appeal
      1. Should also have a clear and open record of decisions that have been made (at least open to a regulatory body if not the public, depending on the class of content)
  5. In cases where social media companies’ leadership is hesitant about transparency out of fear of legal outcomes that would be bad for the business, they should try to align with regulators and then ultimately take a moral stance
    1. Not just for what the policy is, but how it’s enforced, what happens to your account/community if you’re flagged, and how to appeal
    2. I wonder if lack of transparency around content policing policy hurts earnest engagement (eg: a small business being afraid to keep posting about the Declaration of Independence on Facebook in case their page gets taken down)
  6. I'm interested in whether a Lockean approach to community standards would work well
    1. eg: Facebook could have high-level standards for the whole platform, rooted in human rights law, and then different countries / digital communities could self-organize their own content moderation laws on their corner of the platform
    2. This is what Reddit seems to do with subreddit moderators
  7. I wonder how effective ML might be at separating satire/art from literal content
  8. A big philosophical question for social media companies: "Should we prevent the next Declaration of Independence from being written?"
    1. I think not. Societal progress is not finished, and while hopefully violence on that scale won’t ever be needed again, history suggests it may be at one point or another. We may need more revolutions, whether in our lifetime or in 100 or 1000 years. It should be possible to organize them
    2. However, there should be standards specifically around the planning of future violence

Key takeaways from the author:

  1. We unavoidably live in a world where the internet is social media. Literally true in terms of access in much of the developing world, and practically true in the developed world when it comes to where attention is spent
    1. The horizontal and disorganized web of blogs linking to other blogs, where people found things mostly through word of mouth and search, was a much more benign way to consume content on the internet than what exists today
  2. The incentive structures of social media today - an advertising revenue model fueled by content virality - lend themselves to the amplification of fake news, radicalization, hate speech, and even off-platform violence
    1. Before social media, hateful propaganda or intentional misinformation had to travel slowly, by word of mouth or geographic proximity
    2. It's especially hard to disentangle rights-oriented content policies from social media companies' profit motives
  3. The internet's architecture by default makes it hard to control speech (anonymity, decentralization, encryption, etc.), but social media unintentionally acted against this by centralizing speech/discourse under a set of well-controlled mediums
  4. Some seriously negative outcomes have definitively come from how social media amplifies hateful propaganda (ethnic cleansing in Myanmar, Russian election interference, etc.)
  5. There are a set of big open questions around how content should be moderated online:
    1. What role should companies vs national governments vs worldwide governing bodies have in determining the rules for online expression?
    2. Who ensures protection of individual rights?
    3. How should we as a global society determine the influence that nondemocratic societies (eg: Myanmar) have on governing online speech in their countries?
  6. In most of the world, countries are asserting that content policing is a public function, but most of the details on how to do it and which specific content to censor are left up to the companies
    1. This results in over-censoring in cases where the companies face too much liability not to assume the lowest possible bar of legal interpretation for whether content violates regulation