Why do you find this Thing interesting?

I think there has been a lot of confusion over this announcement, which actually covers several separate measures. I'd suggest watching the video and reading the Daring Fireball article linked below to get a decent picture of things that isn't filtered through me.

Important links related to this Thing

Child Safety

Apple's New 'Child Safety' Initiatives, and the Slippery Slope

https://www.youtube.com/watch?v=OQUO1DSwYN0

Opinion | Apple Wants to Protect Children. But It's Creating Serious Privacy Risks.

Original tweet that broke the story, from Matthew Green, a chap who teaches cryptography at Johns Hopkins.

https://twitter.com/matthew_d_green/status/1423071186616000513?s=21

The piece below seems to be one of those "more heat than light" things, which is surprising coming from Edward Snowden.

The All-Seeing "i": Apple Just Declared War on Your Privacy

Bad arguments from people in favour of Apple’s measures

https://twitter.com/neilfairbrother/status/1431474735347605504?s=21

Arguments in favour of privacy are not arguments in favour of CSAM.

What key questions do we need to consider regarding this Thing?

NOTE: CSAM stands for “Child Sexual Abuse Material”

  1. Apple announced three distinct things, which I have listed immediately below. We can pretty much ignore the first two:
    1. Updates to Siri and Search: These provide information and help if they detect “unsafe situations”. Nobody seems in any way concerned about this.
    2. Communication Safety in Messages: A feature of Messages for children (part of iCloud Family groupings) that uses on-device ML to identify sexual imagery sent to the child via Messages and obscure it behind a warning, offering the child the option to delete it without viewing it. The child can choose to view it anyway, but if they are 12 or under, they are told that their parents will be notified and asked to confirm that they still want to see it. (There is a rough sketch of this decision flow after this list.)
    3. CSAM Detection: A feature intended to identify whether images being uploaded to iCloud Photos match existing known CSAM. This feature is currently scheduled to be released only in the USA.
  2. Although there were a lot of hot takes, and some very alarming-sounding conflation of the latter two things above, ultimately privacy activists seem to be unconcerned about the Siri and Search changes, less concerned about the Communication Safety in Messages feature, and very concerned about the CSAM Detection feature.
  3. Concerns with the CSAM Detection feature:
    1. Governments might pass laws forcing Apple to add other images to the list of known CSAM, e.g. the “Tank Man” photo from Tiananmen Square, for the purpose of surveilling their populations.
    2. Part of the mechanism involves the user’s own device comparing images to known CSAM at the point where they are about to be uploaded to iCloud Photos. The concern is that it crosses a line to have one’s own device working to help “surveil” you.
    3. The matching process is fuzzy, so that it can still recognise images even if they have been resized or otherwise edited. The concern is that there may be false positives from the matching process. (There is a toy sketch of this kind of fuzzy hashing after this list.)
  4. Apple’s responses to these concerns:
    1. “We’ll just say no if governments try to force us to add non-CSAM images to the list being searched for.” This isn’t that convincing: you only have to look at the accommodations that Apple has been forced to make with China to see that they can be coerced by law into acting against their principles.
    2. “The way it works makes that difficult.” This is a bit more convincing, because:
      1. The list of image hashes (sourced from US quango NCMEC, the National Center for Missing & Exploited Children) that functions as the database of known CSAM is included in the signed OS images, and is not readily modifiable - especially not on a per-user basis.
      2. The process includes a minimum threshold of around 30 images that must be matched before anything happens.
      3. There is a human review process that will validate that the images are, in fact, CSAM before freezing an iCloud account and notifying law enforcement.
    3. “All other cloud photo providers already do this.” Yes, they do, but only their servers do it; your device doesn’t have a role.
    4. “The role played by your phone is relatively small.” At the point where an image is about to be uploaded to iCloud Photos, your phone hashes it and compares the hash with the on-device database of known CSAM. It attaches a “cryptographic safety voucher” to each uploaded image, which contains encrypted information about whether the image matched known CSAM. There is no general scanning of your photo library or anything similar, and if you don’t use iCloud Photos, then nothing happens at all. (A simplified sketch of this flow follows the list.)
    5. “Your phone is only involved to remove the need for Apple to access your images.” I’m a bit more hazy about this. iCloud Photos are encrypted server-side, but not end-to-end, so Apple can decrypt them. However, I think the reason for your phone’s involvement is to avoid Apple needing access to your images in order for the process to work, which might unlock future expansions of the scope of their end-to-end encryption.
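
To make the Messages feature's decision flow a little more concrete, here is a toy sketch in Python. It is not Apple's implementation and all the names are made up; the real feature is an on-device ML classifier inside Messages, and the age/opt-in handling below is just my reading of the announcement.

    # Toy sketch of the Communication Safety decision flow (hypothetical names throughout).
    from dataclasses import dataclass

    @dataclass
    class ChildAccount:
        age: int
        parent_notifications_opted_in: bool  # set by the parents in the iCloud Family group

    def handle_incoming_image(flagged_as_explicit: bool, child: ChildAccount,
                              child_confirms_viewing: bool) -> str:
        """Return a description of what Messages would do with the image."""
        if not flagged_as_explicit:
            return "show the image normally"
        if not child_confirms_viewing:
            return "keep the image blurred; offer to delete it unseen"
        if child.age <= 12 and child.parent_notifications_opted_in:
            return "show the image after confirmation, and notify the parents"
        return "show the image after confirmation"

    # e.g. a 10-year-old whose parents opted in, choosing to view anyway:
    print(handle_incoming_image(True, ChildAccount(10, True), True))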
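
On the fuzzy matching point: the general technique is perceptual hashing, where similar images produce similar (not identical) hashes and a match is declared below some distance threshold. The sketch below is a classic toy average-hash over an 8x8 grayscale grid, not Apple's NeuralHash, but it shows both why resized or lightly edited copies still match and why occasional false positives are inherent to the approach.

    # Toy perceptual hash: 64 grayscale values -> 64-bit hash; nearby hashes = likely match.
    def average_hash(pixels_8x8):
        """pixels_8x8: 64 grayscale values (0-255) from an image downscaled to 8x8."""
        mean = sum(pixels_8x8) / len(pixels_8x8)
        # One bit per pixel: is it brighter than the average?
        return sum(1 << i for i, p in enumerate(pixels_8x8) if p > mean)

    def hamming_distance(h1, h2):
        return bin(h1 ^ h2).count("1")

    def is_match(h1, h2, max_distance=5):
        # A small distance tolerates resizing/re-encoding; it also admits rare false positives.
        return hamming_distance(h1, h2) <= max_distance

    # A slightly brightened copy of an image still matches the original:
    original = [i * 4 for i in range(64)]
    brightened = [min(255, p + 10) for p in original]
    print(is_match(average_hash(original), average_hash(brightened)))  # True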
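
And here is a deliberately simplified sketch of the CSAM Detection flow as described above: the device hashes each image at upload time, checks it against the table shipped inside the signed OS image, and attaches a safety voucher; the server only acts once roughly 30 vouchers match, and then only after human review. The real design uses NeuralHash, private set intersection and threshold secret sharing so that Apple learns nothing below the threshold; none of that cryptography is reproduced here, and every name below is made up.

    import hashlib

    MATCH_THRESHOLD = 30       # the approximate figure Apple quoted
    KNOWN_CSAM_HASHES = set()  # stand-in for the NCMEC-derived table in the signed OS image

    def image_hash(image_bytes: bytes) -> str:
        # Placeholder: the real system uses a perceptual hash (NeuralHash), not SHA-256.
        return hashlib.sha256(image_bytes).hexdigest()

    def make_safety_voucher(image_bytes: bytes) -> dict:
        """On-device, and only for images actually being uploaded to iCloud Photos."""
        # In the real design the match result is cryptographically hidden from Apple
        # until an account crosses the threshold; here it is just a plain boolean.
        return {"matched": image_hash(image_bytes) in KNOWN_CSAM_HASHES}

    def server_review(vouchers: list) -> str:
        """Server-side, across all of an account's uploads."""
        matches = sum(1 for v in vouchers if v["matched"])
        if matches < MATCH_THRESHOLD:
            return "no action (below threshold)"
        return "human review of the matched images before any account freeze or report"

The point the sketch inevitably flattens is the privacy one: below the threshold, Apple's servers are not supposed to be able to tell whether any individual voucher matched at all.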

Other Notes

...