iPhone slogan: "What happens on your iPhone, stays on your iPhone."
"About that..." Via @zackwhittaker

Notes on Apple's CSAM scanning announcement

(Or: "Screeching voices of the minority")

Last week (on Thursday, 5 August) Apple announced planned changes to its operating systems and iCloud services that have (re)ignited debates on the limits of security, privacy and safety. In a nutshell: Apple plans to update all Apple devices with the capability to automatically scan images, as they are backed up to iCloud, against a database of known child sexual abuse material (CSAM) managed by the National Center for Missing & Exploited Children (NCMEC). Apple is also rolling out changes to accounts registered to children, in which iMessage images sent or received will be scanned (client side, meaning on the device itself, not the server) for sexually explicit material, with Apple pinging the kid's folks when it hits a positive.
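
To make "client side, on the device" concrete, here's a minimal, hypothetical sketch of the general idea: fingerprint each image queued for iCloud upload on the phone itself and check it against a locally stored set of known fingerprints before anything leaves the device. Every name in it is my own invention, and the plain SHA-256 lookup is a stand-in; Apple's actual design uses a perceptual "NeuralHash" plus a blinded version of the database and a threshold scheme, not a simple lookup like this.

```python
import hashlib

# Hypothetical sketch of client-side matching, NOT Apple's implementation:
# fingerprint each image queued for iCloud upload and check it against a
# locally stored set of known fingerprints before upload.

KNOWN_FINGERPRINTS = {
    # made-up placeholder entry; the real database is supplied by NCMEC
    "8a1f0c2e6d4b7a9c0e3f5d7b9a1c3e5f7d9b1a3c5e7f9d1b3a5c7e9f1d3b5a7c",
}

def fingerprint(image_bytes: bytes) -> str:
    """An exact cryptographic hash: change one pixel and it no longer matches."""
    return hashlib.sha256(image_bytes).hexdigest()

def flag_before_upload(image_bytes: bytes) -> bool:
    """Return True if the image should be flagged rather than silently uploaded."""
    return fingerprint(image_bytes) in KNOWN_FINGERPRINTS
```

The exact-match lookup above is the naive version; why real systems can't work that way comes up under "Perpetrators will bypass this" below.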

Before moving on, here's my pithy take: this amounts to a backdoor. It bypasses encryption and both device and account security. It marks a significant shift in Apple's priorities, away from its previously stated business interest in a brand built on user-centric privacy and security and towards something that could be seen, from a marketing point of view, as "family friendly," while also finding a way to end arguments it has been having with various interest groups and U.S. federal agencies. It's harmful at the user level, will do little to actually thwart the problems it aims to solve, and globally it sets a dangerous precedent.

As a technologist working with media, human rights and anti-corruption groups globally, and as a parent with a keen interest in child safety locally, I've got a good amount of skin in this game. I wouldn't want this level of invasion on devices used by my kid, my work contacts, or myself. That said, the issue of CSAM is absolutely and indisputably a high priority, and while I have seen many well-reasoned and insightful points made from both sides of this debate, I've also come across more than a few really horrible takes. More on that later. First, here's a smattering of the different arguments put forward by people with deeper thinking on the matter than yours truly...

  • Apple - Expanded Protections for Children
    It makes sense to start here, as it's what everything that follows is responding to. I'm not getting into the parts on "Communication safety in Messages" or "Expanding guidance in Siri and Search," which are, in the end, reporting tools and safety messaging aimed at minors. The part to look at is CSAM detection. Skip the marketing text and get down to the "More Information" section. Check out the PDF called "CSAM Detection Technical Summary" and the three accompanying technical assessments to get Apple's implementation of all this.

  • NCMEC - End-to-End Encryption: Ignoring Abuse Won't Stop It
    The National Center for Missing & Exploited Children maintains the CSAM database Apple would use. I did not find (in an admittedly quick search) an official statement on the current issues around its collaboration with Apple, but the piece above was written in 2019, is still prominently featured on the organisation's homepage, and reflects its general stance on all these issues. As the other main player in this story, its views should be seen and understood. I don't think anyone on either side of the debate has any issue with its central cause, thinks it's trivial or unimportant, or has any interest in defending or protecting abusers. In fact, many of the people knowledgeably criticising Apple's implementation actively work in the safety and protection of numerous at-risk groups.

  • Ben Lovejoy in 9to5mac - Opinion: Four problems with Apple’s reported approach to scanning for child abuse images
    I don't really know who Ben is, or follow this site much, but here's a post that lays out a clear overview of many of the main concerns with client-side scanning and this sort of backdoor, or side-door, to the supposedly private and encrypted content on your Apple device. Of the many things I've read, this one is very accessible and gives someone with a casual interest a quick overview of the concerns.

  • Various experts - An Open Letter Against Apple's Privacy-Invasive Content Scanning Technology
    Signed by leading researchers and legal experts in the fields of security, safety, privacy and cryptography, this one summarises the concerns raised by organisations such as the Electronic Frontier Foundation, the Center for Democracy and Technology, the Open Privacy Research Society and others.

  • EFF - Apple's Plan to "Think Different" About Encryption Opens a Backdoor to Your Private Life
    Though linked to in the above letter, the Electronic Frontier Foundation's arguments against client-side and (this kind of) iCloud content scanning deserve their own mention. What is good about this one is its international focus and the wide-ranging implications of this capability being built into technology used by people living under more totalitarian/authoritarian regimes. If you can use hashed datasets to scan for one kind of content, you can use them for many kinds. And if you don't want to do that, maybe rule out operating in large markets around the world. Once opened, the door only swings more widely. To be honest, this is the area my day job sits in, so it's the one that gets more of my attention. Americans can shout about their constitutional rights and suchlike, fine. I'm interested in users who depend on the technology they can access being engineered as though they should have those rights, too. If that helps those of us in "the West", it's a nice bonus.

  • Alec Muffett - So @pwnallthethings decided to try calling out @wcathcart and @WhatsApp as “scanning content” like Apple; here’s why that’s incorrect & misleading
    There's a load of whataboutery already flying around on this topic, and I'm not going to link to or get into all (or even most) of it, but here's a thoughtful example and a reasoned response, both from people who are respected experts on these issues. And you can find a good exchange on Twitter between the two of them, if that's your thing. People have made similar whataboutism cases about Google, Kaspersky, etc. Many of these actually aren't good comparisons for one reason or another, but even if they were, the "other companies are doing it" argument is a pretty bad one. Something about that annoying question your parents may have asked you, regarding all your friends jumping off tall, dangerous things, applies here.

For what it's worth 

My take is that Apple is backing off its promises in the thornier areas of privacy and user security, and leaning more towards family, safety and things that give people warm and squishy feelings. Its own supposedly tightly controlled App Store is filthy with scamware. There also remain still-unknown exploits in iOS, leveraged by NSO Group's Pegasus spyware. And if you're a security researcher digging in the wrong way, Apple may just see you in court. In this particular case, in a message circulated internally at Apple from NCMEC's executive director of strategic partnerships, critics of allowing constant content monitoring were dismissed as "screeching voices of the minority." No one who can speak for Apple has objected to this description.

Perpetrators will bypass this: This turn toward the "nothing to hide; nothing to fear" crowd is chilling. The CSAM database is a list of cryptographic hashes -- "fingerprints" of the original horrendous content. If an image triggers a positive hit, the authorities can be contacted, or someone can be tasked to review the content and confirm the match is genuine. And here's where we start going into the complex rabbit warren. If the fingerprints are too precise, the sick bastards who peddle this material can easily work around them. They can change the image format, shave off a pixel or two in the width or height, alter the colour, or any number of things. In order to work, hashes need to be what's called "fuzzy," or approximate, so that something close enough still triggers a match.
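
To illustrate what "fuzzy" means in practice, here's a toy perceptual hash of my own for this post (a crude average hash, nothing like Apple's NeuralHash or Microsoft's PhotoDNA): shrink the image to an 8x8 greyscale grid, turn it into a 64-bit fingerprint, then compare fingerprints by how many bits differ, so that a recompressed or slightly cropped copy still lands close enough to match. The function names and the threshold value are mine, purely for illustration.

```python
from PIL import Image  # assumes the Pillow imaging library is installed

def average_hash(path: str) -> int:
    """A crude 64-bit perceptual fingerprint: 8x8 greyscale thresholded at the mean."""
    img = Image.open(path).convert("L").resize((8, 8))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

def is_match(a: int, b: int, threshold: int = 5) -> bool:
    # A looser threshold survives more edits (format changes, shaved pixels,
    # colour tweaks) but also fires on more unrelated images, which is exactly
    # the false-positive trade-off discussed in the next paragraph.
    return hamming_distance(a, b) <= threshold
```

With something like this, is_match(average_hash("original.jpg"), average_hash("recompressed.jpg")) should still come back positive after trivial edits. The specific algorithm doesn't matter; the dial does: loosen the threshold and cheap evasions stop working, but more unrelated private photos start matching too.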

Legitimate, private content will be viewed by more people, so more users lose their privacy and more content is at risk of misuse: The more approximate this fingerprinting is made, the more false alarms go off, and the more people and processes are needed to review them. So legal, private images end up being seen by more people, and the number of people who have to be trusted not to be bad actors, sloppy at their work, or targets for an attack grows.

New methods of attack: There's also the problem of an easy attack vector. Media received through WhatsApp and iMessage (and maybe other apps) automatically backs up to iCloud, so anyone can send anyone else an image they never asked for that triggers a positive match. This may seem insane, or at least like an "edge case" proposition. I want you to consider people you don't know, who deal with threats from regimes or corrupt entities you will likely never have to deal with. If I wanted to discredit a political dissident or a rival candidate, it's an option. Shut down an anti-corruption case? Dirty up the reputation of a journalist? You betcha.

New use cases: I'm not being original on this one, but once this exists as infrastructure, it will find new use cases. It's almost a law of physics. If I were a government with an iPhone user base so large that losing that market would be painful, I might have my own database of media file hashes I'd want scanned. Maybe it's memes making fun of leaders, or documents I don't want the public sharing or seeing. And maybe, if you won't run my database in my country and follow my laws, with technology you obviously have that makes it trivial to do so, it gets harder to sell iPhones, or harder for your in-country offices not to get raided.

Meanwhile...

So, it would be great if Apple worked with the broader security community on these things. It would also be groovy if some of those who fancy themselves "purists" laid off the "Perfection of Panopticon" act and considered the world as it is.

Finally, the really unhelpful counter-argument is to do nothing. The "why don't parents just pay better attention to their children" thing doesn't fly, and guess what? We live in a society. Why don't people do all kinds of things that they should? If they did, we wouldn't have 3rd and 4th wave pandemic lockdowns, insane climate disasters, or regimes violating treaties on the use of chemical weapons.

Part of the reason Apple is doing what it's doing is that there are segments of the public and private sectors, and the public in general, asking for something to be done. Like it or not, something will be done. Eventually something is always done. My argument would be that the investment is better placed in existing, under-funded, under-resourced policing and in advanced, warranted and targeted surveillance of these networks of abusers, possibly paid for with the taxes these companies should be paying... as a start.

This article was updated on 9 August 2021