Apple's Image Abuse Scanning Worries Privacy Experts
Expert: Tool Could Open Door to Broader Device Content Checks
Apple on Thursday unveiled a new system for detecting child sexual abuse photos on its devices, but computer security experts fear the system may morph into a privacy-busting tool.
The system, called CSAM Detection, is designed to catch offensive material that's uploaded to iCloud accounts from devices. It works partially on a device itself - a detail that privacy and security experts say could open a door to broader monitoring of devices.
"I don’t particularly want to be on the side of child porn and I’m not a terrorist," tweets Matthew Green, a cryptographer who is a professor at Johns Hopkins University. "But the problem is that encryption is a powerful tool that provides privacy, and you can’t really have strong privacy while also surveilling every image anyone sends."
The system will be implemented later this year in iOS 15, watchOS and macOS Monterey, the next version of Apple's desktop operating system. The Financial Times reports that it will apply only to U.S. devices.
"CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos," Apple says on its website. "This program is ambitious, and protecting children is an important responsibility. These efforts will evolve and expand over time."
Apple is also tweaking its Messages app to address explicit content either sent or received by children. Messages will use machine learning to automatically analyze image attachments and blur content that is sexually explicit. That will happen on the device, and Apple says it does not have access to the content. Apple will also display different types of warning messages, including to parents.
"When receiving this type of content, the photo will be blurred and the child will be warned, presented with helpful resources and reassured it is okay if they do not want to view this photo," Apple says.
Apple's CSAM Detection system flags abusive images before they're uploaded to iCloud. The system uses a database of known abusive material compiled by the National Center for Missing and Exploited Children.
Apple renders that database into a set of unreadable hashes using a system called NeuralHash, which converts each image into a number specific to that image. The hash list is then uploaded and securely stored on a user's device. The design makes it virtually impossible for someone to figure out which images would trigger a positive detection.
Apple says it can also detect what is essentially the same image but with slightly different attributes.
"Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value," according to an Apple technical document.
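NeuralHash itself is a proprietary, neural-network-based hash, so the sketch below stands in for it with a much simpler "average hash" (an assumption for illustration only). It shows the property the Apple document describes: an image that has merely been recompressed or lightly perturbed keeps the same hash value, while genuinely different content does not.

```python
# A minimal perceptual-hash sketch (assumption: this "average hash" is a
# stand-in for NeuralHash, which is neural-network-based and proprietary).
# It illustrates why near-identical images collide while distinct ones don't.

def average_hash(pixels, hash_size=8):
    """Downscale a grayscale image (list of rows of 0-255 ints) to
    hash_size x hash_size by block averaging, then emit one bit per
    cell: 1 if the cell is brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // hash_size, w // hash_size
    cells = []
    for by in range(hash_size):
        for bx in range(hash_size):
            block = [pixels[y][x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for c in cells:
        bits = (bits << 1) | (1 if c > mean else 0)
    return bits

# A synthetic 16x16 "image": bright left half, dark right half.
img = [[200] * 8 + [30] * 8 for _ in range(16)]
# The "same" image after mild recompression-style noise (+/- 1 level).
noisy = [[p + ((x + y) % 3 - 1) for x, p in enumerate(row)]
         for y, row in enumerate(img)]
# A genuinely different image: dark top half, bright bottom half.
other = [[30] * 16 for _ in range(8)] + [[200] * 16 for _ in range(8)]

print(average_hash(img) == average_hash(noisy))  # True: perturbations survive
print(average_hash(img) == average_hash(other))  # False: distinct content differs
```

The real NeuralHash is far more robust (it tolerates resizing and transcoding, as the quoted document notes), but the collision behavior it relies on is the same in spirit.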
Apple says it's using a technique called private set intersection, or PSI, to detect photos with CSAM content and also ensure other photos remain private.
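Apple's actual PSI protocol is considerably more elaborate (it also hides how many matches exist below a threshold), but the core idea can be sketched with a classic Diffie-Hellman-style construction: both parties blind their hashed items with secret exponents, and because exponentiation commutes, shared items collide while everything else stays opaque. Everything in this sketch is a generic textbook construction, not Apple's design.

```python
# Toy Diffie-Hellman-style private set intersection (assumption: this is a
# generic PSI construction for illustration, not Apple's protocol).
import hashlib
import secrets

P = 2**127 - 1  # a Mersenne prime; the modulus we exponentiate under

def h2int(item):
    """Hash a string item into the field."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P

def blind(items, key):
    """Raise each hashed item to a secret exponent mod P."""
    return {pow(h2int(x), key, P) for x in items}

# Server holds the known-hash database; client holds the device's photo hashes.
server_set = {"hashA", "hashB", "hashC"}
client_set = {"hashB", "hashD"}

a = secrets.randbelow(P - 2) + 1  # server's secret exponent
b = secrets.randbelow(P - 2) + 1  # client's secret exponent

# Each side blinds its own set, then the other side blinds it again.
# (h^a)^b == (h^b)^a mod P, so shared items collide; the rest stay hidden.
server_double = {pow(v, b, P) for v in blind(server_set, a)}
client_double = {pow(v, a, P) for v in blind(client_set, b)}

print(len(server_double & client_double))  # 1: only "hashB" is shared
```

Neither party ever sees the other's unmatched items in the clear, which is the privacy property Apple is claiming for non-matching photos.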
If the system detects enough matches for CSAM content, Apple will review the material; in some cases, a user's account may be disabled and a report sent to NCMEC.
Apple says the system preserves users' privacy in that it doesn't see images that lack a match with the CSAM database.
It's not entirely clear why Apple has chosen to do part of the analysis on users' devices, which some security experts find concerning.
iCloud backups are unencrypted and fair game for law enforcement agencies. There is no technical reason Apple couldn't scan for CSAM after material has been uploaded from a device, as other cloud storage providers already do.
Green postulates that the system's design may be part of an effort to perform broader device scanning, even of files not shared with iCloud and possibly of content that is encrypted end to end.
"But ask yourself: why would Apple spend so much time and effort designing a system that is *specifically* designed to scan images that exist (in plaintext) only on your phone — if they didn’t eventually plan to use it for data that you don’t share in plaintext with Apple?"
— Matthew Green (@matthew_d_green) August 5, 2021
From there, it may be a slippery slope, Green contends. Apple has "sent a very clear signal. In their (very influential) opinion, it is safe to build systems that scan users’ phones for prohibited content. That’s the message they’re sending to governments, competing services, China, you," he writes.
But Apple's website features three assessments of CSAM Detection from cryptography and security experts who endorsed the system.
"In conclusion, I believe that the Apple PSI system provides an excellent balance between privacy and utility, and will be extremely helpful in identifying CSAM content while maintaining a high level of user privacy and keeping false positives to a minimum," writes Benny Pinkas of the Department of Computer Science at Bar-Ilan University in Israel, in a three-page paper.