Apple has built its brand on the promise to defend privacy, but the company has signaled a foundational shift in its approach with its August 5, 2021, announcement of system updates that will introduce monitoring of all photos uploaded to iCloud, as well as certain images sent via iMessage, sidestepping encryption in the process.
These updates are part of Apple’s “Expanded Protections for Children” and aim to help limit the spread of Child Sexual Abuse Material (CSAM), a serious issue that needs to be addressed. However, the measures Apple plans to implement are not a viable or proportionate solution, and they put everyone’s privacy and security at risk by circumventing end-to-end encryption and reducing individuals’ control over their own devices.
“Apple’s plans for on-device surveillance jeopardize everyone’s privacy, can be abused by authoritarian governments and malicious actors, and represent a crucial diversion from Apple’s prioritization so far of end-to-end encryption, privacy, and security,” said Namrata Maheshwari, Encryption Policy Fellow at Access Now.
Less than a month ago, the Pegasus Project revealed that many devices were targeted and surveilled by exploiting vulnerabilities in Apple’s iMessage.
“At a time when firmer steps need to be taken to fix vulnerabilities, strengthen end-to-end encryption, and solidify privacy protections, Apple is effectively amplifying the scope for exploitation of vulnerabilities and abuse by authoritarian governments targeting the digital devices so many of us carry every day,” said Raman Jit Singh Chima, Senior International Counsel and Global Cybersecurity Lead at Access Now.
Apple’s new technology, to be rolled out later this year, will enable Apple devices to scan, on the device itself, images being uploaded to iCloud Photos in an attempt to identify CSAM. The company plans to update the operating system software on all Apple devices to include both a database of hashes of known CSAM, provided by the National Center for Missing and Exploited Children (NCMEC) in the United States and “other child safety organizations,” and a machine learning mechanism (about which we know little) that compares the hashes of images being uploaded from the device to iCloud against that database. If the process identifies a match, Apple will manually review the content and, if it is found to be CSAM, disable the account and send a report to NCMEC.
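To make the mechanism concrete, the sketch below shows, in broad strokes, what matching an image hash against a database bundled into the operating system could look like. It is a deliberately simplified illustration based on our reading of Apple’s announcement, not Apple’s actual implementation: the real hashing step is an undisclosed machine learning mechanism, and every identifier in the code is hypothetical.

```swift
// Hypothetical sketch of on-device hash matching; not Apple's implementation.
struct OnDeviceScanner {
    // Hashes of known CSAM, shipped to the device as part of an OS update.
    let knownHashes: Set<String>

    // `imageHash` stands in for the output of the undisclosed, ML-derived
    // hashing step applied to a photo just before it is uploaded to iCloud.
    func shouldFlag(imageHash: String) -> Bool {
        return knownHashes.contains(imageHash)
    }
}

let scanner = OnDeviceScanner(knownHashes: ["a1b2c3"])  // placeholder values
if scanner.shouldFlag(imageHash: "a1b2c3") {
    // Per Apple's description, a match would trigger manual review and,
    // if confirmed as CSAM, account disabling and a report to NCMEC.
    print("match found; image queued for review")
}
```

Even in this stripped-down form, the design choice at issue is visible: the matching logic and the hash database live on the person’s own device, which is exactly why the scope of what gets scanned, and what the database contains, could later be broadened without the user’s knowledge or consent.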
This system of on-device scanning can be used to scan all images, and Apple could, at any time, reverse its decision to scan only images uploaded to iCloud, despite its assurances to the contrary. It could also easily expand the scope of the image hash database to identify a wider range of content on individuals’ devices. Client-side scanning intrudes upon a person’s control over their own device, undermines end-to-end encryption, and introduces vulnerabilities into the system.
“Just as there is no backdoor that only opens for the good guys, there is no client-side scanning system that will only be used to prevent the uploading and sharing of CSAM,” added Chima. “A measure intended to make people safer will instead make everyone’s personal information and expression vulnerable to exploitation, abuse, and censorship.”
Apple’s forthcoming updates will also introduce a feature for message monitoring on children’s accounts, which parents can opt in to activate. When the system detects an “explicit image” being sent to or from a child on iMessage, Apple will alert the minor regarding the explicit nature of the image and inform them that their parents may be notified. There is no guarantee that such scanning will only ever be applied to children’s accounts, and minors in vulnerable communities, including LGBTQ+ youth with unsupportive parents, are most likely to bear the brunt of these changes, which are being implemented without adequate and meaningful consultation.
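The sketch below offers one hypothetical reading of how such an opt-in check might be wired on the device. Apple has not published the classifier, thresholds, or notification rules, so every identifier here is illustrative rather than a description of the actual feature.

```swift
// Hypothetical sketch of the opt-in iMessage check for children's accounts.
struct ChildAccountImageCheck {
    // Whether a parent has opted the child's account into monitoring.
    let parentalOptInEnabled: Bool

    // `looksExplicit` stands in for the undisclosed on-device classifier.
    func handleImage(looksExplicit: Bool,
                     childChoosesToProceed: Bool,
                     warnChild: () -> Void,
                     notifyParents: () -> Void) {
        guard parentalOptInEnabled, looksExplicit else { return }
        // The minor is first warned about the nature of the image and told
        // that their parents may be notified.
        warnChild()
        // A parental notification would follow only if the child proceeds
        // to view or send the flagged image.
        if childChoosesToProceed {
            notifyParents()
        }
    }
}
```

Even at this level of abstraction, the core problems are apparent: the judgment of what “looks explicit” rests entirely with an opaque classifier, and the notification path leads to parents a child may not be safe with.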
What constitutes an “explicit image” is highly subjective, and not all “explicit images” are cause for alarm. The inevitable risk of erroneous flagging, alongside the potential for ever-expanding monitoring of individuals’ private conversations that would otherwise be protected by end-to-end encryption, raises serious concerns that this measure will result in undue deprivation of privacy and the stifling of free expression.
“Once measures like client-side scanning are in place, it is only a matter of time before mission creep swings into action, and the scope is expanded to other categories of content,” said Eliška Pírková, Europe Policy Analyst and Global Freedom of Expression Lead at Access Now. “Furthermore, authoritarian regimes are likely to deploy client-side scanning to amplify surveillance and to monitor, black-list, and block legitimate content in private communications. Think, for instance, of the database of hashes being enhanced to censor content involving protest and dissent, the LGBTQ+ community, or even certain forms of artistic expression. This ultimately creates severe risk of human rights abuse, including the chilling effect on the right to freedom of expression and opinion, the erosion of privacy protections, and undermining basic principles of democratic governance.”
Apple only recently took incremental steps to address the human rights community’s concerns regarding the impact of its products and policies on free expression and access to information. Beyond the privacy and security risks, implementing client-side scanning casts further doubt on the company’s commitment to protecting human rights with respect to content governance.
We already know from other platforms’ use of machine learning systems to moderate and curate user-generated content that these systems often lead to unjustified restrictions on fundamental freedoms, owing to their lack of transparency, inherent discriminatory bias against historically marginalized groups, and contextual blindness that results in illegitimate takedowns. Apple’s client-side scanning system is specifically designed to automatically and indiscriminately scan all images that Apple users upload to iCloud storage or that are sent to or from children’s iMessage accounts. It is essentially impossible to limit the scope of such a system, and, if implemented, over-broad screening, illegitimate takedowns, and other forms of abuse are all but inevitable.
We strongly urge Apple to abandon its plan to implement client-side scanning and to refrain from any technological measure that would circumvent or otherwise undermine the end-to-end encryption that has protected people’s privacy and security on Apple devices and earned their trust. Apple should engage in sustained consultations with civil society organizations and the individuals impacted by its technologies, vulnerable communities in particular, to understand their real-world impact, and it should implement measures that meaningfully guard privacy and empower users.