On August 5, 2021, Apple announced a system update that would scan iCloud photos and images sent or received through the Messages app to identify and intercept sexually explicit content. The update, entitled Expanded Protections for Children and planned for iOS 15, watchOS 8, iPadOS 15, and macOS Monterey, is a step toward addressing the problem of Child Sexual Abuse Material (CSAM). Privacy advocates, however, worry that it could undermine users’ privacy. Apple announced that the features would apply in the US only.
Apple’s use of CSAM detection is not a new idea. Tech giants including Facebook, Twitter, and Reddit use tools similar to Microsoft’s PhotoDNA to scan users’ files against libraries of known hashes, and, just like Apple, they are required by law to report CSAM to the National Center for Missing and Exploited Children (NCMEC). It is because Apple is scanning on-device data rather than on-server data that its approach has generated far more backlash.
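To illustrate the general idea, the sketch below shows hash matching in its most naive form: compute a digest of each uploaded file and check it against a library of known hashes. This is only a simplification; PhotoDNA and similar tools use robust perceptual hashes rather than plain SHA-256, and the hash value, function name, and reporting hook shown here are hypothetical.

```python
import hashlib

# Hypothetical library of hashes of known CSAM images. Real systems use
# perceptual hashes supplied via NCMEC, not plain SHA-256 digests.
KNOWN_CSAM_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_hash(image_bytes: bytes) -> bool:
    """Return True if the file's digest appears in the known-hash library."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_CSAM_HASHES

# Usage sketch: scan a file before it is stored server-side.
# if matches_known_hash(open("upload.jpg", "rb").read()):
#     report_to_ncmec()  # hypothetical reporting hook required by law
```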
In this article, we discuss the key CSAM features and examine, from a privacy perspective, how useful or harmful this update actually is.
An overview of the update
This
new update contains three distinct changes — Detecting and
Flagging Images in Messages, Scanning Photos Stored in iCloud,
and Changes to Siri and Search Results. It affects different
categories of people and different Apple products.
Facing a backlash from privacy activists and advocates across the globe, Apple has scaled back its CSAM scanning plans for now. Apple has not yet said when the software will be released; on September 3, it announced that the release would be delayed so it could make improvements and address privacy concerns.
Detecting and Flagging Images in Messages
An addition to the Messages app, this feature is designed to prevent children from sending or receiving what Apple labels as “sexually explicit material.” The feature is activated only if the device is enrolled in a family plan, the plan owner (usually a parent) designates the device as belonging to a child under 18, and the plan owner turns on the protection for that device.
With this feature enabled, a machine learning algorithm evaluates the images being sent or received through the Messages app. When the algorithm flags an image as “sexually explicit,” the image is blurred and the user is notified that it may be sensitive. The user can still choose to reveal the blurred photo. If the plan owner has designated the device’s user as being under 13, the device sends a notification to the plan owner stating that the user chose to send, receive, or view the image despite the “sexually explicit” warning. The notification, however, does not contain the message or image itself.
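As a rough illustration of that flow, the sketch below models the decision logic under stated assumptions: the classifier, its threshold, and the notification callbacks are all hypothetical stand-ins, since Apple has not published the model or its APIs.

```python
from dataclasses import dataclass
from typing import Callable

EXPLICIT_THRESHOLD = 0.9  # hypothetical cutoff for the on-device classifier


@dataclass
class ChildAccount:
    age: int
    protection_enabled: bool  # turned on by the family-plan owner


def handle_image(
    image: bytes,
    account: ChildAccount,
    classify: Callable[[bytes], float],    # hypothetical on-device ML model
    confirm_view: Callable[[str], bool],   # asks the child whether to reveal
    notify_parent: Callable[[str], None],  # hypothetical notification channel
) -> bool:
    """Return True if the image is shown unblurred, False if it stays blurred."""
    if not account.protection_enabled or classify(image) < EXPLICIT_THRESHOLD:
        return True  # no warning needed
    if confirm_view("This image may be sensitive. View anyway?"):
        if account.age < 13:
            # Only the fact of the choice is reported, never the image itself.
            notify_parent("Your child chose to view a flagged image.")
        return True
    return False
```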
While Apple has clarified that it will not itself receive notifications when the “sexually explicit” warnings are bypassed, it is not clear whether the system could later be changed to give Apple access to records of these notifications and user choices. Even if Apple does not log these notifications today, nothing in the system’s design demonstrably prevents it from doing so in the future.
Scanning Photos Stored in iCloud
This feature entails the detection of potential CSAM images, which are illegal to possess. Before an image from a US-based user’s device is uploaded to iCloud Photos, Apple will scan it, using cryptographic techniques, against a set of hashes of known CSAM images maintained by the National Center for Missing and Exploited Children (NCMEC), a nonprofit organization that works alongside law enforcement.
The images are scanned one by one as they are uploaded to cloud storage. The process takes place partly on the user’s device and partly on Apple’s servers, enabling Apple to identify iPhone and iPad owners with CSAM images in their iCloud Photos libraries. If the scanning finds 30 or more images matching known CSAM, it alerts Apple’s moderators and reveals the details of the matches. If a moderator confirms the presence of CSAM, the account is disabled and the images are reported to legal authorities.
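A heavily simplified sketch of that thresholding behaviour is shown below. Apple’s actual design relies on NeuralHash, private set intersection, and threshold secret sharing so that nothing is revealed below the threshold; the plain counter here only illustrates the 30-match trigger described above, and the function names are hypothetical.

```python
MATCH_THRESHOLD = 30  # matches required before Apple's human reviewers are alerted

def scan_upload_batch(image_hashes, known_csam_hashes, alert_moderators):
    """Count hash matches and surface them only once the threshold is crossed.

    In Apple's actual design, NeuralHash, private set intersection and
    threshold secret sharing ensure the server learns nothing below the
    threshold; this plain counter only illustrates the thresholding idea.
    """
    matches = [h for h in image_hashes if h in known_csam_hashes]
    if len(matches) >= MATCH_THRESHOLD:
        alert_moderators(matches)  # human review precedes any report to NCMEC
    return len(matches)
```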
Changes to Siri and Search Results
This feature is largely uncontroversial and affects Apple’s Search app and Siri. When a user searches for topics related to child sexual abuse, the feature directs them to resources for reporting it or for getting help with an attraction to such material.
Breaking Encryption and Cryptographic Protections
Apple has been a staunch advocate of encryption and cryptography for protecting the personal information of the users of its devices and services. However, implementing CSAM detection brings those very concerns to the forefront: it creates a back-door entry into the device, with access to on-device content such as GPS location data and content stored in iCloud or iMessage.
The tension dates back to the court battle in California between Apple and the Federal Bureau of Investigation (FBI) over the FBI’s request to access the encrypted data on the iPhone 5C used by one of the terrorists in the mass shooting of December 2, 2015, in San Bernardino, California, in which 14 people died and 22 were injured. The court ordered Apple to assist in unlocking the iPhone 5C, access to which was protected by a passcode.
Source: FBI–Apple encryption dispute – Wikipedia
In 2014, Apple implemented default hardware- and software-based encryption on all its devices. This feature prevented anyone without the passcode from accessing data stored on the device. Apple also added an extra safety feature: after 10 incorrect passcode attempts, the contents of the device are erased.
However, Apple refused to comply with the court order to break into the terrorist’s phone and extract information pertinent to the attack. Apple argued that it would have to write new software to defeat its own encryption and gain backdoor entry to the iPhone 5C, and that such software could then serve as a backdoor to iPhones generally, open to misuse by criminals, foreign agents, and militant organisations to access millions of phones. Foreign governments could also make the same requests to Apple, opening another front in the battle to safeguard the privacy and security of consumers using Apple devices and services.
On February 16, 2016, the United States District Court for the Central District of California ordered Apple to provide access to the terrorist’s iPhone 5C so that the FBI could obtain its data. On the very same day, Apple published its refusal to comply with the court order on its website (Customer Letter – Apple), stating that upholding the privacy and security of its customers was of paramount importance.
By bringing CSAM detection to market, Apple is compromising that very principle of upholding its consumers’ privacy and security, through a backdoor entry into on-device content and its services.
Controversies stack up against CSAM features
Apple has long cultivated an image as a privacy saviour and touted its end-to-end encryption as the only secure option for its customers’ personal information. However, this update, which involves on-device content scanning, is a departure from Apple’s long-held encryption policy and has attracted a slew of privacy and technical concerns worldwide.
In a report (via BleepingComputer), a team of researchers at Imperial College London found technical flaws in the technology behind Apple’s CSAM detection: images can evade detection, raising privacy concerns. The Imperial researchers found that such perceptual-hash-based detection can be bypassed simply by applying a filter to an image; the filter changes the image’s hash and fooled the system more than 99.9% of the time.
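The toy example below, assuming a simple “average hash” rather than Apple’s far more robust NeuralHash, illustrates the kind of weakness the researchers exploited: a barely visible change to the pixels can move an image to a different hash, so an exact hash match no longer flags it.

```python
def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if the pixel is above the mean."""
    mean = sum(pixels) / len(pixels)
    return tuple(1 if p > mean else 0 for p in pixels)

# A flattened 8x8 grayscale thumbnail (values 0-255), purely illustrative.
original = [100] * 32 + [140] * 31 + [121]

# A mild "filter": brighten one corner by an amount the eye would barely notice.
filtered = [p + 10 if i < 16 else p for i, p in enumerate(original)]

print(average_hash(original) == average_hash(filtered))  # False: the hash changed
# even though the two images look essentially identical, so a naive matcher
# comparing hashes exactly would no longer flag the filtered copy.
```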
A deeper dive into how the technology behind Apple’s safety changes works shows why it could become a privacy nightmare. The problem is that the system could easily be tampered with and repurposed for surveillance and censorship. The design is not restricted to a specific category of content; in simple words, the contents of the hash database could be swapped, and the person using the service would be none the wiser.
Edward Snowden, the former NSA contractor known as a whistle-blower for making highly classified National Security Agency information public in 2013, has already condemned the CSAM plans. Of Apple’s CSAM scanning, Snowden said it “will permanently redefine what belongs to you, and what belongs to them,” pointing to the prospect of governments across different nations forcing Apple to search for any images they desire.
Citing threats to the privacy, security, and free speech of Apple users, the Open Technology Institute (OTI) joined nearly 100 other policy and rights groups in signing an open letter to Tim Cook, CEO of Apple Inc., urging Apple to abandon the new technology because its benefits do not outweigh the potential costs.
After the controversy broke out, Apple clarified through a FAQ page that the scanning does not cover images kept only on the device, and that it does not learn anything about the other images in the device’s library. Apple responded to the criticism by saying, “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before and have steadfastly refused those demands. We will continue to refuse them in the future.” Apple also clarified that it will not accede to governments’ requests to add any other types of images to the list of images that would be flagged.
Weighing Benefits vs. Risks
Apple’s step to curb the spread of CSAM has been applauded by some privacy and security experts, including the prominent cryptographers and computer scientists Mihir Bellare, David Forsyth, and Dan Boneh. Endorsing the update, David Forsyth said, “This system will likely significantly increase the likelihood that people who own or traffic in [CSAM] are found.”
But other privacy experts and advocacy groups believe the iCloud and Messages updates could become a framework for mass digital surveillance operating directly from a phone or tablet. It could further erode end-to-end encryption and open the door to more troubling invasions of privacy.
While Apple’s safety update aims to stop children from being exposed to nudity and adult content, it could also lead to greater abuse of vulnerable children, including potentially outing LGBTQ children to homophobic families. The parental notification feature could likewise prevent children in abusive homes from sharing photographic evidence of their abuse.
The device-scanning feature is worse because, by definition, it reduces users’ control over who can access the information stored on their devices. Today, the feature is introduced in the name of child safety; tomorrow it could be justified by counterterrorism or national security. Worse still, it could open the door to abuse by state actors, including illiberal and authoritarian governments.
Today, this feature is limited to US territory. However, once it is applied in other countries, their governments could compel the service to block people from sharing disfavored political speech, or anything else for that matter. In fact, such practices are already in place worldwide. WeChat, a messaging app popular across China, already uses algorithms to identify dissident material. India has enacted rules that could require social media sites to pre-screen content critical of government policy.
Conclusion
Technology has proven to be both a boon and a bane. With its continued penetration into the lives of all age groups, there has been a tremendous increase in the spread of illegal material. As poison cuts poison, technology, while a catalyst for the problem, could and should be a principal component of the battle against it. Apple’s new technical measures set a crucial precedent for other companies to follow and better protect children online.
Child safety and privacy are key concerns for any tech company, and Apple must be equally accountable on both fronts. The fact that a single step by one tech company can attract so much attention and criticism worldwide proves, yet again, the need for regulation that holds both governments and private bodies to account in securing fundamental human rights.
To provide a safe and secure internet for children, laws have been enacted around the world to safeguard their privacy and protect them while they are online. The EU GDPR, South Africa’s POPIA, Brazil’s LGPD, and India’s PDPB 2019 all contain provisions to safeguard children’s privacy, with heavy penalties proposed for individuals and companies found to be violating children’s privacy online. In the USA, the Children’s Online Privacy Protection Act (COPPA), 1998, was enacted to protect children’s privacy and security.
Though Apple’s initiative to protect children is laudable, its implementation is fraught with the danger of breaching the very privacy principles Apple lays down at Privacy – Apple.
Source: Children’s Online Privacy Protection Rule (“COPPA”) | Federal Trade Commission (ftc.gov)
Source: GDPR General Data Protection Regulation – DATA SECURE
Source: India Draft Personal Data Protection Bill 2019 – DATA SECURE
We at Data Secure (DATA SECURE – Privacy Automation Solution) can help you understand privacy and trust when dealing with personal data, and we provide privacy training and awareness sessions to increase the privacy quotient of your organisation.
We can design and implement RoPA, DPIA and PIA assessments to meet compliance requirements and mitigate risks under privacy regulations across the globe, especially India’s PDPB 2019. For more details, kindly visit DPO India – Your outsourced DPO service (dpo-india.com).
For any demo/presentation of solutions on Data Privacy and
Privacy Management as per EU GDPR, CCPA, CPRA or Draft India
PDPB 2019 and Secure Email transmission, kindly write to us at
info@datasecure.ind.in
or
info@dpo-india.com.
For downloading various Global Privacy Laws kindly visit
the Resources page in
DATA SECURE – Privacy Automation Solution