Apple Inc. said on Thursday that it will roll out new software later this year that will scan for child abuse imagery in a user's iCloud Photos account and report instances to the relevant authorities. If a match is found, Apple said, a human reviewer will assess it and, if warranted, report the user to law enforcement.
The tech giant said the move expands on the child-safety measures it has previously described. Apple will also add features to its Siri digital voice assistant to intervene when users search for related abusive material.
Apple said a tool known as 'NeuralHash' will look for sensitive content, including child abuse imagery, on the device itself, meaning those messages and images will not be read by the tech giant. Separately, the Messages app will use on-device machine learning to analyze photos sent to or from children to determine whether they are explicit.
If 'NeuralHash' flags an image as a possible match, it will be reviewed by a human, who will notify law enforcement if required. In the Messages app, when a child receives a sexually explicit photo, the software will blur it, warn the child, and tell them that it is okay if they do not want to view it.
The child will also be warned that if they try to view the photo, their parents will be notified via a message. Similar measures will be taken if a child tries to send a sexually explicit image.
The software works by comparing photos to a database of known child sexual abuse images compiled by the National Center for Missing and Exploited Children (NCMEC) and other child safety organizations. Those images are translated into 'hashes', numerical codes that can be 'matched' to an image on an Apple device.
The iPhone maker said that the technology will also detect edited but similar versions of original images. "Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes," Apple said.
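The matching principle can be illustrated with a toy example. The sketch below uses a simplified "difference hash" — not Apple's proprietary, neural-network-based NeuralHash — purely to show how an edited copy of an image can still produce the same hash as the original, so the two can be 'matched':

```python
# Illustrative sketch only: a toy perceptual hash (difference hash).
# Apple's NeuralHash is a proprietary neural-network-based system; this
# simplified stand-in just demonstrates the matching principle described
# above: similar images yield identical or near-identical hashes.

def dhash(pixels):
    """Hash a 2D grid of grayscale values: each bit records whether
    a pixel is brighter than its right-hand neighbour."""
    bits = []
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits.append('1' if left > right else '0')
    return ''.join(bits)

def hamming(a, b):
    """Number of differing bits between two equal-length hashes."""
    return sum(x != y for x, y in zip(a, b))

original = [
    [10, 20, 30, 40],
    [40, 30, 20, 10],
    [15, 25, 35, 45],
    [45, 35, 25, 15],
]

# A brightened copy (every pixel +5) preserves the relative ordering
# of neighbouring pixels, so its hash is unchanged and it still matches.
edited = [[p + 5 for p in row] for row in original]

print(hamming(dhash(original), dhash(edited)))  # prints 0: a match
```

In a real system, a hash within a small Hamming distance of a known database entry would count as a match, which is how edited but similar versions of an image are still detected.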
High Accuracy Level
Apple also claimed that the system has an "extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account". Besides the new features in the Messages app, iOS and iPadOS will also "use new applications of cryptography to help limit the spread of [Child Sexual Abuse Material] online, while designing for user privacy," the company wrote on its website.
"CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos." Any user who feels their account has been flagged by mistake can file an appeal, the company said.
However, some privacy experts have raised concerns. Many believe the technology could be expanded to scan phones for prohibited content or even political speech, and worry that authoritarian governments could use it to spy on their citizens.
Nevertheless, the software will launch later this year as part of iOS 15, iPadOS 15, watchOS 8 and macOS Monterey.