Apple 'significantly underreports' child sexual abuse, watchdogs say
After years of controversy over its plans to scan iCloud for child sexual abuse material (CSAM), Apple scrapped those plans last year. Now, child safety experts are accusing the tech giant not only of failing to detect CSAM exchanged and stored on its services (including iCloud, iMessage, and FaceTime), but also of failing to report all of the CSAM that is flagged.
The UK’s National Society for the Prevention of Cruelty to Children (NSPCC) shared data from UK police with the Guardian showing that Apple “grossly underestimates the frequency” with which CSAM is detected globally on its services.
According to the NSPCC, police investigated more cases of CSAM in the UK in 2023 than Apple reported globally for the entire year. Between April 2022 and March 2023 in England and Wales, the NSPCC found that “Apple was involved in 337 recorded child abuse imagery offences.” But in 2023, Apple reported just 267 cases of CSAM to the National Center for Missing & Exploited Children (NCMEC), a figure that is supposed to represent all cases of CSAM detected on its platforms worldwide, the Guardian reported.
Major tech companies in the US must report child sexual abuse material to NCMEC when it is detected, but while Apple reports a few hundred cases a year, its big tech peers like Meta and Google report millions, according to NCMEC’s report. Experts told the Guardian there is ongoing concern that Apple is “clearly” underreporting child sexual abuse on its platforms.
Richard Collard, the NSPCC’s head of child online safety policy, told the Guardian he believed Apple’s child safety efforts needed major improvement.
“There is a worrying gap between the number of child sexual abuse offences committed on Apple’s services in the UK and the almost negligible number of reports of abusive content to authorities globally,” Collard told the Guardian. “Apple is clearly lagging behind many of its peers in tackling child sexual abuse, when all tech companies should be investing in security and preparing for the rollout of the UK’s Online Safety Act.”
Outside the UK, other child safety experts have shared Collard’s concerns. Sarah Gardner, CEO of a Los Angeles-based child protection organization called the Heat Initiative, told the Guardian that she sees Apple’s platforms as a “black hole” that obscures CSAM. And she expects that Apple’s efforts to embed AI into its platforms will intensify the problem, potentially facilitating the spread of AI-generated CSAM in an environment where sexual predators can expect less enforcement.
“Apple does not detect CSAM at all in the majority of its large-scale environments,” Gardner told the Guardian.
Gardner agreed with Collard that Apple is “clearly underreporting” and has “not invested in trust and safety teams to be able to handle this” as it rushes to bring sophisticated AI features to its platforms. Last month, Apple integrated ChatGPT into Siri, iOS, and macOS, perhaps setting expectations for continually improved generative AI features in future Apple gear.
“The company is venturing into territory that we know could be extremely harmful and dangerous for children without having the experience to manage it,” Gardner told the Guardian.
Apple has not yet commented on the NSPCC’s findings. Last September, the company responded to the Heat Initiative’s calls for it to detect more CSAM, saying that rather than focusing on finding illegal content, its goal was to connect vulnerable users and victims directly with local resources and law enforcement that can help them in their communities.