WhatsApp refuses to scan messages for illegal material
WhatsApp CEO Will Cathcart says the company will not risk lowering message privacy to abide by proposed UK and EU laws requiring that messages be scanned for potential child sexual abuse material (CSAM).
In an interview with the BBC, Cathcart said that implementing any form of message-scanning mechanism would undermine the end-to-end encryption for which the app is known.
“What’s being proposed is that we — either directly or indirectly through software — read everyone’s messages. I don’t think people want that,” Cathcart stated.
“If we had to lower security for the world, to accommodate the requirement in one country, that…would be very foolish for us to accept, making our product less desirable to 98% of our users because of the requirements from 2%,” Cathcart stated.
Cathcart also said that client-side scanning of WhatsApp messages would not work in practice.
He explained that WhatsApp had already implemented measures to clamp down on CSAM.
“There are techniques that are very effective and that have not been adopted by the industry and do not require us sacrificing everyone’s security,” he said.
“We report more than almost any other Internet service in the world.”
WhatsApp’s parent company Meta Platforms has been in lawmakers’ sights for years over how criminals use its apps to facilitate unlawful activities such as the distribution of CSAM and the sale of illegal goods.
The company previously acknowledged that its machine learning tools and moderators had detected and removed 8.7 million child abuse images shared on Facebook over a three-month period in 2018.
However, a recent report from the New York Times suggested it could be underreporting the volume of CSAM on its platforms.
According to the publication, a leaked corporate training document instructed content moderators on Facebook, Instagram, Messenger, and WhatsApp to “err on the side of an adult” when uncertain about a person’s age.
Meta is not the only big tech company struggling to balance privacy rights with its responsibility to clamp down on CSAM.
Apple received substantial backlash in 2021 after announcing its CSAM scanning feature for iCloud Photos, initially set to roll out on iOS 15.2.
The company failed to convince privacy advocates that the system's AI-powered safeguards would keep non-violating content from being flagged and passed to human moderators.
The reaction forced Apple to announce that it would delay the feature by several months.
However, by December 2021, Apple had removed all mention of the feature from its website.