Child sexual abuse is a grave crime, and it is everyone’s duty to prevent it and to respond swiftly when it occurs. The European Commission has proposed a law aimed at stopping child sexual abuse material online and online child grooming by allowing authorities to scan users’ activity on apps and online services.
The proposed law hinges on scanning technologies that are, unfortunately, deeply flawed today and can be expected to remain ineffective for the foreseeable future. The flaws include substantial rates of false positives and false negatives, as well as the potential for misuse, which would make the internet less safe rather than safer. Consequently, as scientists, we advise against pursuing this proposal.
Consider the detection of known child sexual abuse material (CSAM): scanning relies on perceptual hash functions, which are designed to map visually similar images to similar hashes. However, all known perceptual hash functions are susceptible both to evasion (a slightly modified image slips past detection) and to false detection (an innocent image matches by accident). This creates a risk of false accusations and a misallocation of law enforcement resources. AI tools for detecting previously unseen material face similar challenges, given the likelihood of numerous false positives at internet scale.
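To make the evasion and false-detection risks concrete, here is a minimal sketch of one perceptual-hashing idea, an "average hash" over a toy 2×2 grayscale image. All names here are illustrative; deployed systems such as Microsoft's PhotoDNA or Meta's PDQ are far more sophisticated, but they share the same basic structure and the same failure mode: small but targeted perturbations can flip hash bits and defeat the match.

```python
# Sketch of an "average hash": one bit per pixel, set by comparing
# each pixel to the image's mean brightness. Toy example only.

def average_hash(pixels):
    """Hash a grayscale image (list of rows, values 0-255) into bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
# Small re-encoding noise: the hash is unchanged, so the match
# survives -- the intended robustness of perceptual hashing.
tweaked = [[12, 198], [28, 223]]
# A larger, targeted perturbation flips every bit and evades detection.
evading = [[120, 110], [115, 105]]

print(hamming(average_hash(original), average_hash(tweaked)))   # 0: still matches
print(hamming(average_hash(original), average_hash(evading)))   # 4: evaded
```

The converse problem also follows directly from the construction: because hashes are compared by similarity rather than equality, an unrelated image can accidentally land within the match threshold, producing a false accusation.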
Another significant concern is the weakening of end-to-end encryption through “client-side scanning” (CSS), proposed as a way to scan encrypted communications: content is inspected on the user’s device before it is encrypted. CSS effectively behaves like spyware; it can be abused and undermines the security of communication for everyone. Its effectiveness is also questionable, since savvy perpetrators could simply evade the detection mechanisms.
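The structural problem with CSS can be sketched in a few lines (the blocklist and function names are hypothetical, and a cryptographic hash stands in for the perceptual matcher for brevity). The point is that matching happens on the plaintext before encryption, so whoever controls the blocklist decides what gets flagged; nothing in the mechanism itself limits it to CSAM.

```python
import hashlib

# Hypothetical blocklist of content digests, distributed to every
# device. Users cannot inspect what it actually contains.
BLOCKLIST = {hashlib.sha256(b"target content").hexdigest()}

def send_message(plaintext: bytes) -> str:
    """Sketch of client-side scanning: inspect plaintext, then encrypt."""
    digest = hashlib.sha256(plaintext).hexdigest()
    if digest in BLOCKLIST:
        return "reported"    # diverted to a third party before encryption
    return "encrypted"       # normal end-to-end delivery

print(send_message(b"target content"))   # reported
print(send_message(b"harmless note"))    # encrypted
```

Because the scan runs with full access to the plaintext, the end-to-end encryption that follows offers no protection against whoever can update the blocklist, which is precisely why CSS is often compared to spyware.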