
The Dichotomy of Artificial Intelligence: A Force for Good and Evil

Updated: Sep 23


Artificial Intelligence (AI) has become a transformative force in our world, bringing about significant advancements and efficiencies in various industries. However, with great power comes great responsibility, and AI is no exception. This blog, researched and written by our wonderful Protect Children intern, Anna Gumenyuk, explores AI’s dual nature, highlighting its misuse in generating child sexual abuse material (CSAM) and its potential to fight against such heinous crimes.


The Role of Artificial Intelligence in Child Sexual Abuse

Introduction to Artificial Intelligence 

AI, often hailed as a pinnacle of modern technology, refers to systems that mimic human intelligence.[1] These systems, powered by sophisticated algorithms and vast datasets, are capable of reasoning, decision-making, problem-solving, perception, and language comprehension.[1] AI’s applications span numerous fields, including healthcare, finance, transportation, and entertainment, making it an integral part of our daily routines.[2]


The Dark Side: AI and CSAM

Unfortunately, beneath AI’s promising potential lies a darker reality. AI’s powerful capabilities are exploited to generate and distribute CSAM, posing severe challenges to law enforcement and child protection agencies.[3] The Internet Watch Foundation has revealed a disturbing truth: some AI-generated imagery is so realistic that even trained analysts struggle to differentiate it from real images.[4]


Imagine the horror of a child’s image being altered, their innocence stripped away by technology. Perpetrators use AI to ‘de-age’ celebrities, making them appear younger, and to ‘nudify’ clothed images of children (i.e. make them appear naked).[4] Using deepfake technology, offenders alter non-explicit videos of children to make them appear explicit, or overlay children’s faces on adult bodies engaged in sexual activity.[5] AI-generated CSAM, catered to specific, depraved fetishes,[6] can feature known child sexual abuse victims, celebrity children, or other children favoured by the offenders.[4] All this material can be created in unlimited quantities and posted online, overwhelming law enforcement agencies that already struggle to keep up with the vast volume of real CSAM.[4] Moreover, perpetrators can remain undetected by generating CSAM offline, on their local machines, where they have full control and privacy, making it nearly impossible for law enforcement agencies to intercept their activities.[7]


Besides text-to-image and deepfake models, large language models (LLMs) can also be used in a sinister way. Trained on massive datasets, these models can perform a variety of natural language processing tasks, including understanding and generating natural language.[8] Offenders can exploit these capabilities to create CSAM-centred literature, ranging from guides and manuals on effective ways to contact children to stories tailored to personal fantasies about child sexual abuse.[9] They can also craft convincing messages to groom and coerce children, often posing as peers or trusted adults.[10]


These crimes are fuelled by an extensive network of dark web forums filled with the resources and instructions needed to create CSAM, ranging from shared CSAM models to tips on manipulating a specific LLM into generating explicit textual content involving children.[9] With growing demand for customised AI-generated content, perpetrators have found opportunities to profit by offering bespoke CSAM creation services.[6] Access to these services is often hidden behind payment barriers, making it challenging for law enforcement to track and shut down these operations.[4] This commercialisation of CSAM further perpetuates the re-victimisation of known child abuse victims and the exploitation of new ones.


AI for Good: How AI is Used to Combat CSAM

Despite its dark potential, in the right hands AI can also become a powerful tool in the battle against child sexual abuse. AI-powered technology has been developed to identify, track, and remove CSAM more efficiently than traditional methods. One such technology is image-based classification, where AI algorithms analyse vast volumes of image and video content to flag potentially harmful material for further review.[11] An example of this is Microsoft’s PhotoDNA, which creates a unique digital signature (also known as a ‘hash’) for images. This hash enables identification and removal of illegal content, even if it has undergone minor alterations.[12]
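
PhotoDNA itself is proprietary, but the general idea behind robust image hashing can be illustrated with a simple perceptual ‘difference hash’. The sketch below is a minimal Python example using the Pillow imaging library; the 8×8 hash size and the bit-comparison matching are standard for this toy technique, not PhotoDNA’s actual parameters.

```python
from PIL import Image  # pip install Pillow


def dhash(image_path: str, hash_size: int = 8) -> int:
    """Compute a 'difference hash': a compact fingerprint that survives
    minor alterations such as resizing or re-compression."""
    # Normalise: greyscale, then shrink to (hash_size + 1) x hash_size pixels
    img = Image.open(image_path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())

    # Each bit records whether a pixel is brighter than its right-hand neighbour
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests a near-duplicate."""
    return bin(a ^ b).count("1")
```

In deployment, the hash of each newly uploaded image would be compared against a database of hashes of known illegal material, with small Hamming distances flagging likely matches for human review rather than triggering any automated action.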


In addition, AI can play a crucial role in facilitating prosecutions by using facial, object, and voice recognition to identify victims and perpetrators of child sexual abuse.[11] These advanced tools analyse images and videos to extract critical details, such as facial features, objects in the background, and unique voice patterns. This information can then be cross-referenced with databases of missing children or other relevant records, helping to rescue victims and bring criminals to justice. 
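
To illustrate just the cross-referencing step, the sketch below matches a single embedding (the numeric ‘fingerprint’ that an upstream face- or voice-recognition model would produce) against a database of known records using cosine similarity. The embeddings, record IDs, and 0.85 threshold are all hypothetical.

```python
import numpy as np


def best_match(query: np.ndarray, database: dict[str, np.ndarray],
               threshold: float = 0.85) -> str | None:
    """Compare one embedding against a database of known records using
    cosine similarity; return the best-matching record ID, or None if
    no record clears the (illustrative) threshold."""
    q = query / np.linalg.norm(query)  # normalise so the dot product is cosine
    best_id, best_score = None, threshold
    for record_id, emb in database.items():
        score = float(q @ (emb / np.linalg.norm(emb)))
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id
```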


Natural Language Processing (NLP) techniques can also play a fundamental role in the fight against child sexual abuse and exploitation. NLP tools can analyse online conversations, emails, and other text-based communications to detect predatory behaviour and flag suspicious activity, helping to prevent abuse before it occurs.[13]
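
As a rough illustration of the plumbing behind such tools, the sketch below trains a deliberately simple text classifier with scikit-learn. A real system would be trained on a large corpus labelled by child-safety experts and would use far richer models; the toy messages, labels, and 0.5 threshold here are placeholders only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data -- a real system would learn from a large, vetted
# corpus labelled by experts, not a handful of invented lines.
messages = [
    "what school do you go to? don't tell your parents we talk",
    "are you home alone right now?",
    "good luck with your exam tomorrow!",
    "see you at football practice",
]
labels = [1, 1, 0, 0]  # 1 = suspicious, 0 = benign (illustrative only)

# TF-IDF features plus logistic regression: a deliberately simple baseline
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
classifier.fit(messages, labels)

# Score a new message and route high-risk conversations to human review
risk = classifier.predict_proba(["do your parents check your phone?"])[0][1]
if risk > 0.5:  # hypothetical threshold
    print(f"Flag for moderator review (risk score: {risk:.2f})")
```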


Initiatives like AI for Safer Children provide law enforcement with advanced AI tools that significantly speed up investigations.[14] These tools include object detection, voice recognition, geolocation, and chat analysis, which collectively reduce the time required to analyse CSAM from weeks to mere days.[14] Besides reducing forensic backlogs, these algorithms can minimise the mental impact on investigators of the violence in those files by pixelating flagged images, muting audio, or converting images to black and white.[15] As such, the integration of AI tools can not only accelerate case resolution but also ensure a more humane approach to handling sensitive material.
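
The image-softening step, at least, is straightforward to illustrate. Below is a minimal sketch using the Pillow library that pixelates a flagged image and converts it to black and white; the block size of 16 is an arbitrary choice, and real forensic tools apply such transformations inside their own viewers.

```python
from PIL import Image  # pip install Pillow


def soften_for_review(in_path: str, out_path: str, block: int = 16) -> None:
    """Pixelate a flagged image and convert it to black and white, so an
    investigator can triage the file with reduced exposure to its content."""
    img = Image.open(in_path).convert("L")  # greyscale strips colour detail
    width, height = img.size
    # Downscale, then upscale with nearest-neighbour resampling to
    # produce coarse blocks that obscure graphic detail
    small = img.resize((max(1, width // block), max(1, height // block)))
    pixelated = small.resize((width, height), Image.NEAREST)
    pixelated.save(out_path)
```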


Conclusive Remarks

The effects of AI-generated CSAM are still not well understood: how it shapes perpetrators’ thoughts and behaviour towards children, and the long-term implications of the phenomenon, are areas requiring further research. While AI is certainly abused by some to create and distribute CSAM, it is not inherently evil – it is the way humans choose to use it that determines its impact. Technology, including advanced AI tools, can be incredibly useful in the fight against child sexual abuse. However, it is not a one-size-fits-all solution. To truly succeed in this battle, a joint effort and a holistic approach are required, combining input from law enforcement, policymakers, child protection organisations, and the wider community.

 


To stay updated on our work, subscribe to our newsletter!





References:


[4] Internet Watch Foundation. (2023). How AI is being abused to create child sexual abuse imagery.

[12] Microsoft. (n.d.). PhotoDNA.

[14] WeProtect Global Alliance. (n.d.). The AI for Safer Children Global Hub.
