The UK has no money to improve child sex abuse conviction rates or the monitoring of child molesters, so what is the solution to appease the public? Create a new law, as that costs nothing. The UK intends to be the first nation to introduce new offences covering AI-generated child sexual abuse. New legislation will make it illegal to possess, create, or distribute AI tools designed to generate child sexual abuse material (CSAM), with penalties of up to five years’ imprisonment. The legislation will also make it illegal for anyone to possess so-called “paedophile manuals” that teach people how to use AI to sexually exploit children.
In recent decades, the threat of online abuse against children has escalated significantly. The Internet Watch Foundation reports an 830% increase in online child sexual abuse imagery since 2014. The proliferation of AI image-generation tools is exacerbating this trend.
Last year, the International Policing and Protection Research Institute at Anglia Ruskin University published a paper on the growing demand for AI-generated child sexual abuse material online.
Researchers examined discussions on dark web forums over the past year. We found evidence of growing interest in this technology, and of online offenders wanting others to learn how to produce abusive images.
Disturbingly, forum participants referred to those creating AI imagery as “artists.” This technology is creating unprecedented opportunities for criminals to produce and distribute the most egregious forms of child abuse content.
Our investigation found that forum users are drawing on pre-existing, non-AI-generated images and videos both to learn from and to train the software used to create the images. Many expressed hopes and expectations that the technology would advance, making this content even easier to create.
Dark web environments are hidden and can only be accessed through specialised software. They offer offenders anonymity and privacy, making it harder for law enforcement to identify and prosecute them.
The Internet Watch Foundation has documented alarming figures on the rapid rise in the number of AI-generated images they encounter in their work. The volume remains low relative to the quantity of non-AI images being discovered, but the figures are growing at an alarming pace.
The organisation reported in October 2023 that a total of 20,254 AI-generated images had been posted to one dark web forum in a single month. Before that report was published, little information was available about the scale of the threat.
Offenders may see AI-generated child sexual abuse imagery as a victimless crime because the images are not “real.” But it is far from harmless, not least because it can be generated from authentic photographs of children, including images that are entirely innocent.
Although the effects of AI-generated abuse remain largely unknown, there is extensive research on the harms of online child sexual abuse and on the ways technology amplifies or perpetuates offline abuse. For instance, victims may experience ongoing distress because of the permanence of images or videos, simply by knowing the pictures exist. Perpetrators may also use images (real or fabricated) to coerce or blackmail victims.
These issues are central to current debates about deepfake pornography, the production of which the government also intends to criminalise.
All of these difficulties can be intensified by AI technology. Moreover, moderators and investigators risk additional trauma from scrutinising abuse images in minute detail to determine whether they are “real” or “generated.”
UK law currently prohibits the taking, making, distribution, and possession of an indecent image or pseudo-photograph (a digitally generated photorealistic image) of a child.
However, no law currently criminalises the possession of technology for creating AI-generated child sexual abuse images. The new legislation should enable law enforcement to target abusers who are using, or considering using, AI to produce such content, even when no physical images are found during investigations.
We will always lag behind perpetrators in technological advancement, and law enforcement agencies around the world will eventually be overwhelmed. They need legislation designed to help them identify and prosecute those who seek to exploit children and young people online.
The government’s commitment to action is encouraging, but it must move faster. The longer legislation takes to enact, the greater the risk that children will be abused.
Addressing this global issue requires more than legislation in a single country. A whole-system response is needed, one that begins at the design stage of new technologies. Many AI products and tools have been created for entirely legitimate, honest, and non-malicious purposes, yet they can readily be adapted and used by offenders seeking to create harmful or illegal content.
The law needs to understand and address this, so that technology cannot be exploited for abuse, while still allowing a distinction between those who use it to cause harm and those who use it for good.
If you or anyone you know has been affected by the issues highlighted in this article, please report the individuals involved to the Police on 101 (999 in an emergency) or visit their online resources for details of the options for reporting a crime. You can also make a report via Crimestoppers if you wish to remain completely anonymous. Further help is available on our support links page.