By Joe Tidy, cyber correspondent
A charity that supports people worried about their own thoughts or behaviors says it is hearing from a growing number of callers confused about the ethics and legality of AI-generated child abuse imagery.
The Lucy Faithfull Foundation (LFF) warns that AI-created images are acting as a gateway to viewing illegal content. The charity stresses that creating or viewing such material is against the law even when the children depicted are not real.
One caller, referred to as Neil to protect his identity, contacted the helpline after being arrested for creating indecent images with AI. The 43-year-old IT professional, who denied any sexual interest in children, admitted he had used AI software to generate indecent images of children from text prompts, saying he was driven by fascination with the technology rather than by the images themselves.
Unsure whether he had broken the law, Neil turned to the LFF for guidance, and its advisers told him that his actions were criminal regardless of whether the children depicted were real.
Other callers have expressed similar confusion. One woman contacted the helpline after discovering that her 26-year-old partner had been viewing AI-generated indecent images of children and insisting they were harmless because the children were not real.
In another case, a teacher sought the charity's advice after finding that her partner had been viewing similar material, unsure whether it was illegal.
Donald Findlater of the LFF says some people calling its confidential Stop It Now helpline believe AI-generated imagery occupies a legal and moral grey area, a misconception he considers dangerous. Viewing such material is wrong, he stresses, even if no real child is directly harmed in its creation.
Findlater underscores that nurturing deviant sexual fantasies significantly increases the risk of reoffending among individuals with prior sexual offense convictions.
While the incidence of AI-related offenses among helpline callers remains relatively low, LFF observes a troubling increase. The organization calls on society to recognize this issue and urges lawmakers to take steps to combat the spread of child sexual abuse material (CSAM) online.
The charity did not name the platforms hosting such content, but one well-known AI art website has been accused of allowing users to share sexually explicit images depicting very young-looking subjects. Questioned by the BBC, Civit.ai said it was committed to tackling potential CSAM on its platform and encouraged its community to report inappropriate content.
The LFF also raises concerns about cases in which minors create CSAM without grasping the seriousness of what they are doing. One caller was worried that their 12-year-old son had used an AI app to generate indecent images of his peers and had then searched online for explicit terms.
Recent legal actions in Spain and the US have targeted underage individuals using apps to produce illicit images of classmates.
In the UK, Graeme Biggar of the National Crime Agency has called for tougher sentences for people caught with child abuse imagery, warning that viewing such material, including AI-generated content, significantly increases the risk that offenders will go on to abuse children themselves.
Some contributors to this article asked not to be identified.