The Dark Side of AI: How Technology is Fueling Child Sexual Abuse

Law enforcement agencies across the United States are dealing with a growing concern – the creation and distribution of child sexual abuse imagery generated through artificial intelligence technology. The disturbing trend involves manipulated photos of real children, as well as graphic depictions of computer-generated kids. The Justice Department is cracking down on perpetrators who exploit AI tools, while states are racing to ensure that those generating “deepfakes” and other harmful imagery of children can be prosecuted under their laws.

The problem is complex, and law enforcement officials are struggling to keep pace with the rapidly evolving technology. The Justice Department’s Child Exploitation and Obscenity Section is working to signal to perpetrators that creating and distributing such content is a serious crime that will be investigated and prosecuted. “We’ve got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it,” said Steven Grocki, who leads the Justice Department’s Child Exploitation and Obscenity Section.

The Justice Department has already brought a federal case involving purely AI-generated imagery, where the children depicted are not real but virtual. In another case, federal authorities arrested a U.S. soldier stationed in Alaska who allegedly ran innocent pictures of real children he knew through an AI chatbot to make the images sexually explicit.

Child advocates are working to curb the misuse of technology to prevent a flood of disturbing images that could make it harder to rescue real victims. Law enforcement officials worry that investigators will waste time and resources trying to identify and track down exploited children who don’t really exist.

Legislation is being passed to ensure that local prosecutors can bring charges under state laws for AI-generated “deepfakes” and other sexually explicit images of kids. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery. “We’re playing catch-up as law enforcement to a technology that, frankly, is moving far faster than we are,” said Ventura County, California, District Attorney Erik Nasarenko.

Nasarenko pushed legislation signed last month by Gov. Gavin Newsom, which makes clear that AI-generated child sexual abuse material is illegal under California law. The legislation was prompted by the case of 17-year-old Kaylin Hayman, who starred on the Disney Channel show “Just Roll with It” and became a victim of “deepfake” imagery.

Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison. Hayman said she felt like a part of her had been taken away, even though she was not physically violated.

Experts say that open-source AI models that users can download on their computers are favored by offenders, who can further train or modify the tools to churn out explicit depictions of children. Abusers trade tips in dark web communities about how to manipulate AI tools to create such content. A report by the Stanford Internet Observatory found that a research dataset used to create leading AI image-makers contained links to sexually explicit images of kids.

Top technology companies, including Google, OpenAI, and Stability AI, have agreed to work with anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images. However, experts say more should have been done at the outset to prevent misuse before the technology became widely available.

The National Center for Missing & Exploited Children’s CyberTipline received about 4,700 reports of content involving AI technology last year, a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports per month of AI-involved content. Those numbers may be an undercount, as the images are so realistic it’s often difficult to tell whether they were AI-generated.

The Justice Department says it already has the tools under federal law to go after offenders for such imagery. Although the U.S. Supreme Court struck down a federal ban on virtual child sexual abuse material in 2002, a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed “obscene.” The Justice Department is bringing charges under that obscenity law in cases involving “deepfakes.”

Associated Press
The Associated Press is an American not-for-profit news agency headquartered in New York City.
