A new report says that AI image generators are being trained with explicit photos of young children
· Dec 23, 2023 · NottheBee.com

Those fun AI image generators people like to use to create memes and bizarre, silly pictures may not be all fun and games.

A disturbing new report from the Stanford Internet Observatory shows that some AI image generators were trained on explicit photos of children.

Yes, in their efforts to make AI art more realistic, the tech folks have somehow given the robots access to pornographic images of kids.

Parents and law enforcement should beware.

Hidden inside the foundation of popular artificial intelligence image-generators are thousands of images of child sexual abuse, according to a new report that urges companies to take action to address a harmful flaw in the technology they built.

Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.

It's no surprise that, like with every single online technology ever, perverts and pornographers are among the first to exploit it. Still, using this technology to generate brand-new child sexual abuse material and to turn photos of fully clothed teenagers into realistic fake nudes is particularly disturbing.

And the authors of this study are urging the creators to make some serious changes to protect kids.

The response was immediate. On the eve of the Wednesday release of the Stanford Internet Observatory's report, LAION told The Associated Press it was temporarily removing its datasets.

LAION, which stands for the nonprofit Large-scale Artificial Intelligence Open Network, said in a statement that it "has a zero tolerance policy for illegal content and in an abundance of caution, we have taken down the LAION datasets to ensure they are safe before republishing them."

LAION's was one of the AI image databases examined in the study, and when the organization was confronted with the finding that at least 1,000 of its billions of images depicted child abuse, it took the whole thing down until those images could be removed.

I think it's safe to say that EVERY AI image database should undergo a similar deep clean.

Many text-to-image generators are derived in some way from the LAION database, though it's not always clear which ones. OpenAI, maker of DALL-E and ChatGPT, said it doesn't use LAION and has fine-tuned its models to refuse requests for sexual content involving minors.

Those are some of the more popular AI tools, and they do have some safeguards in place, though it seems the underlying training datasets haven't been thoroughly combed for explicit images.

Other tech companies already have technology they use to track and take down child sex abuse material, but AI developers haven't yet adapted those tools. Some activists say they need to start.

Tech companies and child safety groups currently assign videos and images a "hash," a unique digital signature, to track and take down child abuse materials. According to Rebecca Portnoff, director of data science at the anti-abuse nonprofit Thorn, the same concept can be applied to AI models that are being misused.

"It's not currently happening," she said. "But it's something that in my opinion can and should be done."

This is another reason to keep photos of your kids offline; there's no telling how people will abuse them.

It's also a cautionary tale of how technology can be abused and how we have to be vigilant and not casual when using these fun new toys.

