
After Taylor Swift AI images circulate online, Ohio lawmakers propose law against 'malicious' deepfakes

Posted at 6:22 PM, Jan 29, 2024, and last updated 11:14 PM, Jan 29, 2024

COLUMBUS, Ohio — A bipartisan group of Ohio lawmakers wants to criminalize the use of artificial intelligence to generate sexually explicit material depicting children, as well as adults who did not give their consent.

With the advancement of AI, nearly any image can be fabricated. Pop superstar Taylor Swift became a victim of this last week when misogynistic users generated explicit, non-consensual images of her with Microsoft's Designer software. The incident has deepened the conversation about safety and about online technology being used for malevolent purposes.

“We are committed to providing a safe and respectful experience for everyone," a spokesperson from Microsoft told News 5. "We’re continuing to investigate these images and have strengthened our existing safety systems to prevent our services from being used to help generate images like them.”

This isn't just happening to celebrities.

Just months ago, News 5's sister station WCPO covered the case of a Kentucky man who pleaded guilty to possessing hundreds of images of children being sexually abused. He faces prison time for those real images, but not for the hundreds of AI-generated images of child sexual abuse also found.

RELATED: Prosecutor: AI-generated images of children being sexually abused found in Boone County case

"The statute was silent on that and they really couldn't go after him, which to me, and I'm sure to you and many other people, is horrifying," Senator Louis W. "Bill" Blessing, III (R-Colerain Township) told Statehouse reporter Morgan Trau.

Blessing is trying to catch the law up with the changing media landscape. He and cosponsors Sens. Terry Johnson (R-McDermott) and Catherine Ingram (D-Cincinnati) have introduced SB 217, which would make it a felony to create, possess and share AI-generated child sexual abuse material.

Creating or distributing “simulated obscene material,” including depictions of minors, would be a third-degree felony. Buying or possessing materials would be a fourth-degree felony.

It would also criminalize sexually explicit deepfakes made of someone without their consent, and victims would have civil recourse over the use of their likeness. Blessing said creating this type of AI-generated content could also carry a fifth-degree felony charge if it is used to deceive others.

The legislation would also require all AI-generated products to have a watermark. Removal of the watermark could result in civil action from the attorney general or private citizens.

"I just wanna make sure that what we do really puts some teeth behind this," Blessing said.

The bill also mandates that online platforms take down AI-generated child or non-consensual pornography within 24 hours of being contacted by the attorney general. If they don't, they face a $1,000 fine for each day the content stays up.

The lawmaker acknowledges that his bill would be tough to enforce, a point echoed by Case Western Reserve University technology law professor Eric Chaffee.

"Restricting creation of certain types of images is certainly going to create First Amendment concerns," Chaffee said. "Even requiring the inclusion of a watermark potentially creates First Amendment concerns."

Some websites, like PornHub, already have systems to monitor for non-consensual and child abuse content; the company reported removing nearly 8,500 such videos and images in the first half of 2023.

"What will happen is when these images are created, a lot of them will end up being reported," the professor said. "But finding all of them is going to be extraordinarily difficult."

Self-reporting is likely to remain the most effective way of removing content, Chaffee added, since a $1,000 daily fine is a small sum for billion-dollar companies.

Still, Blessing is asking for some action.

"We're doing what we can to give you some recourse on this," Blessing said, addressing victims. "I just wish we could do more for you."

The bill is supported by Attorney General Dave Yost.

Full statement from Microsoft:

"We are committed to providing a safe and respectful experience for everyone. We’re continuing to investigate these images and have strengthened our existing safety systems to prevent our services from being used to help generate images like them."

The company provided the following as additional information:

"The Code of Conduct prohibits the use of our tools for the creation of adult or non-consensual intimate content, and any repeated attempts to produce content that goes against our policies may result in loss of access to the service.

"We have teams working on the development of guardrails and other safety systems in line with our responsible AI principles, including content filtering, operational monitoring and abuse detection to mitigate misuse of the system and help create a safer environment for users."

Follow WEWS statehouse reporter Morgan Trau on Twitter and Facebook.