San Francisco's chief deputy city attorney, Yvonne Meré, has filed a lawsuit against 16 websites that use AI to create fake pornography by "undressing" women and girls in photos without their consent.
This unprecedented legal action aims to shut down popular sites that have fueled a dangerous trend among teenagers: using nudification apps to fabricate nude images of their female friends and classmates.
According to The New York Times, the 16 targeted sites were visited 200 million times in the first six months of this year. The entities behind the websites are located in California, New Mexico, England, and Estonia. Representatives of the sites either could not be reached or did not respond to requests for comment.
One site promotes its service by asking, "Have someone to undress?" Another says, "Imagine wasting time taking her out on dates," suggesting that registered users instead use the site "to get her nudes." Some sites offer initial images for free but charge for more, accepting cryptocurrency or credit card payments.
The deepfake technology used by the sites relies on AI models trained on pornography and images depicting child abuse, allowing them to generate authentic-looking nude photos from ordinary clothed images.
City Attorney David Chiu stressed how difficult it is to identify those behind the images. Once the photos begin to circulate, victims often cannot determine which website produced them, making it nearly impossible to pursue legal action.
"The article was flying around our office, and we were like, 'What can we do about this?'" Chiu recalled in an interview. "No one was holding these companies accountable."
Sara Eisenberg, head of the legal unit focused on major social problems, said the problem cannot be solved simply by educating teenagers about the safe use of technology. Any photo can be manipulated without its subject's consent, making traditional safeguards ineffective.
"You can be as internet and social media savvy as you want, and you can teach your kids all the ways to protect themselves online, but nothing can protect them from other people using these sites to do bad and dangerous things," Eisenberg said.
The lawsuit seeks an injunction to shut down the websites and bar their operators from creating fake pornography in the future. It also seeks civil penalties and attorney fees.
The lawsuit alleges that the sites violate state and federal revenge pornography laws, child pornography laws, and California's Unfair Competition Law, which prohibits unlawful and unfair business practices.
Meré took action after reading a New York Times article about the harmful effects of the fake images. She immediately contacted Eisenberg, and together they enlisted Chiu's support in filing the lawsuit.
"Technology has been used to create deepfake nudes of everyone from Taylor Swift to ordinary high school girls, with very real repercussions," said Chiu. "These images are sometimes used to extort victims for money or to humiliate and harass them."
Experts warn that deepfake pornography poses severe risks to victims, harming their mental health, reputations, and college and job prospects. The problem is compounded by the difficulty of tracing the images' origins, which makes legal efforts challenging.
"This strategy can be seen as a Whac-a-Mole approach, as more sites can emerge," Chiu said. However, his office plans to amend the lawsuit to add more sites as they are discovered, aiming for broader enforcement as the problem evolves.
San Francisco, a hub of the artificial intelligence industry and home to major companies such as OpenAI and Anthropic, is a fitting venue for this legal challenge. Chiu acknowledged the industry's positive contributions but said fake pornography reveals a "dark side" that must be addressed.
“Keeping up with a rapidly changing industry as a government lawyer is difficult,” Chiu said. “But that doesn’t mean we shouldn’t try.”
The lawsuit marks a significant effort to combat the abuse of AI technology in the creation of harmful content and to hold accountable those who perpetuate such practices.