
Police Use 14-Year-Old Girl Created by AI to Entice Sexual Offenders

Is this truly the best approach to child protection?
According to a complaint filed by the state, law enforcement officials in New Mexico used an AI-generated picture of a fictitious teenage girl to lure pedophiles.


Snapchat is the target of a lawsuit brought last week in New Mexico, which alleges that the social media platform has not done enough to “protect children from sextortion, sexual exploitation, and harm.” The complaint, which Ars Technica brought to light, states that the state’s Department of Justice employees set up a “decoy Snapchat account for a 14-year-old named Heather” as part of the police’s “undercover investigation.”

According to the lawsuit, the police, using the pseudonym “Heather,” “found and exchanged messages” with accounts belonging to blatant pedophiles, such as “child.rape” and “pedo_lover10.” As Ars points out, law enforcement officials conducting comparable investigations in the past would use pictures of younger-looking adult women, often police officers, to persuade child predators that they were interacting with an actual teenage girl. In this instance, however, police convinced the offenders that Heather was the real deal by showing them an AI-generated picture of a sexualized 14-year-old.


The officers said the strategy was successful: numerous accounts they interacted with, tricked by the AI-generated photo, tried to coerce “Heather” into sending them sexually explicit photos or child sexual abuse material (CSAM).

However, as Ars points out, the investigation’s success in exposing the unsettling dark truths of Snapchat’s algorithms raises new ethical concerns over the police’s use of AI. Given the rising tide of AI-generated CSAM, is it really necessary for the government to produce more of it, even if the imagery is fake?


Carrie Goldberg, a prominent attorney who has represented multiple victims of Harvey Weinstein’s sexual abuse, told Ars that “of course, it would be ethically concerning if the government were to create deepfake AI child sexual abuse material (CSAM) because those images are illegal, and we don’t want more CSAM in circulation.”

There are also ethical questions regarding the AI training datasets the cops’ efforts leaned on.


To generate fake images of children, an AI model has to be trained on photos of real kids. It’s hard to argue that a child can give full consent for their image to be used for AI training in the first place, a question made all the more serious when AI is being used to generate sexualized or otherwise harmful images of them.


Elsewhere, on a practical level, Goldberg warned Ars that using AI-made photos of fake kids could hand perpetrators useful ammunition for entrapment defenses.

Overall, the investigators’ deployment of AI puts law enforcement in a catch-22. On the one hand, the lawsuit claims, predators took the bait. On the other, if the intention is to safeguard real children, transforming photographs of real children into sexualized, AI-generated images of fictitious ones seems a long way from keeping them safe.