For research purposes, I googled “AI nude images”. Within 0.29 seconds, the search returned about 1,480,000,000 results. Most entries on the first page link directly to specific apps for creating deepfake nudes or recommend the “best” ones.
This is alarming: the number of minors using these apps has been rising since last year. The victims are usually female classmates. Once created, fake sexually explicit pictures begin to circulate, and school life becomes unbearable for the exploited girls.
From this post, you’ll find out:
- ▪️ Why we should be concerned about deepfake apps used by students
- ▪️ How they impact children’s safety
- ▪️ How our solution helps keep these apps out of your school
How Serious is the Problem?
In December 2023, two middle school students were arrested in Miami. They had created fake nude images of their younger female classmates using a deepfake app. It was the first arrest and criminal charge for AI-generated nudes in the US.
In February 2024, eighth-grade students at a school in Beverly Hills were accused of using an AI tool to generate fake explicit images of teenage girls in their class and sharing them. The five students most involved were expelled.
A series of similar incidents in US schools opened a discussion about the new risks of artificial intelligence in education. In 2023, more deepfake, sexually explicit videos were published online without consent than in all previous years.
Fake Porn Still Unpunished
Artificial intelligence tools became available at the click of a mouse only last year, and the law has not kept up with the rapidly developing technology. “Deepfake laws” punishing non-consensual, sexually explicit AI imagery have not been established yet. This creates a grey area for crimes related to fake porn, including fake child porn.
The DEFIANCE Act, introduced by Congresswoman Alexandria Ocasio-Cortez in March 2024, gives hope for a change in US legislation and justice for victims. Under this bipartisan act, victims of “digital forgery” (false images created from their likeness using AI) would gain the right to civil action.
“Deepfake pornography is a form of digital sexual violence. It violates victims’ consent, autonomy, and privacy. Victims face an increased risk of stalking, domestic abuse, loss of employment, damaged reputation, and emotional trauma,” said Omny Miranda Martone, CEO of the Sexual Violence Prevention Association (SVPA).
Children, Victims of Deepfake Apps
Deepfake nude apps join another AI-related challenge in schools: the integrity of student assignments. Both push us to reflect seriously on the ethics of AI technologies and their impact on children.
Sexual abuse with the help of various AI-based apps can happen to anyone. Victims have included ordinary teenagers as well as celebrities such as singer Taylor Swift and actress Jenna Ortega.
For example, a company manipulated and used pictures of the 16-year-old “Wednesday” star to promote their deepfake nude app on Facebook and Instagram. After the press intervention, Meta suspended the ads. The app is still available online, although not in the official app stores.
[Embedded video: “Deep Fake Celebrity Impressions” – an example of a deepfake video using celebrity faces]
When it comes to judging minor perpetrators, we must remember that K12 students are still in the process of growing and learning. Many of them are probably unaware of all the consequences of generating and sharing fake pictures (or any kind of harmful content) based on the identities of real people.
Teachers and parents should learn about such incidents as soon as possible so they can discuss them with students. Better still, monitoring student devices and school networks in real time helps prevent and detect the use of these kinds of apps.
How to Prevent Deepfake Nudes at School?
At a high school in Illinois, more than twenty girls were abused by a classmate who created nudes of them using artificial intelligence. He shared the fake pictures via his school email address, yet the admins didn’t notice until another student reported the incident.
The good news is that when emails, Google Classroom, the Chrome browser, or Chromebooks in the school domain are used to disseminate harmful content, K12 admins can easily detect and stop it.
With the right tool, it’s entirely in their hands.
GAT Shield from GAT Labs can block access to specific sites and web apps and detect and block explicit keywords and images based on text.
First, add a new alert rule in GAT Shield to block specific words. When configuring it, fill out the Regex field with the forbidden keywords. Check out our Knowledge Base for an example regex that blocks AI deepfake nude sites in your school domain.
The admin can configure these rules to trigger real-time alert notifications, close the offending window, display a warning message on screen, and email any designated user whenever a student breaks one of them on a school device.
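To give a feel for how a keyword regex of this kind behaves, here is a minimal, self-contained Python sketch. The specific pattern and sample phrases below are illustrative assumptions for this post, not GAT Shield’s actual rule or keyword list (see our Knowledge Base for that):

```python
import re

# Hypothetical keyword pattern, for illustration only -- not the
# real regex used in a GAT Shield alert rule.
BLOCK_PATTERN = re.compile(
    r"(deepfake\s*nude|undress\s*(ai|app)|ai\s*nudify)",
    re.IGNORECASE,
)

def should_block(text: str) -> bool:
    """Return True if the text matches any forbidden keyword."""
    return BLOCK_PATTERN.search(text) is not None

print(should_block("Try this Undress AI app"))      # True
print(should_block("Science homework, chapter 3"))  # False
```

A rule triggered by such a match would then fire the configured actions (alert, close the tab, on-screen message, email). Case-insensitive matching and the `\s*` gaps matter here, since students often vary spacing and capitalization to dodge simple filters.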
Closing Thoughts
Every student has the right to feel safe and happy in the school environment. Fake nude photos have already destroyed the lives of hundreds of girls in US schools and left them with trauma.
While artificial intelligence offers many functionalities to support learning and growth, it can also allow exploitation and humiliation. Adults responsible for underage students should do everything in their power to avoid this nasty practice.
Highly effective classroom monitoring tools, such as GAT Shield, help teachers and parents detect this problem early and prevent it from spreading among children.
Audit. Manage. Protect.
Discover how Management & Security Services can help you with deeper insight and on-call, personalized assistance.