World-first tool reduces harmful engagement with AI-generated explicit images
- World's first research-backed intervention reduces harmful engagement with AI-generated explicit imagery.
- As the Grok AI-undressing controversy grows, researchers say user education must complement regulation and legislation.
- Study links belief in deepfake pornography myths to higher risk of engagement with non-consensual AI imagery.
A new evidence-based online educational tool aims to curb the watching, sharing, and creation of AI-generated explicit imagery.
Developed by researchers at University College Cork (UCC), the free 10-minute intervention Deepfakes/Real Harms is designed to reduce users' willingness to engage with harmful uses of deepfake technology, including non-consensual explicit content.
In the wake of the ongoing Grok AI-undressing controversy, pressure is mounting on platforms, regulators, and lawmakers to confront the rapid spread of these tools. UCC researchers say educating internet users to discourage engagement with AI-generated sexual exploitation must also be a central part of the response.
False myths drive participation in non-consensual AI imagery
UCC researchers found that people's engagement with non-consensual synthetic intimate imagery, often (and mistakenly) referred to as "deepfake pornography", is associated with belief in six myths about deepfakes. These include the belief that the images are only harmful if viewers think they are real, or that public figures are legitimate targets for this kind of abuse.
The researchers found that completing the free, online 10-minute intervention, which encourages reflection and empathy with victims of AI imagery abuse, significantly reduced belief in common deepfake myths and, crucially, lowered users鈥 intentions to engage with harmful uses of deepfake technology.
Using empathy to combat AI imagery abuse at its source
The intervention has been tested with more than two thousand international participants of varied ages, genders, and levels of digital literacy, with effects evident both immediately and at a follow-up weeks later.
The intervention tool, Deepfakes/Real Harms, is now freely available to download.
Lead researcher John Twomey, UCC School of Applied Psychology, said: "There is a tendency to anthropomorphise AI technology, blaming Grok for creating explicit images and even running headlines claiming Grok 'apologised' afterwards. But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source."
Dr Murphy, UCC School of Applied Psychology and research project Principal Investigator, said: "Referring to this material as 'deepfake pornography' is misleading. The word 'pornography' generally refers to an industry where participation is consensual. In these cases, there is no consent at all. What we are seeing is the creation and circulation of non-consensual synthetic intimate imagery, and that distinction matters because it captures the real and lasting harm experienced by victims of all ages around the world."
"This toolkit does not relieve platforms and regulators of their responsibilities in tackling this appalling abuse, but we believe it can be part of a multi-pronged approach. All of us (internet users, parents, teachers, friends and bystanders) can benefit from a more empathetic understanding of non-consensual synthetic imagery," Dr Murphy said.
, UCC School of Applied Psychology, said: "With this project, we are building on our previous work in the area of responsible software innovation. We propose a model of responsibility that empowers all stakeholders, from platforms to regulators to end users, to recognise their power and take all available action to minimise harms caused by emerging technologies."
Reducing intentions to engage in harmful deepfake behaviours
Feedback from those who have completed the intervention includes:
"I think it was very useful to show that deepfakes can be damaging even if people know they aren't real. Too much of the deepfake discourse focuses on people being unable to tell them apart from reality when that's only part of the issue."
"What stood out as good about this is that it didn't come across as judgmental or preachy; it was more like a pause button. It gave space to think about the human side of the issue without making anyone feel attacked. … Instead of just pointing fingers, it gave you a chance to reflect and maybe even empathize a little, which can make the message stick longer than just being told, 'Don't do this'."
Deepfakes/Real Harms is launched as part of UCC Futures - Artificial Intelligence and Data Analytics.
, Director of UCC Futures - Artificial Intelligence and Data Analytics and member of the Irish Government's Artificial Intelligence Advisory Council, said: "As we work towards a future of living responsibly with artificial intelligence, there is an urgent need to improve AI literacy across society. As my colleagues at UCC have demonstrated with this project, this approach can reduce abuse perpetration and combat the stigma faced by victims."
This project is funded by , the Research Ireland Centre for Software.