Translation. Region: Russian Federation
Source: United Nations
February 4, 2026 – Human rights
Approximately one in 25 children has fallen victim to criminals creating sexualized deepfakes, according to a joint study conducted in 11 countries by researchers from the United Nations Children's Fund (UNICEF), the international organization ECPAT, and Interpol.
At least 1.2 million children reported having their images manipulated and transformed into sexually explicit deepfakes. To highlight the scale of the problem, UNICEF compared it to having a child in every classroom who had been a victim of such a crime.
Deepfakes – photos, videos, or audio files created or manipulated using artificial intelligence (AI) to look or sound like the real thing – are increasingly being used to create sexualized content featuring children.
Children themselves are well aware of this risk. In some countries surveyed, nearly two-thirds of children said they were concerned that AI could be used to create fake sexual images or videos.
UNICEF emphasizes that creating sexualized deepfakes of children amounts to child abuse. These fake videos and photographs normalize the sexual exploitation of children and fuel demand for such abusive content.
“UNICEF welcomes the efforts of AI developers who are implementing safe-by-default approaches and robust safeguards to prevent misuse of their systems,” the fund said.
However, many AI models are developed without adequate safety mechanisms. The risks are exacerbated when generative AI tools are embedded directly into social media platforms, where manipulated images can spread quickly.
UNICEF calls on all countries to criminalize the creation, acquisition, possession, and distribution of AI-generated content containing sexualized images of children.
AI developers must build safety into their systems from the design stage. All digital companies must prevent the dissemination of AI-generated child sexual abuse images in the first place, rather than simply removing them after the abuse has occurred, UNICEF emphasizes.
