Authorities face technological, legal hurdles in combating deepfake porn

By Kim Dong-young Posted : August 29, 2024, 15:50 Updated : August 29, 2024, 15:50
[Photo illustration: Getty Images Bank]

SEOUL, August 29 (AJP) – Korean authorities are struggling to combat the rapid spread of deepfake pornography as detection and control technologies lag behind the artificial intelligence (AI) tools used to create such content, experts say.

The government has recently pledged a tougher crackdown and heavier sentencing in response to the proliferation of sexually explicit deepfake images targeting women and girls.

However, law enforcement is ill-equipped to handle the surge in digital sexual crimes due to insufficient technologies to track and remove deepfake pornography distributed through encrypted messaging platforms like Telegram and the dark web.

" Technologically, it's not easy to track and address deepfake pornography circulating on the dark web due to its closed nature," a security industry expert said.

"Even if you crawl for images of specific individuals, it's difficult to find them because storage locations aren't fixed."

According to police, a total of 297 deepfake sex crime cases were reported in the first seven months of 2024, up from 180 for the entire year of 2023 and 160 in 2021. Offenders are said to have captured images of their targets from social media sites, including Instagram.

Telegram is a hotspot for such illegal content, with one chatroom reportedly having about 220,000 members who create and share deepfake images by doctoring photographs of women and girls.

The dark web is also widely used to circulate illegal AI-manipulated visuals. A recent analysis by global security firm NordVPN revealed that leaked explicit photos and videos on dark web forums receive an average of 1,850 comments.

Han Dong-hoon, leader of the ruling People Power Party (PPP), pointed out the lack of legal and regulatory frameworks to handle the surge in sexual crimes using new technologies.

"Despite efforts in the previous National Assembly to amend laws addressing AI technology misuse, including the AI Framework Act and the Special Act on Sexual Violence Crimes, the results were unsatisfactory," he said on Thursday during a meeting between the government and the party to discuss deepfake crimes.

"While humans may misuse deepfake technology, it is also humans who can prevent its abuse. We must address this issue within our legal and institutional frameworks," he added.

The Korea Communications Standards Commission, a media watchdog, is set to establish a 24-hour hotline for victims and a consultative body to communicate with social media firms, including Telegram, X (formerly Twitter), Facebook and others, to delete and block deepfake sexual content.

The Korean government is also considering legislation that would require generative AI to include watermarks on synthetic content, allowing for easier identification and removal of deepfakes.

However, experts warn that such measures may have limited impact on deliberately malicious content. "Criminals are unlikely to comply with watermark requirements," said Jung Sou-hwan, director of Soongsil University's AI Convergence Research Institute.

Researchers are exploring alternative strategies, including the development of technologies to detect unwatermarked illegal deepfakes and identify specific distribution patterns on messaging platforms to prevent further spread.
 