Korea's election watchdog introduces AI solution to detect deepfake videos during presidential race
Published: 27 Apr. 2025, 13:56
Updated: 28 Apr. 2025, 10:09
![An illustration depicting AI-generated deepfake images [GETTYIMAGESBANK]](https://koreajoongangdaily.joins.com/data/photo/2025/04/28/e4b25157-5a56-461b-96e8-fa8f00f17f17.jpg)
An illustration depicting AI-generated deepfake images [GETTYIMAGESBANK]
Ahead of the June 3 presidential election, the National Election Commission (NEC) has introduced a deepfake detection AI model developed by the National Forensic Service (NFS) to combat the spread of false and defamatory deepfake videos. The model, named “Aegis” after the shield wielded by Zeus in Greek mythology, will be used by the election watchdog to detect sophisticated AI-generated deepfake videos that are difficult to identify with the naked eye.
Aegis was jointly developed by the NFS and the Korea Electronics Technology Institute (KETI). The NFS has been conducting the research since April last year under a project titled "Self-evolving deepfake detection technology to prevent the social side effects of generative AI," which is organized by KETI and funded by the Institute of Information & Communications Technology Planning & Evaluation, according to industry sources on Sunday.
Aegis was developed in the second year of the research project. The Korean National Police University, KAIST, Cleon — an AI face and voice generation company — and film production company Wysiwyg Studios are also participating.
Aegis is optimized for detecting deepfakes. The term "deepfake" is a combination of "deep learning" and "fake," referring to AI technology that creates fake videos. Under the revised Public Official Election Act that took effect in January 2024, it is illegal to produce or post AI-generated deepfake videos for electioneering purposes within 90 days of an election.
Nevertheless, deepfake videos targeting major presidential candidates have been spreading rampantly, especially on social media. Examples include Democratic Party candidate Lee Jae-myung appearing in prison clothing, and People Power Party candidate Han Dong-hoon removing a wig — videos that are largely slanderous and defamatory.
![Democratic Party (DP) presidential primary candidate Lee Jae-myung's campaign officials on April 16 file a report to the Seoul Metropolitan Police Agency against YouTubers for creating and spreading deepfake videos containing false content. [YONHAP]](https://koreajoongangdaily.joins.com/data/photo/2025/04/28/3c574031-c290-4801-bed4-52175d3b00b5.jpg)
Democratic Party (DP) presidential primary candidate Lee Jae-myung's campaign officials on April 16 file a report to the Seoul Metropolitan Police Agency against YouTubers for creating and spreading deepfake videos containing false content. [YONHAP]
Since April 9, the commission has been operating a “special task force on false and defamatory AI deepfakes,” employing a three-stage detection process.
In the first step, called “visual and auditory detection,” human monitors examine videos by eye and ear. If a video cannot be conclusively judged that way, an AI model such as Aegis is applied in the second step, “program detection.” In the third step, AI experts make the final judgment.
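The workflow amounts to a simple triage: each stage escalates only what the previous one could not settle. The Python sketch below is purely illustrative, with stub functions and a hypothetical confidence threshold standing in for each stage; it is not the NEC's or the NFS's actual system.

```python
# Hypothetical sketch of the three-stage triage described above.
# All function names, stubs and thresholds are illustrative only.
from enum import Enum, auto
import random

class Verdict(Enum):
    AUTHENTIC = auto()
    DEEPFAKE = auto()
    UNDETERMINED = auto()

def human_screening(video_path: str) -> Verdict:
    # Stage 1: "visual and auditory detection" by human monitors.
    # Stubbed: assume the clip cannot be judged by eye and ear alone.
    return Verdict.UNDETERMINED

def model_score(video_path: str) -> float:
    # Stage 2: "program detection" by a model such as Aegis, returning a
    # probability that the clip is AI-generated. Stubbed with a random value.
    return random.random()

def expert_review(video_path: str) -> Verdict:
    # Stage 3: final judgment by AI forensics experts. Stubbed.
    return Verdict.DEEPFAKE

def triage(video_path: str) -> Verdict:
    verdict = human_screening(video_path)
    if verdict is not Verdict.UNDETERMINED:
        return verdict
    score = model_score(video_path)
    if score >= 0.9:           # confident enough to flag automatically
        return Verdict.DEEPFAKE
    if score <= 0.1:           # confident enough to clear automatically
        return Verdict.AUTHENTIC
    return expert_review(video_path)

print(triage("suspect_clip.mp4"))
```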
“We mainly use the NFS’s model Aegis for detection because of its high accuracy, but we also cross-verify with other models,” an NEC official said.
Aegis is designed to detect output generated through the diffusion method, one of the latest approaches to creating AI images. There are two main AI image generation methods: generative adversarial networks (GANs) and diffusion models. GANs, the earlier method, pit two neural networks against each other, with one generating images and the other judging whether they look real. This allows images to be produced quickly, but the results often contain distortions that make them easier to detect.
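The adversarial setup can be shown in miniature. The PyTorch sketch below trains a toy generator against a toy discriminator on random placeholder data; the dimensions, data and training loop are illustrative only and have no connection to Aegis or to any real deepfake generator.

```python
# Toy GAN: a generator and a discriminator trained against each other.
import torch
import torch.nn as nn

latent_dim, image_dim, batch = 16, 64, 32   # toy sizes

generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, image_dim), nn.Tanh())
discriminator = nn.Sequential(
    nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid())

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(batch, image_dim) * 2 - 1   # stand-in for real images

for step in range(100):
    # Discriminator step: learn to label real images 1 and generated images 0.
    fake_images = generator(torch.randn(batch, latent_dim)).detach()
    d_loss = loss_fn(discriminator(real_images), torch.ones(batch, 1)) + \
             loss_fn(discriminator(fake_images), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: learn to produce images the discriminator accepts as real.
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, latent_dim))),
                     torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```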
![An illustration depicting AI-generated deepfake images [GETTYIMAGESBANK]](https://koreajoongangdaily.joins.com/data/photo/2025/04/28/fe67ab62-a116-47cd-bff0-994a55118054.jpg)
An illustration depicting AI-generated deepfake images [GETTYIMAGESBANK]
In contrast, recent image generators like Midjourney and DALL-E from OpenAI use the diffusion method, which adds artificial noise to images and then gradually removes it to create high-quality, high-resolution results. Because of this, images generated by diffusion are much harder to distinguish from real ones using conventional detection techniques.
Aegis has been enhanced to detect even deepfake videos made by the diffusion method.
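The diffusion idea can also be sketched briefly: noise is added to an image step by step, and generation runs the process in reverse. In the NumPy sketch below, the noise-prediction network that a real diffusion model would learn is replaced by a placeholder that returns zeros, so the loop only illustrates the mechanics, not actual image generation.

```python
# Toy DDPM-style diffusion loop; the learned denoiser is a placeholder.
import numpy as np

rng = np.random.default_rng(0)
T = 1000                                    # number of noising steps
betas = np.linspace(1e-4, 0.02, T)          # noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def add_noise(x0, t):
    """Forward process: mix the clean image with Gaussian noise at step t."""
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

def predict_noise(xt, t):
    """Placeholder for the trained network that predicts the added noise."""
    return np.zeros_like(xt)

clean = np.ones((8, 8))                     # toy 8x8 "image"
noisy = add_noise(clean, T // 2)            # forward process at the halfway step

# Reverse process: start from pure noise and gradually remove predicted noise.
x = rng.standard_normal((8, 8))
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    x = (x - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(1.0 - betas[t])
    if t > 0:
        x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
```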
“We are equipping Aegis not only to detect deepfakes and deep voices, but also to analyze whether the generated content was created with malicious intent,” said an NFS official.
Translated from the JoongAng Ilbo using generative AI and edited by Korea JoongAng Daily staff.
BY EO HWAN-HEE [[email protected]]