KAIST develops a deepfake detector, for real

Images showing detection results of digitally-manipulated photos by Kaicatch. [KAIST]

 
Researchers at KAIST have developed a tool to detect deepfakes: computer-manipulated images or videos realistic enough to pass as genuine to most viewers.
 
Dubbed Kaicatch, the software, which is powered by an artificial neural network, is the first commercial deepfake detection tool in Korea, according to the Daejeon-based research university. Until now, efforts to spot synthetic media had been limited to research environments.
 
Lee Heung-kyu, a computer science professor at KAIST. [KAIST]

 
A team led by Lee Heung-kyu, a computer science professor, said that Kaicatch's major feat lies in its ability to identify visual disinformation in still photos even when the detector has been given no information about the manipulation.
 
“A lot of studies have been carried out to determine the authenticity of photos,” Lee said in a phone interview with the Korea JoongAng Daily.  
 
“But many of the successful results ended up not being replicable in the real world, because the studies focused on algorithms designed to detect a certain form of alteration, such as image blurring, morphing or copying and pasting,” he said. “So, past research achieved a high accuracy rate when given the task of verifying whether a specific alteration was used. But in cases without those conditions, their accuracy is significantly compromised.”
 
Kaicatch combines different algorithms to better spot synthetic photos, even when the detector is in the dark about which technique was used.
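The article does not describe Kaicatch's internals, but the approach Lee outlines, running several specialized detectors together so that no prior knowledge of the alteration type is needed, can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the EnsembleDetector class, the detector names and the max-score fusion rule are hypothetical, not details from KAIST.

```python
# Minimal sketch (illustrative only, not KAIST's code): combine several
# single-purpose manipulation detectors so an image can be flagged even
# when the type of alteration is unknown in advance.
from dataclasses import dataclass
from typing import Callable, Dict

import numpy as np

# Each detector maps an image (H x W x 3 array) to a manipulation score in [0, 1].
Detector = Callable[[np.ndarray], float]


@dataclass
class EnsembleDetector:
    # e.g. {"blur": blur_net, "morph": morph_net, "copy_move": splice_net}
    detectors: Dict[str, Detector]
    threshold: float = 0.5

    def score(self, image: np.ndarray) -> Dict[str, float]:
        """Run every specialized detector and return its score by name."""
        return {name: det(image) for name, det in self.detectors.items()}

    def is_manipulated(self, image: np.ndarray) -> bool:
        """Flag the image if any single detector is confident enough,
        so no prior knowledge of the alteration type is required."""
        return max(self.score(image).values()) >= self.threshold
```

In a real system each entry in `detectors` would be a trained neural network; the sketch only shows the fusion step, which is what lets a combined tool work without being told which manipulation to look for.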
 
The program is designed to detect digitally-altered photos, not videos, although the professor said that future studies will cover videos.
 
“We need to make progress on the functions of existing AI [artificial intelligence] engines designed to spot digitally-manipulated videos and make them work with the Kaicatch software,” he said, adding that it will “take some time” to achieve the goals.
 
The controversy surrounding deepfake images and videos received considerable attention in Korea after the “Nth room” chat room scandal last year.  
 
A group of criminals was convicted of producing deepfake pornography featuring celebrities' faces and distributing it through an illegal website from 2018. Over 100 celebrities are said to have been featured on the website, which hosted around 3,000 videos.
 
Against this backdrop, high-tech start-ups also have joined the fight to counter visual disinformation.  
 
MoneyBrain, a Seoul-based start-up, has joined a government-led project to build the data needed to develop a deepfake video detection tool.
 
For the six-month project, which wraps up at the end of this month, the government has committed 2.6 billion won ($2.3 million).
 
BY PARK EUN-JEE   [park.eunjee@joongang.co.kr]
 