Build an international body to stop deepfakes


Hur Jung-yeon
The author is a reporter at the JoongAng Sunday.

As the damage from deepfake videos spreads rapidly, the Yoon Suk Yeol administration is struggling to devise countermeasures. Following the decision on Aug. 30 to punish not only the possession or purchase of deepfake clips linked to sexual crimes but also the viewing of them, the government announced a comprehensive plan on Sept. 1 to deal with increasing cyberattacks taking advantage of new technologies such as artificial intelligence (AI). This plan, jointly developed by 14 government ministries, includes augmenting self-regulation by portal and platform operators.

Korea University Emeritus Professor Lim Jong-in, 68, says, “Excessive regulations which hinder AI innovation are not the only solution,” stressing that it can be more effective “to push for autonomous regulation and active countermeasures at the same time.” Prof. Lim, who majored in cryptography, is one of the pioneering cybersecurity experts in Korea. He established the Graduate School of Information Security at Korea University in 2000 and served as its dean for 15 years. After serving as the special advisor to the president for national security in 2015 during the Park Geun-hye administration, Lim was appointed as the special advisor to President Yoon Suk Yeol for cybersecurity in January 2024. He has since been dedicated to preparing government-level measures to tackle deepening cybersecurity challenges. The JoongAng Sunday met him at the presidential office in Yongsan District to listen to his views on the threats from deepfakes and AI.

Q. Could deepfake crimes be uprooted by self-regulation alone?
A. Deepfakes have gone beyond the scope of technical control. Due to the nature of cyberspace, people creating deepfakes can easily hide their identity. The best response now is to detect an intrusion quickly and respond promptly to prevent the damage from spreading further. The National Cybersecurity Basic Plan the Yoon administration recently announced also underscores that point. Deepfake crimes are no different. It’s difficult to reduce damage by simply regulating platforms. We must first ask platforms to regulate themselves voluntarily, but at the same time we must clearly define illegal content such as child sexual exploitation material (CSEM) to respond to these new crimes more efficiently.

Q. What do you think about Telegram recently expressing willingness to help the Korean government?
A. The 25 digital sex crime videos the Korea Communications Standards Commission recently asked Telegram to delete were likely uploaded to official channels. But this may be just the tip of the iceberg, considering the countless private chat rooms that are far more active than their official counterparts. Surprisingly, Telegram complied with our government’s request to delete deepfake materials only when they violated the Public Official Election Act. That’s because our domestic laws on deepfakes apply only to the Election Act. In that case, the materials were deleted within three days. That’s why related laws should be enacted as soon as possible so the deletion of deepfakes can be demanded from portals and platforms like Telegram.

California, dubbed “the mecca of Big Tech,” is also pushing for a draconian deepfake regulation law. Under current law, producing deepfake-based sexual exploitation videos is not illegal even when they target minors. That’s because under the U.S. Constitution’s First Amendment — which guarantees freedom of expression — prosecuting and punishing offenders is impossible if the person in the deepfake is not a real person. But the bill passed by the California State Assembly in August stipulates that deepfake CSEM be penalized even if it does not depict real people. The bill also sharply strengthened regulations on deepfakes in elections. The European Union (EU) and the United Kingdom are responding sternly to deepfake crimes by revising related laws one after another.

Q. How much damage is caused by deepfake crimes around the globe?
A. While sexually exploitative materials are a big problem for Korea, financial fraud caused by deepfakes has reached serious levels in the United States. The damage from deepfakes in America amounted to $12 billion last year. There are pessimistic forecasts that losses from ransomware attacks will soon exceed $20 billion. Concerns are growing fast that deepfakes can be used for financial gain by impersonating high-ranking officials within an organization or by accessing a company’s security network to steal important information. But the deepfake regulation acts recently introduced in the United States and Europe need further refinement, as the scope of their application is too broad and vague.

Q. What is the current state of deepfake detection technology?
A. Earlier this year, when fake videos of former U.S. President Donald Trump being taken away by the police circulated, flaws such as the shape of the mouth not matching the words spoken or the presence of an extra finger stood out. But if you watch the latest videos created by generative AI and deepfake apps, you can hardly tell the difference. The capability of AI is said to double every six months. No matter how excellent the current detection technology is, it will likely be ineffective in just a few months.


Korea University emeritus professor Lim Jong-in, who was appointed as the special advisor to President Yoon Suk Yeol for cybersecurity in January, speaks about the growing deepfake challenges in an interview with the JoongAng Sunday at the presidential office, Sept. 3. [KIM HYUN-DONG]
 
According to the National Police Agency, 297 deepfake crimes were reported from January to July. Of the 178 suspects arrested, 131, or 73.6 percent, were teenagers. The police are preparing to respond more strictly by applying the Youth Protection Act when the target of a deepfake is a child or adolescent. In an alarming development, 53 percent of deepfake sexual exploitation victims worldwide were Korean, according to recent data released by a U.S. cybersecurity company.
 
Q. It is shocking that a significant number of deepfake victims turned out to be Korean celebrities.
A. As Korean idol stars attract great international attention thanks to the popularity of K-culture, they become easy targets for deepfakes. But the problem is that many of the offenders who created these videos are teenagers. Because AI has developed so rapidly, ethics education has been virtually nonexistent. As a result, teenagers are becoming criminals out of curiosity or as a joke. We must urgently educate them about the merits and demerits of AI before it’s too late.
 
On Aug. 8, the United Nations (UN) Ad Hoc Committee on Cybercrime unanimously adopted a draft UN Convention Against Cybercrime. The first cyber-related agreement at the UN level drew much attention as it included provisions mandating that each country establish criminal punishment regulations for online sexual crimes. The convention also requires UN member states to set standards for a unified legal system governing the collection of evidence in response to such crimes. The agreement is scheduled to be adopted at the UN General Assembly later this month.
 
Q. What is Korea’s role in building global cooperation to block deepfakes from spreading further?
A. Just as the international community launched the International Atomic Energy Agency (IAEA) to prevent nuclear proliferation in the 20th century, we need to create an international body with stronger authority and solidarity to stop the ominous spread of cybercrime in the 21st century. Cybercrime has no borders. Currently, Korea and the United States are conducting joint cyber exercises and actively exchanging experts. We plan to expand beyond Korea-U.S. cooperation to the global level. To this end, it is equally important to promptly enact laws related to cybersecurity and AI. AI will soon be the norm across all fields. If we are the first to present a law that can serve as a model for the rest of the world, we can seize the chance to advance as a leading country not only in hardware but also in software.
 
Q. Due to increasing deepfake cases, public concerns about the side effects of AI are also growing.
A. Although Korea ranks high among OECD member countries in AI utilization, positive public perception of AI is still lacking. AI itself is not a bad thing. According to recent data, the economic impact of successfully introducing AI across domestic industries could amount to 300 trillion won ($225.2 billion) annually. A national AI committee chaired by President Yoon Suk Yeol and attended by 10 government ministers and representatives from academia and industry will be launched later this month. The time has come for Korea to establish a balanced basic AI law that effectively encompasses both promotion and regulation of the AI industry before it’s too late.