[AI IN ACTION] The threat from AI is real. And we need to talk about it now.

[SHUTTERSTOCK]

It is no coincidence that the next global summit to discuss the possible dangers and risks of AI to humanity is coming to Korea.
 
Home to several data centers able to test generative AI, the Asian tech giant is eager to get ahead in the race to harness AI and automation for economic growth.
 
The government pledged to invest 1.2 trillion won ($910 million) next year to support corporate development and the use of AI, 15 percent more than this year’s budget outlays for the technology.


 
But the conference is perhaps more relevant because Korea stands on precarious ground regarding AI development: All of its draft regulations are built on the principle of developing AI first and regulating it later, except in cases where an AI product is deemed harmful to human life.
 
“This approach can be especially detrimental because AI programs are growing at a rate where complete human oversight is no longer possible,” said Choi Byeong-ho, an expert on AI at Korea University.
 
Choi spoke of the particular ability of AI nowadays to create and execute subsidiary tasks, which can lead to life-threatening decisions.
 
A classic example is an AI programmed to reduce carbon emissions inside an office space. As it creates sub-tasks to reach its goal, it analyzes what is producing the emissions in the room. Assessing that the largest source is the office workers themselves, the program decides to kill them.
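In programming terms, Choi's scenario is a textbook case of goal misspecification. The minimal Python sketch below is hypothetical (the emission sources, figures, and function names are invented for illustration), but it shows how an unconstrained objective "optimizes away" the very people it was meant to serve unless a human value is written in as an explicit constraint:

    # A minimal sketch of the goal misspecification Choi describes.
    # All names and numbers are hypothetical, invented for illustration.

    SOURCES = {"workers": 40.0, "hvac": 35.0, "lighting": 25.0}  # kg CO2 per day

    def naive_plan(sources):
        # Objective: eliminate emission sources, largest first. Nothing in
        # the objective marks any source as off-limits, so the "optimal"
        # plan removes the workers along with everything else.
        return sorted(sources, key=sources.get, reverse=True)

    def constrained_plan(sources, protected=frozenset({"workers"})):
        # Same objective, plus an explicit constraint encoding the human
        # value the naive objective leaves out.
        return [s for s in naive_plan(sources) if s not in protected]

    print(naive_plan(SOURCES))        # ['workers', 'hvac', 'lighting']
    print(constrained_plan(SOURCES))  # ['hvac', 'lighting']

The danger Choi points to is that as AI systems generate their own sub-tasks, no human may be in the loop to add that missing constraint before the plan is executed.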
 
“It starts to become scary when we cannot see how an AI thinks and makes decisions,” Choi said. “And that’s where we are today.”
 
Naver's employees managing its recently opened second data center in Sejong on Nov. 8. [YONHAP]

Leaders at the inaugural AI Safety Summit at Bletchley Park in Britain — where World War II codebreakers once racked their brains to save humanity — agreed on the need to identify AI's existential risks and coordinate policies.
 
Several AI experts the Korea JoongAng Daily spoke with stressed the importance of such organic conversations on the future of AI, not only among world leaders and corporate executives but also among laypeople.
 
“People should be asking questions on what type of a future with AI they want,” Choi said. “Because that future is coming regardless.”
 
Regulations vs. development
The global corporate competition over AI is at the core of Korea's lax regulations regarding the technology.
 
“The government is trying to focus on fostering the AI industry and the related ecosystem first,” lawyers at the Seoul-based firm Shin & Kim said in a statement in June. “There is a general fear of falling behind in competition with the U.S. and China, that so-called ‘AI sovereignty’ might be threatened if laws and systems are put in place first.”
 
Hence, Korea’s corporate executives have pushed for lenient rules on AI, and the government has largely complied.
 
Naver, Korea’s largest portal website, launched its second data center in Sejong this month, filled with AI supercomputers able to store and churn through 65 exabytes of data.
 
With it, Korea could become one of the most advanced markets for large language models, or LLMs, following the United States and China, though Korea’s datasets are primarily in Korean and may not be globally relevant.
 
Naver’s executives have pleaded for prioritizing development over regulation of AI.
 
Lawmakers have listened. All of the AI regulation bills drafted and submitted for a vote at the National Assembly follow this principle.

A robot in operation at Naver's second data center in Sejong on Nov. 8. [YONHAP]

 
The latest one, drafted by former tech entrepreneur and presidential candidate Ahn Cheol-soo, for instance, prohibits the development of any artificial intelligence product that poses a clear threat to human peace, dignity, democracy, or fundamental rights such as freedom and equality.
 
But where this is not the case, it calls for prioritizing development over regulation.
 
This can include AI products that make decisions about public resources such as energy, drinking water, and health and medicine, as well as nuclear power plant operations and facial recognition for criminal investigations.
 
“If something goes wrong there, the costs to human well-being will be on another scale, possibly, an irrecoverable scale,” Choi said.
 
Leaders of corporate giants, like OpenAI CEO Sam Altman, have been frank in expressing fears about what they might be creating.

Leaders pose for a group photo on the second day of the UK Artificial Intelligence Safety Summit, at Bletchley Park, in Bletchley, England, on Nov. 2. [AP/YONHAP]

 
The European Union has taken the most prescriptive and detailed regulatory approach to AI.
 
Its AI Act places AI products on a scale according to their risk to human life and stipulates corresponding regulations for each tier of at-risk AI.
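The tiered logic can be sketched as a simple data structure. The four tiers below mirror the Act's published risk framework, but the example systems and their classifications are hypothetical, chosen only to illustrate the structure:

    # A rough sketch of the EU AI Act's risk-tier idea. The tiers follow
    # the Act's framework; the example systems are illustrative only.
    from enum import Enum

    class Risk(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "conformity assessment required before market entry"
        LIMITED = "transparency obligations, such as disclosing that it is AI"
        MINIMAL = "no additional obligations"

    # Hypothetical classifications, for illustration only.
    EXAMPLES = {
        "social scoring of citizens": Risk.UNACCEPTABLE,
        "AI-assisted medical triage": Risk.HIGH,
        "customer-service chatbot": Risk.LIMITED,
        "spam filter": Risk.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} -> {tier.value}")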
 
U.S. President Joe Biden recently issued an executive order on AI, the strongest of its kind, mandating that companies share “their safety test results” with the U.S. government.
 
Can AI behave ethically?
Even if world leaders could agree on which types of AI products pose existential risks to humanity, designing AI to behave as ethically as humans — or better — would pose another challenge.
 
“The world is already biased, which means that AI is being programmed by humans who all have their own biases,” said Lim Yun-jin, professor of AI engineering at Sookmyung Women’s University. “There is no standard yet on how to program AI to be unbiased.”
 
For instance, a self-driving vehicle carrying two passengers may have to make a split-second decision when it anticipates an imminent collision with a vehicle carrying three passengers.
 
“If it hits the brake, it may save the three passengers in the other car, but it could result in the death of the two passengers it is carrying,” said Lim. “How do we program AI to make the right decision in this case? What is the right decision in such a case anyway? These are tough questions that we need to ask when developing AI.”
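Lim's dilemma can be stated as code, which makes plain that whichever rule the programmer writes, an ethical judgment is baked into the software before the car ever leaves the factory. The two policy functions below are hypothetical sketches, not real autonomous-driving logic:

    # Two hypothetical decision policies for Lim's scenario. Neither is
    # real autonomous-driving code; each hard-codes a different ethics.

    def minimize_total_deaths(own, other):
        # Utilitarian rule: brake (risking the car's own passengers)
        # whenever that saves the larger group in the other vehicle.
        return "brake" if other > own else "do not brake"

    def protect_own_passengers(own, other):
        # Self-protective rule: never risk the vehicle's own passengers,
        # regardless of how many people are in the other car.
        return "do not brake"

    # Lim's scenario: two passengers on board, three in the oncoming car.
    print(minimize_total_deaths(2, 3))   # brake
    print(protect_own_passengers(2, 3))  # do not brake

Both functions are trivial to write; deciding which one a society should mandate is the hard part Lim is pointing to.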
 
Cultural differences can also yield different ethical standards on AI, which may cause international disputes.
 
“Something that is not offensive in a culture can be very offensive in another culture,” said Choi. “Who will get to define what is discriminatory in how AI thinks? These all need society-wide conversations and consensus.
 
“We need more time for this, but the speed of development of AI is not giving us that, unfortunately,” he added.
 

BY ESTHER CHUNG [chung.juhee@joongang.co.kr]