Surviving the AI technology war

Lee Kyung-bae
The author is vice chairman of the Korea CIO Forum.

The launch of ChatGPT was a historic event that demonstrated the formidable power of artificial intelligence (AI). Since then, we have witnessed numerous cases in which the service brings tremendous productivity gains and exceeds human capabilities.

On the other hand, some worry that the abuse of AI, big data and robots will destroy the order of the human world.

Sam Altman, the CEO of OpenAI, was recently removed by the company’s board of directors, only to be reinstated five days later. Co-founded in 2015 by Altman, Tesla CEO Elon Musk and others, OpenAI was launched as a nonprofit with a mission “to ensure that artificial general intelligence benefits all of humanity.” Initially, the organization aimed to develop safe and stable AI that would help people rather than to seek profit.

But friction arose this year as the “boomers” close to Altman rushed to develop a follow-up to the successful ChatGPT. They clashed with the “doomers” on the board, who wanted to slow development for fear of the disasters that could follow if AI were to surpass human control.

Altman returned to the company with the support of over 95 percent of its employees and formed a new board. He is now poised to play a central role in the AI ecosystem through the GPT Store, a marketplace for custom GPT models, and to accelerate the commercialization and technological advancement of AI.

An AI Safety Summit took place in Britain on Nov. 1 and 2, with 28 countries participating, amid growing awareness that AI is a double-edged sword that could bring both blessings and disasters. Participants discussed the risks of generative AI, including “frontier AI,” and strategies to mitigate those risks through internationally coordinated action.

The summit adopted the Bletchley Declaration, which sets out guidelines and codes of conduct for the safe development and use of AI. At the recent U.S.-China summit, a ban on the use of AI in nuclear warheads and drones was discussed. But we need a more effective model than nonbinding declarations.

Britain and the European Union took the initiative to host the AI Safety Summit partly to keep in check the United States’ dominance of cutting-edge AI technologies. For a massive generative AI model like ChatGPT to succeed, it needs the best AI algorithms, vast amounts of data, huge AI cloud centers and more.

In the United States, the top brains and capital are concentrated in Silicon Valley, and mega-platforms such as Google, Apple, Amazon and Facebook collect and analyze vast amounts of data from around the world in real time. AI cloud centers have been established in collaboration with semiconductor manufacturers, including graphics chip giant Nvidia.

China is following the United States closely. Although it has strengths in AI image processing and has accumulated vast amounts of data from its 1.4 billion residents through mega-platforms such as Baidu, Alibaba and Tencent, it still lags behind its rival in generative AI. In particular, the United States restricts China’s access to quantum computing, AI and semiconductors through export and investment controls.

As countries around the world engage in a war for AI supremacy, and the era of global generative AI without language barriers arrives, the strategies of Korean platform companies and the government are extremely crucial. The key question is whether Korean companies can succeed in developing foundation models like ChatGPT and compete with the world.

With the government’s announcement of a series of strategies to foster the AI industry, Korea has risen to No. 12 in the world in overall AI standing. However, the country ranks No. 20 in AI talent, revealing a major weakness in a high-tech field where only the best survive.

A narrow-minded mood is already evident when it comes to utilization: the government prohibits connecting internal systems to the GPT platform. Amid the war for AI supremacy, the government’s policy direction must be clear. We must remember that the national AI training data project, into which billions of dollars have been poured over the years, has proven almost powerless in the face of generative AI.

As seen in the recent glitches in the electronic government system, Korea lacks a clear control and management authority in the field of advanced technology. Concerns are growing that, due to the government’s weak policies and inadequate investment, Korea will fall behind in the global technology competition and become a battleground for global corporations.

Translation by the Korea JoongAng Daily staff.