[Column] High-performance AI, a double-edged sword

Home > Opinion > Columns

print dictionary print

[Column] High-performance AI, a double-edged sword



Yoo Chang-dong
The author is a professor of electrical engineering at KAIST.

ChatGPT — the artificial intelligence (AI)-powered language model developed by OpenAI — has become phenomenal. It is being used by more than 25 million people a day around the world to help them summarize lengthy texts or even compose songs through human-like conversation. The recently released upgraded version — or Generative Pre-trained Transformer (GPT)-4 — can accept not just text inputs, but images as well.

The latest innovation has received complaints, as well as praise. In an open letter co-written a week after the GPT-4 was released last month, a group of AI experts, researchers, donors and industry voices called for a six-month “public and verifiable” pause to all activities of training and developing AI systems like the GPT-4 to assess their potential risks to the society.

Among the more than 1,100 signatories were Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak, not to mention a handful of notable scientists, like cognitive doyen Gary Marcus and engineers at Amazon, DeepMind, Google, Meta and Microsoft.

The letter comes amid a heated race involving big tech and start-ups to advance powerful AI models for commercial purposes, which triggered concerns over unregulated — and uncontrollable — AI models of novelty. Time-out has been called as a warning to the rush so as to buy time and explore the benefits and potential dangers before further advance.

AI researchers pointed out “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” urging governments to step in and institute a moratorium if their call is ignored. But other experts argued that the letter focuses too much on long-term risks, while racial and gender bias and other issues demand more urgent attention.

Microsoft co-founder Bill Gates told Reuters that the pause in the development of AI won’t “solve the challenges” ahead. “I really don’t understand who they’re saying could stop, and would every country in the world agree to stop, and why to stop,” he said. “Clearly, there’s huge benefits to these things. What we need to do is identify the tricky areas.”

Generative models and other AI programs still have many limitations before they become commonplace. Key issues are “hallucinations” and “jailbreaks.”

They can hallucinate with wrong or irrelevant information to make up plausible answers. For instance, if you ask an AI program which city on the Moon is a good place to live in, it can make up an answer that Lunar City, the capital of Moon, is the best place to live in.

Jailbreaking refers to activities that tech-savvy users employ for AI programs to produce outcomes undesired by the developers for ethical or security reasons. For instance, hackers can jailbreak AIs to crack the security code of a computer program.

Hallucination and jailbreaking stem from the limitations of the datasets or fallacies in machine learning process. The problems can occur during the processing of large texts by a large language model.

If AI systems are supplied for civilian or public use without addressing the problems, side effects may not be small. If AIs are used to spread malicious rumors and slander against a certain person, the psychological and material damage could be severe.

Some forces could train an AI program to make a certain argument in their favor and shun opposing views to harm healthy communications among members of society. 
 
[SHUTTERSTOCK]

The problems from AI’s limitations could pose even graver consequences. AI can automatically create programs and control sensors and equipment connected to computers and the internet. AI generating malicious program to control networks and equipment may not be a fictional scenario.

If evil individuals or groups abuse AI to cause paralysis to our communication and traffic systems, many lives could be in danger and the society may have to pay a dear price to fix the problem. The advances in AI models without clear ethical and safety standards can cause serious damage beyond the digital realm.

The super AI capabilities are a double-edged sword. They can contribute greatly to mankind’s advance if they are used by people with good conscience. But they can bring about a disaster if they are abused by people with malicious intentions. The spread of unregulated AI models can do more harm than good.

AI developers are required to thoroughly examine if their models are really ethical and safe before releasing them and commercializing them for profit. Society must not blindly encourage AI development. Instead, it must establish institutional mechanisms, including an education system, for people to use AI properly.

Translation by the Korea JoongAng Daily staff.
Log in to Twitter or Facebook account to connect
with the Korea JoongAng Daily
help-image Social comment?
s
lock icon

To write comments, please log in to one of the accounts.

Standards Board Policy (0/250자)