Regulating chatbots before it’s too late


Hasok Chang
The author is a chair professor of history and philosophy of science at the University of Cambridge.

Artificial intelligence (AI) is creating another round of global hype. In 2016, it was AlphaGo beating Go masters, starting with Lee Se-dol of Korea. Today, the awe comes from AI chatbots that use natural language processing to answer questions and engage in human-like conversations.

We have so far relied on search engines, typing in keywords to look up related articles, images and other materials on the internet. A chatbot can do much more, responding to questions precisely the way a human knowledgeable on the topic would. As the technology evolves, it may one day become difficult for humans to tell whether they are conversing with a human or a computer over the internet.

U.S. start-up OpenAI, co-founded by Elon Musk and backed by Microsoft, released ChatGPT in November last year, instantly stunning users who tried it. Unlike the chatbots common in mobile consumer services, which cannot handle anything beyond simple questions, ChatGPT can answer almost any question eloquently.

The chatbot digs into the sea of data on the internet to instantly produce a coherent summary at a user’s request. Powered by a large language model, it can respond to questions precisely and fluently, and in different styles to suit a given literary genre. It can even write poetry and lyrics. Poetry and songwriting have long been confined to the realm of human creativity, but now machines are taking a crack at them. Although its literary ability is limited for now, how far it can go cannot be fathomed.

Those who have tested ChatGPT in Korea seem less impressed, possibly because of the language barrier. Since the AI is trained mostly on English-language data, it faces limitations with the Korean language. Compared to English, Korean materials on the internet are limited in both quantity and quality, and translating Korean questions into English remains challenging for an English-trained AI. But this, too, could change.

Google has reportedly developed a chatbot akin to ChatGPT and is preparing to introduce it. AI-powered chatbots will be the next big thing that companies and researchers compete over.
The evolution of chatbots should not be greeted with unqualified enthusiasm. Educators in the English-speaking world already worry about chatbots doing students’ homework, now that a chatbot can produce an essay from just a topic and a few questions. Students will no longer have to plagiarize in the traditional way. Colleagues of mine tested ChatGPT and found it can produce reports that would score at least a B. With further development, chatbots will most likely outperform most students.

But a bigger problem is that chatbots can be abused by bad actors to spread false information at will. It takes enormous work and time for humans to spread fake news and conspiracy theories; with chatbots, it can be done far faster and in far greater volume. False information has already become a threat to Western societies, and reinforcing it with an army of chatbots would make things worse. Chaos could result if chatbots indistinguishable from humans were mobilized for harmful missions.

Before things get out of control, there must be a set of regulations on developing, distributing and using high-performance chatbots. Some would argue that regulations running against the global trend are undesirable or unfeasible. But in many areas, including science and technology, regulations are already in place.

For instance, toxic chemicals such as pesticides and heavy metals are heavily regulated. Biological weapons are banned by international treaty, and countries that violate international law face strong sanctions.

Even for a good cause, medical experiments on human subjects are strictly restricted, and tests on animals must follow strict rules. The production and operation of aircraft are meticulously supervised to prevent accidents. Why, then, are we so lenient with computers and artificial intelligence? Our eagerness to see how far the technology can go could exact a great toll and produce serious side effects.

Translation by the Korea JoongAng Daily staff.