[Column] Transparent use is the key to AI


Oh Hye-yun
The author is a professor at the School of Computing, KAIST, and an adviser to the JoongAng Ilbo Reset Korea Campaign.

ChatGPT, the latest artificial intelligence (AI) trained to respond to user questions in natural language through a conversational interface, has created a splash since its release by OpenAI in November last year. Until now, awe over AI innovations, such as AlphaFold, a deep learning program devoted to predicting protein structures, and OpenAI's DALL-E 2, which creates images from text descriptions, had been confined to people in the field. Language-smart learning machines are now garnering attention from a broader audience, prompting Google to release its own version, Bard, and Korean companies to scramble to join the chatbot bandwagon.

Chatbots are hardly perfect, as they often give wrong or biased answers, raising serious concerns over the spread of false information and prejudice against minorities. Because its training data ends in 2021, ChatGPT answers that the South Korean president is Moon Jae-in. It also claims that 45035 is an even number. When similar language-processing algorithms are applied to broader systems, they can feed even more wrong information to users.

After Microsoft shut down its chatbot Tay over its racist and Nazi comments in 2016, ChatGPT was trained to stay neutral on sensitive issues. It avoids commenting on drugs and murder, and keeps its responses to issues like the death penalty simple and equivocal, offering only general remarks about social safety.

How ChatGPT came to build its defensive filtering has not been disclosed, and such safeguards cannot be perfectly built into a language-processing system. If the language-learning machine is used without a clear understanding of and awareness about the impact of language, or with abusive or malicious intent, the virtual tool could cause enormous social problems.

Microsoft's pledge of a $10 billion investment in OpenAI and its integration of ChatGPT into the Bing search engine, as well as into programs like Word, mean that AI will soon seep deeply into our lives. Korean society, however, is unprepared for the potential gush of adverse effects these intelligent chatbots can bring.

The European Union and many other governments have come up with guidelines on the ethical use of AI. Seoul also announced an ethics code on AI in November 2022. But these guidelines remain too ambiguous.

Government guidelines should be more detailed and responsive to technological advances. Companies developing and commercializing AI-powered tools must strengthen internal supervision and the training of programmers, and orient their culture and philosophy toward putting user and social interests first. Universities and other educational institutions also need to reinforce ethics education for the coming AI age.

Experts in science, law, philosophy and education, along with corporate and government officials, must debate and research more deeply, quickly and prudently the questions of transparency, credibility and accountability in AI ethics, which still lack clear definitions. Researchers call for disclosure of the data sheets and model cards used to train AI so that the software's limitations are shared, for greater transparency in AI innovation. They also demand that chatbots disclose the reference materials their answers are based on. AI used for financial and medical assistance should explain how it arrived at its measurements or predictions. Discussions are also underway on property rights to the data AI uses and to the texts and images it creates. The research and debate are at a fledgling stage, and Korea has yet to start.

Language-assistance applications can be positive in many ways. Chatbots can find information faster and more easily, help people better organize their text materials, and translate into other languages or add content. Fast adoption of helpful innovations can boost the country's national competitiveness. The government, academia and companies must join the global research and debate, and work on establishing specific guidelines, legislation and an education system that can accommodate the new changes.

Translation by the Korea JoongAng Daily staff.