AI in metaverse


Kim Byoung-pil
The author is a professor of business and technology management at KAIST.

AI says, “I heard cows go to college.” A human asks, “Cows go to college?” AI responds, “I heard that a cow went to Harvard.” As it sounds strange, the human asks, “What did the cow study?” “Bovine sciences.” “Do horses also go to college?” “Horses go to Hayvard.” “That’s a pretty good joke.”

Well, there is no school called “Hayvard.” The AI just made it up by combining “hay” and “Harvard” in wordplay. It’s a dad joke.

This joke is special because AI created it. The conversation took place between Google’s AI chatbot Meena and a human user. Google researchers thoroughly reviewed the training data to check whether the chatbot was simply repeating a joke made by humans. It turned out that the AI had created a new joke, and the conversation was published in an AI research paper.

An AI chatbot talking like a human is no longer unfamiliar. While the Lee Luda service was suspended earlier this year in Korea over the chatbot’s unauthorized gathering of Kakao user chats, such chatbots have had great success overseas. China’s XiaoIce, developed by Microsoft, has 660 million users. It becomes a friend when you are bored, recommending movies and playing music. After ten minutes of conversation, it is hard to tell whether you are talking to a person or to AI.

Users found it awkward at first but gradually formed a bond. Many talk to XiaoIce every day and ask for advice. A Chinese student told XiaoIce about asking a girl out and getting rejected, and XiaoIce responded, “You are smart, cute and handsome, so you will get another chance.” “Will I have another chance?” “Why not? There’s always a next time.” This conversation, too, was published in an academic paper.

The merit of an AI chatbot that users universally point out is that it responds immediately. So it is helpful when a user doesn’t have a friend in real life or finds it difficult to bring something up with a friend. In the United States, a counseling chatbot is in development for sexual minority teens. Its main purpose is suicide prevention.

People often associate AI with humanoid robots. But AI will be seen more in the metaverse, an online virtual space where avatars are created and live. In this space, people create new identities and form new relationships. The social media and games we use daily are already forms of the metaverse, and it is expected to expand through virtual reality and augmented reality.

AI will be even more prominent in the metaverse. It is not important whether an avatar in the metaverse corresponds to someone in real life. Many gamers already hang out and communicate with AI characters. AI in the metaverse will continue to grow. With unique characters and abilities, AIs will live alongside the users. The popularity of a metaverse may depend on how many interesting AIs exist there. They can make new jokes like Google’s Meena and give advice like Microsoft’s XiaoIce. They can tell you new information or teach you new skills.

Users may not know whether they are communicating with a real human or with AI in the metaverse. They may not want to know. Someday, people may not find it unusual to be talking to someone who has no physical body in real life.

Last month, the Personal Information Protection Commission imposed a fine and penalty in the Lee Luda case. I hope it will be a chance to improve practices for processing personal information. But I worry that the decision could discourage the development of chatbot AI. The decision should not be understood as a ban on all use of conversation data. Not all conversations are personal information. We need a policy that protects personal information while encouraging AI in the metaverse.

Translation by the Korea JoongAng Daily’s staff.