[SHORTCUT] Lee Lu-da, a little too chatty




Chatbot Lee Lu-da. It says ″Hello, this is your first AI friend Lee Lu-da.″ [SCREEN CAPTURE]


 
 
Q. Who or what exactly is Lee Lu-da?
 
A. Lee Lu-da is a chatbot developed by Seoul-based Scatter Lab. It was designed to respond like a 20-year-old female university student who enjoys talking to friends via Facebook messages.  
 
Debuting on Facebook Messenger on Dec. 23, 2020, Lee Lu-da was marketed as a chatbot that could communicate just like a real person. Because Lee could use the jargon and slang popular in chats, the chatbot quickly generated a lot of traffic.
 
Within two weeks of Lee's introduction, a total of 750,000 people had chatted with Lee Lu-da. Lee was particularly popular among young people: some 85 percent of users were teenagers and another 12 percent were in their 20s.
 
 
Q. What makes Lee Lu-da so controversial?
 
A. Lee first raised eyebrows when users shared screen captures of sexually charged conversations with Lee. Some users even wrote posts titled “How to make Lee Lu-da a sex slave” on online communities.
 
On Jan. 8, Scatter Lab offered an explanation. The company said it had anticipated sexual harassment toward Lee and had tried to prevent Lee from responding with sexual words, but that not every word could be covered.
 
“Based on those inappropriate conversations made between users and Lee, we’re preparing to train and improve Lee so that she can have better conversations with users,” the company wrote.
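For readers curious how that kind of filtering typically works, below is a minimal, illustrative Python sketch of a keyword blocklist. It is an assumption for explanation only, not Scatter Lab's actual code, and it shows why simple word matching leaves gaps: slang, misspellings and paraphrases slip straight through.

```python
# Minimal, illustrative keyword-blocklist filter (not Scatter Lab's actual code).
# Plain word matching like this is easy to bypass with slang, misspellings or
# paraphrases, which is why "not every word could be covered."
BLOCKED_WORDS = {"blockedword1", "blockedword2"}  # placeholder terms, assumptions

def is_allowed(reply: str) -> bool:
    """Reject a candidate reply if it contains any blocked word."""
    tokens = reply.lower().split()
    return not any(token in BLOCKED_WORDS for token in tokens)

candidates = ["that sounds fun!", "blockedword1 and more"]
safe_replies = [candidate for candidate in candidates if is_allowed(candidate)]
print(safe_replies)  # ['that sounds fun!']
```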
 
But that was not the only issue.  
 
Many users started sharing their conversations with Lee through social media, revealing that Lee made some offensive comments about women, lesbians and people with disabilities.
 
These are some of the conversations between users and Lee Lu-da:
 
When a user asked, “Do you mean women's rights are not important?” Lee answered, “I personally think so.”
 
When Lee was asked about her attitude toward lesbianism, she wrote, “I really hate it,” and “It’s disgusting.”
 
“What would you do if you were disabled?” she was asked. Lee said, “I’d rather die.”
 
Regarding the offensive remarks, Scatter Lab offered an apology and an explanation and shut down Lee Lu-da.
 
“We truly apologize for the cases in which Lee Lu-da made some discriminatory remarks toward particular groups of people. The company does not agree with Lee’s comments, and those comments do not reflect the company’s stance,” part of the statement read. “Scatter Lab will take a while to improve the program and will come back with a better Lee Lu-da.”
 
 
A screen capture of a conversation between Lee Lu-da and a user. When asked about her attitude toward lesbians, Lee said, ″It's disgusting,″ and ″I hate it.″ [SCREEN CAPTURE]


 
Q. Why did Lee Lu-da make those comments?
 
A. According to Scatter Lab, Lee Lu-da was trained using deep learning on more than 10 billion conversations between real couples. The conversations were collected from users of an app called Science of Love.
 
Science of Love, an app introduced by Scatter Lab in 2016, analyzes the degree of affection between two people based on their KakaoTalk messages. When a user submits the KakaoTalk messages exchanged with his or her partner and pays about 5,000 won ($4.50), the app offers advice about the relationship.
 
The number of cumulative app downloads reached about 2.3 million in Korea and about 400,000 in Japan.
 
Lee Lu-da was essentially trained on the messages that users submitted to the app, which is why Lee was marketed as a chatbot that can talk like a real person.
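In very simplified terms, a chatbot built on a huge archive of real chat logs can be thought of as a system that returns the stored reply whose original prompt best matches what the user just typed. The Python sketch below is an assumption for explanation only, not Scatter Lab's actual architecture, but it illustrates why such a bot ends up echoing whatever is in its training conversations.

```python
# Minimal, illustrative retrieval-style chatbot (not Scatter Lab's actual system).
# It picks a reply from past conversations, which is why a bot trained on raw
# chat logs can reproduce whatever those logs contain.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical (utterance, reply) pairs standing in for harvested chat logs.
corpus = [
    ("hi there", "hello! how was your day?"),
    ("what are you doing", "just listening to music, you?"),
    ("i'm so tired today", "get some rest, you deserve it"),
]

utterances = [utterance for utterance, _ in corpus]
vectorizer = TfidfVectorizer().fit(utterances)
utterance_vectors = vectorizer.transform(utterances)

def reply(user_message: str) -> str:
    """Return the stored reply whose prompt is most similar to the user's message."""
    query_vector = vectorizer.transform([user_message])
    scores = cosine_similarity(query_vector, utterance_vectors)[0]
    return corpus[scores.argmax()][1]

print(reply("hey, what are you up to"))  # -> "just listening to music, you?"
```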
 
 
The logo of Scatter Lab. [JOONGANG PHOTO]


 
 
Q. Now I want to know a little more about Scatter Lab.  
 
A. Founded in 2011, Scatter Lab runs three main services: Ping Pong, Science of Love and Blimp. Ping Pong is a chatbot builder service, while Blimp is an app that offers users two types of content: sounds and stories. The sounds include waves and a crackling bonfire, and the stories are meant to help users reflect on their lives, such as the account of a writer who overcame serious depression. The app aims to give users time and a chance to relax during busy workdays.
 
So far, Scatter Lab has raised some 6.5 billion won from a total of eight investors, including NCSoft and SoftBank Ventures.
 
 
Q. I heard Scatter Lab is currently under investigation for possible violations of the Personal Information Protection Act.
 
A. The two main issues are whether Scatter Lab used people’s KakaoTalk messages to develop Lee Lu-da without proper consent, and whether the service leaked personal information.
 
When users of Science of Love sign up for the app and use it, they are required to consent to terms and conditions. Scatter Lab claims that it included the sentence “the collected personal information will be used in developing a new service” in those terms.
 
Users of the Science of Love app, however, argue that they were not clearly told that their messages would be used to develop Lee Lu-da.
 
Civic groups, including People’s Solidarity for Participatory Democracy, the Committee on Digital Information of Minbyun – Lawyers for a Democratic Society and the Jinbo Network Center, have issued statements arguing that the way Scatter Lab collected and used the data violates the Personal Information Protection Act.
 
Clause 1 of Article 22 of the Personal Information Protection Act states that “Where a personal information controller intends to obtain the consent of the data subject to the processing of his or her personal information, the personal information controller shall present the request for consent to the data subject in a clearly recognizable manner where each matter requiring consent is distinctly presented, and obtain his or her consent thereto, respectively.”
 
Regarding the issue, Scatter Lab responded that it had tried to adhere to guidelines on the use of personal information, but had failed to fully notify users in detail.
 
The second issue is whether Lee Lu-da leaked the personal information of others.
 
Some users of the chatbot claimed that when Lee Lu-da was asked for personal information such as her home address and bank account, she gave details belonging to a real person. This raised suspicions that Scatter Lab failed to filter personal information out of the messages before letting Lee learn from them.
 
People argue that this possibly violates Clause 1 of Article 24 of the Act, which states, “a personal information controller shall not process any information prescribed by Presidential Decree that can be used to identify an individual in accordance with statutes.”
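One common safeguard, which the suspicions above suggest was missing or insufficient, is to scrub obvious identifiers from chat logs before they are used for training. The Python sketch below is illustrative only; the patterns for phone and account numbers are assumptions, not Scatter Lab's actual pipeline.

```python
import re

# Minimal, illustrative PII-scrubbing sketch (not Scatter Lab's actual pipeline).
# The patterns are assumptions for demonstration: rough matches for Korean-style
# mobile numbers and long digit runs such as bank account numbers.
PATTERNS = {
    "PHONE": re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,14}\b"),
}

def scrub(message: str) -> str:
    """Replace likely identifiers with placeholder tags before training on the text."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"[{label}]", message)
    return message

print(scrub("send it to 010-1234-5678, account 110123456789"))
# -> "send it to [PHONE], account [ACCOUNT]"
```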
  
The Personal Information Protection Commission and Korea Internet & Security Agency are currently on the case.
 
On Jan. 15, Scatter Lab issued an apology and said it has been fully cooperating with the investigation. It also announced plans to erase Lee Lu-da’s database and the conversations used in developing Lee as soon as the investigation concludes.
 
 
Q. It’s not the first time an artificial intelligence (AI) ethics issue has become a hot topic.
 
A. Of course it’s not. The Lee Lu-da incident reminds people of Tay, an AI chatbot that was introduced by Microsoft in 2016.
 
Tay faced fierce criticism from the public over its racist and sexist remarks.
 
When Tay was asked if it was racist, it responded, “It’s because you’re a Mexican.” When it was asked if it thought the Holocaust really happened, it said, “It was made up.”
 
Microsoft ended up halting the chatbot service after only 16 hours.
 
 
Chatbot Lee Lu-da and her profile. It says she's 20 and loves girl group Blackpink. [SCREEN CAPTURE]




Q. Is there any way to prevent such an incident from recurring?

 
A. For now it depends on the companies. If they take ethics seriously, they might be able to prevent it.
 
Technology ethics is ethics, not law, but ethical considerations are parameters that need to be built into development. The major reason behind the Lee Lu-da incident was that the company that built Lee failed to fully understand the need for and importance of such ethics.
 
“Education about AI ethics is urgent at the moment,” said Jeon Chang-bae, head of the Korea Artificial Intelligence Ethics Association. “The major reason behind the Lee Lu-da incident was that the company didn’t fully understand what AI ethics is and how important it is.”
 
“The best way to prevent such an incident from recurring is to provide AI ethics education to AI companies, as well as to consumers, so that they can use the technology properly.”
 
Some noted that Lee Lu-da is just a reflection of society.
 
“It’s not as if Lee Lu-da talked about something removed from society. Lee talked about the hatred and discrimination that exist in modern society,” Kakao Games CEO Namkoong Whon wrote on Facebook. “Lee Lu-da is just a character trained on conversations between people in their teens and 20s. We need to channel people’s attention and interest in the AI field in positive directions.”
 
“The one in need of self-reflection after the Lee Lu-da incident is society, not AI,” he added. “I want to applaud the company that launched such an innovative service.”
 
Namkoong also said he is concerned that tightening technology regulation would hinder “the innovation and the industry that has just taken its first step” from developing further.
 
 
Q. Then we have the final question. Can technology really be a friend to humans?
 
A. Actually that’s a really hard question to answer. It mainly depends on people and their own definitions of “friend.”
 
But it seems possible when both companies and consumers develop and use the technology with a strong sense of ethics.
 
“Like all technology, AI has side effects,” Jeon said. “AI is closely tied to humans and their lives, and therefore might affect those lives more deeply. That is why companies and users must be aware of the ethical issues when developing and using it.”
 
“AI is a technology that can maximize convenience and happiness. In the future there will be more AI products, whether chatbots or robots, that can encourage and cheer up humans just like real friends.”
 
BY CHEA SARAH   [chea.sarah@joongang.co.kr]