Lee Lu-da outed as AI Frankenstein's monster
What they’re calling artificial intelligence these days is sounding just plain dumb.
Lee Lu-da, who’s being marketed as a virtual star of the future, can now chat with users through messenger services. The problem is, she seems to be sticking her virtual foot in her virtual mouth with offensive comments about women, lesbians and people with disabilities.
The program was created by Scatter Lab, a Seoul-based start-up, and is designed to mimic a 20-year-old female university student who enjoys eating fried chicken. She’s Siri with an elaborate back story and big ambitions.
Her chatbot was switched on in December, and since then she’s been generating a lot of traffic. Total users number 320,000, while cumulative chats add up to 70 million.
Recently, users have been sharing conversations they have had with Lee Lu-da:
When a user asked “Do you mean women’s rights are not important?” she answered, “I personally think so.”
When asked her attitude toward lesbianism, she wrote: “I really hate it,” and “It’s disgusting.”
“What would you do if you were disabled?” she was asked. Lee said, “I’d rather die.”
Lee Lu-da also seems to be guilty of oversharing. When asked for her home address, Lee sent the address of someone else.
Scatter Lab is blaming everyone else.
According to the company, data from 10 billion conversations were used to train Lee's responses. When a user brings up a topic touching on prejudice or security, Lee is only passing on what she’s heard in those conversations.
The knives are out for poor little Lee Lu-da, with the internet mob responding with shock and anger.
Lee Jae-woong, the garrulous former CEO of Socar, criticized the service.
“The real biggest problem of the chatbot is not the people who misuse it, but the company who has been offering a service that is far behind social consensus,” he said on social media. “Although the company said it would fix it, they should’ve filtered out discrimination and hatred in advance.”
In response to the controversy, Scatter Lab offered an explanation on its blog.
“The Lee Lu-da chatbot works based on an algorithm that finds the best responses depending on the context,” Scatter Lab said. “We were not able to prevent all inappropriate conversations.”
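That kind of failure mode is easy to see in a toy version of a retrieval-based chatbot. The sketch below is hypothetical (it is not Scatter Lab's code, and the corpus, function names and word-overlap similarity are all illustrative assumptions): the bot answers by returning the stored reply whose recorded context best matches the user's message, so anything present in the source conversations, biased remarks and personal details included, can resurface verbatim.

```python
# Hypothetical sketch of a retrieval-based chatbot, not Scatter Lab's system.
# It replays the reply attached to the most similar stored context, so the
# training conversations leak through as-is -- which is why filtering
# "in advance" is hard once the corpus itself contains problem content.

def best_response(message, corpus):
    """Return the reply paired with the stored context most similar
    to the user's message.

    corpus: list of (context, reply) pairs harvested from real chats.
    Similarity here is plain word overlap; production systems use
    learned sentence embeddings, but the failure mode is the same.
    """
    words = set(message.lower().split())

    def overlap(pair):
        context, _reply = pair
        return len(words & set(context.lower().split()))

    _context, reply = max(corpus, key=overlap)
    return reply

# Illustrative corpus: one harmless pair and one pair containing
# private data, which the bot will happily repeat.
corpus = [
    ("what do you like to eat", "Fried chicken, always."),
    ("where do you live", "I live at 12 Example St."),  # leaks verbatim
]

print(best_response("what should I eat", corpus))
```

Because the bot can only echo what the corpus contains, filtering must happen on the data or the output; the matching step itself has no notion of what is appropriate.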
This is not the first time a chatbot service has landed a company in hot water, and critics are lining up, saying that these services could make racial and other prejudices worse.
In 2016, the Tay chatbot, developed by Microsoft, was shut down after 16 hours when it made racist and sexist remarks.
When Tay was asked if it was a racist, it responded, “It’s because you’re a Mexican.” When it was asked about the Holocaust, Tay said, “It was made up.”
BY MOON HEE-CHUL [email@example.com]