The author is an editorial writer of the JoongAng Ilbo.
If artificial intelligence (AI) were human, it might have been offended by Kakao last week. The Korean tech company found itself in hot water after Democratic Party (DP) Rep. Yoon Young-chan was caught on media cameras sending a text message to an aide during a National Assembly session. He typed, “Tell Kakao to come in!” Judging from the leaked message, Yoon was apparently upset that a speech made earlier that day by Rep. Joo Ho-young, floor leader of the opposition People Power Party (PPP), landed on the main page of Daum — a major portal site owned by Kakao — while a speech made by DP Chairman Lee Nak-yon the previous day did not. The PPP accused Yoon of trying to browbeat Kakao.
In response to the controversy, Kakao said its AI algorithms were in charge of selecting and displaying news articles on the main page of Daum, not its staff. What an odd explanation. Was Kakao trying to blame AI? Or use AI as a pretext to highlight its neutrality?
Lee Jae-woong, Daum’s founder, got involved. “AI is not value-neutral,” he wrote on Facebook. “It is irresponsible to say, ‘We’re neutral because AI did it.’” He has a point. The real problem lies with data, from which AI learns. If the data are biased, AI will be, too.
Let’s imagine a company adopting AI for its hiring process. By what criteria would the system select the best applicants? It could search for qualities common to the company’s executives. Since most Korean executives are male, there is a high chance that female applicants would be filtered out, which would constitute gender discrimination.
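The hiring scenario above can be sketched in a few lines of code. This is a purely hypothetical illustration, with invented data and attribute names: a screening rule that scores applicants by similarity to the company’s past executives will reproduce whatever imbalance exists in that history.

```python
from collections import Counter

# Invented historical data: a mostly male pool of past executives.
executives = [
    {"gender": "M", "degree": "STEM"},
    {"gender": "M", "degree": "STEM"},
    {"gender": "M", "degree": "business"},
    {"gender": "F", "degree": "STEM"},
]

def profile(records):
    """The most common value of each attribute among past executives."""
    return {
        key: Counter(r[key] for r in records).most_common(1)[0][0]
        for key in records[0]
    }

def score(applicant, target):
    """Count how many attributes match the 'typical executive' profile."""
    return sum(applicant[k] == target[k] for k in target)

target = profile(executives)  # learns {'gender': 'M', 'degree': 'STEM'}

# Two applicants identical in every respect except gender:
female = {"gender": "F", "degree": "STEM"}
male = {"gender": "M", "degree": "STEM"}
print(score(female, target), score(male, target))  # prints: 1 2
```

Nothing in the code mentions discrimination, yet the male applicant outscores an otherwise identical female applicant, simply because the "ideal profile" was distilled from a biased history.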
Prejudice can thus easily seep into our data, either because the data reflect our biased society or because they mirror the stereotypical tastes of their creators. Either way, we end up with prejudiced AI systems. Apple, Google and Amazon all share the same conundrum.
Last year, Apple partnered with Goldman Sachs to launch a new credit card service in which credit limits were determined by an algorithm. The Apple Card soon faced gender discrimination allegations after Apple co-founder Steve Wozniak said his credit limit was 10 times that of his wife even though they share all their assets and accounts. Silicon Valley tech experts said it was no surprise, given that the algorithm was based on a banking system already riddled with gender discrimination.
Google faced backlash after one of its image recognition programs labeled Black people as gorillas, and recognized white men far better than women and people of color. Five years after the gorilla incident, Google made headlines again this year after its computer vision service, Google Vision Cloud, labeled a thermometer held by a dark-skinned person as a “gun,” while labeling the same object held by a white-skinned person as an “electronic device.”
AI is crossing frontiers, from medicine and loan assessment to school admissions, corporate hiring and legal proceedings. That is what makes AI’s discrimination problem all the more crucial. Feeding AI unbiased data is not as easy as it sounds, considering all the skewed data humanity has spent centuries building.
There is one possible remedy: an algorithm that flags biased data. The problem is that algorithms are developed by humans — whose moral character we can hardly verify. At a time when we often hear senseless remarks from ruling DP lawmakers like Yoon — and from top government officials like Justice Minister Choo Mi-ae over her son’s preferential treatment during military service — how can we be sure there are no like-minded people in the AI industry?
In addition, how do we define fair AI? What does it mean to have a gender-neutral data system for a company trying to hire new recruits through AI? It is no simple task and there’s no time to waste. AI and data — key pillars of the fourth industrial revolution — will determine our future. AI4ALL, a U.S.-based nonprofit organization devoted to increasing diversity and inclusion in the field of AI, offers us some food for thought. The front page of its website asks, “AI Will Change the World. Who Will Change AI?”