The author is a professor of business and technology management at KAIST.
I’ve often read news stories about powerful politicians pressuring companies to hire their children. Some claim that artificial intelligence (AI) should be used in the hiring process to counter this sort of abuse. Using AI in hiring has many positive aspects, but there are also issues that need to be addressed.
Last fall, Amazon scrapped plans to introduce AI into its hiring process. Experts confirmed that the AI gave lower scores to female applicants regardless of their job-related competency. Hiring managers in the IT industry had been biased against female applicants, and the AI learned that bias. If AI learns from us, it will repeat our existing discrimination.
Another frequently cited example involves loan screening and differentiated interest rates. A few years ago, a bank gave lower credit scores and applied higher interest rates to applicants with less formal education, and was warned by the Board of Audit and Inspection. The financial institution may argue that it had to consider various factors to precisely estimate the likelihood of default. Yet denying loans or imposing higher interest rates because of educational background cannot be considered “fair.” Nevertheless, if AI uses big data to assign credit scores, such discrimination could return.
So what can we do to prevent AI discrimination? There are various proposals, and one of the most feasible is to require AI to document the basis of its decisions. If an AI deems an applicant unfit, it must be able to explain why. If the applicant appeals, the decision needs to be justifiable.
In reality, this involves technological challenges and could burden AI development. There is also the argument that government regulation hinders technological progress. As with any other policy, discussion and deliberation are necessary.
Regulating the private sector is controversial, but at least when AI is used in the public sector, the risk of discrimination requires attention. The government recently announced plans to use AI to recommend appropriate positions for civil servants, and the Supreme Court has announced similar plans. But can we say such AI is fair and trustworthy?
Another option is to have external experts verify whether an AI system is working fairly. The Canadian government, for example, is considering introducing a system to evaluate the impact of AI-driven decisions. We must remember that social acceptance of AI will grow when the government leads efforts to enhance its reliability.