Guidelines for an AI interviewer



“When did you struggle the most?” “Why was it hard?” “How did you overcome it?”

An artificial intelligence (AI) interviewer asks questions, and the interviewee nervously answers. When asked an unexpected question, the interviewee breaks out in a cold sweat. The quivering voice, eye movements and attire are all thoroughly noted.

To secure students expected to graduate next spring, Japanese companies started the recruiting process for new hires earlier this month. Recruiting consulting company Talent and Assessment (T&A) is providing an interview process using SoftBank’s humanoid robot Pepper.

Pepper is an AI interviewer that instantly sorts applicants' answers into situations, tasks, behaviors and outcomes and sends them to a database. Based on accumulated data from thousands of past applicants, the system scores interviewees on 11 categories, such as independence, responsibility, sentiment and planning skills, and predicts their performance upon joining the company. Human interviewers conduct assessments based on separate standards to minimize errors.

T&A plans to send AI interviewers to 20 companies. Japanese market consulting firm Fuji Chimera Research Institute forecasts that AI-related businesses will grow to 2.12 trillion yen ($19 billion) by 2030, a 14-fold expansion over 15 years.

Last month, a research team led by Kyoto University neuroinformatics professor Yukiyasu Kamitani published, in the British science journal Nature Communications, a method that uses AI to infer what humans see or imagine from brain activity.

Will AI rule humans in the future? American futurist Ray Kurzweil predicted that AI will surpass the combined intelligence of humankind by 2045.

In contrast, Jean-Gabriel Ganascia, head of the ethics committee at the French National Center for Scientific Research, argued that the concept of a singularity is more of an ideology lacking a scientific basis. It is true that AI's learning ability has reached an impressive level, but its judgment is based on rules taught by humans, and AI cannot create new concepts or rules.

Excessive skepticism and excessive optimism are equally dangerous. Fear hinders the advancement of AI, while carelessness could bring chaos through ethical problems and a lack of legal precedent. The Artificial Intelligence Society, composed of Japanese university and corporate researchers, prepared AI ethics guidelines in February.

“AI must not be used to harm others.” “Mankind should be able to fairly and equally use AI.” “AI should have ethics and follow moral guidelines.” Korean researchers should also prepare similar guidelines.

JoongAng Ilbo, June 27, Page 30

*The author is a Tokyo correspondent of the JoongAng Ilbo.

LEE JEONG-HEON