AI requires trust in technology
Oh Byung-joon
*CEO of SAS Korea
As artificial intelligence is applied to voice recognition, cancer diagnosis, self-driving cars and more, Korean society is increasingly affected by decisions made by AI. The Internet of Things and big data across industries will further enhance the potential of artificial intelligence.
Market research firm IDC predicts that the global artificial intelligence market will grow 54.2 percent year-on-year to $19.1 billion. By 2021, the market is expected to reach a whopping $52.2 billion.
Advanced big data analysis technologies such as machine learning, deep learning and predictive analytics will develop dramatically, and in mere seconds or minutes an AI could acquire expert knowledge and skills that take a person years to learn. But as AI improves, its algorithms grow more complex and harder for humans to interpret.
As AI is used in fields as varied as medicine and finance, companies need to work harder to provide more accurate and reliable results. Accountability for AI errors ultimately rests with humans.
Therefore, the AI industry is increasingly focused on realizing AI that demonstrates fairness, accountability and transparency (FAT). Such AI satisfies the interests of data scientists pursuing accuracy, users who want fair treatment and regulatory agencies that require transparent decision-making.
Here, the role of data analytics companies at the forefront of AI technology is crucial. They need to build analysis systems that not only process vast data quickly, but also provide accurate interpretations and transparent explanations of predictions and decisions. AI can be a reliable partner for humans only when we trust the technology.