Why autonomous vehicles should not be labeled ‘high-risk AI’
Published: 28 Jan. 2026, 00:04
The author is the head of the AI Industry Center at the LIN Law Firm.
Even if they have not yet reached the level of fully driverless robotaxis, supervised autonomous vehicles are now a common sight on Korean roads. U.S.-based Tesla sells more autonomous vehicles worldwide than any other company, generates profits from those sales and simultaneously acquires vast amounts of driving data at no additional cost. That data is then used to train and advance its AI. Tesla recently announced that starting in mid-February, it will offer its autonomous driving system through a monthly subscription model.
A rider boards a driverless Tesla robotaxi, a ride-booking service in Austin, Texas, on June 22, 2025. [AP/YONHAP]
Korea’s position as the world’s undisputed leader in industrial robot density rests largely on its strengths in electronics and automobiles. Korean manufacturing uses more robots than any other country, and robots have long been standard equipment in automobile production. The industry is now entering a phase in which AI and robots build vehicles, AI operates them and humans ride inside these machines while paying for AI usage. Tesla has explicitly redefined the automobile in the AI era as software delivered as a service.
Baidu's RT6 robotaxi, currently in operation in Wuhan, China, and other areas, is seen in this photo provided by the company. [BAIDU]
Korea’s “AI Basic Act” came into effect on Jan. 22. Although penalties for violations have been deferred, the law has already set a global precedent as the first comprehensive statute regulating so-called high-risk AI operators. Yet the European Union, which strongly influenced the legislation, has recently postponed its own high-risk AI regulations. The primary concern in Europe is that responding to rapid AI development and innovation with regulation-centered laws could leave the region further behind in global competition.
Like Korea’s law, the EU’s AI Act assigns responsibility to AI providers to manage high risks and imposes sanctions for violations. Within the EU, however, there is growing unease about how difficult it is to objectively determine what actually constitutes high-risk AI in practice.
The first draft of the EU’s law, released in 2021, was built around the idea of managing risks posed by high-risk AI systems. Its stated goal was to protect human life, physical safety and fundamental rights. The proposed method was to regulate risks differently depending on the intended use of AI across specific sectors. That framework, however, was dealt a serious blow in 2022 with the emergence of ChatGPT in the United States.
General-purpose AI systems like ChatGPT create a regulatory dilemma because their uses and purposes cannot be defined in advance. As a result, the EU shifted toward imposing transparency and safety obligations on developers at the model design stage. This approach triggered strong opposition from the United States. The EU’s original vision of protecting humans by managing AI risks in advance, sector by sector, was fundamentally disrupted by the release of general-purpose AI.
Since the shock of ChatGPT, AI has become even more deeply embedded in everyday life, and the technology has advanced rapidly. Yet the high-risk AI categories and regulatory approach adopted by the Korean government remain largely stuck in the EU’s 2021 draft framework. By becoming the first country to fully implement such rules, Korea has placed itself under intense international scrutiny.
Under Korea’s AI law, for example, Level 4 autonomous vehicles already undergoing pilot operations, as well as robotaxis that have been commercialized abroad, would fall under high-risk regulation. Amid the current enthusiasm for physical AI, it is important to note that physical AI does not refer only to humanoid robots. Autonomous vehicles that perceive their surroundings and drive in real-world traffic environments are also a form of physical AI. The intention to protect humans by classifying such technologies as high-risk in advance, on the assumption of errors and accidents, is understandable. But if humanoid robots entering homes are treated as innovation while autonomous vehicles are branded high-risk businesses, with Korea the first country in the world to regulate them, the country risks undermining its own national competitiveness.
A Motional-developed Ioniq 5 robotaxi navigates the streets of Las Vegas. [HYUNDAI MOTOR]
Tesla continues to sell vehicles at relatively low prices, collect vast amounts of driver behavior data worldwide at no cost and concentrate on technological development. It has even introduced a monthly subscription model for autonomous driving systems. To compete with companies like this, Korea must fundamentally rethink what constitutes high-risk AI and how such risks should be regulated in a way that reflects domestic realities.
One alternative approach might be to adopt a system similar to that of Britain, where insurance companies compensate victims first in the event of an autonomous vehicle accident and then exercise rights of recourse. This would reduce the immediate liability burden on automakers. Other strategic perspectives are also possible, including policies that actively support the development of high-risk AI applications rather than discouraging them through premature regulation.
This article was originally written in Korean and translated by a bilingual reporter with the help of generative AI tools. It was then edited by a native English-speaking editor. All AI-assisted translations are reviewed and refined by our newsroom.