AI’s legal challenges in Korea

Kim Doo-sik
The author is CEO of Tech & Trade Institute and a lawyer.

Generative AI like ChatGPT is an entirely new kind of artificial intelligence. While DeepMind’s AlphaFold, designed to predict a protein’s 3D structure from its amino acid sequence, or deep learning-based systems like Google’s GNoME, used to discover the structures of 2.2 million new materials, could be called tech for tech’s sake, generative AI is closer to tech for humanity. By demonstrating human-like creativity and intellectual ability, generative AI is driving revolutionary changes across society, not to mention the economy.

Boosting productivity through AI is a key factor in determining the future competitiveness of companies. McKinsey & Company expects generative AI to lift productivity in professional knowledge and administrative roles by 34 percent. Human staffers at call centers, marketing and advertising firms and IT companies are already being replaced by AI. Artificial intelligence can be used to analyze companies’ compliance risks and respond to them. That’s not all. AI can judge whether a corporate decision could violate export controls or economic sanctions, not to mention calculate a company’s carbon footprint.

The artificial intelligence market is expanding fast, as seen in ongoing investments in the medical and healthcare sectors, cloud and other data businesses, and fintech. Market analysts expect the AI solution market alone to grow to at least $50 billion by 2029, and AI’s added value to the manufacturing sector to surge from $2.6 trillion to $4.4 trillion. At CES 2024 in Las Vegas in early January, not only AI enterprises but also healthcare and cosmetics brands, consumer goods retailers and even traditional companies showcased diverse AI-powered services and products, signaling the advent of the age of ubiquitous AI.

Countries are competing more fiercely than ever for AI leadership. Not only AI powers like the United States and China but also the United Kingdom, France, Canada, India, Saudi Arabia, the United Arab Emirates and Singapore are aggressively supporting their AI industries. Unfortunately, Korea, an IT powerhouse, is lagging behind. According to Tortoise Intelligence, a British data analysis firm, Korea ranks sixth in global AI competitiveness, but only 18th in private investment and 12th in human capital, which points to the need to spend more on nurturing talent.

But the AI industry can hardly be developed through investment and technological innovation alone. The more broadly AI is used, the bigger the social and security risks of developing and using it grow, and those risks inevitably entail corresponding regulations and responsibilities. No country that ignores this issue can develop its AI industry harmoniously.

First of all, the many legal issues surrounding AI must be understood, and responded to, in order to help the AI ecosystem innovate. Generative AI is trained on massive inputs of text, audio, video and even code, often without permission from their creators. As soon as generative AI was released in the U.S. market, scores of cases were filed against its developers for copyright infringement. The question boils down to who holds the ultimate rights to an AI-created work, and whether such a work can be granted a copyright or patent.

If an individual suffers damage from an AI model used for personnel management or job assignment in a company, or from an AI model for autonomous driving, whom should the individual hold accountable? Such problems cannot be resolved cleanly under the existing legal system. The European Union is hurrying to introduce a new set of civil liability rules tailored to AI-specific problems.

Governments are also toughening regulations on the AI industry to prevent AI-related risks from destabilizing society. The AI industry can develop vibrantly within the boundaries of reasonable regulation.

Social risks caused by AI include fake news, deepfakes, discrimination, privacy infringement and job losses. Fake news and deepfakes, in particular, can destroy trust among members of society and endanger a democratic system based on fair elections. With less than 10 months left until the Nov. 5 presidential election in the United States, and less than three months before the April 10 parliamentary elections in Korea, law enforcement agencies in both countries are on alert against the possibility of fake news and deepfakes shaking society.

In the movie “Terminator,” Skynet, a highly advanced computer system possessing artificial intelligence, desperately seeks to annihilate humanity. Could AI escape human control and do the same? Some scientists claim that a totally destructive super AI beyond human control cannot emerge, but many AI developers confess that they often cannot predict what creative abilities a large language model-based AI may end up with. The developers agree that they do not yet have the technology to fully comprehend or control AI’s internal workings. Does that mean we are headed for doom?

The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” promulgated by U.S. President Joe Biden on Oct. 30, addresses the full range of AI-related risks raised so far and directs more than ten departments and agencies to analyze AI risks as part of their jobs and draw up effective responses. In particular, Biden directed the National Institute of Standards and Technology (NIST), building on its AI Risk Management Framework, to evaluate AI’s dangers and present procedures and standards for verifying AI’s reliability within 270 days. In a notable development, the president also ordered developers of “dual-use foundation models,” or AI models that could pose serious risks to national security, economic security or public health, to conduct “red-team tests,” or structured tests to find system flaws.

The U.S. executive order, which protects national security against AI risks, contrasts with the EU’s AI Act, which prioritizes the protection of individuals’ rights, and with China’s AI decree, which safeguards socialism. Fundamentally, however, there is no big difference: all of them mandate that government institutions or AI developers assess and verify AI’s dangers. Put simply, classifying AI risks and then establishing the standards and procedures needed to evaluate AI systems’ reliability are the ABCs of AI regulation.

The U.S.-China hegemony contest over AI will determine the future of the global AI economy. As AI constitutes a classic case of “dual-use” technology, it is destined to stand at the center of the technology hegemony war. Despite vehement resistance from Nvidia and other U.S. chipmakers, the U.S. government in October enforced sweeping export controls on equipment needed to produce cutting-edge chips, along with a ban on exports of advanced AI chips to China. Biden’s AI executive order also moved to restrict Chinese AI companies’ access to U.S. cloud service providers for machine learning.

As the U.S. will most likely reinforce its technology controls on China, Korean companies must always keep in mind the impact the Sino-U.S. hegemony war will have on their global business environment, including the U.S.-led restructuring of cutting-edge AI chip supply chains.

AI cannot be immune from legal accountability, given its innate risks and the diverse regulations at home and abroad. AI-led innovation can only take place within a regulatory environment and a legal system. AI developers and investors must understand the relevant regulations and laws, not to mention the technology itself. AI-related regulations and laws are a necessary condition for building a healthy, reliable AI economy and for the sustainable development of the AI industry. That calls for close cooperation among the government, AI engineers, entrepreneurs and lawyers.

Translation by the Korea JoongAng Daily staff.