If this were written by an AI, you could steal it
"Please do not ask the reason for writing poetry. I just do. I do because I have to. It does not matter what I write. To write poetry is to speak the shortest words. It is to shorten words. I write when there are still a lot of words I can shorten."
This was written by a computer program, and a book that includes the work was published on Aug. 8. It is not clear who owns the copyright to the poem, whether it is a corporation, the computer, no one or everyone.
As machine learning grows more advanced by the day, the copyright status of computer-generated original works is emerging as a question that challenges not only the current legal system but also our understanding of human creativity itself.
For now, the law only recognizes persons — individuals or registered corporations — as being capable of acting under the law, but the inevitable advent of a true artificial intelligence (AI) is pressing lawmakers globally to examine how to prepare for the imminent future.
Only for humans
Slitscope selected 13,000 Korean poems for Sia to learn and trained it to create a piece of poetry when a theme and format are input.
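Sia's actual internals, built on KakaoBrain's KoGPT model, are not public. As a toy illustration of the general idea of learning from a corpus of poems and then generating a new line from a given theme word, here is a minimal bigram Markov sketch; the three-line corpus and the seed word are illustrative assumptions, not Slitscope's training data.

```python
import random
from collections import defaultdict

# Toy corpus standing in for the 13,000 poems Sia was trained on.
corpus = [
    "to write poetry is to speak the shortest words",
    "it is to shorten words",
    "i write when there are still words i can shorten",
]

# Learn which word tends to follow which (a bigram model).
transitions = defaultdict(list)
for line in corpus:
    words = line.split()
    for a, b in zip(words, words[1:]):
        transitions[a].append(b)

def generate(seed, max_words=8):
    """Generate a line starting from a seed word (the 'theme')."""
    out = [seed]
    while len(out) < max_words and transitions[out[-1]]:
        out.append(random.choice(transitions[out[-1]]))
    return " ".join(out)

print(generate("write"))
```

A large language model like KoGPT does something far more sophisticated, but the shape is the same: statistics learned from existing text, steered by a user-supplied prompt.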
The ownership of the poems and the book belongs to both KakaoBrain and Slitscope, but neither company owns the copyright, according to the Kakao subsidiary.
"There is no law in Korea that endows copyrights to machines or technology yet," said Kim Jea-in, head of the corporate strategy office at KakaoBrain. "Our contract has been written out so that we have ownership of the poems but not the copyright."
The law stipulates that only humans are entitled to copyright original works. "The term 'work' means a creative production that expresses human thoughts and emotions" and "The term 'author' means a person who creates a work," according to the Copyright Act.
This is not unique to Korea. Laws globally only recognize humans as being able to create works.
A U.S. Federal Circuit court ruled on Aug. 5 that an AI system cannot be listed as an inventor of a patent, rejecting researcher Stephen Thaler's request to recognize his AI system as an inventor of two patents. The U.S. Copyright Office spells out in the "Compendium of U.S. Copyright Office Practices" that it will only recognize works that are created by a human author in determining whether a work is copyrightable.
So if an AI developer, the owner of the input data and the person who used the AI to create the end result filed a lawsuit to determine who owns the copyright of the work, nobody would win, according to Baek Se-hee, an attorney at DKL Partners.
"An AI-generated work cannot be protected by the current Copyright Act," Baek said. "And because an AI-generated work won't be recognized by the Copyright Act, there would be no point in arguing for the copyright." A person could not be recognized as the author either, because the Supreme Court has ruled that a person cannot become an author if they do not directly participate in a creative expression, she added.
Legality, not creativity
Recognizing computer programs as authors entails a myriad of challenges, chief among them the question of whether a non-human being can become a legal agent.
What doesn't need to be addressed, surprisingly, is whether the works that have been created by these computer programs can be deemed as creative or not, according to Jeong Jin-keun, a professor at the Kangwon University Law School.
"If an elementary school child wrote a diary, then that would automatically be granted copyright even if it's not 'good' or 'creative' enough," Jeong said. "When it comes to authorship, the element of quality or novelty is not what matters. AI-generated works have been of sufficiently high quality since 2005, when neural network technology was developed."
The question has come up before. Photographer David Slater claimed copyright to selfies that a monkey took in 2011 using his camera, and People for the Ethical Treatment of Animals (PETA) sued him in 2015 on the monkey's behalf. The U.S. Copyright Office had stated in 2014 that a work created by a non-human cannot be copyrighted, even if creating it involves the same act of clicking a camera button.
Granting copyright to a computer program would mean that any non-human could in theory become a legal being, and therefore be subject to the law. So if an AI held copyright to a work, it would be the owner of the royalties and would be entitled to choose whether or not to let someone else reuse the work for secondary creations.
If AIs are granted legal status, then "Edmond de Belamy," a painting created by Obvious, a French art collective, would belong to the algorithm that the group developed. And if the algorithm held the copyright, then the $432,500 that the painting sold for at a Christie's auction in 2018 would belong to the algorithm.
Likewise, the income from "Reasons for Writing Poetry" would belong to Sia, not KakaoBrain or Slitscope.
"So the question is whether we're going to give legal rights to AIs or not," Jeong said. "In a way, the market recognizes AI works as creative. People are buying, selling and showing works by AIs, and that's just going to keep accelerating. It may take place too fast for the law to catch up."
Friend not foe
AI proponents argue that we should focus on enjoying the fruits of the computer's labor.
POZAlabs, a local tech company founded in 2018, developed a music-composing algorithm that creates music according to a desired mood, instrument, style and environment. Rather than competing with human composers, the company aims to help human composers maximize their efficiency.
"It takes only 5 to 10 minutes for our program to make a song, which is substantially shorter than the time needed for people," said Huh Won-gil, CEO of POZAlabs. "Our AI will help cut the enormous amount of labor that composers need in making their music."
KakaoBrain also found meaning in Sia's poetry in a similar manner.
"Neither the KoGPT model nor Sia have the creativity unique to humans," said Kim of KakaoBrain. "Still, it is meaningful for us to have tried something we haven't before. And if the poems help us find any speck of meaning that we did not have before as humans, then that would be a big achievement in itself."
The big question for those in the field is ethics.
One example well-known in Korea is Lee Lu-da, a chatbot service that was taken down in early 2021 after the program made offensive comments about women and lesbians, while also being drawn into sexually charged conversations.
Lee Lu-da was trained using deep learning based on over 10 billion conversations collected from users of an app, Science of Love, which analyzes the degree of affection between people based on their KakaoTalk messages. Users claimed that they did not give consent and that some of the conversations were exposed on GitHub, the software sharing platform.
As a result, Scatter Lab, the developer of Lee Lu-da, was ordered by the Personal Information Protection Commission to pay 103 million won ($78,913) for illegally using customers' personal information in the development and operation of the chatbot.
"It is illegal to use data without pseudonymizing the data," said Kim Yun-myung, a member of the National Intellectual Property Commission.
"Because it's hard for outsiders to know what kind of data has been used for the learning process, any violation of privacy law should be dealt with strictly. AI ethics are not only about training AIs to make an ethically sound decision, but also about ethically using human data and collecting ethical data."
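Pseudonymization, the step Kim refers to, means replacing direct identifiers in a dataset with tokens that cannot easily be traced back to a person before the data is used for training. The sketch below is a minimal illustration only; the field names, the salt value and the record format are assumptions for the example, not Scatter Lab's actual pipeline.

```python
import hashlib

# Illustrative salt; in practice it would be generated securely and
# stored separately from the pseudonymized dataset.
SALT = b"rotate-and-store-separately"

def pseudonymize(record):
    """Replace a direct identifier with an irreversible salted hash token."""
    token = hashlib.sha256(SALT + record["user_id"].encode()).hexdigest()[:12]
    return {"user_id": token, "message": record["message"]}

log = {"user_id": "kim.jy@example.com", "message": "see you at 7"}
print(pseudonymize(log))
```

Note that hashing identifiers alone does not scrub personal details that appear inside the message text itself, which is part of why auditing training data from the outside is so difficult.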
BY YOON SO-YEON [firstname.lastname@example.org]