Big Tech should be legally liable for deepfakes, IBM VP says


Paul Burton, general manager of the Asia-Pacific region at IBM, and Christopher Padilla, vice president for government and regulatory affairs, discuss the deepfake issue during a press event in Yeouido, western Seoul, on Wednesday. [IBM KOREA]

A top policy official at IBM advocated imposing legal liability not only on the creators of deepfake material, but also on platform operators like Google and Meta, ahead of the upcoming elections worldwide.
 
Christopher Padilla, IBM’s vice president for government and regulatory affairs, addressed the company's stance on AI during a visit to Seoul on Wednesday alongside Paul Burton, general manager of the Asia-Pacific region.

 
“We believe that there should be legal liability both for the people who post it and for the platforms who don't take it down faster,” the vice president told reporters at IBM Korea's headquarters in Yeouido, western Seoul, addressing deepfakes.
 
“That is controversial. The platforms don't like that position because they don't want to be sued in court for not taking things down fast enough. But our CEO has said he thinks that there should be liability for failure to remove harmful AI generated content,” Padilla said.

 
IBM publicly supported the European Union’s official adoption of its sweeping Artificial Intelligence Act earlier this month, which paves the way for the union to prohibit certain uses of AI technology and require high levels of transparency from companies who provide it. 
 
“I commend the EU for its leadership in passing comprehensive, smart AI legislation. The risk-based approach aligns with IBM's commitment to ethical AI practices and will contribute to building open and trustworthy AI ecosystems,” said Christina Montgomery, IBM's vice president and chief privacy and trust officer, in a statement.

 
Other competing Big Tech companies such as Microsoft, Meta and Google did not release formal statements endorsing the landmark legal framework.

 
IBM's executives spoke in favor of the EU's “risk-based approach,” under which the AI Act's provisions are enforced selectively and in stages based on the level of “risk” each system poses. Certain systems are designated as “high-risk,” including those applied to education and law enforcement and those with the potential to impact elections.
 
“We think that AI that is low-risk, such as AI that gives you a restaurant recommendation or a clothing recommendation, is low risk and should not really be regulated,” Padilla said. AI that medical professionals or banks might use to make diagnoses or loans, by contrast, “obviously has higher risk and deserves more government scrutiny.”
 
IBM emphasized that the open source models powering its “Watsonx” tools will provide its clients with transparency and reduce their risk. 
 
“When we deploy or release open source models, we release the paper that says how we built it,” said Burton, the Asia-Pacific leader.
 
“Thousands or tens of thousands of people are going to be looking at this thing, and we take their feedback,” Burton said. “We make it better, and then we release it, and the process continues.”

BY PARK EUN-JEE [park.eunjee@joongang.co.kr]