Amazon Web Services promises energy efficiency through cloud computing, generative AI


Amazon Web Services Korea Director Ham Kee-ho speaks during his keynote speech at the AWS Summit Seoul 2024 held in southern Seoul's Coex on Thursday. [AWS]

 
Korean clients using cloud computing solutions and AI chips from Amazon Web Services (AWS) can pursue both sustainability and cost optimization in their generative AI workloads, AWS said on Thursday.
 
Korean clients are particularly interested in reducing their carbon footprint, which can be achieved using the AWS Graviton processor, designed to offer a high-performance computing environment with an emphasis on energy efficiency.  
 
“Korea has the second-largest user base within the Asia-Pacific region that utilizes Graviton,” AWS Korea Director Ham Kee-ho noted at the AWS Summit Seoul 2024 hosted in southern Seoul’s Coex. The annual event gathers IT experts and business leaders to showcase instances of clients utilizing AWS services.
 
Another realm of interest for Korean companies is minimizing costs when utilizing AI. Local demand for two of AWS's homegrown AI chips, AWS Trainium for model training and AWS Inferentia for inference, is swiftly on the rise, according to Ham. Both are designed as cost-efficient alternatives to graphics processing units (GPUs) for AI workloads.
 
“Domestic virtual human startup Klleon is a representative example of using Inferentia,” he said.
 
SK Telecom, a major Korean mobile carrier, is another AWS client currently utilizing Amazon Bedrock, a fully managed cloud service that offers a range of high-performing foundation models available for use through a unified application programming interface (API).
 
Using Bedrock, clients can easily develop AI applications by fine-tuning existing large language models (LLMs) with their own data and adding functions like retrieval-augmented generation (RAG) technology.
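The appeal of that unified API is that the same request shape works across Bedrock's different foundation models. As a rough illustration only (not AWS's official sample, and the model ID and prompt below are placeholders), building such a request for boto3's bedrock-runtime `converse` call might look like this:

```python
import json


def build_bedrock_request(model_id: str, prompt: str, max_tokens: int = 512) -> dict:
    """Build the keyword arguments for a bedrock-runtime `converse` call.

    Because the request shape is model-agnostic, swapping foundation
    models only requires changing `model_id`.
    """
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.5},
    }


# With AWS credentials configured, the actual call would be (not executed here):
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="ap-northeast-2")
#   response = client.converse(**build_bedrock_request(
#       "anthropic.claude-3-haiku-20240307-v1:0", "Summarize our service FAQ."))
#   print(response["output"]["message"]["content"][0]["text"])

request = build_bedrock_request("example.model-id", "Hello")
print(json.dumps(request, indent=2))
```

The region and model identifier in the commented-out call are examples, not a statement of what any of the companies mentioned here actually use.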
 
“We are developing telco-specific LLMs through fine-tuning and RAG technology on Bedrock,” said Chung Suk-geun, head of SK Telecom's global AI tech division. “We plan to launch the telco LLM and a personal AI assistant service based on this product within the second half of this year.”
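SK Telecom's actual pipeline is not public, but the RAG idea Chung describes reduces to two steps: retrieve passages relevant to the user's question, then prepend them to the prompt so the model answers from grounded text. A minimal toy sketch (using naive word-overlap scoring as a stand-in for the vector similarity search a production system would use, with made-up telco FAQ lines as the document set):

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by word overlap with the query and return the top k.

    A real RAG system would embed both sides and use vector search;
    the overlap score here is only a readable placeholder.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]


def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved context so the LLM answers from grounded text."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}"


docs = [
    "5G roaming is available in 180 countries.",
    "Billing statements are issued on the 5th of each month.",
    "Customer support is available 24 hours a day.",
]
print(build_prompt("When are billing statements issued", docs))
```

The resulting prompt, not the raw question alone, is what gets sent to the fine-tuned model, which is what lets a telco-specific assistant answer from up-to-date company documents rather than from training data alone.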
 

BY LEE JAE-LIM [[email protected]]