Groq raises $640 million in Samsung-led Series D round


Aramco's logo is pictured at the Hyvolution exhibition in Paris in February. [REUTERS/YONHAP]

 
Groq secured $640 million in a Series D funding round led by BlackRock, Samsung Catalyst Fund and Cisco Investments. The deal valued the U.S. AI chip startup at $2.8 billion, more than doubling its previous valuation from 2021.
 
The cash injection, which follows a large chip supply deal between Groq and a Saudi Arabian firm earlier this year, is a critical step in the unicorn's battle to weaken market leader Nvidia's grip on the AI chip industry.
 
CEO Jonathan Ross told the JoongAng Ilbo on Wednesday that the firm would “supply 108,000 language processing units [LPUs] by next year’s first quarter, which can increase up to 1.5 million within the year depending on demand.”
 
Groq has tapped Samsung Foundry to manufacture its AI inference chips, pitched as a faster and more energy-efficient alternative to Nvidia's processors. They will be produced on the Korean tech giant's 4-nanometer process at its factory currently under construction in Taylor, Texas.
 
Samsung Electronics' semiconductor foundry plant currently under construction in Taylor, Texas [SAMSUNG ELECTRONICS]

 
Oil money funds Nvidia rivals
 
Startups building hardware specialized for AI began popping up after Google's AlphaGo drew worldwide attention in 2016. Jonathan Ross, who was involved in the early development of Google's tensor processing units — processors designed to accelerate machine learning — founded Groq while startups such as Graphcore, Cerebras and SambaNova were growing into unicorns.
 
But having technology doesn’t secure a spot in the market. The aforementioned startups could not secure large contracts in an AI chip industry dominated by Nvidia. Even the British government selected Nvidia, rather than the domestic Graphcore, to build the country’s largest supercomputer in Cambridge. Graphcore was eventually acquired by SoftBank last month after its engineers moved en masse to Meta Platforms.
 
An opportunity came from the Middle East. Oil economies such as those of Saudi Arabia and the United Arab Emirates (UAE) have been funding infrastructure such as AI computing in preparation for a post-petroleum era. Groq announced in March that it had sealed a contract with Saudi Arabia’s state-owned Aramco to build AI infrastructure. With a market value of $1.7 trillion, Aramco is the world’s fifth most valuable firm and in recent years has been investing in infrastructure for a digital transition. U.S. AI unicorn Cerebras contracted with UAE company G42 last year to build up to nine supercomputers priced at $100 million each. The Abu Dhabi-based G42 is owned by a member of the UAE’s ruling family and facilitates the integration of AI at a national level.
 
Executives from Aramco and Groq sign a memorandum of understanding for AI infrastructure development in March. [ARAMCO]

 
The companies are also making efforts to quell the U.S. government's concerns that AI-related technology and semiconductors could reach China via the Middle East. G42 accepted a $1.5 billion strategic investment from Microsoft, and Microsoft's president sits on the UAE company's board of directors, while both Cerebras and Groq produce their chips at factories in the United States. Groq CEO Ross told the JoongAng Ilbo in an interview last year that his firm had chosen Samsung Foundry over Taiwan's TSMC because it was "a company that possessed a factory that could mass-produce 4-nanometer chips in the United States."
 
Samsung betting on HBM-free AI chips
 
Groq's LPUs are designed for inference, rather than training, of large language models, enabling real-time conversation with AI models without latency. Instead of the high bandwidth memory (HBM) used in GPUs, LPUs rely on SRAM integrated directly into the chip. SRAM is fast but lacks the capacity to hold large amounts of data at once; that capacity can be scaled up by linking large numbers of LPU chips together, an approach Groq says is faster and more cost-efficient for AI inference than for training.
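The capacity trade-off described above can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative assumptions, not from the article: roughly 230 MB of on-chip SRAM per LPU-style chip (the publicly cited figure for Groq's chip) versus 80 GB of HBM on a high-end GPU.

```python
import math

def chips_needed(model_params_billions: float,
                 bytes_per_param: int,
                 mem_per_chip_bytes: float) -> int:
    """Minimum number of chips whose combined memory holds the model weights."""
    weights_bytes = model_params_billions * 1e9 * bytes_per_param
    return math.ceil(weights_bytes / mem_per_chip_bytes)

SRAM_PER_CHIP = 230e6   # ~230 MB of SRAM per inference chip (assumed)
HBM_PER_GPU = 80e9      # ~80 GB of HBM per GPU (assumed)

# A 70-billion-parameter model stored at 1 byte per parameter (8-bit):
lpus = chips_needed(70, 1, SRAM_PER_CHIP)
gpus = chips_needed(70, 1, HBM_PER_GPU)
print(f"SRAM-only chips needed: {lpus}")
print(f"HBM GPUs needed: {gpus}")
```

Under these assumptions, holding the weights entirely in SRAM takes a few hundred chips where a few HBM-equipped GPUs would suffice — which is why the SRAM approach depends on networking many chips together, trading chip count for memory speed.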
 
Nvidia's dominance over the GPU market and tight HBM supply have pushed not only Groq but also chip businesses like Tenstorrent and SambaNova to do away with HBM entirely or to produce specialized AI chips that reduce the need for it. Samsung Catalyst Fund has also invested in Tenstorrent and SambaNova. And yet, no startup has emerged that has been able to chip away at the stronghold of Nvidia's GPUs.

BY SHIM SEO-HYUN [kim.juyeon2@joongang.co.kr]