FriendliAI, an AI inference platform startup, has raised $20 million in a seed extension round, the company told Crunchbase News exclusively.
For the unacquainted, AI inference is the “last mile” where users engage with artificial intelligence. When you query a chatbot and receive a response generated by an AI model, that process is known as inference.
AI inference has become critically important, with 80% to 90% of GPUs dedicated to inference, and only the remainder used for training, according to an estimate by FriendliAI CEO Byung-Gon “Gon” Chun.
“The AI inference market is exploding as more organizations move from AI experimentation to production deployment,” he said.
Chun founded FriendliAI in 2021 out of Seoul National University, where he had been leading research on accelerating AI model execution for over a decade. The company began focusing on AI inference in 2022, even before the release of OpenAI’s ChatGPT. The core product and engineering team also came out of Chun’s research group there.
In late 2023, FriendliAI moved its headquarters to Redwood City, California, said Chun, who previously was an AI researcher at Microsoft and Facebook.
In a nutshell, the startup aims to help companies run AI model inference faster, cheaper and more simply. Speed matters for LLMs for a number of reasons: fast inference can lower operational expenses, especially in cloud-based environments where computing resources are billed by usage.
“Instead of spending huge amounts of money and time building, optimizing and operating complex infrastructure for AI inference, they can use our optimized GPU platform to deploy and scale AI inference,” Chun claims.
By the numbers
Capstone Partners led FriendliAI’s raise, which also included participation from new backers Sierra Ventures, Alumni Ventures, KDB Investment and KB Securities.
The startup had raised a $6 million seed round in late 2021, also led by Capstone. While the company declined to reveal its valuation, Chun noted that it is up from FriendliAI’s last raise.
FriendliAI’s technology has gained significant traction by cutting GPU costs by up to 90% while delivering what Chun claims is “the fastest LLM inference performance on the market.”
Declining to reveal hard revenue figures, Chun said FriendliAI has seen rapid growth in both usage and revenue so far in 2025, driven by the accelerating adoption of generative AI in production. He expects revenue to be 6x to 7x higher than in 2024.
“While we are not yet profitable, our priority has been scaling efficiently, ensuring that gross margins remain strong even as we expand capacity,” he told Crunchbase News in an interview.
Diverse customer base
FriendliAI works with AI-driven companies ranging from startups to large enterprises. For example, it recently established a partnership with LG Electronics. FriendliAI is also the only API provider of EXAONE, LG AI Research’s foundation model.
The company has around 25 to 30 large clients. One customer is Scatter Lab, a social chatbot company in Korea, which Chun claims achieved a significant reduction in infrastructure costs by leveraging FriendliAI’s platform to run multiple LLMs.
The startup’s revenue model is usage-based pricing for inference. Usage is measured in two ways: by GPU hours consumed on its platform, or by the number of tokens or image steps processed.
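As a back-of-the-envelope illustration of the two metering modes, a bill could be computed as in the sketch below. The rates are invented for the example and are not FriendliAI’s actual prices.

    # Toy illustration of usage-based metering; all rates are hypothetical.
    GPU_HOUR_RATE = 2.50      # assumed dollars per GPU hour
    PER_TOKEN_RATE = 0.20e-6  # assumed dollars per token processed

    def gpu_hour_bill(gpu_hours: float) -> float:
        # Metering mode 1: charge for GPU hours consumed on the platform.
        return gpu_hours * GPU_HOUR_RATE

    def token_bill(tokens: int) -> float:
        # Metering mode 2: charge per token (or image step) processed.
        return tokens * PER_TOKEN_RATE

    print(f"100 GPU hours: ${gpu_hour_bill(100):.2f}")
    print(f"50M tokens:    ${token_bill(50_000_000):.2f}")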
Customers can choose from three product options, depending on their needs: dedicated endpoints, which allocate GPUs exclusively to a client; serverless endpoints, which provide APIs to popular AI models; or containers, which run directly within the client’s own infrastructure.
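For a concrete sense of the serverless option, a minimal sketch follows. It assumes an OpenAI-compatible chat API; the base URL, environment variable and model ID are illustrative guesses, not confirmed details of FriendliAI’s service.

    # Hypothetical example of querying a serverless inference endpoint.
    # The base URL, token variable and model ID below are assumptions for
    # illustration only -- consult the provider's docs for real values.
    import os
    from openai import OpenAI  # pip install openai

    client = OpenAI(
        base_url="https://api.friendli.ai/serverless/v1",  # assumed endpoint
        api_key=os.environ["FRIENDLI_TOKEN"],              # assumed env var
    )

    response = client.chat.completions.create(
        model="meta-llama-3.1-8b-instruct",  # illustrative model ID
        messages=[{"role": "user", "content": "Explain AI inference in one sentence."}],
    )
    print(response.choices[0].message.content)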
Chun said the company invented “continuous batching,” a pioneering technique in the field of LLM inference — the technology behind chatbots such as ChatGPT.
“LLM inference traffic is very dynamic and uneven. Requests come in at irregular times, and they don’t all take the same amount of time to finish. That’s why the traditional method isn’t enough,” he told Crunchbase News. “We invented continuous batching to solve this problem. With continuous batching, you can dynamically add new requests to the batch or remove completed ones from the batch at a fine-grained level, keeping the batch efficiently packed.”
This process, he added, allows the system to maintain high GPU utilization even when traffic is dynamic and uneven.
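To make the idea concrete, here is a toy simulation of continuous batching — an illustration of the concept, not FriendliAI’s engine. Requests join the batch the moment a slot opens and leave as soon as they finish, one token-generation step at a time; the batch capacity and request lengths are made-up values.

    # Toy simulation of continuous batching at per-token granularity.
    import collections

    MAX_BATCH = 4  # assumed batch capacity for the example

    def decode_step(request):
        # Stand-in for one token-generation step on the GPU.
        request["remaining"] -= 1

    def continuous_batching(incoming):
        waiting = collections.deque(incoming)
        active = []
        steps = 0
        while waiting or active:
            # Admit new requests as soon as slots free up, instead of
            # waiting for the whole batch to drain as static batching would.
            while waiting and len(active) < MAX_BATCH:
                active.append(waiting.popleft())
            for request in active:
                decode_step(request)
            # Retire finished requests immediately so slots can be reused.
            active = [r for r in active if r["remaining"] > 0]
            steps += 1
        return steps

    # Six requests needing different output lengths (made-up numbers).
    requests = [{"id": i, "remaining": n} for i, n in enumerate([3, 9, 2, 6, 4, 8])]
    print("decode steps:", continuous_batching(requests))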
Proprietary inference engine
In Chun’s view, FriendliAI stands out in an increasingly competitive landscape because it specializes in AI inference. Competitors include Fireworks AI.
FriendliAI’s platform is powered by a proprietary inference engine that applies deep algorithmic and system-level optimizations on GPUs, which he believes enables models to run at significantly lower cost and higher speed.
“We offer the broadest model coverage in the industry,” Chun told Crunchbase News, noting that FriendliAI supports both open source and custom models, with more than 420,000 models directly deployable from Hugging Face.
To Eun-gang Song, partner at Capstone Partners, FriendliAI has demonstrated “exceptional technical innovation” in the AI inference space.
“Their platform’s ability to deliver superior performance, while reducing costs, makes them an ideal partner for enterprises scaling their AI operations,” he said in a written statement. “We’re thrilled to lead this round.”
Illustration: Dom Guzman