MatX Inc., a chip startup led by former Google LLC engineers, has reportedly raised $80 million in fresh funding.
News reports today cited sources as saying that Anthropic PBC backer Spark Capital led the investment. The raise, described as a Series B round, reportedly values MatX at more than $300 million. The milestone comes a few months after the company raised $25 million in initial funding from a group of prominent investors.
MatX was founded in 2022 by Chief Executive Officer Reiner Pope and Chief Technology Officer Mike Gunter. The duo helped develop Google's TPU line of artificial intelligence processors and worked on other machine learning projects during their stints at the search giant.
The company is developing chips for training AI models and performing inference, or the task of running a neural network in production after it’s trained. Customers will get the ability to build machine learning clusters that contain hundreds of thousands of its chips. MatX estimates that such clusters will be capable of powering AI models with millions of simultaneous users.
The company's website states that it's prioritizing cost-efficiency over latency in its chips' design. Nevertheless, MatX expects the processors to be "competitive" on latency as well. For AI models with 70 billion parameters, it's promising latencies of less than one-hundredth of a second per token.
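That claim is straightforward to translate into throughput terms. A quick back-of-the-envelope check (using the claim's upper bound, not a MatX specification):

```python
# "Less than one-hundredth of a second per token" implies, per generation stream:
latency_per_token_s = 1 / 100            # 10 ms per token, the upper bound of the claim
tokens_per_second = 1 / latency_per_token_s
print(tokens_per_second)                 # => 100.0 tokens per second, per user
```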
MatX plans to give customers "low-level control over the hardware." Several existing AI processors, including Nvidia Corp.'s market-leading graphics cards, provide similar capabilities. They allow developers to modify how computations are carried out in ways that improve the performance of AI models.
Nvidia’s chips, for example, provide low-level controls that make it easier to implement operator fusion. This is a machine learning technique that reduces the number of times an AI model must move data to and from a graphics card’s memory. Such data transfers incur processing delays, which means that lowering their frequency speeds up calculations.
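As a rough illustration of the idea (using PyTorch here, not MatX's unannounced software stack), `torch.compile` can fuse a matrix multiplication's bias-add and activation into a single kernel, so intermediate tensors stay on-chip instead of round-tripping through memory:

```python
import torch

def mlp_block(x, w, b):
    # Three logical operations: matmul, bias add, ReLU.
    return torch.relu(x @ w + b)

# In eager mode, each operation can launch its own kernel and materialize
# an intermediate tensor in memory between steps. torch.compile hands the
# graph to a compiler that can fuse the bias add and ReLU into the matmul's
# epilogue, cutting the number of memory round trips.
fused_block = torch.compile(mlp_block)

x = torch.randn(1024, 4096)
w = torch.randn(4096, 4096)
b = torch.randn(4096)
y = fused_block(x, w, b)  # same result, fewer passes over memory
```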
MatX says that another contributor to its chips' performance is their no-frills architecture. The processors lack some of the components included in standard GPUs, which frees up room for more AI-optimized circuits.
Earlier this year, MatX told Bloomberg that its chips will be at least 10 times better at running large language models than Nvidia silicon. The company further claims that AI clusters powered by its silicon will be capable of running LLMs with 10 trillion parameters. For smaller models with about 70 billion parameters, MatX is promising training times of a few days or weeks.
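For a sense of scale, a back-of-the-envelope estimate (assuming 16-bit weights, which is my assumption rather than a MatX figure) shows why models of that size demand large multi-chip clusters:

```python
# Rough weight-memory estimate for a 10-trillion-parameter model,
# assuming 16-bit (2-byte) weights; not a MatX specification.
params = 10e12
bytes_per_param = 2
weight_terabytes = params * bytes_per_param / 1e12
print(f"~{weight_terabytes:.0f} TB of weights")  # ~20 TB, far beyond any single chip's memory
```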
The company expects to complete the development of its first product next year.