After a few hectic months, marked by rumors that its sale to Intel, now its ally, was being prepared and by the closing of a financing round in which it has raised 350 million dollars, SambaNova has its SN50 chip ready, with which it promises drastic improvements in AI inference performance and aims to stand up to NVIDIA.
In the financing round that has just closed, SambaNova has had strong participation from Intel Capital, the investment division of Intel, a company with which it will collaborate on the development of new high-performance AI inference systems.
In addition to Intel, the financing round that SambaNova has closed attracted a large number of investors, which highlights the interest in its chips for AI work. Among them are Assam Ventures, Battery Ventures, Gulf Development Public Company Limited, Mayfield Capital, QIA, Saudi First Data, Seligman Ventures, T. Rowe Price, &E, 8Square, Atlantic Bridge, BlackRock, GV, Nepenthe, Nuri Capital and Redline Capital.
The objective of the collaboration between SambaNova and Intel is to offer companies an alternative to GPUs, which currently support the majority of AI-related workloads. In fact, Intel’s investment will be used, among other things, to accelerate the deployment of a new AI cloud, which will be powered by Intel and based on the existing SambaNova Cloud platform. Intel will integrate its Xeon CPUs into SambaNova Cloud, with the aim of making it easier for the company to build a more efficient, optimized infrastructure for large multimodal language models.
One of the main differentiators of SambaNova chips, such as the SN50, is their energy efficiency, as they can apparently generate more tokens per kilowatt hour than similar processors from their rivals.
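To make that metric concrete, here is a minimal sketch, in Python, of how tokens per kilowatt-hour relates throughput to power draw; the function name and the sample figures are illustrative assumptions, not published SN50 numbers.

```python
# Illustrative only: tokens-per-kWh expressed as a throughput/power ratio.
# The sample figures below are assumptions, not published SN50 specifications.
def tokens_per_kwh(tokens_per_second: float, power_watts: float) -> float:
    """Tokens generated per kilowatt-hour of energy consumed."""
    tokens_per_hour = tokens_per_second * 3_600
    kilowatts = power_watts / 1_000
    return tokens_per_hour / kilowatts

# Example: a system sustaining 5,000 tokens/s while drawing 10 kW
print(f"{tokens_per_kwh(5_000, 10_000):,.0f} tokens/kWh")  # 1,800,000
```

Under this framing, a chip that produces more tokens for the same power budget directly lowers the energy cost of each generated token, which is the comparison SambaNova is drawing against its rivals.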
The SambaNova SN50 is optimized to process large data sets quickly and perform complex calculations. Its creators claim that its computing power is five times greater than that of other chips, and that it offers greater network bandwidth than its predecessor, the SN40.
It will allow up to 256 accelerators to be connected through a high-speed interconnect of several terabits per second, making it possible to work with larger AI models and broader contexts, and to do so with superior performance and responsiveness, all without increasing compute costs, according to SambaNova.
It is based on a three-tier architecture that can work with AI models of up to 10 billion parameters and context lengths of up to 10 million tokens. This makes deeper reasoning possible, and the systems that integrate it become more “intelligent.”
Its resident multi-model memory and agentic caching capabilities optimize energy efficiency, lowering its cost per token. All of this means it is intended, among other applications, for AI voice assistants, which require very low latency in order to operate in real time. In addition, SambaNova claims that it can serve several thousand sessions at the same time.
When combined with Intel Xeon CPUs in the SambaNova cloud, platform tasks can be distributed more efficiently, improving the speed and performance of AI workloads. In addition, Intel has said that it will be able to accelerate the expansion of the SambaNova cloud through reference architectures, among other things, as well as through its relationships with software providers and system integrators.
When the cloud is ready, Intel and SambaNova will market it jointly, taking advantage of Intel’s business relationships. There are advantages for both parties: SambaNova gains access to Intel’s business relationships, built over decades, while Intel gains access to high-powered AI chips, an area in which it does not have much of a presence.
