A specialist artificial intelligence chipmaker called EnCharge AI Inc. said today it has closed a Series B funding round of more than $100 million, bringing its total raised to date to more than $144 million.
The company says it will use the funding to accelerate the commercialization of its first client-computing AI accelerators, which are expected to launch later this year.
The latest round was backed by a host of investors, with Tiger Global leading the way and other new investors such as Maverick Silicon, Capital TEN, SIP Global Partners, Zero Infinity Partners, CTBC VC, Vanderbilt University and Morgan Creek Digital also participating. Previous investors such as RTX Ventures, Anzu Partners, Scout Ventures, AlleyCorp, ACVC and S5V joined the round too, as did the likes of Samsung Ventures and HH-CTBC, which focus specifically on the semiconductor industry. Another investor was In-Q-Tel, which specializes in backing startups that develop technologies for the U.S. national security community.
EnCharge is looking to change the status quo, in which the vast majority of AI inference computation is performed by enormous clusters of extremely powerful and energy-intensive graphics processing units in cloud data centers. The startup believes this is unsustainable, both environmentally and economically, and argues that moving many of these cloud-based workloads onto local devices would deliver significant advantages, including stronger security and lower latency.
Founded by a team of engineering Ph.D.s and incubated at Princeton University, the startup has created powerful analog in-memory-computing AI chips that it says will dramatically reduce the energy requirements for many AI workloads. Its technology has been in development for more than eight years. It’s essentially a highly programmable application-specific integrated circuit that features a novel approach to memory management.
In a 2022 interview with News, EnCharge co-founder and Chief Executive Naveen Verma explained that the chips use “charge-based memory.” The approach differs from traditional memory design in that it reads data from the electrical current on a memory plane rather than from individual bit cells, which enables the use of precise capacitors in place of less precise semiconductor devices.
This is what enables EnCharge’s chips to deliver enormous efficiency gains during data reduction operations involving matrix multiplication, Verma explained.
“Instead of communicating individual bits, you communicate the result,” Verma said. “You can do that by adding up the currents of all the bit cells, but that’s noisy and messy. Or you can do that accumulation using the charge. That lets you move away from semiconductors to very robust and scalable capacitors. That operation can now be done very precisely.”
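To make the charge-accumulation idea concrete, here is a minimal numerical sketch, not EnCharge’s actual design: it contrasts a conventional digital multiply-accumulate, which reads and processes every bit cell individually, with a charge-domain version in which the per-cell contributions are summed in a single step, as if accumulated on a capacitor. The array sizes and noise term are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of computing y = W @ x two ways.
# Digital: every weight bit is read out and multiplied individually.
# Charge-domain in-memory: each bit cell deposits charge on a capacitor,
# and the column's total charge *is* the accumulated result, read once.

rng = np.random.default_rng(0)
W = rng.integers(-8, 8, size=(4, 16))   # weights stored in the memory array
x = rng.integers(0, 2, size=16)         # input activations driven onto rows

# Digital baseline: one read + one multiply-accumulate per bit cell.
y_digital = sum(W[:, j] * x[j] for j in range(16))

# Charge-domain analog: each cell contributes charge proportional to
# weight * activation; summation happens "for free" on the capacitor.
# A small noise term stands in for analog non-idealities.
charge = W * x                                  # per-cell charge deposit
noise = rng.normal(0, 0.05, size=W.shape[0])    # capacitor/readout noise
y_analog = charge.sum(axis=1) + noise           # one readout per output

print(y_digital)               # exact digital result
print(np.round(y_analog, 2))   # analog result, accumulated in charge
```

The point of the sketch is the readout count: the digital path touches every cell, while the charge-domain path produces one value per output column, which is where the claimed efficiency comes from.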
The increased efficiency means that EnCharge’s chips require, by the company’s account, up to 20 times less energy than typical GPUs. They’re also highly versatile: EnCharge has built an entire suite of software tools that developers can use to optimize the chips for efficiency, performance and fidelity.
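EnCharge hasn’t publicly detailed this toolchain, but the trade-off it manages can be sketched in a few lines of Python: quantizing weights to the lower precisions an analog array favors, then measuring the fidelity cost against a full-precision reference. The quantize helper and bit widths below are hypothetical, not part of EnCharge’s software.

```python
import numpy as np

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    """Uniformly quantize weights to a signed grid with 2**(bits-1)-1 levels."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale) * scale

rng = np.random.default_rng(1)
W = rng.normal(size=(64, 64))   # full-precision weights
x = rng.normal(size=64)         # an input vector
reference = W @ x               # full-precision result

# Lower bit widths cut energy and area but cost fidelity.
for bits in (8, 6, 4):
    y = quantize(W, bits) @ x
    err = np.linalg.norm(y - reference) / np.linalg.norm(reference)
    print(f"{bits}-bit weights: relative error {err:.4f}")
```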
The chips are mounted on cards that plug directly into a PCIe interface, which means they can be fitted into a wide range of devices and machines. They can even be used in tandem with GPUs, the company says.
Holger Mueller of Constellation Research Inc. said it’s nice to see EnCharge targeting improvements in the runtime cost of AI, as most of the cost-cutting efforts in the industry thus far have been focused on the costs of training AI models. There’s also a need to address challenges posed by AI at the edge, which faces power constraints, he said.
“EnCharge is doing things differently from traditional AI runtime architectures, and its approach looks to be promising,” Mueller noted. “The question will be how well can existing models run on its silicon? If they don’t run so well, then retraining efforts will push up the total cost of ownership and potentially negate any gains it delivers.”
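Mueller’s break-even point can be illustrated with some back-of-the-envelope arithmetic. Every figure below is made up for the sake of illustration; only the 20-times efficiency claim comes from the company.

```python
# Hypothetical TCO break-even: energy savings only pay off if any
# one-time retraining/porting cost is amortized quickly enough.

gpu_energy_cost_per_year = 100_000   # hypothetical annual inference energy bill ($)
efficiency_gain = 20                 # company-claimed energy reduction factor
retraining_cost = 250_000            # hypothetical one-time porting/retraining ($)

annual_savings = gpu_energy_cost_per_year * (1 - 1 / efficiency_gain)
breakeven_years = retraining_cost / annual_savings

print(f"Annual energy savings: ${annual_savings:,.0f}")       # $95,000
print(f"Break-even after retraining: {breakeven_years:.1f} years")  # ~2.6 years
```

If models run well on the silicon without retraining, the retraining cost drops toward zero and the savings accrue immediately; if they don’t, the payback period stretches, which is exactly the risk Mueller flags.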
EnCharge’s efficiency gains have gotten a lot of attention from companies in the defense and aerospace industries, said RTX Ventures Managing Director Dan Ateya. “EnCharge’s analog in-memory architecture can be transformative for defense and aerospace use cases, where size, weight and power constraints limit how AI is deployed today,” he said.
In addition, its technology should appeal to the broader AI industry at a time when customers are becoming increasingly concerned about the enormous energy demands of the most powerful generative AI applications.
EnCharge’s roadmap calls for the company to move to advanced technology nodes as it prepares to ship a portfolio of analog in-memory chips catering to AI workloads that span the data center to the edge.