Reflection AI Inc., a new startup led by former Google DeepMind researchers, launched today with $130 million in early-stage funding.
The company raised the capital over two rounds. The first, a $25 million seed investment, was led by Sequoia Capital and CRV. CRV then co-led the subsequent $105 million Series A round with Lightspeed Venture Partners.
Reflection AI’s funding rounds drew other high-profile backers as well. Its investors reportedly include Nvidia Corp.’s venture capital arm, LinkedIn co-founder Reid Hoffman and Scale AI Inc. Chief Executive Officer Alexandr Wang. The company is valued at $555 million.
Reflection AI is led by co-founders Misha Laskin (pictured, right) and Ioannis Antonoglou (left). Laskin, the company’s CEO, helped develop the training workflow for Google LLC’s Gemini large language model series. Antonoglou worked on Gemini’s post-training systems. Post-training is the process of optimizing an LLM after its initial training to boost output quality.
Reflection AI is seeking to develop so-called superintelligence, which it defines as an artificial intelligence system capable of performing most work that involves a computer. As an initial step toward that goal, the company is building an autonomous programming tool. Reflection AI believes the technical building blocks necessary to create such a tool can be repurposed to build a superintelligence.
“The breakthroughs needed to build a fully autonomous coding system — like advanced reasoning and iterative self-improvement — extend naturally to broader categories of computer work,” Reflection AI staffers wrote in a blog post.
Initially, the company will focus on developing AI agents that automate relatively narrow programming tasks. Some agents will focus on scanning developers’ code for vulnerabilities. Others will optimize applications’ memory usage and test them for reliability issues.
Reflection AI also plans to automate a number of adjacent tasks. According to the company, its technology can generate documentation that explains how a particular snippet of code works. The software will also help manage the infrastructure on which customer applications run.
According to a job posting on Reflection AI’s website, the company plans to power its software using LLMs and reinforcement learning. Historically, developers trained AI models on datasets in which each data point was accompanied by a label describing the correct output. Reinforcement learning removes the need for such labels: the model instead learns from a reward signal that scores its outputs, which makes it easier to assemble training data.
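The distinction can be illustrated with a toy example. The sketch below is a minimal epsilon-greedy bandit learner, not anything from Reflection AI: the learner never sees a labeled "correct answer," only a numeric reward after each action, and it still converges on the best choice.

```python
import random

def train_bandit(reward_probs, steps=2000, epsilon=0.1, seed=0):
    """Epsilon-greedy learning on a multi-armed bandit.

    Each arm pays a reward of 1 with the given probability. The only
    supervision is that scalar reward -- no labeled examples are needed.
    """
    rng = random.Random(seed)
    n = len(reward_probs)
    values = [0.0] * n  # running estimate of each arm's payoff
    counts = [0] * n
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best estimate so far.
        if rng.random() < epsilon:
            action = rng.randrange(n)
        else:
            action = max(range(n), key=lambda a: values[a])
        reward = 1.0 if rng.random() < reward_probs[action] else 0.0
        counts[action] += 1
        # Incremental average update of the chosen arm's value estimate.
        values[action] += (reward - values[action]) / counts[action]
    return values

values = train_bandit([0.2, 0.8, 0.5])
best = max(range(3), key=lambda a: values[a])
print(best)  # the learner should discover that arm 1 pays off most
```

Training an LLM with reinforcement learning is vastly more involved, but the core idea is the same: feedback arrives as a reward score rather than as per-example labels.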
The job posting also reveals Reflection AI plans to “explore novel architectures” for its AI systems. That suggests the company might look beyond the Transformer neural network architecture that underpins most LLMs. A growing number of LLMs use a competing architecture called Mamba that is more efficient in certain respects.
Another job posting, for an AI infrastructure expert, suggests that Reflection AI plans to train its models using up to tens of thousands of graphics cards. The posting also states that the company will work on “vLLM-like platforms for non-LLM models.” vLLM is a popular open-source inference engine that developers use to serve language models efficiently and reduce their memory usage.
“As the team advances model intelligence to increase its scope of capabilities, Reflection’s agents take on more responsibilities,” Sequoia Capital investors Stephanie Zhan and Charlie Curnin wrote in a blog post. “Imagine autonomous coding agents working tirelessly in the background, handling workloads that slow teams down.”
Photo: Sequoia Capital