Intel Corp. today announced a new artificial intelligence processor aimed at inference workloads, as it renews its efforts to break into the lucrative market for AI accelerators.
The new graphics processing unit, codenamed Crescent Island, is optimized for energy efficiency and will be able to support a wide range of AI applications, said Intel Chief Technology Officer Sachin Katti. His comments came at the Open Compute Summit today, where he announced the new chip.
“It emphasizes that focus that I talked about earlier, inference, optimized for AI, optimized for delivering the best token economics out there, the best performance per dollar out there,” Katti said.
The Crescent Island GPU is based on Intel’s new Xe3P architecture, which the company announced just one week earlier at a special event where it also revealed its upcoming Panther Lake Intel Core Ultra Series 3 processors for laptops. Xe3P is an upgrade over Intel’s existing Xe3 architecture, optimized for both power and cost, designed for air-cooled data centers and targeted at AI inference workloads, the company said. It added that the architecture will support a broad range of data types, making it ideal for “token-as-a-service” providers and inference applications.
The new GPU will feature a massive 160 gigabytes of memory capacity, but interestingly, Intel has decided to base this on the LPDDR5X standard rather than the higher-end high-bandwidth memory, or HBM, that’s used in the GPUs of rivals such as Nvidia Corp. and Advanced Micro Devices Inc.
Those rivals currently employ HBM3E in their latest chips, and are already talking about upgrading to HBM4 for their next-generation processors, such as Vera Rubin and the MI400. But Intel’s choice reflects the difficulties those companies have had in sourcing HBM chips, which have been in short supply due to surging demand. HBM prices have also skyrocketed in recent months. By using LPDDR5X memory in its new GPUs instead, Intel may well gain an edge in terms of cost efficiency.
Intel has not yet released detailed specifications, but said it’s targeting customer sampling of Crescent Island in the second half of 2026, so we can expect more details to emerge in the coming months.
What is clear now is that Crescent Island is going to be a key part of Intel’s renewed attempt to capitalize on an AI frenzy that has already generated billions of dollars in revenue for Nvidia and other chipmakers. The company missed the boat on generative AI and has fallen far behind its competitors, and it faces a huge struggle to capture a meaningful share of the market for AI accelerators. But it’s not an opportunity it can afford to ignore.
Indeed, Intel Chief Executive Lip-Bu Tan, who took over the chipmaker earlier this year, has vowed to revitalize its fortunes in the AI market. He has already taken some drastic steps, effectively mothballing the company’s earlier efforts, such as its Gaudi AI accelerators.
At the OCP Summit, Katti said Intel plans to release new versions of the GPUs every year, matching the annual release cadence of Nvidia and AMD, as well as cloud infrastructure providers such as Amazon Web Services Inc. and Google LLC, which design their own processors for AI.
Intel faces a tough task in dislodging Nvidia from its dominant position in the AI chip market, but Katti said the company will focus its efforts on creating chips specifically for AI inference, meaning the silicon that runs AI models in production, as opposed to the silicon used to train them. “Instead of trying to build for every workload out there, our focus is increasingly going to be on inference,” he stated.
Holger Mueller of Constellation Research Inc. said it’s encouraging to see Intel making a renewed effort to bring its processing expertise to the AI market. “It’s never too late to make your mark on the AI world, and Intel will look to show that’s true when it launches Crescent Island next year,” the analyst said. “Enterprises want to see more competition in the GPU market and that’s what Intel is bringing, though of course there will be a healthy dose of skepticism about what its new chips can do.”
Interestingly, Nvidia itself could yet help boost Intel’s fortunes in the AI market, since it now has a vested interest in doing so after investing $5 billion in its ailing rival last month. That deal saw Nvidia take a roughly 4% stake in Intel, making it one of the chipmaker’s largest shareholders alongside the U.S. government. The two companies have also agreed on a partnership that will see them co-develop chips for data center servers and personal computers.
Speaking about that deal, Katti said it will help ensure that Intel’s central processing units are installed in every AI server and PC that’s sold in the future.
Image: Intel