Verdict
Intel’s really played a blinder on this one. Although its latest Core Ultra series has certainly had a few snags since it debuted, with the B580, its GPU line has absolutely come out swinging. Intel’s second generation of budget GPUs has really ramped up the pressure on the competition. With average frame rates in our testing topping out at 61.63 fps at 1080p and 44.21 fps at 1440p, at its retail price it’s easily one of the best-value graphics cards out there.
Pros:
- Exceptional value for money
- Twin-slot, quiet design
- Impressive AI showing

Cons:
- In high demand
- Limited AIB options
Key Features
- Battlemage Architecture: Intel’s second generation of GPUs packs in some impressive tech to really give it an edge. Built on TSMC’s N5 process, it weighs in at 2,560 cores, running at an outstanding 2,850 MHz under load.
- 12GB GDDR6: It’s not exactly the fastest memory on the block nor the largest capacity, but it doesn’t need to be. With 12GB, the B580 is well equipped for both 1080p and 1440p gaming, although you’re going to struggle at higher resolutions.
- Paradigm Design Shift: Fewer cores and a smaller GPU don’t hold back the B580 compared to last gen, as Intel shifts how it designs its graphics platforms, favouring latency over brute force.
Introduction
Well, well, well, thank you to Intel and thank you to the B580 GPU. It’s about time we finally had a bit of competition back in the budget graphics card scene.
For too long, the battle for value supremacy has been hidden away under lock and key in AMD’s ironclad fortress.
As Nvidia focuses solely on the high-end and mid-range, the fight for the lower end of the spectrum has felt a little sterile. GPU prices haven’t fallen for some time, and although the performance jumps have been reasonable year-on-year, there hasn’t been a huge amount for us to celebrate when it comes to entry-level PC gaming.
Intel’s Arc B580, though, has redefined what it means to be a budget graphics card. Compared to AMD’s RX 7600 XT, it delivers exceptional price-to-performance combined with some stellar AI clout, even beating AMD’s own offerings while utilising FSR. I can happily say, without even a second thought, that this is one of the best graphics cards I’ve seen in a long time.
Specs
I had the pleasure of reviewing both the Arc A750 and the A770 last year. Both cards were curious additions to Intel’s lineup, launching initially at the tail end of 2022. It was Intel’s first outing into the world of dedicated gaming graphics cards, and I was nothing if not impressed. At least with the Intel Arc A750.
For the price, it was one of the best-value GPUs I’d seen for some time, and although not perfect, it gave AMD’s 7000 series a run for its money. The way Intel was building out its graphics card lineup, with a modular design comprised of its own packaged Xe cores, complete with traditional GPU hardware alongside modern ray tracing units, tensor cores, execution units, and render slices, gave me real hope that it was on the right path to building out excellence in that field. It looks like that’s now come to fruition.
| Specification | Intel Arc B580 |
| --- | --- |
| GPU | BMG-G21 |
| Interface | PCIe 4.0 x8 |
| Die size | 272 mm² |
| Lithography | TSMC N5 |
| Transistors | 19.6 billion |
| Cores | 2,560 |
| Boost clock (advertised) | 2,670 MHz |
| Boost clock (recorded) | 2,850 MHz |
| Memory | 12GB GDDR6 |
| Memory bus | 192-bit |
| Memory bandwidth | 456 GB/s |
| TGP | 190 W |
The B580 is the natural successor to the A580, which launched in October 2023. Rather interestingly, on the surface at least, it looks like the GPU has had a downgrade from last gen rather than an increase in overall performance, certainly if you only glance at the hardware count. But a lot has changed here. Let me explain.
Despite a slight price bump, the B580 features fewer Xe cores, fewer shader cores, and generally a lot less hardware across the board (bar a slight increase in memory capacity, albeit with less memory bandwidth overall). That’s even more curious given Intel’s use of TSMC’s far denser N5 manufacturing process. So less for more, then? Well, not quite.
Although yes, it is true that the B580 only has 20 Xe cores vs. the A580’s 24, along with significantly less internal hardware and slightly less memory bandwidth, the way these Xe units have been designed is radically different from that of its predecessor. Yes, there are fewer of them, but the internal vector engines have effectively doubled in overall size, increasing from 256-bit engines all the way up to 512-bit engines.

The same goes for the XMX engines, which have also doubled in size. What this effectively means is an architecture that straddles the line between monolithic and modular design styles, significantly reducing cross-core latency while dramatically increasing bandwidth within each Xe core.
If a task requires extensive calculation, for instance, Intel can now allocate it directly to a single core unit rather than splitting it across multiple cores and reassembling the completed calculations afterwards. Effectively, the cores are far more efficient than their A-series predecessors, and we’re seeing a big uptick in performance as a result, with less hardware. The perfect blend of modularity and single-die design.
Test Setup
Over the last few months, I’ve redesigned our GPU testing methodology from the ground up. I wanted to ensure that with this newest generation of cards, we were testing with commercially available drivers and some of the latest titles, software, and programs that could really push these GPUs to the absolute limit.
I’m using a mixture of in-game benchmarks and synthetic programs to drive that stress, along with a few tests thrown in to understand how these cards operate under AI workloads as well, particularly important given how prominent that has become in recent months.
Throughout my benchmarking runs, I constantly monitor complete system power draw via a wall-plug power meter, and temperatures via both HWMonitor and an ambient room thermometer, to get a really good feel for how these little beauties handle the heat.
For gaming, I’m testing five titles (Cyberpunk 2077, Black Myth Wukong, Metro Exodus: Enhanced Edition, Final Fantasy XIV: Dawntrail, and Total War: Warhammer 3) across all three of the primary resolutions, with minimum and average frame rates recorded.


Each test at each resolution is performed at least three times, and an average is taken from those results. This forms the basis of our performance analysis and also provides us with metrics we can use to establish how many fps you’re getting per dollar spent, along with some other intriguing stats and data: vital information, particularly in this graphics-card-sparse era.
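As a rough sketch of how those value metrics fall out of the raw runs (the helper names here are my own, not the actual test harness; the figures are the B580’s 1080p average and $250 RRP quoted in this review):

```python
from statistics import mean

def average_runs(runs: list[float]) -> float:
    """Each benchmark is run at least three times; the mean forms the headline figure."""
    return mean(runs)

def fps_per_dollar(avg_fps: float, price_usd: float) -> float:
    """Frames per second delivered per dollar of retail price."""
    return avg_fps / price_usd

# Hypothetical three-run sample, then the value metric at the $250 RRP
avg = average_runs([61.2, 61.9, 61.8])
print(round(fps_per_dollar(61.63, 250.0), 3))  # ~0.247 fps per dollar
```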
For synthetic tests, I’m utilising 3D Mark’s Steel Nomad and Speed Way tests, along with UL’s Procyon benchmark suite for its AI Computer Vision inference test, plus AI Image Generation as well, to really round out the suite.
As for AI upscaling, for this we’re using either DLSS or FSR, depending on the GPU, with Black Myth Wukong and Cyberpunk 2077 being our main titles for that, tested at 4K, with runs done with just DLSS or FSR and then a second set with frame generation active as well. AI upscaling quality is set to “quality” or 67 in Black Myth Wukong.
- CPU: AMD Ryzen 9 9900X
- RAM: 32GB (2x16GB) Team Group T-Create Expert DDR5 @ 6000 C34
- Motherboard: ASUS ROG Strix X870E-E Gaming WiFi
- CPU Cooler: Tryx Panorama 360mm AIO
- Cooling: 10x NZXT RGB Duo 120mm fans
- PSU: 1500W NZXT C1500 2024 80+ Platinum PSU
- SSD 1: 2TB Crucial T705 PCIe 5.0 M.2 SSD
- SSD 2: 4TB Crucial T500 PCIe 4.0 M.2 SSD
- Case: NZXT H9 Elite
As for the hardware picks, given how tight gaming performance is currently between both Intel and AMD chips, I’ve actually opted to go for the AMD Ryzen 9 9900X. It might not be the ultimate, perfect, best gaming CPU of all time, but it’s more than potent enough to avoid any major bottlenecks.
Yes, technically an X3D chip might have been faster in some titles, but as the gains from 3D V-Cache vary so much from game to game, I opted instead for the stock X variant, leaning on its higher clock speeds and greater multi-threaded performance.
Gaming Performance
- Solid 1080p and 1440p performance
- Price ratio really matters
- Leaves AMD in the dust for RT titles
At this point, you’ve probably already guessed it, but the B580 is just dominant at 1080p. Of course, something like the RTX 5080 is going to smoke it, no doubt there, but for the $250 outlay, what you’re getting is absolutely staggering.
In Cyberpunk 2077, at 1080p on the Ultra preset with ray tracing and no FSR, the B580 scored an outstanding 39 fps on average, versus the 7600 XT’s 25.7 fps. That’s a huge 52% improvement, and it’s even nipping at the heels of the RX 7800 XT, which scored only 44.8 fps in that exact same test.
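For clarity, that uplift figure is simply the relative difference between the two recorded averages (a quick sanity check on the arithmetic, not part of the test suite):

```python
def pct_uplift(new_fps: float, old_fps: float) -> float:
    """Relative improvement of one average frame rate over another, in percent."""
    return (new_fps - old_fps) / old_fps * 100

# B580 vs RX 7600 XT, Cyberpunk 2077 1080p Ultra RT averages from this review
print(round(pct_uplift(39.0, 25.7), 1))  # ~51.8, rounding to the 52% quoted
```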
In less RTX-heavy titles, the scores even out a touch. Black Myth Wukong: the B580 managed 28 fps to the 7600 XT’s 29; Total War: Warhammer 3 saw 66 fps versus 67.9 on the 7600 XT; and Final Fantasy XIV: Dawntrail equally landed at 122.6 fps versus 125.2 fps.
Similar to Cyberpunk, Metro Exodus, with its ray tracing enabled, scored 52.5 fps versus the 7600 XT’s 43.5. Just really solid across the board, clearly showing how much Intel’s managed to improve on its ray tracing performance gen-on-gen.
That, of course, doesn’t sound that impressive, but given there’s an RRP difference of about $80 between the two, it’s nothing to be sniffed at.
1440p follows a similar trend, with the ray-traced titles taking a fairly healthy lead and the traditional rasterised titles level pegging between the two. Ultimately, the Arc B580 is a far more well-rounded card than anything AMD has at the low end right now. That plays out in the 3DMark scores as well, with Speed Way landing 400 points higher and Steel Nomad sliding in with an additional 600 points.
Ray Tracing, DLSS & AI Performance
- Cyberpunk 2077 at 4K was playable with frame generation and FSR 3 enabled
As we’ve already well established, Intel’s B580 Battlemage architecture is far more potent in ray tracing operations than anything AMD can currently muster on the budget front.
Although it might be a bit cruel to pit these cards up against each other in two very aggressive graphical titles at 4K, it’s nonetheless eye-opening, to say the least. Particularly as both cards use FSR for these results.
In fact, Cyberpunk 2077 at 4K was practically playable with frame generation and FSR 3 enabled and set to quality, with a 39.9 average framerate versus the 7600 XT’s 26.2.
What was really dominant though, was the Black Myth Wukong showing. Now, in these tests, unlike the main-game testing, we do enable ray tracing fully to really push the limits of the cards.
With just FSR enabled, the 7600 XT managed 3 fps, and the B580 7 fps. With frame generation on as well, the 7600 XT doubled its score to 6, and the B580 managed, oh yeah, 36 fps. So impressive, I had to re-test it several times to confirm.
Power Consumption & Temperature
- Not the coolest, but still impressive
As for temperatures and power, everything was well within parameters, even with the crazy test-bed setup I had running; max total system power draw topped out at 466.4W for the entire benchmarking run, and max temperature hit a cozy 82.0°C.


Alright, that’s not the coolest in the world, but it’s comfortable enough that you shouldn’t run into any major problems, even in some more airflow-restricted chassis.
Should you buy it?
You’re looking for the ultimate budget GPU
This is it. Right now the B580 is the card to beat on the budget front. With an outstanding price-to-performance ratio, exceptional ray tracing prowess, and dominant average fps at 1080p and 1440p, it’s very hard to justify buying anything else, particularly at $250.
You want to game above 1440p comfortably
It’s good, but it’s not that good. If you’re looking for something with the legs for 1440p and 4K, Nvidia still holds the crown on that front, and its latest DLSS 4 is leaps ahead of both FSR and Intel XeSS for the time being.
Final Thoughts
Well, folks, there you have it: Intel’s B580 in all its glory. In short, if you can find one in stock and are looking for an exceptional 1080p or 1440p graphics card, right now this is the one to get. What Intel has achieved by re-aligning its graphics card architecture strategy is nothing short of incredible. It’s utilising less hardware and less space, charging more money, and yet the B580 still absolutely delivers and then some.
If this is a sign of things to come for Intel’s GPU line, the future is looking really bright, at least as long as current political affairs don’t pull the rug out from underneath it.
How we test
Every graphics card we get in for testing is put through the exact same rigorous procedure as its competitors. Each card is benchmarked through a number of both in-game and synthetic benchmarks, with power draw and temperature monitored throughout the process. We collect a total of 67 data points per graphics card, across all manner of titles and tests, benchmarking every mainstream resolution while also considering price-to-performance alongside AI performance too.