I built my PC around two years ago and went with a modest, slightly above-average build. I had always used Nvidia cards, but I decided to give something new a try. I’d seen how well AMD cards (and CPUs) performed, yet I was too afraid to switch from the familiar.
I broke the spell and got an RX 6700 XT 12 GB. It’s a neat card: it had everything I wanted in a GPU back then, and for the price I paid, it was a real steal. I enjoyed this card thoroughly. Games ran like butter. I even played Cyberpunk 2077 with ray tracing enabled, you know.
But that was all until now. Now, I regret my purchase and wish I had gotten an Nvidia card instead. Not because I feared the unfamiliar, but because my AMD card just isn’t enough anymore for what I want to do.
- Brand: XFX
- Cooling Method: Fan
- GPU Speed: 2622 MHz (OC)
- Interface: PCI-Express x16
AMD has great hardware
But flawed software
I believe in my RX 6700 XT. I love it. I’ve run all sorts of video games on it and it has never said no. My first issue, even before I started regretting AMD, was with one of their driver releases (24.12.1). The bug was small but frustrating: the Radeon software wouldn’t auto‑start properly. It would sit in the tray, but double‑clicking it did nothing. I had to kill the task in Task Manager and relaunch it manually to get it working. I lost so many recordings to this. I’d tap my shortcut to capture the last 30 seconds, only to find out the AMD software wasn’t running.
That’s OK. Bugs happen. But what was infuriating was how long it stayed unfixed. I saw no new updates coming through for months. I later discovered that updates had been released, but my install wasn’t fetching them automatically. I had to manually download and do a clean install to resolve it. My only proof for this is the low-quality screenshot below.
Nvidia, by comparison, pushes out driver updates with nearly every prominent game release. AMD doesn’t. That’s fine. But when a glaring bug breaks a core feature, shouldn’t that patch be prioritized? They eventually fixed it, but my problems with AMD didn’t end there. This was just my first taste of the software frustration.
AMD is late to the computing game
Too little, too late
My first real disappointment came when I got into local AI and self‑hosting. It started with trying a local LLM in Obsidian, which led me down the rabbit hole of GPU‑accelerated projects. Even before that, I had noticed the cracks in Blender: rendering options on AMD are noticeably more limited than on Nvidia. Blender works with AMD’s HIP backend, but it’s slower and less optimized; Nvidia’s CUDA and OptiX backends simply do the job better and faster.
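For the curious, this is roughly what picking a Cycles backend looks like from Blender’s Python console. It’s a minimal sketch (exact preference names can shift between Blender versions): a Radeon card only gets the HIP path, while Nvidia owners can pick CUDA or the RT‑core‑accelerated OptiX.

```python
import bpy

# Minimal sketch: choosing a Cycles GPU backend via Blender's Python API.
# "HIP" is the only GPU option on a Radeon card; an RTX owner could set
# "CUDA" or the usually faster "OPTIX" here instead.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "HIP"
prefs.get_devices()  # refresh the detected device list

for device in prefs.devices:
    device.use = True  # enable every detected device

bpy.context.scene.cycles.device = "GPU"
```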
But ever since the local LLM integration with Obsidian, I’ve been playing with more models. I set up Whisper to run locally on my own computer, and guess what? I can’t use my GPU for it. The only way to run the model with hardware acceleration is through Linux/WSL with ROCm. I set up WSL and installed ROCm, only to realize my card isn’t supported by the full SDK. In many real use cases you’d get better compute results with a weaker Nvidia card, simply because CUDA has been around longer.
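To show what “not supported by the full SDK” means in practice, here’s the kind of check I ran under WSL with the ROCm build of PyTorch. This is a sketch: it assumes the commonly cited HSA_OVERRIDE_GFX_VERSION workaround for gfx1031 cards like mine, and the audio file name is just a placeholder.

```python
# Sketch: running Whisper on the ROCm build of PyTorch (Linux/WSL only).
# Often launched with the unofficial workaround for gfx1031 cards:
#   HSA_OVERRIDE_GFX_VERSION=10.3.0 python transcribe.py
import torch
import whisper  # pip install openai-whisper

# ROCm builds of PyTorch reuse the CUDA API surface, so "cuda" here
# means the Radeon card; torch.version.hip is a version string on ROCm
# and None on a regular CUDA build.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"hip={torch.version.hip}, device={device}")

model = whisper.load_model("base", device=device)
result = model.transcribe("clip.wav")  # placeholder file
print(result["text"])
```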
AMD’s ROCm is meant to be the equivalent of Nvidia’s CUDA. But while you’ve almost certainly heard of CUDA, ROCm is far less known. Because it’s newer and less mature in many areas, there are still plenty of gaps: missing features, driver fragility, and fewer third-party tools built for it.
So even though AMD now has a compute solution, it’s not as useful for many tasks, because few third-party developers assume full AMD support by default. CUDA has been around long enough that many tools, models, and workflows assume CUDA first, leaving AMD as a fallback.
If I want to use Chatterbox as a local TTS on my machine, I’m forced to do it with my CPU, which is frustratingly slow. I have a perfectly healthy 12 GB graphics card sitting in my case, but it’s just useless in many frameworks.
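The root of it is a device check that countless projects ship in more or less this form (a sketch, not Chatterbox’s actual code): on Windows there’s no ROCm-enabled PyTorch for my card, so the check answers “cpu” and everything crawls.

```python
import torch

def pick_device() -> str:
    # The only question most tools ask. With a Radeon card on Windows,
    # torch.cuda.is_available() is False (no ROCm build of PyTorch),
    # so the whole pipeline silently falls back to the CPU.
    return "cuda" if torch.cuda.is_available() else "cpu"

print(pick_device())  # prints "cpu" on my RX 6700 XT under Windows
```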
AMD lags in software innovation
Always a step behind
AMD makes great hardware—I’ve said it before, and I’ll say it again. In fact, my next build will almost certainly have a Ryzen chip at its heart. That said, it’s hard to ignore the fact that AMD just isn’t as quick or innovative as Nvidia when it comes to software-first features.
Back when I was toying with the idea of streaming, Nvidia’s Broadcast app seemed very appealing. AMD didn’t really have an answer for it at the time, and even now, while they’ve rolled out some noise suppression tools, the features feel simpler and far less integrated.
The same thing happens whenever new technology gets revealed. When Nvidia showcased StyleGAN and other image-generation tools, I was stuck on the sidelines watching the demos. They keep rolling out new models like NeVA, all fine-tuned to run best on their stack. Meanwhile, with my RX 6700 XT, I can’t reliably use them, and even when I can, performance is disappointing because support for AMD hardware just isn’t there.
It would ease the pain if AMD had any plans to extend full ROCm support to the RX 6000 series, but I’m pessimistic. And without ROCm support, my 6700 XT is basically a brick for any serious compute work: a great gaming card, crippled outside of that.
I’d still get an AMD card for gaming
But not for anything else
I’ve been keeping an eye on GPU prices, but they’re enough to make my eyes water. Nvidia cards are expensive, and if I were to trade my RX 6700 XT straight across, I’d be forced into something like an 8GB RTX 3060. That’d be a downgrade in raw performance, and while I don’t want to lose gaming horsepower, I can’t shake the feeling that I’m missing out on so much by sticking with AMD.
The problem is, a GPU upgrade would drag me down the slippery slope of upgrading everything else. My Intel Core i5-13400 is already a sore point; I’m really not happy with it. Realistically, I’ll just have to save up and plan a full rebuild with an AMD CPU and an Nvidia GPU. Until then, I’ll keep regretting and lamenting the decision that got me here.