Desktop PCs were the first computer category PCMag covered in its infancy, and they’re still one of the most important. (They’re in our very name, after all.) We’ve been testing them with the same tried-and-true care for more than 40 years, starting with the establishment of PC Labs in 1984: We compare each system with others in its category on price, features, design, and hands-on, repeatable performance tests so we can make smart comparisons across a broad range of competing products.
(Credit: Joseph Maldonado)
To evaluate performance, we use an array of benchmark software, real-world applications, and games, carefully chosen to highlight the strengths and weaknesses of a desktop PC’s mix of components. That evaluation ranges from the processor and the memory subsystem to the machine’s storage hardware and graphics silicon. We test only sale-ready production units with the latest updates and drivers available. (Pre-production or prototype systems may appear in our hands-on or preview stories, but not reviews with benchmark results and star ratings.)
(Credit: Joseph Maldonado)
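Repeatability is what makes those comparisons meaningful: a score is only trustworthy if rerunning the test produces nearly the same number. As a rough illustration of the principle (a minimal sketch, not PC Labs’ actual tooling), a harness like the one below runs a time-to-completion workload several times and keeps the median, so one anomalous run can’t skew the result. The command shown is a hypothetical placeholder.

```python
# Illustrative only: a minimal harness for repeatable benchmark runs.
# The command passed in is a hypothetical stand-in for any scored test.
import statistics
import subprocess
import time

def run_benchmark(cmd: list[str]) -> float:
    """Run one benchmark pass and return its wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    return time.perf_counter() - start

def repeatable_result(cmd: list[str], runs: int = 3) -> float:
    """Run the test several times and report the median, damping run-to-run noise."""
    times = [run_benchmark(cmd) for _ in range(runs)]
    return statistics.median(times)

if __name__ == "__main__":
    # Placeholder workload; substitute any time-to-completion task.
    print(f"Median: {repeatable_result(['python', '-c', 'pass']):.2f}s")
```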
We regularly evaluate new benchmark solutions as they hit the market and overhaul our testing procedures to keep pace with the latest technologies. In late 2024, we rolled out a new suite of benchmarks. The downside of changing our benchmarks is that it resets the database of tested PCs that we can use for comparisons in reviews. (That said, at the time, we retested every recent desktop we could lay our hands on to build the database back up.) The upside is that the new tests give us more current, accurate comparisons that will improve as we add more and more data.
(Credit: Joseph Maldonado)
Our desktop benchmark testing focuses on three roughly divided aspects of performance: general productivity, content creation, and graphics rendering. We also add specific tests to measure the capabilities of gaming PCs and desktop workstations. For all-in-one PCs, we add the same display benchmarks that we perform when testing laptops. Here’s a breakdown of each.
Productivity Tests
Our suite of productivity benchmark tests simulates the broadest, most popular use cases of computers, such as writing, editing, information management, and multimedia communication. With these benchmarks, we also stress-test CPUs to cover performance with applications that run better with more (and more powerful) cores.
PCMark 10
Our first (and arguably most important) benchmark test is UL’s PCMark 10. This wide-ranging suite simulates various Windows programs to give an overall performance score for office workflows. The tasks involved include such everyday staples as word processing, web browsing, videoconferencing, and spreadsheet analysis.
(Credit: UL)
We run the primary PCMark 10 test (not the Express or Extended versions), which yields a proprietary numeric score. Results above 4,000 points indicate excellent performance for everyday Microsoft Office or Google Workspace productivity tasks, and the fastest systems clear 5,000.
(Credit: UL)
The PCMark 10 test results let us compare systems’ relative performance for everyday tasks. (Large organizations also use PCMark 10 to gauge how well potential new hardware handles workloads, compared with existing installed hardware.) Remember that PCMark 10 results, like those from most of our benchmark tests here, are susceptible to the specific configuration of the PC running the benchmark. Changing key components will change the score.
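For readers who want to script this kind of testing themselves, UL sells Professional editions of its benchmarks with command-line runners. The sketch below shows roughly what an automated PCMark 10 run could look like; the executable path, flag names, and definition file are assumptions modeled on UL’s documented pattern, not a verified recipe.

```python
# Hypothetical sketch of a scripted PCMark 10 run via UL's command-line
# runner (Professional edition). The executable path, flags, and file names
# here are assumptions, not verified syntax.
import subprocess

PCMARK_CMD = r"C:\Program Files\UL\PCMark 10\PCMark10Cmd.exe"  # assumed path

def run_pcmark10(definition: str, report: str) -> None:
    """Launch the benchmark headlessly and export a result report."""
    subprocess.run(
        [PCMARK_CMD, f"--definition={definition}", f"--export-pdf={report}"],
        check=True,
    )

if __name__ == "__main__":
    run_pcmark10("pcm10_benchmark.pcmdef", "result.pdf")  # assumed file names
```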
PCMark 10 Full System Drive Storage Test
We also run PCMark 10’s Full System Drive storage subtest, which measures the program load time and the throughput of the desktop’s boot drive. Nowadays, that is almost always a solid-state drive rather than a spinning hard drive.
(Credit: UL)
Like the productivity test, the PCMark 10 Storage test delivers a numeric score, with higher numbers indicating quicker response.
(Credit: UL)
The benchmark aims to factor in lower-end Serial ATA bus architectures and higher-end PCI Express/NVMe ones alike, quantifying the real-world performance differences attributable to these different drive types. (An earlier version of the test tended not to differentiate much between various SSD implementations.)
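PCMark 10’s storage test replays recorded real-world access traces, which is far more sophisticated than a simple sequential read. Still, a crude timing like the following sketch (a simplified stand-in, not the benchmark’s method) illustrates why SATA and NVMe drives separate so clearly in the scores: their raw throughput differs by several multiples.

```python
# Simplified illustration of sequential-read throughput. This is NOT how
# PCMark 10's trace-based storage test works; it only shows the raw
# bandwidth gap between drive types.
import time

def sequential_read_mbps(path: str, chunk_mb: int = 16) -> float:
    """Read a large file in chunks and return average throughput in MB/s."""
    chunk = chunk_mb * 1024 * 1024
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while data := f.read(chunk):
            total += len(data)
    elapsed = time.perf_counter() - start
    return (total / (1024 * 1024)) / elapsed

if __name__ == "__main__":
    # Point at any multi-gigabyte file on the drive under test (placeholder
    # name). Note: the OS file cache can inflate results; a file larger than
    # system RAM gives a more honest number.
    print(f"{sequential_read_mbps('testfile.bin'):.0f} MB/s")
```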
Cinebench 2024
Maxon’s Cinebench is a component-specific rendering test that uses the company’s Redshift engine to render a complex scene using the CPU or GPU. We run the CPU version of the test in a multi-core benchmark that works across all of a processor’s cores and threads—the more powerful the chip, the higher the score—and in a single-core variant. Cinebench’s multi-core test scales well with more cores and threads and higher clock speeds. And because the latest version of the test is available for x86, Arm, or Apple Silicon, we can use the same test for most hardware platforms and compare numbers.
(Credit: Maxon)
Cinebench is a raw test of a PC’s number-crunching ability, measured via a computer-aided design and 3D rendering task. The score reflects how well a desktop will handle processor-intensive workloads. For the number it kicks back, higher is better.
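The scaling behavior Cinebench exploits is easy to demonstrate in miniature. The sketch below (an illustration of multi-core scaling, not Maxon’s renderer) times the same CPU-bound job on one worker and then on every available core; on a healthy chip, the speedup approaches the core count.

```python
# Miniature illustration of multi-core scaling, in the spirit of Cinebench's
# single-core vs. multi-core tests (not Maxon's actual renderer).
import time
from multiprocessing import Pool, cpu_count

def busy_work(n: int) -> int:
    """A CPU-bound task (sum of squares), standing in for rendering a tile."""
    return sum(i * i for i in range(n))

def timed_run(workers: int, jobs: int = 16, n: int = 2_000_000) -> float:
    """Time how long `workers` processes take to finish all jobs."""
    start = time.perf_counter()
    with Pool(workers) as pool:
        pool.map(busy_work, [n] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    single = timed_run(1)
    multi = timed_run(cpu_count())
    print(f"1 worker: {single:.1f}s | {cpu_count()} workers: {multi:.1f}s | "
          f"speedup: {single / multi:.1f}x")
```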
Geekbench 6.3 Pro
Primate Labs’ Geekbench is another processor workout. It runs a series of CPU workloads designed to simulate real-world applications, such as PDF rendering, speech recognition, and machine learning. We run Geekbench 6.3 Pro, which was the latest Pro version of the test when we switched over to our new suite of benchmarks.
(Credit: Primate Labs)
We record Geekbench’s Multi-Core and Single-Core scores. (Higher numbers are better.) Geekbench is especially handy because it has versions for many platforms (including Apple’s macOS and iOS, and Qualcomm’s Snapdragon X processors), enabling valuable cross-platform comparisons.
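Geekbench’s Pro licensing is also scriptable from a terminal, which is part of what makes it convenient for lab use. The flags in this sketch follow the pattern of past Geekbench command-line releases but should be treated as assumptions rather than verified syntax.

```python
# Hypothetical sketch of a scripted Geekbench Pro run; the executable name
# and flags are assumptions modeled on past CLI releases, not verified syntax.
import subprocess

def run_geekbench(export_path: str) -> None:
    """Run the CPU benchmark and export results as JSON (assumed flags)."""
    subprocess.run(
        ["geekbench6", "--cpu", "--export-json", export_path],
        check=True,
    )

if __name__ == "__main__":
    run_geekbench("geekbench_result.json")
```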
HandBrake 1.8
Video-file transcoding is one of the most demanding tasks for a PC, and we test it with HandBrake, a free, open-source video transcoder for converting multimedia files to different resolutions and formats. We record the time HandBrake takes, rounded to the nearest minute, to transcode a 12-minute 4K H.264 video file (the Blender Foundation movie Tears of Steel) to a more compact 1080p version, using the software’s Fast 1080p30 preset.
(Credit: HandBrake Team)
This benchmark is primarily a CPU test. Like Cinebench, it scales well with more cores and threads and in systems with robust thermals to handle heavy, sustained processing loads over several minutes. And, with compatibility across x86, Arm, and Apple Silicon, we can use this test to compare Windows, Windows on Arm, and macOS systems directly. Because this is a time-to-completion test, lower times are better.
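This is one of the easiest tests for readers to approximate at home, since HandBrake ships a command-line build. Here’s a minimal sketch of timing such a conversion; the input file name is a placeholder, and HandBrakeCLI must be installed and on the system PATH.

```python
# Timing a HandBrakeCLI transcode, in the spirit of our test. The input
# file name is a placeholder; HandBrakeCLI must be on the PATH.
import subprocess
import time

def time_transcode(src: str, dst: str) -> float:
    """Run the conversion and return elapsed wall-clock time in seconds."""
    start = time.perf_counter()
    subprocess.run(
        ["HandBrakeCLI", "-i", src, "-o", dst, "--preset", "Fast 1080p30"],
        check=True,
    )
    return time.perf_counter() - start

if __name__ == "__main__":
    minutes = time_transcode("tears_of_steel_4k.mp4", "output_1080p.mp4") / 60
    print(f"Transcode time: {minutes:.0f} minutes (lower is better)")
```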
Content Creation Tests
Content creation testing overlaps with general productivity, but it’s also a distinct portion of our testing. We use the last three of these tests mainly for machines equipped with discrete GPUs, built for tasks like image editing, video work, and rendering. For these more demanding use cases, we use a handful of tests built around popular content creation tools, simulating the tasks that stress a system the most.
Adobe Photoshop 2024 CC (via PugetBench for Creators)
On all desktops that will run it, we use the popular Adobe Photoshop 2024 Creative Cloud (version 25.0) for photo editing, testing it with workstation maker Puget Systems’ PugetBench for Creators 1.2.20 utility. PugetBench runs the Photoshop program through an automated series of real-world tasks.
(Credit: Puget Systems)
This software combination lets us run the image editor through a fixed regimen, from essential functions like resizing, rotating, and applying filters to images, to applying more advanced rendering effects like blurs and shadows. AI features like subject selection and content-aware fill are also incorporated into the testing, measuring performance against the latest features Adobe offers.
Puget Systems’ Photoshop benchmark exercises a computer’s CPU, GPU, memory, and storage subsystems. The final PugetBench score weighs overall performance and task-specific results equally (a 50/50 split); higher numbers are better.
The broad availability of Photoshop lets us use this same test on both Windows and Mac systems, though occasional compatibility issues arise. Low-memory and budget systems (such as those with just 4GB of RAM) may also be unable to complete this test.
Adobe Premiere Pro 24 (via PugetBench for Creators)
We also use PugetBench for Creators 1.2.20 with Adobe Premiere Pro 24 to test video editing performance on content creation desktops and tower workstations.
(Credit: Puget Systems)
This automated extension tests real-world tasks like live playback, file export, and encoding at 4K and 8K resolutions with different codecs; processing and decoding different types of source media; and applying common GPU-accelerated special effects. The whole test stresses the CPU and GPU to measure a system’s best performance while still using real-world workflows. We report the test’s overall score; as with the PugetBench Photoshop test, higher numbers are better.
DaVinci Resolve Studio 18.6.6 (via PugetBench for Creators)
We again use Puget Systems’ PugetBench for Creators to test DaVinci Resolve Studio 18 video editor performance on systems suitable for that challenging app: gaming, content-creation, and workstation desktops. This benchmark includes a range of video codecs, GPU effects tests, and a dedicated suite for AI features.
(Credit: Puget Systems)
As with Adobe Premiere, these automated tasks and features push the CPU and GPU, letting us gauge the upper limits of performance in real-world media creation. The result is a single numeric score, with a higher number indicating better, faster performance.
Blender 4.2 (via Blender Benchmark 3.1.0)
Another creative tool we test with is Blender, an open-source 3D content creation suite for modeling, animation, simulation, and compositing. We use the standalone Blender Benchmark utility (version 3.1.0) to record the time it takes for the Blender 4.2 core program to render three scenes (Monster, Junk Shop, and Classroom), using the different scenes to measure CPU and GPU rendering performance. The final score is based on samples per minute; higher scores are better.
(Credit: Blender Open Data)
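The Open Data project also distributes a command-line launcher for scripted runs of this benchmark. The subcommands below follow its documented pattern, though the binary name and exact flags should be read as assumptions rather than a verified recipe.

```python
# Hypothetical sketch of a scripted Blender Benchmark run via the Open Data
# command-line launcher; the binary name and flags are assumptions based on
# its documentation, not verified syntax.
import subprocess

SCENES = ["monster", "junkshop", "classroom"]

def run_blender_benchmark(blender_version: str = "4.2.0") -> str:
    """Download the engine and scenes, then run all three scenes on the CPU."""
    cli = "benchmark-launcher-cli"  # assumed binary name
    subprocess.run([cli, "blender", "download", blender_version], check=True)
    subprocess.run([cli, "scenes", "download", *SCENES,
                    "--blender-version", blender_version], check=True)
    result = subprocess.run(
        [cli, "benchmark", *SCENES, "--blender-version", blender_version,
         "--device-type", "CPU", "--json"],
        check=True, capture_output=True, text=True,
    )
    return result.stdout  # JSON output including samples-per-minute figures

if __name__ == "__main__":
    print(run_blender_benchmark())
```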
Graphics Testing
Whether a desktop relies on integrated graphics or a discrete GPU, its graphics capabilities affect everything from playing the latest games to how promptly application windows appear on the screen. We use benchmarks from UL that report proprietary scores and others that measure frames per second (fps), translating to how smooth the scene looks in motion. Each test yields an overall score, which is what we report. (We don’t break out the graphics and CPU scores separately.) Higher numbers are better.
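Averages can hide stutter: a game can post a healthy mean frame rate while a handful of long frames ruins the feel. That’s why per-frame times matter, and why “1% low” figures are often quoted alongside averages. Here’s a short illustrative sketch of how both are derived from a frame-time log (not tied to any one tool’s output format):

```python
# Deriving average fps and "1% low" fps from a list of per-frame render
# times, the common way smoothness is quantified (illustrative only).
def fps_stats(frame_times_ms: list[float]) -> tuple[float, float]:
    avg_fps = 1000 / (sum(frame_times_ms) / len(frame_times_ms))
    # The slowest 1% of frames determine the "1% low" figure.
    worst = sorted(frame_times_ms)[-max(1, len(frame_times_ms) // 100):]
    one_percent_low = 1000 / (sum(worst) / len(worst))
    return avg_fps, one_percent_low

if __name__ == "__main__":
    sample = [16.7] * 990 + [40.0] * 10  # mostly 60fps with a few stutters
    avg, low = fps_stats(sample)
    print(f"Average: {avg:.0f} fps, 1% low: {low:.0f} fps")
```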
3DMark Steel Nomad and Steel Nomad Light
UL’s 3DMark is a graphics test suite that contains benchmarks for different GPU functions and software APIs. Steel Nomad and Steel Nomad Light are two flavors of a cross-platform test that uses DirectX 12, Vulkan, or Metal, depending on the platform under test; they run on Windows 11 (x86 or Arm), macOS, Android, and iOS.
(Credit: UL)
Both are non-ray-traced benchmarks, and as the name implies, Steel Nomad Light is the less demanding of the pair. Steel Nomad is built for high-end gaming systems, delivering the visual flair you’d expect to find in the latest games and, crucially, rendering at 4K. Steel Nomad Light is a more accessible version for less powerful devices, rendering at 1440p while reducing some of the more advanced graphics techniques from the core variant.
3DMark Wild Life and Wild Life Extreme
This pair of 3DMark tests is less demanding than Steel Nomad—most suitable for cross-platform testing of PCs, tablets, and phones—but it will still stress the machines significantly. That’s particularly true for Wild Life Extreme: It renders at 4K rather than Wild Life’s 1440p, with additional geometry and particle effects active. (UL estimates Extreme is roughly three times more demanding than Wild Life.) Note that the Wild Life Extreme test is one of our go-to cross-platform benchmarks for comparing performance across Windows and macOS systems. (Wild Life non-Extreme doesn’t run on Macs.)
(Credit: UL)
3DMark Solar Bay
Ray tracing—an advanced real-time lighting technique—is usually the domain of powerful gaming and content-creation PCs, but Solar Bay tests the technology on all devices. Built on Vulkan 1.1 for Windows and Android (and Metal for Apple devices), Solar Bay subjects 3D scenes to increasingly intense ray-traced workloads at 1440p.
(Credit: UL)
Real-World Gaming Tests
The synthetic tests above help measure general 3D graphics aptitude, but it’s hard to beat real video games for quantifying gaming performance. To that end, we’ve chosen three games, each representing a major genre and stressing your hardware in a different way: a sports/racing sim, a big-budget open-world title, and a hyper-popular multiplayer first-person shooter.
All three games include built-in benchmark tests that run through a consistent scene or snippet of gameplay and measure frame rates, delivering results in frames per second. Additionally, gaming monitors come in more sizes and resolutions than ever, so we account for these variations in our testing.
Our real-world gaming testing comprises F1 24, Cyberpunk 2077, and Call of Duty: Modern Warfare 3. We benchmark all three games at full HD (1080p), QHD (1440p), and 4K (2160p) resolutions.
F1 24
The first game we test is F1 24, a sports and simulation game. These games emphasize a mix of realistic visuals and fast frame rates. F1 24, in particular, is a Formula One racing sim, and it benefits from as many frames as possible.
(Credit: EA)
F1 24 represents DLSS effectiveness (or FSR on AMD systems) in our testing suite, demonstrating system capability with the frame-boosting upscaling technology running. We run F1 24’s built-in benchmark test on the Miami track at the Ultra High quality preset with DLSS/FSR off, and then again with DLSS/FSR active. These upscaling and frame generation technologies can help less powerful PCs achieve faster frame rates than they otherwise could.
As a bit of background: AMD’s FSR, Intel’s XeSS, and Nvidia’s DLSS are all graphics rendering software technologies that iterate on and manipulate the frames generated by the GPU to increase performance. Early versions of these tools primarily artificially upscaled lower-resolution frames, sharpening them to higher-resolution equivalents. More recently, these software suites use AI processing hardware to generate new game frames between each pair of original frames, boosting effective frame rates.
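A simplified model shows why both tricks help: upscaling cuts the number of pixels the GPU must render per frame by the square of the scale factor, and frame generation inserts extra presented frames between rendered ones. The sketch below puts rough numbers on those effects; it’s a toy model, not the actual DLSS, FSR, or XeSS pipelines.

```python
# Toy model of upscaling and frame generation effects, NOT the real
# DLSS/FSR/XeSS pipelines.
def upscaled_pixel_ratio(render_res: tuple[int, int],
                         output_res: tuple[int, int]) -> float:
    """How many times fewer pixels the GPU renders per frame when upscaling."""
    return (output_res[0] * output_res[1]) / (render_res[0] * render_res[1])

def effective_fps(rendered_fps: float, generated_per_rendered: int = 1) -> float:
    """Presented frame rate when frames are generated between rendered ones."""
    return rendered_fps * (1 + generated_per_rendered)

if __name__ == "__main__":
    # Example: render internally at 1440p, upscale to a 4K output.
    print(f"Pixel savings: {upscaled_pixel_ratio((2560, 1440), (3840, 2160)):.2f}x")
    # Example: 60 rendered fps with one generated frame per rendered frame.
    print(f"Effective rate: {effective_fps(60):.0f} fps presented")
```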
Cyberpunk 2077
Cyberpunk 2077 is our representative for big-budget (or “AAA”) single-player games and is the modern “Can it run Crysis?” equivalent. AAA titles are usually action-adventure or RPG games that test the limits of visual realism and may incorporate big, open worlds. These games push cutting-edge character models, textures, and lighting, requiring serious processing and graphics power.
(Credit: CD Projekt Red)
Cyberpunk is one of the most demanding such games, with various visual options and detailed ray tracing. Our test settings aim to push PCs fully: We run this game at each resolution on the Ultra graphics preset, and again at the all-out, killer Ray Tracing Overdrive preset without the aid of DLSS or FSR.
Call of Duty: Modern Warfare 3
Finally, this Call of Duty title represents the competitive multiplayer shooter field. First-person shooter is one of the most popular (and most frame-rate-dependent) genres; players want a competitive edge from smoother gameplay. Call of Duty is a massive franchise, and while the visuals are a step above some other competitive games, you can expect to see high frame rates on this test. We run the in-game benchmark at the Extreme graphics preset; however, we recognize that many competitive players play at the lowest visual settings to maximize frame rates.
(Credit: Activision)
Special Cases: Macs and Workstations
With desktops encompassing far more than simple Windows machines, we adjust our testing to accommodate other hardware and OS combinations. Wherever possible, we use the tests outlined above to ensure that our test results are comparable from one system, and one platform, to the next.
However, some outliers, like workstations and Macs, require additional or modified testing. Apple’s Macs (which use macOS and Apple Silicon hardware) in particular necessitate some tweaks, omissions, or additions to our testing regimen.
(Credit: Joseph Maldonado)
Apple Macs
Given Apple’s different operating system, chip architecture, and software catalog, our tests for Apple desktops aren’t always the same as the ones we run for Windows machines. Most notably, we can’t run PCMark 10 on macOS, eliminating our most crucial productivity and storage scores.
(Credit: Joseph Maldonado)
Many of our other tests are cross-compatible: Cinebench 2024, HandBrake 1.8.0, Geekbench 6.3 Pro, and Adobe Photoshop all have native Apple versions that let us compare Mac and Windows directly.
Some of the graphics tests in 3DMark are also cross-platform, but game testing is another story. We try to stick with a similar selection of games and features in our testing, but the specific titles and tests may vary as hardware changes and game support evolves.
ChromeOS Desktops
Our ChromeOS testing has very little in common with our Windows or Mac tests, since Google’s ChromeOS is a browser-first experience. While you can download and run Android apps and use Chrome extensions and apps to get the same functionality as standard apps, the underlying system and software selection are too different for most comparisons. Note that ChromeOS desktops are rare on the market; Chromebook laptops are far more common.
To benchmark ChromeOS desktops and compare performance, we use a handful of Chrome-specific and Android tests, starting with Principled Technologies’ CrXPRT 2 and Basemark Web 3.0 in the browser.
(Credit: Joseph Maldonado)
CrXPRT 2 measures how quickly a system performs everyday tasks in six workloads (applying photo effects, finding faces in images, encrypting and displaying offline notes, calculating and graphing views of a stock portfolio, analyzing DNA sequences, and generating 3D shapes using WebGL). The performance test yields a numeric score; higher numbers are better.
Basemark Web 3.0 measures how well Chrome can run web- or browser-based applications on ChromeOS devices. It combines low-level JavaScript calculations and tests using popular JavaScript frameworks, Document Object Model, and CSS features, with WebGL graphics content that exercises the GPU. The test also yields a numeric score; higher numbers are better.
We round out ChromeOS testing with three Android benchmarks: Geekbench for CPU performance, GFXBench for graphics, and the Work 3.0 test from UL’s PCMark for Android. In all of these, higher scores indicate better performance.
Desktop Workstations
One additional outlier in the desktop category is the tower workstation. Armed with best-in-class processors and enterprise-grade GPUs, these are the most powerful PCs you can buy, built for professionals who need the extra muscle for 3D design, engineering, and animation. Unlike Macs and ChromeOS desktops, these workstation desktops are fully compatible with our standard tests. Still, we must go above and beyond to properly gauge the upper limits of performance in these high-powered, often expensive machines.
(Credit: Joseph Maldonado)
With these professional-oriented desktops, we run our standard benchmarks, followed by all of our content creation tests: Adobe Photoshop, Adobe Premiere, DaVinci Resolve Studio, and Blender.
We also add SPECviewperf 2020 (version 3.1), a widely recognized testing suite that measures graphics performance for professional applications. The software renders, rotates, and zooms in and out of solid and wireframe models using “view sets” from popular independent software vendor (ISV) applications. We run the 1080p resolution tests based on PTC’s Creo CAD platform; Autodesk’s Maya modeling and simulation software for film, TV, and games; and Dassault Systemes’ SolidWorks 3D rendering package. Results appear in frames per second (fps), and higher numbers are better.
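Like our other lab tests, SPECviewperf runs can be scripted for consistency. The runner name and viewset identifiers in this sketch follow SPEC’s documented pattern for the 2020 release, but treat them as assumptions rather than a verified recipe.

```python
# Hypothetical sketch of scripted SPECviewperf 2020 runs; the runner name
# and viewset identifiers are assumptions based on SPEC's documentation,
# not verified syntax.
import subprocess

VIEWSETS = ["creo", "maya", "solidworks"]  # assumed viewset names

def run_viewperf(viewset: str, width: int = 1920, height: int = 1080) -> None:
    """Run one viewset at the given resolution; results land in fps."""
    subprocess.run(
        ["RunViewperf.exe", "-viewset", viewset,
         "-resolution", f"{width}x{height}"],
        check=True,
    )

if __name__ == "__main__":
    for vs in VIEWSETS:
        run_viewperf(vs)
```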
The Result? Authoritative Guides to Buying the Best Desktops
All this testing is a key part of how PC Labs informs the content and choices within our roundups and reviews on PCMag.com. Performance is just one factor in our overall picture of a desktop’s worth; we also include assessments of its design, feature set, value for money, and build quality.
(Credit: Joseph Maldonado)
Our best desktops guide is the place to start for a wide-ranging overview of the top choices across desktop categories (as well as a deep explainer on the key facets of desktop buying). Other large niches for which we regularly update guides include gaming desktops, business desktops, and all-in-ones. We update these guides in sync with major market releases and soon after we bestow an Editors’ Choice award on a given machine.
About Our Experts
Brian Westover
Principal Writer, Hardware
Experience
From the laptops on your desk to satellites in space and AI that seems to be everywhere, I cover many topics at PCMag. I’ve covered PCs and technology products for over 15 years at PCMag and other publications, among them Tom’s Guide, Laptop Mag, and TWICE. As a hardware reviewer, I’ve handled dozens of MacBooks, 2-in-1 laptops, Chromebooks, and the latest AI PCs. As the resident Starlink expert, I’ve done years of hands-on testing with the satellite service. I also explore the most valuable ways to use the latest AI tools and features in our Try AI column.
Matthew Buzzi
Principal Writer, Hardware
Experience
I’ve been a consumer PC expert at PCMag for 10 years, and I love PC gaming. I’ve played games on my computer for as long as I can remember, which eventually (as it does for many) led me to build and upgrade my own desktops to this day. Through my years at PCMag, I’ve tested and reviewed many, many dozens of laptops and desktops, and I am always happy to recommend a PC for your needs and budget.