When you set up or manage a PC with GNU/Linux, sooner or later you have to deal with hardware problems, odd performance, or bottlenecks. This is where benchmarking and diagnostic tools come into play: they help you find out what you have installed, how it really performs, and whether the system is working as it should.
The Linux ecosystem offers no shortage of options, but many searches still return utilities from the 90s or abandoned projects. The reality is that today we have a good number of modern, reliable, and in many cases open-source tools to measure CPU, GPU, RAM, disk, network, or even power consumption; some are designed specifically for Linux and others are cross-platform.
What is a benchmark and why does it matter in Linux?
A benchmark, test bench, or reference point is simply a program (or set of programs) designed to measure the performance of a component or of the entire system: processor, memory, storage, graphics card, network, etc., by performing controlled, comparable tasks.
These tests are usually based on repeatable operations that simulate real workloads: mathematical calculations, data compression, encryption, 3D rendering, compilation, disk access, and network traffic. Since they always run the same tasks, you can compare your results with public databases or with other machines you have available.
In Linux this is especially useful because the combination of kernel, distribution, drivers, firmware, and hardware can lead to curious situations: two machines with similar components can behave very differently depending on driver support, system configuration, or the maturity of GPU support in Mesa.
In addition to raw performance, many current benchmarks also let you measure power consumption, performance per watt, memory latencies, or bus bandwidth, which is key if you work with servers or HPC, or want to fine-tune the battery life of a laptop.
When does it make sense to use benchmarking tools?
You don't need to be running tests all day, but there are several scenarios where they are incredibly useful for making decisions based on data rather than feelings:
- Before buying new hardware: to compare CPUs, GPUs, or SSDs and see which best suits your needs.
- When updating components (more RAM, another hard drive, graphics card upgrade), to verify the actual performance jump.
- If you overclock or undervolt: use prolonged stress tests to check stability and temperatures.
- When you suspect a hardware failure (dying hard drive, faulty RAM, GPU that hangs under load).
- To compare distributions or configurations of kernel, graphics drivers, or file systems.
However, it's important to be clear that no benchmark is the absolute truth. Many companies manipulate results, some tools favor certain architectures, and manufacturers have even gone so far as to optimize hardware and compilers to score better on specific tests that do not fully reflect real-world use.
Prepare your Linux system before running benchmarks
If you want the results to make sense, it's important to prepare the system at least minimally, so that the figures reflect the real capacity of your machine, not the chaos of background processes or mid-test updates.
Some basic guidelines worth applying:
- Update to the latest stable version of your distribution and verify that the drivers are up to date, especially GPU drivers.
- Close heavy applications (browsers with a thousand tabs, development environments, virtual machines, game clients).
- Temporarily disable automatic update services that can start in the middle of the tests.
- If you can, run the benchmarks from a console session or TTY without a graphical environment, or from a lightweight desktop, especially for CPU and disk tests.
- Check that the power plan does not limit the CPU frequency and that thermal throttling doesn't kick in from the first minute.
In many cases it is also advisable to restart the system right before starting and note the exact configuration (kernel version, drivers, resolution, disk type, etc.) if you want to compare later with other computers or hardware changes.
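Noting the exact configuration can be scripted so nothing is forgotten between runs. A minimal sketch (the output path and field names are illustrative, not a standard):

```shell
#!/bin/sh
# Record the test environment before benchmarking, so results can be
# compared later against other machines or configurations.
OUT=/tmp/bench-env.txt
{
  echo "date:   $(date -u +%Y-%m-%dT%H:%M:%SZ)"
  echo "kernel: $(uname -r)"
  echo "cpu:    $(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2- | sed 's/^ //')"
  # CPU frequency governor, if the cpufreq interface is exposed
  GOV=/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  if [ -r "$GOV" ]; then echo "governor: $(cat "$GOV")"; else echo "governor: n/a"; fi
} > "$OUT"
cat "$OUT"
```

Keeping one such file per benchmarking session makes later comparisons far less error-prone than relying on memory.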
Tools for identifying and auditing hardware in Linux
Before starting to measure pure performance, it's important to know exactly what hardware your distribution has detected and how the kernel sees it. This helps locate problems with drivers or with components that are not being used.
This is where utilities like HardInfo, CPU-X, HWInfo, or Mission Center come into play, in addition to a good arsenal of classic commands that remain essential.
HardInfo: detailed system information and small benchmarks
HardInfo is a graphical tool available in most distributions that offers a very comprehensive summary of the installed hardware and software: CPU, RAM, motherboard, PCI and USB devices, storage, network, kernel version, loaded modules, etc.
It is easily installed from the software center or with:
sudo apt install hardinfo
When you run it, you'll see a category tree on the left and, within it, panels with fairly detailed information. It also includes a section with basic benchmarks (Zlib, Fibonacci, MD5, SHA1, Blowfish…) that allows quick CPU tests and comparison with other systems on a relative scale.
CPU-X and CPU-Fetch: Lightweight alternatives for CPU and memory
If you want to focus specifically on the processor, memory, and motherboard, CPU-X is a very convenient option. It offers an interface very similar to CPU-Z on Windows, with tabs for CPU, caches, motherboard, RAM, system, and graphics, and it is open source, available via repositories or Flatpak.
CPU-Fetch, for its part, is a command-line utility designed to quickly display the processor architecture, manufacturing process, maximum frequency, cores/threads, AVX instruction support, and cache sizes. It's ideal if you only want processor details without opening a graphical interface.
HWInfo and KInfoCenter: KDE systems and pure terminal

For those who prefer the terminal, hwinfo is a veteran tool that lists all hardware recognized by the system directly in the console. It is installed on Debian/Ubuntu with:
sudo apt install hwinfo
And you can use commands like:
hwinfo | more
hwinfo --cpu
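In scripts it's sensible to guard against the tool being absent; a sketch (hwinfo may not be installed on minimal systems):

```shell
# Guarded hwinfo examples; skip gracefully if the tool is missing
if command -v hwinfo >/dev/null 2>&1; then
  hwinfo --short                  # compact one-line-per-device overview
  hwinfo --cpu --disk | head -40  # restrict the full report to CPU and disks
else
  echo "hwinfo not installed"
fi
```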
On KDE desktops, KInfoCenter comes integrated and provides a central panel with information about CPU, RAM, storage, USB ports, motherboard, and other devices, without needing to install anything extra if you use a modern version of KDE.
Mission Center: Continuous Monitoring on Linux
Mission Center is a kind of "supercharged" task manager for Linux, with panels for monitoring CPU, GPU, memory, disk, and network usage. While geared more toward real-time monitoring than synthetic benchmarking, it's very useful for seeing how the hardware responds while you run stress tests or real-world workloads.
Essential commands for recognizing hardware
Along with graphical tools, the command line remains the fastest and most accurate way to obtain detailed information about your computer. Here are some basic commands you should have in your arsenal:
- lscpu: CPU summary, cores, threads, architecture, flags.
- lshw -short: compact listing of virtually all detected hardware.
- lspci: devices connected to the PCI bus (graphics card, network adapters, USB controllers…).
- lsusb: connected USB devices.
- lsblk: disks and partitions with their hierarchy.
- lsscsi: SCSI and similar devices.
- df -H: partitions and used/available space.
- free -m: total, used, and free RAM.
- dmidecode: information obtained from the DMI/SMBIOS tables.
- hdparm: SATA disk parameters and quick read tests.
Combining these utilities with data from HardInfo or CPU-X gives you a fairly accurate snapshot of what hardware you have and how Linux sees it, something essential before you start comparing performance.
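Putting a few of these commands together, a quick inventory can look like this (a sketch; it reads /proc/meminfo directly and guards lscpu in case a minimal install lacks it):

```shell
# Quick hardware inventory built from the classic commands above
command -v lscpu >/dev/null 2>&1 && lscpu | sed -n '1,6p'   # architecture summary
awk '/MemTotal/ {printf "RAM total: %d MiB\n", $2/1024}' /proc/meminfo
df -H / | awk 'NR==2 {print "root fs: " $2 " total, " $4 " available"}'
```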
Phoronix Test Suite: the reference suite for benchmarking in Linux
When we talk about comprehensive Linux performance testing, Phoronix Test Suite (PTS) is the name that always comes up. It is a very broad, open-source benchmarking platform with hundreds of tests and predefined suites, covering everything from CPUs and disks to games, OpenGL, OpenCL, web servers, databases, and video encoding.
PTS works like a "wrapper": you tell it which test or set of tests you want to run and it takes care of the rest: downloading the necessary programs, resolving dependencies, running the benchmarks in automated mode, and saving the results, all controlled by a single command: phoronix-test-suite.
Furthermore, it integrates with OpenBenchmarking.org, where you can upload your results and compare them with those of the community. This is very useful if you are validating new hardware or want to see how your machine performs against similar configurations.
Phoronix Test Suite Installation and Basic Commands
Many distributions offer pre-built packages, and on Debian/Ubuntu you can also use the official .deb file directly. If you encounter dependency errors during installation (for example, a missing php-gd), you can solve them with:
sudo apt-get -f install
Once installed, running phoronix-test-suite without arguments displays a long list of available commands: test installation, execution, results, analysis utilities, modules, etc. In practice, for most users, the most common pattern is:
phoronix-test-suite run test-name
phoronix-test-suite run suite-name
Before launching a test that you haven't downloaded, PTS will warn you that the test is not installed and ask whether you want to install it and resolve its dependencies first. This process may take a few minutes the first time, because it downloads the necessary binaries, data, and packages.
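A typical first session therefore looks like this (a guarded sketch; pts/openssl is just one example profile, and PTS may not be present on the machine yet):

```shell
# Install a test profile explicitly, then run it; PTS resolves
# dependencies and saves the results on its own
if command -v phoronix-test-suite >/dev/null 2>&1; then
  phoronix-test-suite install pts/openssl   # download binaries and data
  phoronix-test-suite run pts/openssl       # execute and record the results
else
  echo "phoronix-test-suite not installed"
fi
```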
List of tests available in PTS
One of the great strengths of Phoronix Test Suite is the number of test profiles available. You can view them with:
phoronix-test-suite list-tests
The output includes benchmarks for almost everything you can imagine: disk tests (fio, IOzone, Dbench, FS-Mark), CPU (C-Ray, OpenSSL, SciMark, N-Queens, kernel compilation), graphics (glmark2, GpuTest, Unigine, Xonotic, ET:QW, Nexuiz, OpenArena, SuperTuxKart, Lightsmark, LuxMark), memory (RAMspeed, Stream), network (network-loopback), servers (Apache, Nginx, PostgreSQL), audio/video encoding, encryption, HPC, OpenCL, etc.
To avoid going crazy, the authors group these individual tests into themed “suites” focused on specific use cases: CPU, gaming, servers, desktops, laptops, databases, video, audio, graphics workstations, etc. You can list the suites with:
phoronix-test-suite list-available-suites
Recommended suites based on equipment usage
Depending on the role of the machine you want to evaluate, PTS offers very practical combinations of tests. Some of the most useful are:
- Laptops: tests like pts/battery-power-usage for battery consumption, pts/unpack-linux for disk performance with many small files, and pts/byte as a generic benchmark.
- HTTP servers: the individual tests pts/apache and pts/nginx measure the requests per second each server can serve on your machine.
- Databases: the pts/database suite includes tests like SQLite and pgbench, ideal for seeing how the system handles transactional loads.
- Development and compilation: suites like pts/compiler, along with tests such as pts/unpack-linux, simulate intensive compilations and the handling of very large source trees.
- Gaming on Linux: the suites pts/ioquake3-games, pts/gaming, pts/gaming-free, and pts/unigine run batteries of games and 3D demos at different resolutions, reporting average, minimum, and maximum FPS.
- Video/audio encoding: the pts/video-encoding and pts/audio-encoding suites use x264, FFmpeg, MEncoder, and audio encoders (FLAC, MP3, Ogg, WavPack, Monkey's Audio) to measure transcoding speed.
- SSL encryption: the pts/openssl test measures signatures or operations per second, very useful for servers that will handle a lot of HTTPS traffic.
- Overall performance: the pts/linux-system suite is a sort of "all-in-one" for getting a comprehensive overview of your system, combining CPU, disk, network, encryption, server, and other tests.
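For unattended runs across several machines, PTS also has a batch mode; a guarded sketch, assuming you answer the one-time batch-setup questions first:

```shell
# Non-interactive runs: configure batch defaults once, then batch-run
if command -v phoronix-test-suite >/dev/null 2>&1; then
  phoronix-test-suite batch-setup           # one-time configuration prompts
  phoronix-test-suite batch-run pts/openssl # runs without further questions
else
  echo "phoronix-test-suite not installed"
fi
```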
During execution, PTS always shows you the progress, the number of repetitions, the standard deviation, and a final mean. If any test fails (due to dependencies, bugs, or occasional incompatibilities), you can decide whether to spend time debugging it or simply discard it and rely on the rest of the results.
Real-world test examples with Phoronix
To give you an idea of the type of information that can be extracted, here are some typical usage examples:
- With pts/apache, figures close to 38,000 requests/second have been obtained on modern desktop machines, allowing comparison between machines or between Apache versions.
- The pts/nginx test on the same machine can yield more than 56,000 requests/second, illustrating differences between HTTP servers under equivalent conditions.
- The pts/compilation suite compiles Apache, the Linux kernel, MPlayer, PHP, and ImageMagick and measures the time in seconds; the lower the time, the better the CPU and disk performance.
- The gaming tests launch titles such as ET: Quake Wars, Nexuiz, OpenArena, Urban Terror, or Unreal Tournament 2004 at 1080p, measuring average FPS and rendering stability.
- With pts/video-encoding you can see how many frames per second your CPU encodes with x264, or how long FFmpeg and MEncoder take to process certain clips.
On desktop machines it's common for some specific tests to fail (for example, very old demos that no longer compile correctly, or outdated binaries), but the value of PTS is that you can repeat the exact same tests on different machines and obtain reasonably fair comparisons.
Specific benchmarks for CPU, memory and disk in Linux
Beyond Phoronix, Linux offers specialized utilities that let you fine-tune the measurement of specific components, such as the CPU, caches, or the storage subsystem. These are very useful when you want to isolate a potential bottleneck or verify that an SSD or RAM module performs as its spec sheet claims.
Lmbench: Low-level latencies and bandwidth
Lmbench is a set of tools designed to measure latencies and bandwidths of basic system operations: copying, reading and writing to memory, pipes, TCP sockets, context switches, process creation, I/O operations, etc.
Upon installation, several binaries are placed in /usr/lib/lmbench/bin/<architecture>, which you can run individually; alternatively, launch the lmbench-run wizard, which guides you through the configuration, runs the tests, and generates a readable report.
For example, to check the RAM read bandwidth:
./bw_mem 256m rd
268.44 3913.68
In this case, it is instructed to read 256 MB of memory and returns MB read and MB/s bandwidth, which gives you an idea of the effective memory performance on that platform.
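Alongside bandwidth, lmbench's lat_mem_rd measures memory load latency at increasing working-set sizes, which exposes the cache hierarchy as latency jumps when each cache level overflows. A guarded sketch (array sizes up to 64 MB, 128-byte stride; lmbench is rarely installed by default):

```shell
# Memory-latency companion to bw_mem; prints working-set size (MB)
# versus load latency (ns) for a pointer-chasing walk
if command -v lat_mem_rd >/dev/null 2>&1; then
  lat_mem_rd 64 128
else
  echo "lmbench (lat_mem_rd) not installed"
fi
```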
Measuring disks with dd, hdparm, iozone and company
For the storage subsystem, in addition to PTS tests, you can always fall back on classic tools such as dd, hdparm, or IOzone. They are not as "pretty" as CrystalDiskMark on Windows, but they remain a reference for measuring sequential and random read and write speeds.
With dd you can force low-level reads and writes and see the actual speed achieved. For example, to measure write speed to a temporary file:
dd if=/dev/zero of=/tmp/output bs=1M count=128
Here, 128 MB of zeros are copied from /dev/zero to the destination file, and at the end dd shows the total time spent and the write speed in MB/s. Reversing the command:
dd if=/dev/sda of=/dev/null bs=1M count=128
You can measure read speed by copying 128 MB from the /dev/sda disk to /dev/null. The idea behind using /dev/zero and /dev/null is that the source or destination adds virtually no latency, concentrating the measurement on the actual disk (bear in mind that the page cache can inflate read figures unless you drop caches first).
For deeper analysis with multiple block sizes, random access, and write patterns, tools such as hdparm, IOzone, or fio provide much greater granularity and let you simulate database or virtualization workloads.
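As an example of that granularity, here's a sketch of a small 4K random-read job with fio (guarded; the file path, size, and runtime are illustrative, and buffered I/O is used so it also works on tmpfs):

```shell
# 4K random-read test against a scratch file; tune size/runtime to taste
if command -v fio >/dev/null 2>&1; then
  fio --name=randread --filename=/tmp/fio.test --size=64M \
      --rw=randread --bs=4k --time_based --runtime=10 --group_reporting
  rm -f /tmp/fio.test
else
  echo "fio not installed"
fi
```

On a real disk you would normally add --direct=1 to bypass the page cache so the figures reflect the device rather than RAM.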
Graphics benchmarking on Linux: GPU, games and Vulkan
In recent years, Linux has become a serious gaming platform thanks to SteamOS, Proton, and Valve's push with devices like the Steam Deck. This has sparked a surge of interest in measuring GPU performance, driver stability, and compatibility with modern APIs like Vulkan.
In addition to the tests integrated into Phoronix (Xonotic, OpenArena, Unigine Heaven, Valley, Tropics, SuperTuxKart, glmark2, LuxMark, SmallPT GPU, etc.), there are several external tools that are very popular in the graphics world.
Unigine Heaven and other 3D demos
Unigine's demos (Heaven, Sanctuary, Tropics, Valley, Superposition) are already classics in 3D benchmarking. Heaven, for example, is available for Windows, macOS and GNU/Linux, and subjects the GPU to a demanding 3D scene, measuring FPS, temperature, and stability.
On Linux you can launch these demos either through PTS profiles or on their own, adjusting resolution, graphics quality, antialiasing, and API (OpenGL in most cases). They are excellent for detecting overclocking instabilities, cooling problems, or driver bugs.
GpuTest, glmark2, LuxMark and other graphics tests
Tools like GpuTest, glmark2, j2dbench, x11perf, and LuxMark cover different aspects of the graphics stack: from general OpenGL performance to specific OpenCL tests for GPU computing.
For example, glmark2 offers a suite of OpenGL scenes that are easy to launch from the terminal, with comparable overall scores. LuxMark focuses on rendering with OpenCL, ideal if you want to evaluate the GPU for compute tasks rather than for gaming.
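Running glmark2 from a terminal is straightforward; a guarded sketch (it needs a working OpenGL stack and a display to produce a score):

```shell
# Run all glmark2 scenes at 1080p with an on-screen FPS overlay
if command -v glmark2 >/dev/null 2>&1; then
  glmark2 --size 1920x1080 --annotate
else
  echo "glmark2 not installed"
fi
```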
Vulkan on Linux: current status and testing
As for Vulkan, its support on Linux relies primarily on Mesa for AMD and Intel GPUs and on NVIDIA's proprietary drivers. Many modern benchmarks integrated into PTS or third-party suites are beginning to incorporate Vulkan backends alongside OpenGL or Direct3D, allowing performance comparisons between APIs.
Although the original content we analyzed didn't list specific Vulkan tools, nowadays you have access to cross-platform benchmarks like GFXBench (for OpenGL/Vulkan), as well as technical demos and native games that showcase performance counters using this API. The key is to ensure that:
- Use updated drivers (a recent Mesa or the latest NVIDIA driver).
- Check the Vulkan capabilities with tools like vulkaninfo.
- Run repeatable tests (same resolution, quality, and scene) to obtain consistent comparisons.
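The driver check in the list above can be done with vulkaninfo, which ships in the vulkan-tools package on most distributions; a guarded sketch:

```shell
# Print the Vulkan driver, API version, and detected GPUs
if command -v vulkaninfo >/dev/null 2>&1; then
  vulkaninfo --summary | head -n 40
else
  echo "vulkaninfo not installed"
fi
```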
Synthetic benchmarks, standards and their pitfalls
Beyond the Linux-specific tools, it's also helpful to understand how synthetic benchmarks and industry standards work, because you will often see Geekbench, 3DMark, Cinebench, or SPEC scores mixed in with Linux results.
A benchmark can take several forms:
- Real programs: typical user software (compressors, video editors, games) is run while measuring real times; this is the most representative approach.
- Microbenchmark: small fragments of code to measure a specific operation (memory latency, floating-point operations, etc.).
- Synthetic tests: programs designed only to measure performance, not being applications for daily use (3DMark, PassMark, PCMark…).
- I/O, database or parallel benchmarks: focused on specific subsystems such as I/O, SQL or HPC.
Organizations like SPEC have created standardized suites such as SPEC CPU2006 or SPEC CPU2017, which group together dozens of representative programs (compilers, simulators, scientific processing) and produce a global metric (SPECratio) to compare processors and architectures.
The problem is that, in practice, many manufacturers and media outlets end up quoting a single “pretty” number and forget the context: compiler version, flags used (LTO, PGO, specific optimizations), firmware version, memory type, thermal dissipation, etc., all of which can dramatically change the result.
How to interpret the results and not be fooled
Once you have a mountain of numbers, it's time to interpret them critically. It's not enough to look at the biggest figure: you have to understand what each test measures and under what conditions.
Some key aspects:
- Check whether the benchmark means “higher is better” or “lower is better”: in FPS, more is better; in rendering time in seconds, less is better.
- Look at metrics relevant to your use: video frame drops, minimum FPS in games, GB/s in encryption, MIPS at comparable frequency, compilation time, etc.
- Distinguish between single-core and multi-core performance: many games and lightweight applications scale poorly across many threads, so good per-core performance is crucial.
- Be wary of single-pass tests run on a cool machine: over long sessions, thermal throttling can reduce sustained performance.
- Don't trust comparisons where hardware and software are not aligned: different RAM, different frequencies, mismatched firmware, or driver versions that don't match.
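To judge run-to-run noise yourself, the mean and standard deviation that PTS reports are easy to reproduce; a toy sketch with made-up FPS values:

```shell
# Mean, stddev, and relative spread over repeated measurements
# (the four values below are invented, for illustration only)
printf '%s\n' 412.1 409.8 415.3 410.6 | awk '
  { s += $1; ss += $1 * $1; n++ }
  END {
    mean = s / n
    sd = sqrt(ss / n - mean * mean)          # population std deviation
    printf "mean=%.2f stddev=%.2f (%.1f%%)\n", mean, sd, 100 * sd / mean
  }'
```

A relative spread above a few percent usually means background noise or throttling is polluting the run, and the test should be repeated under cleaner conditions.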
It is also worth remembering that some companies have gone so far as to optimize compilers and hardware specifically to perform well in particular benchmarks, which doesn't necessarily translate into equivalent improvements in real-world applications. Hence the importance of combining synthetic tests with real workloads: your compiler, your web server, your containers, your games.
In practice, the most sensible approach on Linux is to rely on tools such as Phoronix Test Suite, Lmbench, IOzone, HardInfo, and the classic commands, and to repeat the same scenarios on each machine or configuration. That way you can reliably determine whether a new kernel, a driver change, an NVMe SSD, or a different GPU is worthwhile for your specific situation.
This whole ecosystem of benchmarks and utilities in Linux can seem overwhelming at first, but once you choose a small set of tools and test suites that fit your workflow, it becomes a very powerful way to diagnose problems, validate hardware, and get the most out of your system without letting yourself be swayed solely by specification sheets or current marketing.