The line between the PC and the data center is blurring fast, and with it comes a new category: desktop supercomputers for AI. This leap not only democratizes access to massive computing; it also changes how we prototype, train, and run inference on advanced models without always depending on the cloud.
In parallel, the planet is in a genuine race toward exascale, from national infrastructures drawing hundreds of megawatts to compact machines capable of reaching petaFLOPS in a researcher's office. In this article we bring together, in one place, all the key data from the sources consulted: a global overview, European and Asian players, historical lists, leading centers and, of course, the new desktop star, the Nvidia DGX Spark.
What is a supercomputer and why does it matter in AI?
A supercomputer is a system with computing capabilities far beyond those of a conventional PC. Its performance is expressed in FLOPS (floating-point operations per second), with units such as the petaFLOPS (10^15 FLOPS) and, at the current elite level, the exaFLOPS (10^18 FLOPS).
They operate as a set of thousands of nodes (each with CPUs, dedicated GPUs, memory, and storage) connected by high-speed networks and switches so they work as a single machine. While a powerful desktop computer delivers tens of TFLOPS, these systems reach hundreds of PFLOPS or more.
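To make those orders of magnitude concrete, here is a minimal sketch (our own illustration, not from the sources) that puts the figures in the article on a common FLOPS scale and compares a desktop GPU with a supercomputer:

```python
# Illustrative sketch: expressing performance figures on a common
# FLOPS scale so machines of very different sizes can be compared.

UNITS = {"TFLOPS": 1e12, "PFLOPS": 1e15, "EFLOPS": 1e18}

def to_flops(value: float, unit: str) -> float:
    """Convert a value in TFLOPS / PFLOPS / EFLOPS to raw FLOPS."""
    return value * UNITS[unit]

# A powerful desktop GPU sits in the tens of TFLOPS (assumed 50 here);
# MareNostrum 5's peak of 314 PFLOPS is the figure cited in this article.
desktop = to_flops(50, "TFLOPS")
mn5 = to_flops(314, "PFLOPS")

print(f"MareNostrum 5 is roughly {mn5 / desktop:,.0f}x a 50 TFLOPS desktop")
```

The 50 TFLOPS desktop figure is an assumption for illustration; the point is simply that each unit step (T → P → E) is a factor of 1,000.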
Their applications cover almost everything: weather forecasting, astrophysics, biomedicine, drug design, nuclear simulations, geophysics, sustainability, and AI research. Thanks to massive computing, they can process billions of data points in seconds and solve problems that would take years on traditional equipment.
- Featured uses: weapons and national security, pharmaceutical industry, big data, bioinformatics, climate and air quality, engineering simulation, smart cities, education, and cloud computing.
Due to their size and power consumption, they require advanced cooling (often liquid) and dedicated rooms with temperature control and fire protection. Some centers even reuse the heat generated, as in Swiss facilities that warm university buildings with it.
Europe accelerates: EuroHPC, InvestAI and large systems

Europe has 162 supercomputers registered in 2025 and plans new facilities. The EU has also promoted a €200 billion investment under the InvestAI initiative to become a global leader in artificial intelligence.
Coordination and financing of high-performance computing falls to the European High Performance Computing Joint Undertaking (EuroHPC JU), which sponsors and operates a network of 9 systems spread across the continent. These include LUMI (Finland), Leonardo (Italy), and MareNostrum 5 (Spain), pillars of European digital sovereignty.
Spain contributes the Barcelona Supercomputing Center (BSC-CNS), which built the historic MareNostrum 1 in 2004 and presented MareNostrum 5 in December 2023. The latter, with a peak of 314 PFLOPS, Intel Xeon processors, and a power draw of 4,158.90 kW, held 11th position in the 2025 TOP500 and is oriented toward AI, medical research, drug discovery, and meteorology.
Italy shines with Leonardo (Cineca + EuroHPC), installed in Bologna in 2022. It combines AMD and Intel technology, draws 7,493.74 kW, reaches 315.74 PFLOPS, and is ranked 9th worldwide. It is key to helping universities and companies compete globally in biomedicine, energy, climate and, above all, AI.
Finland hosts LUMI (CSC + EuroHPC), powered by AMD and HPE. Opened in 2023 in Kajaani, it is expected to reach 386 PFLOPS, draws 7,106.82 kW, and holds 8th place worldwide. It is one of the most powerful bastions within EuroHPC.
In parallel, Switzerland operates the Alps supercomputer at CSCS, which with a power draw of 7,124.00 kW and 434.90 PFLOPS ranks 7th in the world. It focuses on meteorology, AI, biomedicine, and energy, and is part of a program of 13 projects in which Alps is the most emblematic.
The energy sector is also pushing: ENI (Italy) launched HPC-6 in 2024 with AMD and HPE hardware, achieving 606.97 PFLOPS at a power draw of 8,460.90 kW. Linked to ENI's Green Data Center to accelerate the energy transition, it ranks 5th worldwide.
Asia and America: exascale, records, and shadow systems
Japan maintains a symbol of excellence with Fugaku (RIKEN R-CCS, Kobe). Based on the ARM-architecture Fujitsu A64FX, it achieves 442 PFLOPS at 26,248.36 kW and remains a benchmark for efficiency, to the point of having led the Green500. According to the sources consulted, it is applied to medicine, climate, AI, and energy efficiency.
Russia, despite sanctions, deployed the MSU-270 in 2023 at Lomonosov Moscow State University. It integrates around 100 cutting-edge graphics accelerators (whether from AMD or Intel is unknown), is estimated at around 400 PFLOPS, and is integrated into a network of Russian centers for AI, physics, chemistry, mathematics, and medicine.
China combines discretion and muscle. The Sunway series (Wuxi) was born in 2016 with TaihuLight (125 PFLOPS) and evolved in 2021 into OceanLight, considered exascale (>1 exaFLOPS), although without official figures due to technological tensions with the US. In 2024/2025, Tianhe-3 (Xingyi) reportedly achieved between 1.57 and 2.01 exaFLOPS in tests, with rumors that it may surpass El Capitan.
The United States plays in the major leagues with several exascale systems. Aurora (ANL + DOE), designed to reach 1.9–2 exaFLOPS, was installed in 2023 and hit its peak in 2024; today it ranks 3rd in the TOP500 and serves science, medicine, climate, AI, astrophysics, and particle physics. In parallel, El Capitan (LLNL + NNSA) targets 2–2.8 exaFLOPS, leads the TOP500, and is dedicated to national security, with applications in nuclear simulations, cybersecurity, healthcare, climate change, and astrophysics.
Beyond the public list, there are country-level AI initiatives. In Wuhan, China Telecom operates the Central Intelligent Computing Center, built with domestic hardware and software and liquid cooling, and intended for training giant models; some sources even point to 5 exaFLOPS, although without official confirmation.
India is turning on: GPUs, cloud, and the exascale horizon
India does not want to be left behind. The IndiaAI Compute Capacity initiative (within the IndiaAI Mission) committed around $1.24 billion in 2024 for a new supercomputer with at least 10,000 GPUs for AI, in collaboration with Nvidia. In addition, Microsoft announced $3 billion in January 2025 for cloud and AI infrastructure in the country.
The local ecosystem is heating up: Bhavish Aggarwal (CEO of Ola) invested $230 million in the Krutrim-2 LLM. The country has 34 supercomputers, and C-DAC, together with the National Supercomputing Mission (NSM), is driving a national network that could deliver India's first exascale system between 2025 and 2026. More than 70 supercomputers are planned for the coming years.
Colossus, xAI's supercomputer, and the energy controversy
In the United States, xAI (Elon Musk) deployed Colossus in Memphis in just 122 days in 2024. It started with 100,000 Nvidia GPUs and plans to grow to 200,000, targeting Grok 3.0 and future versions. In benchmark tests it reportedly reached 10.6 AI exaFLOPS, a figure that would place it among the most powerful on the planet.
Not everything is applause: the use of natural gas as a power source has drawn criticism over its impact on local air quality. Still, the project illustrates how quickly the private sector can build world-class AI-focused infrastructure.
DGX Spark: The "desktop supercomputer" that brings advanced AI home
Nvidia has set the bar high with the DGX Spark, a compact system recognized by TIME as one of the "Best Inventions of 2025" and available for general purchase starting October 15. Its heart is the Grace Blackwell GB10, capable of reaching 1 petaFLOPS, with ConnectX-7 networking and the entire Nvidia AI software stack so that researchers and startups can use it plug and play.
At the hardware level, the Spark combines a 20-core ARM CPU (10 Cortex-X925 + 10 Cortex-A725), 128 GB of LPDDR5x unified memory, a 4 TB self-encrypting M.2 NVMe SSD, 4x USB-C, HDMI, WiFi 7, Bluetooth 5.4, 10GbE LAN, and DGX OS. It is designed for agentic AI, reasoning, and complex modern workloads.
Nvidia maintains that it can fit models with up to 70 billion parameters, run inference locally, and keep sensitive data on-prem without relying on the cloud. Other reports indicate that it can handle LLMs of up to 200 billion parameters depending on configuration and model, underlining its ambition as a desktop "mini data center."
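Why do the claimed figures depend on configuration? A rough rule of thumb is that a model's weight footprint is parameters times bytes per parameter, so precision (FP16 vs. 4-bit quantization) decides what fits in a given memory budget. Here is a back-of-the-envelope sketch (our own arithmetic, not Nvidia's published math), using the Spark's 128 GB of unified memory as the budget:

```python
# Back-of-the-envelope sketch: weight footprint ~= parameters x bytes
# per parameter. Activations, KV cache, and OS overhead are ignored,
# so these are optimistic lower bounds.

def model_size_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weight footprint in GB (1 GB taken as 1e9 bytes)."""
    return params_billions * 1e9 * bytes_per_param / 1e9

SPARK_MEMORY_GB = 128  # unified memory cited in this article

for params, bpp, label in [
    (70, 2.0, "70B at FP16 (2 bytes/param)"),
    (70, 0.5, "70B at 4-bit (0.5 bytes/param)"),
    (200, 0.5, "200B at 4-bit (0.5 bytes/param)"),
]:
    size = model_size_gb(params, bpp)
    verdict = "fits" if size <= SPARK_MEMORY_GB else "does not fit"
    print(f"{label}: ~{size:.0f} GB -> {verdict} in {SPARK_MEMORY_GB} GB")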
On the functionality side, the ability to link two Sparks into a mini cluster to create a "personal cloud" stands out. Integration is easy: wired and wireless networking, Bluetooth peripherals, and the CUDA/cuDNN and Triton stack, among others, for deploying agent prototypes, fine-tuning, isolated inference, and data security.
The starting price is set at $3,999, and major brands such as Acer, Asus, Dell, Gigabyte, HP, Lenovo, and MSI will market variants. Important: this is not a typical Windows PC; it is a local AI supercomputer compatible with open models from DeepSeek (the Chinese AI), Meta, Nvidia, Google, and Qwen, among others. Even Elon Musk has already received his unit from Jensen Huang.
The Spark's arrival coincides with a shift in priorities: according to industry leaders, users and businesses will look for systems that support the next wave of intelligent workloads. Orders are open at Nvidia.com and through authorized partners and distributors.
AI PCs and Workstations: When You Need Specific Hardware

If you are going to train models or develop, it is advisable to invest in specialized hardware; if you only consume AI, a balanced machine may suffice, or you can turn to EC2 instances in the cloud.
Additionally, there are machines that bring powerful AI to local setups without relying on the cloud, as we've seen with the Spark. And if you're unsure about the choice, some vendors offer personalized support: IbericaVIP promises to advise you on choosing the ideal PC for your AI projects.
Community and news: not everything you read is official
The internet is full of Nvidia-focused forums and subreddits where drivers, GPUs, and rumors are discussed. Note: these communities are run by fans and do not represent Nvidia unless expressly stated. It's worth keeping this in mind when evaluating leaks or unconfirmed figures.
What they look like inside: architecture, scaling, and cooling
A supercomputer is essentially a set of thousands of computers linked by low-latency, high-bandwidth networks. Each node integrates CPU, GPU, RAM, and storage; the system adds up their power through optimized software and libraries.
The reigning metric is FLOPS: we went from TFLOPS in home PCs to PFLOPS and exaFLOPS in HPC. Thus, 1 TFLOPS = 10^12 FLOPS and 1 PFLOPS = 10^15 FLOPS. Supercomputers take up entire rooms and are used by multiple teams at once, with resources often running at their limits.
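The "thousands of nodes acting as one machine" idea above translates into simple arithmetic: a system's theoretical peak is roughly nodes times accelerators per node times per-device peak. A minimal sketch with made-up numbers (not describing any real system):

```python
# Hypothetical cluster sketch: theoretical aggregate peak is roughly
# the product of node count, accelerators per node, and per-device peak.
# Real sustained performance (e.g. HPL scores) is lower due to
# interconnect and software overheads.

def cluster_peak_pflops(nodes: int, gpus_per_node: int,
                        tflops_per_gpu: float) -> float:
    """Aggregate theoretical peak in PFLOPS (1 PFLOPS = 1000 TFLOPS)."""
    return nodes * gpus_per_node * tflops_per_gpu / 1000

# e.g. 2,000 nodes, each with 4 accelerators of 50 TFLOPS
print(cluster_peak_pflops(2000, 4, 50.0))  # 400.0 (PFLOPS)
```

All the figures are assumptions for illustration; the takeaway is that hundreds of PFLOPS come from multiplying modest per-device numbers across thousands of nodes.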
Cooling is critical. Some CPUs and GPUs exceed 80 ºC, which is why warm-water or liquid cooling, heat exchangers, and custom designs are used. Some installations apply creative solutions, such as reusing the heat to warm buildings.
Where they are and how to visit them
There are more than a thousand supercomputers in the world. China and the United States lead in number and muscle, with historical data such as 226 Chinese systems among the 500 most powerful. However, in certain editions the US has accumulated more total PFLOPS (644) than China (565).
In Spain, MareNostrum at BSC-CNS (Barcelona) is the most powerful in the country. Its first versions were housed in a glass enclosure with a micro-mist fire suppression system and a unique location: the chapel on the UPC North Campus. Virtual tours and, occasionally, guided visits are available.
Early documents placed MareNostrum 5's entry into operation between 2020 and 2021; it was finally unveiled at the end of 2023 with the aforementioned performance increase. This evolution illustrates how HPC calendars slip due to technical complexity.
Historical lists and other featured systems
The TOP500 list has existed since 1993 and is updated twice a year. In 2021, for example, the top 10 included Fugaku, Summit, Sierra, Sunway TaihuLight, Perlmutter, Selene, Tianhe-2A, JUWELS Booster Module, HPC5, and Frontera. Although several have since been surpassed, they remain technological milestones for their impact.
Some additional relevant details from systems cited in the sources: Frontier (HPE Cray EX, ORNL) was the first to officially break the exascale barrier; Summit (IBM POWER9 + Nvidia V100) excelled in Alzheimer's research, genetics, and meteorology; Sierra (IBM + Nvidia + Mellanox) worked for the NNSA on nuclear security.
In Europe, in addition to those already mentioned, there are JUWELS Booster and SuperMUC (Lenovo, direct water cooling), with tens of petabytes of storage and powerful visualization environments. Switzerland operated Piz Daint (Cray), with DataWarp as a burst buffer to accelerate I/O.
Italy incorporated HPC5 (Dell, at ENI) with Xeon Gold 6252 and Nvidia V100, and Marconi-100 (IBM POWER9 + Volta V100) at Cineca; Perlmutter (Berkeley Lab, USA) was one of the most powerful for AI processing with 6,000 A100 GPUs, capable of 180 PFLOPS and, in certain AI scenarios, several effective exaFLOPS.
In the US, Selene (Nvidia, A100) shone for its efficiency (1,344 kW); Frontera (Dell, Univ. of Texas) stood out for its storage (50 PB HDD + 3 PB SSD, 12 Tbps) and 17-second reboots; Trinity (Cray XC40) served the NNSA with Haswell and Knights Landing; and Lassen (IBM POWER9) strengthened the LLNL ecosystem.
Japan also promoted ABCI (Fujitsu) for AI in the cloud. And in Spain, MareNostrum 4 (2017) reached 13.7 PFLOPS before the jump to MN5, with applications in genetics, chemistry, paleontology, meteorology, and air quality (CALIOPE).
This entire map, from exascale installations down to the desktop, sketches a near future in which testing, tuning, and running inference on advanced models becomes increasingly local, with the cloud as a complement. Europe is stepping on the gas with EuroHPC, the United States and China are locked in an exascale standoff, India is emerging with massive investments and, on the desktop side, the DGX Spark opens a tangible door to high-level AI without leaving the lab, the office, or even home.