- Supercomputing and AI make it possible to create digital twins of the planet, cities, and human organs to simulate and anticipate complex scenarios.
- Europe is promoting projects such as Destination Earth, LUMI and the BSC, combining supercomputers, networks of centers and the development of its own chips.
- The focus of AI is shifting from massive training to inference, with new servers, PCs, and even desktop supercomputers being prepared for AI.
- Spain is participating in this race with MareNostrum, the Spanish Supercomputing Network and systems like Picasso, providing services to science, industry and society.
Supercomputing and artificial intelligence have become the trendy pairing of today's technology. We're no longer just talking about large data centers hidden away in scientific bunkers, but about machines capable of creating digital twins of the planet, of the human heart or of an entire city, and even of desktop supercomputers that fit (more or less) under an office desk.
At the same time, we are experiencing a phase change in AI: from the boom in training gigantic models we have moved to an obsession with inference, that is, with using those models at full capacity on a daily basis. Hardware manufacturers, research centers, and universities are now competing to offer everything from supercomputing centers like the Barcelona Supercomputing Center (BSC) or the European LUMI, to compact servers and AI-ready PCs that bring a kind of "mini supercomputer" to the desktop.
What is supercomputing really and how is its power measured?
When we talk about supercomputing, we're not referring to a souped-up PC, but to sets of thousands of computers that work in a coordinated manner as if they were a single machine. Each of these computers is a node, with its own CPUs, GPUs, RAM, and storage, linked by ultra-fast interconnection networks that minimize latency, the great enemy of performance.
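To make the idea of "thousands of nodes behaving as one machine" a bit more concrete, here is a minimal sketch of the basic pattern: each process computes its own slice of a problem and the partial results are combined over the interconnect. It uses MPI through the mpi4py Python library, which is our own choice for illustration and is not mentioned in the article:

```python
# Minimal sketch of distributed computation: many processes, one combined result.
# mpi4py is an assumption for illustration; real HPC codes also use MPI from C, C++ or Fortran.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # ID of this process (there may be many per node)
size = comm.Get_size()   # total number of cooperating processes across all nodes

# Each process integrates its own slice of 4/(1+x^2) over [0, 1] to estimate pi.
n = 10_000_000
local_sum = 0.0
for i in range(rank, n, size):
    x = (i + 0.5) / n
    local_sum += 4.0 / (1.0 + x * x)

# This is where the interconnect matters: partial sums travel between nodes and are combined.
pi = comm.reduce(local_sum / n, op=MPI.SUM, root=0)
if rank == 0:
    print(f"pi ≈ {pi:.10f}")
```

On a real cluster this would be launched across thousands of processes with something like `mpirun -n 4096 python estimate_pi.py` (the script name is hypothetical); this compute-locally-then-combine pattern is exactly why low-latency interconnects matter so much.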
The power of these machines is expressed in FLOPS (floating-point operations per second). At home, a powerful computer can operate in the teraFLOPS (TFLOPS) range. In supercomputing, we're playing in a different league: it's normal to talk about petaFLOPS (10¹⁵ operations per second) and, in the most advanced systems, exaFLOPS (10¹⁸). Frontier, in the United States, was the first to officially break the exascale barrier.
To give you an idea, a modern supercomputer can do in one hour what a home computer would take years to calculate. This brutal computing power is what allows us to simulate everything from hurricanes to protein dynamics, or to train AI models with billions of parameters.
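That "hours versus years" gap is easy to sanity-check with back-of-the-envelope arithmetic; the figures below are illustrative round numbers we chose, not official benchmarks of any specific machine:

```python
# Back-of-the-envelope FLOPS comparison (illustrative round numbers, not official benchmarks).
home_pc = 10e12            # a high-end home PC: ~10 teraFLOPS
exascale = 1e18            # an exascale supercomputer: ~1 exaFLOPS

# Floating-point operations performed by the exascale machine in one hour:
work = exascale * 3600

seconds_on_pc = work / home_pc
years_on_pc = seconds_on_pc / (3600 * 24 * 365)
print(f"One hour of exascale work ≈ {years_on_pc:.1f} years on the home PC")
# -> roughly 11 years, which is the kind of gap the "hours vs. years" comparison refers to
```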
What do supercomputers physically look like and why do they need so much cooling?
Visually, a supercomputer looks nothing like a typical desktop PC. It usually resembles a room full of metal cabinets, each with hundreds or thousands of processors, GPUs, and disks. The power is such that energy consumption can reach several megawatts, and a good part of that consumption ends up as heat.
That's why these systems need dedicated rooms with extreme cooling: industrial climate control, hot and cold aisles, direct-to-chip liquid cooling, and even creative solutions for harnessing that heat. In Switzerland, for example, the heat from a supercomputer is reused to heat university buildings, turning a problem into an advantage.
In some cases, highly sophisticated security and protection systems are used, such as glass enclosures with special fire-suppression systems based on micronized water, capable of extinguishing fires without damaging the electronics. This is the case of the original MareNostrum in Barcelona, installed inside a chapel at the Polytechnic University of Catalonia: probably one of the most unusually located supercomputers in the world.
The digital twin revolution: from Earth to the human heart
The combination of supercomputing and AI is driving a key concept: digital twins. These are not simple virtual models, but dynamic replicas that integrate real data in near real time to simulate, anticipate, and optimize what happens in the physical world.
In Europe, the European Commission is promoting the Destination Earth (DestinE) program, whose goal is to develop a highly accurate digital twin of the Earth within a few years. Thanks to supercomputers like LUMI, the most powerful in the European Union, it is possible to perform very high-resolution, long-term climate simulations, incorporating the atmosphere, oceans, and land surface with a level of detail that until recently was only possible in very short-term weather models.
According to Utz-Uwe Haus, head of the HPE HPC/AI EMEA Research Lab, this capability makes it possible to better understand extreme phenomena for disaster management, to study climate change scenarios, and to assess the impact of glaciers, sea ice, vegetation, and aerosols on the global climate. But it also allows for something very practical: predicting local effects with enormous precision, such as average rainfall, droughts, or floods at a regional or city scale.
This has direct consequences for agricultural planning (which crops are viable in an area and with what risk), for investment in renewables (forecasting hours of sunshine and wind over decades), and for infrastructure design. It is a clear example of how supercomputing ceases to be something abstract and begins to influence very concrete economic decisions.
Digital twins in cities, rivers and ports
Digital twins don't stop at the global climate. The Barcelona metropolitan area has a digital twin of its 164 municipalities that allows urban, economic, mobility, housing, and knowledge scenarios to be simulated for the coming decades. Policies and plans can be tested on this virtual replica before making decisions in the real world.
In the port and river sector, the Port of Seville is developing Guadaltwin, a digital twin of the Guadalquivir Eurovia, as part of its digitalization plan. This system integrates AI and machine learning to improve predictions and decisions on river traffic, draft management, tides, infrastructure, and safety.
Even in seemingly disparate fields, such as high-energy physics and fashion, digital twins have begun to make their way. CERN is investigating how to use these models in its particle physics experiments, robots, and cooling systems, and in parallel, companies like H&M have created digital replicas of human models for advertising campaigns, sparking debates about image rights and the future of creative work.
The human body as the next great digital twin
One of the most ambitious challenges lies in health. Teams like Steven Niederer's at Imperial College London are working on digital twins of individual hearts, with their specific shape, size, and function. These models make it possible to simulate surgeries and treatments without risk to the patient, and they are already used in clinical trials and in the planning of interventions.
Researchers such as Andreu Climent and María de la Salud Guillem, from the Polytechnic University of Valencia, believe that these cardiac digital twins will be key to treating complex arrhythmias, deciding who benefits from an implantable defibrillator, or anticipating the risk of sudden death. The long-term goal is even more ambitious: to build a complete digital twin of the human body that allows therapies to be tested, drug doses to be adjusted, and medicine to be personalized to the maximum.
AI, supercomputing, and the shift from training to inference
For years, the bulk of AI investment has gone toward training increasingly large models, especially in generative AI. Today, the focus is clearly shifting toward inference: using those models at scale in production, continuously and at a lower cost per operation.
At CES 2026 this change was plainly visible. Manufacturers like Lenovo introduced servers specifically designed for inference, such as the ThinkSystem SR675i, SR650i and the ThinkEdge SE455i, built to run AI models close to where the data is generated, at the so-called edge.
Origin PC, now integrated into the Corsair ecosystem, has shown the S-Class Edge AI Developer Kit, a compact, ready-to-use platform for developing AI at the network edge. The idea is that small development or research teams can test and deploy AI without always relying on the cloud or on huge external data centers.
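To picture what "running inference where the data is generated" means in code, here is a minimal sketch using ONNX Runtime; the library, the model file, and the input shape are our own assumptions for illustration and are not tied to any of the products above:

```python
# Minimal local-inference sketch with ONNX Runtime (chosen only for illustration;
# the edge devices mentioned above are not tied to this library or this model).
import numpy as np
import onnxruntime as ort

# A hypothetical, already-exported image classifier stored on the edge device itself.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name                   # e.g. "input"
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)   # stand-in for a local camera frame

# Inference happens entirely on the local machine: no data leaves the device.
outputs = session.run(None, {input_name: batch})
print("predicted class:", int(np.argmax(outputs[0])))
```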
Most PC manufacturers present at CES have followed the same line: Acer with its RA100 AI mini-station and its updated Veriton desktops; LG gram laptops with dual AI capabilities (local + cloud); Asus with a batch of new Vivobook models and the convertible ProArt PX13 geared toward creators working with AI; Dell refreshing the XPS range for AI workloads; and HP updating EliteBook, EliteBoard, Omnibook, and OmniStudio, all of them with AI acceleration and more data-processing power.
Supercomputing that comes down to the desktop: the “desktop supercomputer”
One particularly interesting movement is that of local supercomputing, with machines that, while not on the scale of a national center, offer remarkable capacity in a desktop format. At CES 2026, Gigabyte (through its subsidiary Giga Computing) presented the Gigabyte W775-V10, a true "desktop supercomputer".
This machine integrates the NVIDIA AI stack and the NVIDIA GB300 Grace Blackwell Ultra Desktop accelerator, among other top-tier components. Its goal is to enable AI-focused working groups to train and run inference on complex models without depending on the cloud or on external data centers, maintaining complete control over the data and the execution environment.
Alongside it, CES has served to refresh the component ecosystem: new Intel Core Ultra CPUs, new AMD Ryzen chips, Qualcomm's Snapdragon X2 Plus, Kioxia's BG7 SSDs, advanced DDR5 memory, and updated MSI motherboards, all designed to support data- and AI-intensive workloads.
In the peripherals arena, brands like Corsair showcased their latest high-end mice and keyboards, while Anker, eufy, and soundcore focused on connected devices. Even some curious gadgets appeared, such as the Plaud Notepin S, a small device for taking notes using AI.
What are supercomputers being used for today: from COVID to air quality
Supercomputers are almost always used for advanced research in fields where a normal PC would literally take ages to finish the calculations. Among their classic uses are meteorology and climate, earthquake simulation, and research in astrophysics, geophysics, biology, medicine, drug design, and aerospace engineering.
During the COVID-19 pandemic, several supercomputers were used to simulate the behavior of viral proteins, test combinations of molecules, and accelerate the search for drugs. Massive simulation allowed researchers to discard unpromising approaches and focus their efforts on the compounds with the highest probability of success.
Centers like the Barcelona Supercomputing Center have shown very concrete examples: using sensor data and fluid dynamics models, they trained neural networks to control incinerators, improving fuel efficiency and reducing emissions, and to predict air quality in large cities with remarkable accuracy, based on years of historical data.
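Conceptually, that air-quality work boils down to learning a mapping from historical measurements to future pollutant levels. The sketch below illustrates the idea with scikit-learn and synthetic data; the library, the features, and the numbers are our own assumptions and have nothing to do with the BSC's actual pipeline:

```python
# Toy version of "predict air quality from years of historical data".
# Synthetic data and scikit-learn are assumptions for illustration, not the BSC's pipeline.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Hypothetical hourly features: traffic level, wind speed, temperature, previous-hour NO2.
X = rng.random((n, 4))
# Synthetic target: NO2 rises with traffic and its previous value, drops with wind (plus noise).
y = 40 * X[:, 0] - 25 * X[:, 1] + 5 * X[:, 2] + 30 * X[:, 3] + rng.normal(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out hours:", round(model.score(X_test, y_test), 3))
```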
Another striking example is AlphaFold, DeepMind's system for predicting protein folding from the amino acid sequence. This problem, considered Nobel Prize-level, has benefited from an explosive combination of AI, big data, and supercomputing. Its impact on biomedicine and drug design is proving enormous, to the point that tens of thousands of researchers worldwide already use its results on a daily basis.
Common uses of supercomputing
- Weather and climate forecasting in the medium and long term.
- Simulation of earthquakes, tsunamis and natural hazards to reduce damage.
- Design and testing of aircraft, vehicles and rockets using aerodynamic models.
- Drug discovery and design and molecular interaction studies.
- Astrophysics and cosmology: formation of galaxies, stars and black holes.
- Air quality and atmospheric composition in regions and cities.
- Big Data and social simulation: cultural evolution, population movements, smart cities.
- Security and defense: from nuclear weapons simulation to digital twins of radars and complex systems.
Where are the large supercomputers located and what role does Spain play?
The TOP500 list compiles and ranks the world's 500 most powerful supercomputers twice a year, and has done so since 1993. Although China leads in the number of systems on that list, the United States maintains its lead in total aggregate power, especially with machines like Frontier.
Among the current giants we find Fugaku in Japan, which led the ranking for years; Summit and Sierra in the United States; and Sunway TaihuLight and Tianhe-2A in China, which also once held first place. Italy hosts systems such as HPC5 and Marconi-100, and Switzerland has Piz Daint, a long-time protagonist in Europe.
In Spain, the benchmark is MareNostrum, at the Barcelona Supercomputing Center. Since its first version in 2004, with around 42.4 teraFLOPS, it has scaled up to the current MareNostrum 4, with around 13.7 petaFLOPS. The next generation, MareNostrum 5, will represent another major leap in power (and in energy consumption), and is part of the European strategy to equip itself with exascale-class infrastructure.
A very relevant network is the Spanish Supercomputing Network (RES), which brings together centers and machines distributed across different autonomous communities and serves researchers throughout the country. There is also an Ibero-American supercomputing network, which connects resources from countries like Mexico and other Latin American partners for joint projects.
At the regional level, there are facilities such as the Picasso supercomputer at the University of Málaga, with some 40,000 computing cores and 180 TB of RAM. Picasso serves researchers from the university itself, Andalusian users through the Andalusian Bioinformatics Platform, and scientists from all over Spain through the RES.
All these systems almost always run Linux or derivatives, due to their open-source nature, stability, and low resource consumption compared with commercial operating systems. On this foundation, an ecosystem of scientific tools, programming environments, and AI libraries has been built, specialized and refined over years.
Reference centers: Barcelona Supercomputing Center and the European race for its own hardware
The Barcelona Supercomputing Center (BSC) is one of the major European players in supercomputing and computer architecture research. Directed for decades by Mateo Valero, the BSC has gone from managing a single supercomputer to becoming a center with more than a thousand people from more than 50 countries, organized into departments of Computer Sciences, Life Sciences, Earth Sciences, and Social Applications.
One of the distinguishing features of the BSC is that it does not limit itself to operating machines; it also develops its own software, algorithms, and even processors. For years it has been involved in European projects such as EuroHPC and in initiatives such as the European Processor Initiative (EPI) and European chips based on open architectures like RISC-V, with the aim of reducing Europe's dependence on American and Asian manufacturers.
In collaboration with other partners, the BSC has promoted vector processor prototypes, ARM- and RISC-V-based platforms, and a whole family of designs with names like Turtle, Lizard, or Chameleon, which have become increasingly complex with each generation. The idea is to create, in the medium term, chips capable of powering "MareNostrum 6"-generation supercomputers with critical computing technology developed in Europe.
This effort is framed within an uncomfortable reality: Europe used to design part of the ARM architecture, but the sale of ARM to non-European companies and the lack of large foundries of its own have left the continent in a delicate position. Faced with moves like the United States securing TSMC's advanced production in Arizona, or the chip factories that Germany and France are attracting with large public subsidies, Spain faces the challenge of combining design, manufacturing, and an industrial ecosystem with comparatively fewer resources.
In this context, the Spanish strategy involves consolidating centers like the BSC, promoting networks like the RES, supporting open chip projects, and training scarce, highly specialized profiles in computer architecture and AI. It is no coincidence that, as the center's own directors acknowledge, PhDs specializing in these fields receive offers from private industry with salaries that are difficult to match in academia, which complicates talent retention.
Meanwhile, other Spanish universities are strengthening their infrastructure, as demonstrated by the Picasso project in Málaga and the RES nodes distributed throughout the country. In many cases, these systems serve everything from particle physics to climate change studies, engineering, bioinformatics, and smart city projects, demonstrating that supercomputing is no longer a luxury reserved for a few laboratories.
Looking at the whole picture, it's quite clear that supercomputing has gone from being a laboratory curiosity to becoming a critical infrastructure for climate, health, security, the economy, and even for how we design cities, vehicles, or medicines. At the same time, the leap of AI from the laboratory to production and the rise of digital twins are bringing some of this power to specialized servers and even to the desktops of engineers and scientists, opening up a scenario in which, far from disappearing, supercomputing is becoming increasingly integrated into our daily lives, even if we often don't see it.