Data Centers and the Insatiable Demand for Electricity

Feb 26, 2025

In a recent blog we discussed how artificial intelligence (AI) could help realize fusion energy by modeling such things as the complex magnetohydrodynamics of plasmas and the impacts of neutron damage on the materials making up components like first walls and breeding blankets. What we didn't explore in detail is the amount of energy that AI data centers require to do all of this incredible modeling. That is the subject of today's article.

The Evolution of the Data Center

Data centers have been around as long as computing; after all, the first computers (especially mainframes) were massive, expensive machines that needed significant power and cooling. Moreover, many of these systems ran military and/or intelligence programs, making physical security very important. There was also a focus on "timesharing", in which many users shared a single machine's compute time, often for a fee. These "computer rooms" were the first data centers.

This concept evolved further with the mass deployment of minicomputers, and then microcomputer servers, in the 1970s and 1980s. The focus of the IT groups managing these data centers shifted to organizing the networking equipment and wiring that connected the thousands of servers a data center could hold. As things progressed into the "dot-com bubble" of the late 1990s, many of the larger data centers began to be built at the nexus of internet backbones. These data centers, often run by managed service providers (MSPs), minimized the impact of connectivity problems on compute availability by accessing multiple backbones simultaneously.

As data centers grew even larger, the concept of the cloud data center emerged in the late 2000s. In nearly all cases, these data centers provided computing-as-a-service, running jobs for client companies and agencies. The largest of these were known as "hyperscale data centers" and were run by companies such as Amazon Web Services (AWS), Microsoft Azure, Google Cloud, Oracle Cloud Infrastructure, and other extremely large IT providers. These hyperscale providers offered a wide variety of infrastructure and service options, enabling companies with specific requirements to build their own virtual data centers that could grow or shrink effectively without limit. As these hyperscale data centers started running artificial intelligence and machine learning models, they grew even larger and became known as AI data centers.

Throttling Up Data Center Power

As the size and number of large data centers have increased, the electricity they consume has grown exponentially, with hyperscale data centers holding tens of thousands of servers easily consuming over 100 MW of power. Some of the largest data centers now consume over 1 GW. Not surprisingly, many of these are being sited not based on the availability of internet backbones, but on the availability of adequate electricity, either in the form of high-voltage transmission lines or actual power plants.
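
For a rough sense of how server counts translate into facility-scale power, here is a minimal back-of-the-envelope sketch in Python. The server count and per-server draw are hypothetical assumptions chosen for illustration, not figures from this article.

```python
# Back-of-the-envelope sketch: facility IT load from server count.
# Both numbers below are illustrative assumptions, not reported data.

servers = 50_000            # hypothetical hyperscale server count
watts_per_server = 2_000    # hypothetical average draw per AI server (W)

it_load_mw = servers * watts_per_server / 1e6
print(f"Estimated IT load: {it_load_mw:.0f} MW")   # ~100 MW at these assumptions
```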

In AI data centers, the primary driver of power consumption is the use of general-purpose graphics processing units (GPGPUs). These chips, primarily built by NVIDIA, typically have thousands of computing cores, special-purpose architectures for large language model (LLM) inference engines, and extremely high-speed networking. While each generation of GPGPUs is more energy-efficient on a "watt per inference" basis, the total power consumed by each GPGPU has also increased from generation to generation (as has the number of inferences each GPGPU can run).
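
To illustrate the "watt per inference" point, the short sketch below shows how a chip that draws more total power can still use less energy per inference if its throughput grows faster. All of the power and throughput numbers are hypothetical placeholders, not published specifications.

```python
# Energy per inference = board power / inference throughput.
# The generations and their numbers are hypothetical placeholders.

generations = {
    # name: (board power in watts, inferences per second)
    "gen_n":   (400, 2_000),
    "gen_n+1": (700, 5_000),
}

for name, (power_w, inferences_per_s) in generations.items():
    joules_per_inference = power_w / inferences_per_s
    print(f"{name}: {power_w} W total, {joules_per_inference:.2f} J per inference")
```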

In addition to the electricity required to run the GPGPUs, storage systems, and associated networking equipment, cooling is a growing consumer of data center power. While the average rack of data center equipment consumes roughly 7 kW, some racks can consume as much as 15 kW at peak usage. This drives the need for more exotic cooling techniques such as immersion cooling, in-rack liquid cooling, and cold-plate cooling. These techniques, while moving heat away from the rack, also consume a significant amount of electricity.
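
One common way to reason about this overhead is Power Usage Effectiveness (PUE), the ratio of total facility power to IT power. The minimal sketch below reuses the 15 kW peak rack figure from above; the rack count and PUE value are assumptions for illustration only.

```python
# Sketch of facility overhead using PUE (total facility power / IT power).
# Rack count and PUE are illustrative assumptions; 15 kW is the peak rack
# draw cited in the article.

racks = 1_000
kw_per_rack = 15
pue = 1.4                 # hypothetical facility PUE

it_load_mw = racks * kw_per_rack / 1_000
total_mw = it_load_mw * pue
overhead_mw = total_mw - it_load_mw   # cooling, power conversion, etc.

print(f"IT load: {it_load_mw:.1f} MW, facility total: {total_mw:.1f} MW, "
      f"overhead: {overhead_mw:.1f} MW")
```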

Fusion Energy to Electrify Next-Gen Data Centers

Data centers benefit from power sources located as close as possible. Only a few power sources meet that requirement: fission small modular reactors (SMRs) and fusion machines. However, SMRs are generally limited to a few hundred megawatts (MW); with data centers breaking the 1 GW barrier, SMRs are not an option. Additionally, expanding transmission lines is generally a long-term process that includes siting, permitting, construction, and commissioning, each of which can take years. Luckily, fusion power plants can fit within the footprints of existing fossil fuel power plants, or even onsite at larger data centers. This makes fusion machines even more compelling for hyperscale data centers, which is why hyperscalers are also investing in fusion energy.

The Electricity Needs of Data Centers Aren’t Going Away

To put an exclamation point on the demand for electricity for data centers, the US Department of Energy sponsored a study by Lawrence Berkeley National Laboratory (LBNL) on this subject. The research showed that data center electricity demand is projected to grow from 176 terawatt-hours (TWh) in 2023 (4.4% of total US electricity consumption) to between 6.7% and 12.0% of total US electricity consumption by 2028 (roughly 320 TWh to 580 TWh). This represents a compound annual growth rate (CAGR) of between 13% and 27%. In states like Virginia, the problem is even more acute. Fusion energy represents a way to meet these requirements within the existing footprint of utility transmission lines by providing a level of siting flexibility that other power sources do not have.
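
As a quick check on that growth rate, the implied CAGR can be computed directly from the 2023 baseline and the 2028 projections cited above; the snippet below reproduces the roughly 13% to 27% range.

```python
# Verify the implied CAGR from the study's figures:
# 176 TWh in 2023 growing to roughly 320-580 TWh by 2028 (a 5-year span).

def cagr(start, end, years):
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

base_twh = 176
for projected_twh in (320, 580):
    rate = cagr(base_twh, projected_twh, years=5)
    print(f"{base_twh} -> {projected_twh} TWh implies ~{rate:.0%} CAGR")
```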