Highlights from NVIDIA GTC 2025, Washington, D.C.

by | Nov 4, 2025 | Data Centers, Fusion Energy

The primary business, or more correctly the purpose, of fusion energy is the generation of electricity. While fusion does have other uses, such as industrial heat (and potentially even space travel in the future), its use for generating electricity is the overwhelming force behind the commercialization of fusion energy; hence The Fusion Report’s interest. And this is exactly why we are here at NVIDIA’s GPU Technology Conference (GTC) session in Washington, D.C. While this session has much in common with NVIDIA’s flagship GTC developer conference, held every spring in San Jose, California, it is also more focused on government-related topics – sitting, essentially, at the crossroads of artificial intelligence (AI) and public policy.

NVIDIA and Artificial Intelligence – Friend, Enemy, or Something Else?

NVIDIA was founded in 1993 to develop graphics processing units (GPUs) for video games. As the company gained dominance in that market, it began focusing on using the cores in its GPUs for general-purpose parallel processing, investing nearly a billion dollars in the effort. The result was CUDA, NVIDIA’s platform for GPU computing, introduced in 2007. The AI-focused world we live in today was largely enabled by that work on CUDA.

Interestingly, AI has been in the headlines a great deal these days.

The Potential (and the Potential Problems) With AI

As a user of AI and someone who has been involved in this field for over a decade, I can personally testify to the value that AI (and its predecessor, machine learning or “ML”) can provide for a number of use cases: cybersecurity (most modern cybersecurity tools use AI, ML, or both); radiometry for oncological uses (cancer detection and diagnosis); automating customer service outreach; and processing call transcriptions to identify action items. In these “micro-economic” use cases, AI simplifies labor-intensive tasks, improving productivity.

The potential problems with the use of AI lie not in AI itself (other than, perhaps, its growing impact on the electrical grid). Rather, the problems that AI potentially enables stem from the macro-economic impacts of these changes. These include large-scale job displacement (some believe AI could replace most white-collar jobs within a decade), increases in electrical consumption and costs, or even the inadvertent use of weapons of mass destruction (interestingly, the fictional Skynet AI was located in a prominent San Francisco skyscraper). While these are in some respects dystopian (or apocalyptic) scenarios, they highlight the impact of the lack of governance over AI’s use, and the lack of business and political strategies to mitigate those negative impacts.

And Now to The Fun Stuff from NVIDIA GTC 2025 in Washington, DC

As you can guess, these sorts of issues were not addressed at GTC (at least not directly). Most of the presentations, including Jensen Huang’s two-hour keynote (and it was a very interesting two hours), dealt not only with the advances NVIDIA has made in computation, but with how those advances have transformed industries. From a purely technical standpoint, the picture at the right shows the compute “efficiency” improvement between the last-generation H200 GPGPU and the new Blackwell NVL72 – 10X better! This mirrors NVIDIA’s strategy of “exponential growth” rather than “percentage growth”. The gains come not simply from improvements in the GPGPUs themselves, but also from the interconnects (the “fabric”) between them.
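The difference between “exponential growth” and “percentage growth” is easy to make concrete with a quick back-of-the-envelope calculation. The numbers below are illustrative assumptions, not NVIDIA figures: a hypothetical 10X jump per two-year hardware generation versus a steady 30% annual improvement.

```python
# Illustrative sketch (assumed numbers, not NVIDIA data): comparing
# generational "exponential" jumps with steady "percentage" growth.
gens = 3           # hypothetical number of hardware generations
years_per_gen = 2  # assume one generation every two years
years = gens * years_per_gen

exponential = 10 ** gens          # 10X per generation
percentage = 1.30 ** years        # 30% improvement, compounded annually

print(f"After {years} years:")
print(f"  10X per generation: {exponential:,}x")
print(f"  30% per year:       {percentage:.1f}x")
```

Over six years, the generational-jump model yields a 1,000X improvement, while 30% annual compounding yields under 5X – which is the gap the “exponential, not percentage” framing is pointing at.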

From a more “macro” perspective, NVIDIA views the software industry prior to AI as essentially building tools to improve human productivity – Word, Excel, databases, and so on. In contrast, AI allows the creation of “workers,” which can use those same tools or new ones. This is a concept NVIDIA puts into practice itself, augmenting its $200K-$300K engineers with AI companion workers that improve the productivity of its human staff. Indeed, NVIDIA’s philosophy on human capital is markedly different from that of most of its high-tech brethren – the company avoids large-scale layoffs, even when missing revenue targets. This also extends to where NVIDIA makes its products: manufacturing is now 100% onshore, in Texas, Arizona, and other US sites.

Perhaps the coolest pieces were NVIDIA’s ‘factorization’ of data centers and its entry into telecom. Using a digital-twin approach, NVIDIA has sped up the deployment of new data centers and improved their efficiency, essentially modularizing data center design so that partners can build and ship fully tested pieces to the site. On the telecom side, NVIDIA announced a deal with Nokia to build 6G AI-native radio access network (RAN) nodes based on NVIDIA embedded AI computers. The goal is not only to enter telecom – it is also to return telecom manufacturing to the US. Overall, very impressive for a middle-class kid from Taipei who worked his way through high school and college, and founded NVIDIA on the back of a napkin at an east San Jose Denny’s before he was thirty years old.