The Need for Data Centers to Support AI Trends
AI is an energy-intensive technology, requiring data centers to have both ample computing power and electrical capacity to sustain operations.
A recent study by RISE Research Institutes of Sweden demonstrated how quickly AI adoption is changing the landscape. For example, ChatGPT reached 1 million users within just five days of its release in November 2022 and 100 million users within two months. By comparison, TikTok took nine months and Instagram more than two and a half years to reach the 100-million-user milestone.
For context, a single Google search consumes about 0.28 Wh of energy, the equivalent of keeping a 60W light bulb on for 17 seconds. In contrast, training GPT-4, a model with roughly 1.7 trillion parameters trained on an estimated 13 trillion tokens, demands vastly more power. The training run reportedly used some 25,000 Nvidia A100 GPUs housed in eight-GPU servers, each server drawing around 6.5 kW; it ran for about 100 days, consumed roughly 50 GWh of energy, and cost on the order of $100 million.
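As a rough sanity check on those figures, the sketch below recomputes the training-energy estimate from the assumptions above (25,000 A100 GPUs in eight-GPU servers drawing about 6.5 kW each for 100 days). All inputs are illustrative estimates rather than official data.

```python
# Back-of-the-envelope estimate of GPT-4 training energy from the figures above.
# All inputs are illustrative assumptions, not official data.

gpus = 25_000            # A100 GPUs reportedly used for training
gpus_per_server = 8      # DGX A100-class server (assumption)
server_power_kw = 6.5    # approximate peak draw per eight-GPU server
training_days = 100

servers = gpus / gpus_per_server               # ~3,125 servers
hours = training_days * 24                     # 2,400 h
energy_gwh = servers * server_power_kw * hours / 1e6   # kWh -> GWh

print(f"Servers: {servers:.0f}")
print(f"Estimated training energy: {energy_gwh:.1f} GWh")  # ~49 GWh, in line with the ~50 GWh figure
```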
Clearly, AI will drastically reshape data center operations, demanding computing capabilities and energy levels beyond what we’ve previously seen.
48V Architecture in Data Centers
Early data centers used a centralized power architecture (CPA) to convert grid voltage to 12V for distribution to servers, with local converters stepping down the voltage further to 5V or 3.3V for logic components.
However, as power demands grew, the current on the 12V bus became too high, resulting in unacceptable losses. This drove a shift to 48V bus layouts, a configuration known as Distributed Power Architecture (DPA). For the same delivered power, quadrupling the bus voltage cuts the current by a factor of four, and because conduction losses scale with the square of the current (P = I²R), distribution losses fall by a factor of 16.
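The factor-of-16 improvement is plain I²R arithmetic; the sketch below illustrates it for a hypothetical 12 kW load and an assumed 1mΩ of distribution resistance (both values are examples, not figures from the article).

```python
# Illustration of why a 48V bus cuts distribution losses by ~16x versus 12V.
# Load power and bus resistance are arbitrary example values.

load_kw = 12.0          # power delivered over the bus (example)
bus_resistance = 0.001  # ohms of cabling/busbar resistance (example)

for bus_voltage in (12.0, 48.0):
    current = load_kw * 1000 / bus_voltage      # I = P / V
    loss_w = current ** 2 * bus_resistance      # P_loss = I^2 * R
    print(f"{bus_voltage:>4.0f} V bus: I = {current:6.1f} A, loss = {loss_w:6.1f} W")

# 12V: 1000 A -> 1000 W of loss; 48V: 250 A -> 62.5 W, i.e. 1/16 of the loss.
```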
At the same time, the voltage for processors and other components decreased to sub-volt levels, necessitating multiple secondary voltage rails. To address this, two-stage conversion technologies were developed, where DC-DC converters (Intermediate Bus Converters, or IBCs) transform 48V into 12V, which is then further split into the necessary lower voltages.
The Demand for High-Efficiency MOSFETs
Power losses within data centers are a significant challenge for operators. First, they pay for electricity that does not contribute directly to server operation. Second, any wasted energy becomes heat that must be managed. With AI racks demanding up to 120 kW of power, a figure likely to climb further, even at 50% load a conversion stage running at 97.5% efficiency turns 2.5% of the 60 kW passing through it, about 1.5 kW per rack, into waste heat, equivalent to running an electric heater continuously.
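To make the scale of the problem concrete, here is a minimal sketch of that arithmetic, using the rack power, load factor, and efficiency quoted above.

```python
# Heat dissipated by a power-conversion stage, using the figures from the text.
# The 2.5% loss is taken as a share of the power flowing through the stage (simple approximation).

rack_power_kw = 120.0    # peak rack demand (illustrative)
load_factor = 0.5        # running at 50% load
efficiency = 0.975       # 97.5% efficient conversion stage

throughput_kw = rack_power_kw * load_factor      # 60 kW flowing through the converter
loss_kw = throughput_kw * (1 - efficiency)       # 2.5% of 60 kW

print(f"Power delivered: {throughput_kw:.1f} kW")
print(f"Waste heat:      {loss_kw:.2f} kW")      # 1.50 kW, comparable to a household electric heater
```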
To handle this heat, cooling systems such as heat sinks or fans are necessary, which occupy space that could otherwise be used for additional computational power. These cooling systems also consume electricity and add to operating costs. As the temperature in data centers must be tightly controlled, excessive losses raise the internal temperature, requiring more air conditioning and driving up both capital and operational expenditures.
Thus, the efficient conversion of grid voltage to the necessary voltage levels for AI GPUs and other devices is crucial for data center operators.
Over the years, significant effort has gone into optimizing power topologies, such as introducing Totem-Pole Power Factor Correction (TPPFC) in the front-end PFC stage to improve efficiency. In addition, diode rectifiers have given way to MOSFETs operating as synchronous rectifiers, boosting efficiency further.
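To see why synchronous rectification pays off, compare the conduction loss of a diode rectifier with that of a MOSFET at the same current; the component values below are generic examples, not figures from the article.

```python
# First-order conduction-loss comparison: diode rectifier vs. MOSFET synchronous rectifier.
# Component values are generic examples.

current_a = 30.0         # average rectifier current (A)
diode_vf = 0.5           # forward drop of a Schottky diode (V), example
mosfet_rdson = 0.0013    # 1.3 mOhm synchronous-rectifier MOSFET, example

diode_loss_w = diode_vf * current_a              # P = Vf * I
mosfet_loss_w = current_a ** 2 * mosfet_rdson    # P = I^2 * Rds(on)

print(f"Diode:  {diode_loss_w:.2f} W")   # 15 W
print(f"MOSFET: {mosfet_loss_w:.2f} W")  # ~1.2 W at the same current
```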
Optimizing topology is only part of the solution; the components themselves must also be as efficient as possible, particularly MOSFETs that play a crucial role in power conversion.
MOSFETs in switch-mode power supplies incur two main types of losses: conduction losses and switching losses. Conduction losses stem from the on-resistance (RDS(ON)) between drain and source and are dissipated whenever current flows through the device. Switching losses arise from the gate charge (Qg), output charge (QOSS), and reverse recovery charge (Qrr), all of which must be charged and discharged on every switching cycle. As switching frequencies rise to shrink the magnetic components, switching losses become increasingly significant.
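A first-order loss budget shows how these terms add up; the formulas below are standard textbook approximations, and the device parameters are placeholder values rather than the specifications of any particular part.

```python
# First-order MOSFET loss estimate in a switch-mode converter.
# Formulas are standard approximations; parameter values are placeholders.

i_rms = 25.0        # RMS drain current (A)
rds_on = 0.0015     # on-resistance (ohm)
v_in = 48.0         # switched voltage (V)
v_drive = 10.0      # gate-drive voltage (V)
f_sw = 500e3        # switching frequency (Hz)
qg = 30e-9          # total gate charge (C)
qoss = 50e-9        # output charge (C)
qrr = 40e-9         # body-diode reverse-recovery charge (C)

p_cond = i_rms**2 * rds_on            # conduction loss
p_gate = qg * v_drive * f_sw          # gate-drive loss
p_oss = 0.5 * qoss * v_in * f_sw      # output-capacitance loss (approx.)
p_qrr = qrr * v_in * f_sw             # reverse-recovery loss (approx.)

print(f"Conduction loss:          {p_cond:.2f} W")
print(f"Switching-related losses: {p_gate + p_oss + p_qrr:.2f} W")
# Raising f_sw shrinks the magnetics but scales every switching-related term linearly.
```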
Clearly, the lower the conduction and switching losses of a specific MOSFET, the more efficient the overall power conversion system will be.
Overview of PowerTrench T10 MOSFET
Synchronous rectification is now a critical technology for high-performance, high-current, low-voltage power conversion applications, especially in data center servers. In these systems, parameters like RDS(ON), Qg, QOSS, and Qrr directly impact conversion efficiency. Device manufacturers are working to minimize these impacts.
ON Semiconductor's PowerTrench T10 MOSFET uses a new shielded gate trench design to achieve ultra-low Qg values with an RDS(ON) below 1mΩ. The latest PowerTrench T10 technology not only reduces ringing, overshoot, and noise but also incorporates an industry-leading soft-recovery body diode that reduces Qrr. The result is a well-balanced trade-off between on-resistance and recovery characteristics, enabling low-loss, fast switching with excellent reverse recovery.
The improvements in PowerTrench T10 MOSFETs increase the efficiency of low-to-mid voltage, high-current switching power supply solutions. Switching losses can be reduced by up to 50% compared to previous generations, while conduction losses can be reduced by 30%-40%.
ON Semiconductor has introduced 40V and 80V series products based on PowerTrench T10 technology. The NTMFWS1D5N08X (80V, 1.43mΩ, 5mm x 6mm SO8-FL package) and NTTFSSCH1D3N04XL (40V, 1.3mΩ, 3.3mm x 3.3mm source-down dual cooling package) provide excellent efficiency for power supply units (PSUs) and intermediate bus converters (IBCs) in AI data center applications. They meet Open Rack V3 specifications, achieving 97.5% PSU efficiency and 98% IBC efficiency.
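Because the PSU and IBC sit in series on the power path, their efficiencies multiply; the short sketch below shows the end-to-end efficiency implied by the Open Rack V3 figures quoted above (the load value is an example).

```python
# End-to-end efficiency of the cascaded PSU (AC -> 48V) and IBC (48V -> 12V) stages.
# Uses the efficiency figures quoted above; the load value is an example.

psu_eff = 0.975
ibc_eff = 0.98
chain_eff = psu_eff * ibc_eff          # ~95.6% from AC input to the 12V bus

load_kw = 60.0                         # example load on the 12V bus
input_kw = load_kw / chain_eff
print(f"Chain efficiency: {chain_eff:.2%}")
print(f"Heat in conversion stages: {input_kw - load_kw:.2f} kW")
```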
Conclusion
The AI revolution has arrived, and it is difficult to fully predict the future energy demands of data centers. What is clear is that new challenges have emerged. Scarce real estate and grid capacity constraints make it difficult to find new, suitably sized sites, while the surge in power demand from critical IT equipment drives up energy costs. To meet these demands, data center owners must not only build new facilities but also maximize the capacity of existing ones, pushing power density per square foot as high as possible.
As per-rack power demands climb past 100 kW, power conversion will be a key focus for ensuring efficient operation, reliable heat dissipation, and increased power density while saving space in modern data centers.
ON Semiconductor's PowerTrench T10 technology delivers industry-leading RDS(ON), higher power density, lower switching losses, and improved thermal performance, which ultimately reduces system cost. Power semiconductor innovations such as PowerTrench T10 will be a key building block of the data centers to come.