As more businesses in the Asia-Pacific region adopt artificial intelligence to drive their operations, pressure on data centers is rapidly increasing. Traditional facilities built for previous generations of computing struggle to keep up with the heavy energy and cooling demands of modern AI systems. With GPU-driven workloads expected to push rack power densities toward 1 MW by 2030, incremental upgrades are no longer adequate. Instead, operators are turning to dedicated “AI factory” data centers designed from the ground up.
AI News spoke with Paul Churchill, Vice President at Vertiv Asia, to better understand how the region is preparing for this shift and what infrastructure changes lie ahead.
Explosive market growth is setting the pace
The AI data center market is projected to surge from $236 billion in 2025 to about $934 billion by 2030. This growth is driven by the rapid adoption of AI in industries such as finance, healthcare, and manufacturing. These sectors rely on high-performance computing environments with dense GPU clusters that demand far more power and cooling than traditional servers.
In the Asia-Pacific region, this demand is amplified by government investment in digitalization, the expansion of 5G, and the deployment of cloud-native and generative AI applications. Together, these are raising computing requirements at a pace the region has never seen before.
Churchill explained that meeting this demand will require more than just larger facilities; it calls for a scalable, sustainable, and smarter infrastructure strategy. “Infrastructure leaders need to move beyond piecemeal upgrades. Future-ready strategies include adopting AI-optimized infrastructure, combining high-capacity power systems, advanced thermal management, and integrated, scalable designs,” he said.
Cooling and power challenges are rising
As rack densities climb from 40 kW to 130 kW, and toward 250 kW by 2030, cooling and power delivery are becoming critical issues. Traditional air cooling methods are no longer sufficient under these conditions.
To address this, Vertiv is developing a hybrid cooling system that combines direct-to-chip liquid cooling with air-based solutions. The system can adapt to workload changes, reduce energy usage, and maintain reliability. “Our coolant distribution units allow for direct liquid cooling to the chip, ensuring reliability and maintainability in high-density environments,” Churchill said.
Power supply is also becoming more complicated. AI workloads fluctuate rapidly, so infrastructure needs to respond in real time. Vertiv has evolved its rack-based power distribution units and busway systems to handle higher voltages and improve load distribution. Intelligent monitoring helps operators manage loads more efficiently, reduce stranded capacity, and improve uptime. This is an important consideration in parts of Southeast Asia where power grids are less stable.
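Stranded capacity is power that is provisioned to a rack on paper but never drawn in practice, so it cannot be offered to new workloads. The short sketch below shows one way such headroom could be estimated from rack-level telemetry; the rack names, readings, and safety margin are illustrative assumptions, not Vertiv’s monitoring API or data model.

```python
# Hypothetical sketch: estimating stranded power capacity from rack telemetry.
# All names and figures are assumptions for illustration only.

from dataclasses import dataclass


@dataclass
class RackReading:
    name: str
    provisioned_kw: float   # power budgeted to the rack
    peak_draw_kw: float     # highest observed draw over the monitoring window


def stranded_capacity(readings: list[RackReading], safety_margin: float = 0.10) -> float:
    """Return total provisioned power (kW) that peak demand never approaches.

    Capacity above the observed peak plus a safety margin is counted as
    stranded: it is reserved on paper but unavailable to new workloads.
    """
    stranded = 0.0
    for r in readings:
        usable_peak = r.peak_draw_kw * (1 + safety_margin)
        stranded += max(0.0, r.provisioned_kw - usable_peak)
    return stranded


if __name__ == "__main__":
    racks = [
        RackReading("row1-rack01", provisioned_kw=130, peak_draw_kw=96),
        RackReading("row1-rack02", provisioned_kw=130, peak_draw_kw=71),
        RackReading("row2-rack01", provisioned_kw=40, peak_draw_kw=38),
    ]
    print(f"Stranded capacity: {stranded_capacity(racks):.1f} kW")
```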
Data centers are being redesigned for AI
The liquid-cooled GPU pods and 1 MW racks planned by hyperscalers such as AMD, Microsoft, Google, and Meta signal a deeper architectural shift. Instead of renovating older facilities, operators are building sites designed specifically to support AI.
“The future of data center architecture is hybrid, and these facilities need to be built around liquid cooling,” Churchill said. This includes new floor layouts, advanced coolant distribution, and more sophisticated power systems.
Next-generation facilities integrate cooling, power, and monitoring from the chip level to the grid. In the Asia-Pacific region, where hyperscale campuses are expanding rapidly, this kind of integrated design is essential to meeting performance expectations and sustainability goals.
From incremental upgrades to AI factory data centers
By 2030, the Asia-Pacific region is expected to overtake the US in data center capacity, reaching nearly 24 GW of commissioned power. To handle this growth, businesses are moving from ad hoc upgrades to full-stack AI factory data centers.
Churchill said the transition should take place in stages. The first step is integrated planning: treating power, cooling, and IT management as one system rather than as separate ones. This approach simplifies deployment and provides a strong foundation for scaling.
The second step is to adopt modular and prefabricated systems. These allow businesses to add capacity in stages without major disruption. “Companies can deploy factory-tested modules alongside existing infrastructure, allowing them to gradually migrate their workloads to AI-ready capacity without disruptive overhauls,” he said.
Finally, sustainability must be incorporated at every stage. This includes the use of lithium-ion energy storage, grid-interactive UPS systems, and high-voltage distribution to improve efficiency and resilience.
DC power gains new relevance in AI data centers
Vertiv recently introduced PowerDirect Rack, a DC power shelf designed for AI and high-performance computing. Switching to DC power reduces energy loss by cutting the number of conversion steps between the grid and the server. It also pairs more readily with renewable energy and battery storage systems, which are increasingly common across the Asia-Pacific.
This is especially useful in energy-constrained markets such as Vietnam and the Philippines, where flexible power solutions are essential to keeping facilities running smoothly. As Churchill put it, DC power “is not just an efficiency play, it’s a strategy to enable sustainable scalability.”
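The efficiency argument is essentially multiplicative: each conversion stage between the grid and the server loses a few percent, and removing stages compounds into a meaningful saving. The sketch below walks through that arithmetic with assumed, round-number stage efficiencies; they are illustrative only, not measured figures for PowerDirect Rack or any specific power train.

```python
# Illustrative sketch of why fewer conversion stages cut losses.
# The stage efficiencies are assumed round numbers, not product data.

from math import prod


def chain_efficiency(stage_efficiencies: list[float]) -> float:
    """Overall efficiency of a power path is the product of its stages."""
    return prod(stage_efficiencies)


# Assumed conventional AC path: UPS double conversion, PDU transformer, rack PSU (AC->DC).
ac_path = [0.96, 0.985, 0.94]

# Assumed DC path: one rectification stage feeding a DC shelf, then DC->DC at the rack.
dc_path = [0.975, 0.98]

for label, path in (("AC distribution", ac_path), ("DC distribution", dc_path)):
    eff = chain_efficiency(path)
    print(f"{label}: {eff:.1%} end-to-end, {1 - eff:.1%} lost as heat")
```

Under these assumptions the AC chain delivers roughly 89% of grid power to the server while the shorter DC chain delivers about 96%, which is the kind of gap the DC approach is meant to close.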
Sustainability is becoming a central priority
With AI driving up energy use, data center operators face more stringent regulations and growing grid constraints. This is especially true in Southeast Asia, where power reliability and tariffs vary widely.
Vertiv works with operators to integrate alternatives such as lithium-ion battery storage, hybrid power systems, and microgrids, which reduce reliance on the grid and increase resilience. There is also growing interest in solar-assisted UPS systems and advanced energy storage technologies, which help manage load balancing and costs.
Cooling efficiency is also a major focus. Hybrid liquid cooling systems can reduce both energy and water usage compared with older methods. “Our focus is on providing infrastructure that meets performance demands while meeting ESG goals,” Churchill said. “We are working with our partners to ensure that AI-driven growth in the region is responsible, sustainable, and aligned with long-term digital and environmental goals.”
Modular solutions support rapid expansion
Many emerging economies in the Asia-Pacific region face challenges such as limited land, unstable electricity supply and lack of skilled labor. In these settings, modular and prefabricated data center systems provide a practical solution.
Prefabricated modules can reduce deployment time by up to 50% while improving energy efficiency and scalability. They allow operators to expand gradually, adding capacity as needed without large upfront investment. That flexibility is particularly valuable for AI workloads, which can grow quickly and unpredictably.
By combining compact design with energy-efficient operation, modular systems give operators a way to build AI readiness faster and with less risk. This is an important advantage as the region’s digital economy grows.
Preparing for a demanding future
The AI surge is reshaping how data centers in the Asia-Pacific region are built and operated. As workloads grow and sustainability pressures mount, businesses can no longer rely on legacy infrastructure. The transition to AI factory data centers, with advanced cooling, DC power, and modular systems, reflects a change in how the region is preparing for the next era of computing.