Imagine connecting thousands of powerful AI chips scattered across dozens of server cabinets and making them work together as if they were a single huge computer. That’s exactly what Huawei demonstrated at Huawei Connect 2025, where the company announced breakthroughs in AI infrastructure architecture that rethink how large-scale AI systems are built.
Instead of the traditional approach, in which individual servers operate largely independently, Huawei’s new SuperPod technology allows thousands of individual processing units to “learn, think, and reason” as a single logical machine.
Beyond the impressive technical specifications, this represents a change in the way AI computing power is organized, scaled, and deployed.
Technical foundation: UnifiedBus 2.0
The core of Huawei’s infrastructure approach is UnifiedBus (UB). “Huawei has developed a groundbreaking SuperPod architecture based on the UnifiedBus interconnect protocol. The architecture allows physical servers to be deeply interconnected so that they can learn, think, and reason like a single logical server,” said Yang Chaobin, board director and CEO of the ICT Business Group at Huawei.
The technical specifications reveal the scope of this achievement. The UnifiedBus protocol addresses two challenges that have historically limited AI computing at scale: the reliability of long-distance communications, and their bandwidth and latency. Traditional copper connections offer high bandwidth, but can typically only connect two cabinets over a short distance.
Optical cables support longer ranges, but suffer from reliability issues that worsen as distance and scale grow. Huawei’s deputy chairman and rotating chairman Eric Xu said solving these fundamental connectivity challenges was essential to the company’s AI infrastructure strategy.
Xu detailed the solution from the perspective of the OSI model: “We have built reliability into every layer of the interconnect protocol, from the physical and data link layers up to the network and transport layers. We achieve fault detection at the 100ns level, so that intermittent or permanent faults on optical paths can be handled without disrupting the system.”
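The link-layer failover idea described above can be sketched in miniature. The snippet below is a purely illustrative model, not Huawei’s implementation: the class and function names are invented, and real fault detection happens in hardware at nanosecond timescales rather than by reading a flag.

```python
class OpticalPath:
    """Toy model of one optical link with a health flag."""
    def __init__(self, path_id: int):
        self.path_id = path_id
        self.healthy = True

def detect_fault(path: OpticalPath) -> bool:
    # The real protocol reportedly detects faults at the 100ns level
    # in hardware; here we simply read a simulated health flag.
    return not path.healthy

def select_path(paths: list[OpticalPath]) -> OpticalPath:
    """Fail over to the first healthy redundant path."""
    for p in paths:
        if not detect_fault(p):
            return p
    raise RuntimeError("all optical paths down")

# Usage: path 0 develops a fault, so traffic moves to path 1.
paths = [OpticalPath(0), OpticalPath(1)]
paths[0].healthy = False
active = select_path(paths)
print(active.path_id)  # 1
```

The point of building this logic into every protocol layer, per Xu’s description, is that a flapping optical link is repaired transparently instead of surfacing as an application-level failure.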
SuperPod architecture: Scale and performance
The Atlas 950 SuperPod represents the flagship implementation of this architecture. In the configuration described by Xu, it incorporates up to 8,192 Ascend 950DT chips, delivering 8 EFLOPS of FP8 compute and 16 EFLOPS of FP4 compute, with an interconnect bandwidth that Huawei claims exceeds the total peak bandwidth of the global internet.
The specifications represent more than an incremental improvement. The Atlas 950 SuperPod occupies 160 cabinets across 1,000m²: 128 compute cabinets and 32 communication cabinets, linked entirely by optical interconnects. The system’s memory capacity reaches 1,152 TB, while maintaining what Huawei claims is 2.1-microsecond latency across the system.
Later in the product pipeline comes the Atlas 960 SuperPod, set to incorporate 15,488 Ascend 960 chips in 220 cabinets covering 2,200m². Xu said it would deliver 30 EFLOPS of FP8 compute and 60 EFLOPS of FP4 compute, with 4,460 TB of memory and 34 PB/s of interconnect bandwidth.
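Taking the announced figures at face value, a quick back-of-envelope calculation shows what they imply per chip: FP8 throughput per processor roughly doubles between the two generations. This is illustrative arithmetic based only on the numbers quoted above, not an official per-chip specification.

```python
def per_chip_pflops(total_eflops: float, chips: int) -> float:
    """Convert a SuperPod's aggregate EFLOPS into implied per-chip PFLOPS."""
    return total_eflops * 1000 / chips  # 1 EFLOPS = 1,000 PFLOPS

# Atlas 950: 8 EFLOPS FP8 across 8,192 Ascend 950DT chips
atlas_950 = per_chip_pflops(8, 8192)    # ~0.98 PFLOPS per chip
# Atlas 960: 30 EFLOPS FP8 across 15,488 Ascend 960 chips
atlas_960 = per_chip_pflops(30, 15488)  # ~1.94 PFLOPS per chip

print(round(atlas_950, 2), round(atlas_960, 2))  # 0.98 1.94
```

The same division applied to memory (1,152 TB over 8,192 chips) implies roughly 141 GB per Ascend 950DT, assuming memory is distributed evenly, which the announcement does not state.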
Beyond AI: General-purpose computing applications
The SuperPod concept extends beyond AI workloads to general-purpose computing through the Taishan 950 SuperPod. Built on the Kunpeng 950 processor, the system addresses enterprise challenges in replacing legacy mainframes and midrange computers.
Xu positioned this as particularly relevant to the financial sector: “The Taishan 950 SuperPod, combined with the distributed GaussDB database, serves as an ideal alternative, enabling customers to replace mainframes, midrange computers, and Oracle’s Exadata database machines.”
Open Architecture Strategy
Perhaps most significant for the broader AI infrastructure market, Huawei announced the release of the UnifiedBus 2.0 technical specifications as an open standard. The decision reflects both strategic positioning and practical constraints.
Xu acknowledged that “mainland China has lagged behind in semiconductor manufacturing process nodes for a relatively long time,” emphasising that “sustainable computing power can only be achieved with actually available process nodes.”
Yang framed the open approach as a way to build an ecosystem: “We are taking an open-hardware and open-source software approach that will help more partners develop SuperPod solutions based on their own industry needs.”
The company is opening its hardware components, including NPU modules, air-cooled and liquid-cooled blade servers, AI cards, CPU boards, cascade cards, and more. On the software side, Huawei has promised to fully open-source its CANN compiler toolkit, Mind series application kits, and openPangu foundation models by December 31, 2025.
Market deployment and ecosystem impact
Actual deployments lend validation to these technical claims. More than 300 Atlas 900 A3 SuperPod units have already shipped in 2025, deployed by over 20 customers across sectors including internet, finance, carrier, power, and manufacturing.
The impact on the development of China’s AI infrastructure is significant. By creating an open ecosystem centered on domestic technology, Huawei is tackling the challenge of building competitive AI infrastructure within the parameters set by constrained semiconductor manufacturing and availability. The approach allows broad industry participation in developing AI infrastructure solutions without requiring access to the most advanced process nodes.
In the global AI infrastructure market, Huawei’s open architecture strategy introduces an alternative to the tightly integrated hardware and software approach dominant among Western competitors. Whether Huawei’s proposed ecosystem can achieve comparable performance and maintain commercial viability has yet to be demonstrated at scale.
Ultimately, the SuperPod architecture represents more than an incremental advance in AI computing. Huawei is proposing a rethink of how large-scale computational resources are connected, managed, and scaled. The open release of its specifications and components will test whether collaborative development can accelerate AI infrastructure innovation across a partner ecosystem, with the potential to restructure competitive dynamics in the global AI infrastructure market.


