Meta and Oracle are upgrading their AI data centers with NVIDIA’s Spectrum-X Ethernet networking switches, a technology built to meet the growing demands of large-scale AI systems. Both companies have adopted Spectrum-X as part of an open networking framework designed to improve the efficiency of AI training and accelerate deployment across large-scale computing clusters.
NVIDIA founder and CEO Jensen Huang said models with trillions of parameters are turning data centers into “gigascale AI factories,” adding that Spectrum-X acts as a “neural system” connecting millions of GPUs to train the largest models ever built.
Oracle plans to use Spectrum-X Ethernet and the Vera Rubin architecture to build a massive AI factory. Mahesh Thiagarajan, executive vice president of Oracle Cloud Infrastructure, said the new setup will let the company connect millions of GPUs more efficiently, enabling customers to train and deploy new AI models faster.
Meanwhile, Meta is expanding its AI infrastructure by integrating Spectrum-X Ethernet switches into Facebook Open Switching System (FBOSS), an internal platform for managing large-scale network switches. According to Gaya Nagarajan, vice president of networking engineering at Meta, the company’s next-generation network must be open and efficient to support increasingly large AI models and serve billions of users.
Building a flexible AI system
According to Joe DeLaere, who heads NVIDIA’s portfolio of accelerated computing solutions for data centers, flexibility is important as data centers become more complex. He explained that NVIDIA’s MGX system offers a modular building block design that allows partners to mix and match different CPU, GPU, storage, and networking components as needed.
The system also promotes interoperability, allowing organizations to use the same design across multiple generations of hardware. “It gives us flexibility, faster time to market and future readiness,” DeLaere told the media.
As AI models grow larger, power efficiency becomes a central challenge for data centers. DeLaere said NVIDIA is working “from chip to grid” to improve energy usage and scalability, working closely with power and cooling vendors to maximize performance per watt.
One example is the move to 800-volt DC power delivery, which reduces heat loss and increases efficiency. The company has also introduced power-smoothing technology to dampen spikes on the power grid, an approach that can cut peak power demand by up to 30% and fit more computing power within the same footprint.
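The efficiency argument for higher-voltage distribution follows from basic circuit arithmetic: for a fixed power draw, raising the voltage lowers the current, and resistive losses fall with the square of the current. The sketch below illustrates the scaling with hypothetical numbers; the rack power, cable resistance and the voltage levels chosen are illustrative assumptions, not NVIDIA figures.

```python
# Why higher-voltage DC distribution cuts resistive losses: P_loss = I^2 * R.
# All numbers below (rack power, cable resistance, voltage levels) are
# hypothetical, chosen only to show the scaling; they are not NVIDIA figures.

def distribution_loss(power_w: float, voltage_v: float, resistance_ohm: float) -> float:
    """Resistive loss in the distribution path for a given load."""
    current_a = power_w / voltage_v      # I = P / V
    return current_a ** 2 * resistance_ohm

RACK_POWER_W = 100_000        # hypothetical ~100 kW rack
CABLE_RESISTANCE_OHM = 0.002  # hypothetical 2-milliohm distribution path

for volts in (54, 415, 800):
    loss_w = distribution_loss(RACK_POWER_W, volts, CABLE_RESISTANCE_OHM)
    print(f"{volts:>4} V: current {RACK_POWER_W / volts:7.1f} A, "
          f"resistive loss {loss_w / 1000:6.2f} kW")
```

Halving the current quarters the I²R loss, which is why higher distribution voltages pay off quickly once rack power reaches hundreds of kilowatts.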
Scale up, scale out, scale across
NVIDIA’s MGX system also plays a role in how data centers expand. Gilad Shainer, the company’s senior vice president of networking, told the media that MGX racks host both computing and switching components and support NVLink for scale-up connectivity and Spectrum-X Ethernet for scale-out growth.
He added that MGX can connect multiple AI data centers as an integrated system, which is necessary for companies like Meta to support large-scale distributed AI training operations. Depending on distance, sites can be linked via dark fiber or additional MGX-based switches to enable high-speed connectivity between regions.
Meta’s adoption of Spectrum-X reflects the growing importance of open networking. Shainer said the company uses FBOSS as its network operating system, and noted that Spectrum-X also supports several other network operating systems through partnerships, including Cumulus, SONiC and Cisco’s NOS. This flexibility allows hyperscalers and enterprises to standardize their infrastructure on the system that best fits their environment.
Expanding the AI ecosystem
NVIDIA sees Spectrum-X as a way to make AI infrastructure more efficient and accessible at scale. Shainer said the Ethernet platform is specifically designed for AI workloads such as training and inference, offering up to 95% effective bandwidth, significantly exceeding traditional Ethernet.
He added that NVIDIA’s partnerships with companies like Cisco, xAI, Meta, and Oracle Cloud Infrastructure are helping bring Spectrum-X to a broader range of environments, from hyperscalers to enterprises.
Preparing for Vera Rubin and beyond
DeLaere said NVIDIA’s next-generation Vera Rubin architecture will be commercially available in late 2026, with Rubin CPX products expected by the end of that year. Both work with Spectrum-X networking and MGX systems to support the next generation of AI factories.
He also revealed that Spectrum-X and XGS share the same core hardware but use different algorithms tuned for distance: Spectrum-X handles communication within a data center, while XGS handles communication between data centers. This approach minimizes latency and allows multiple sites to work together as a single large AI supercomputer.
Collaboration across the power chain
To support the migration to 800-volt DC, NVIDIA is working with partners from the chip level to the grid. The company collaborates with Onsemi and Infineon on power components, Delta, Flex and Lite-On at the rack level, and Schneider Electric and Siemens on data center design. A technical whitepaper detailing this approach will be presented at the OCP Summit.
DeLaere described this as a “holistic design from silicon to power delivery” that ensures all systems work together seamlessly in the dense AI environments run by companies like Meta and Oracle.
Performance benefits for hyperscalers
Spectrum-X Ethernet was purpose-built for distributed computing and AI workloads. Shainer said it uses adaptive routing and telemetry-based congestion control to eliminate network hotspots and deliver stable performance. These features speed up training and inference, and allow multiple workloads to run simultaneously without interference.
He added that Spectrum-X is the only Ethernet technology proven to scale at extreme levels, helping organizations achieve the best performance and recoup their GPU investments. For hyperscalers like Meta, that scalability helps manage growing AI training demands and maintain infrastructure efficiency.
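As a rough illustration of how telemetry-based adaptive routing differs from static hashing, the sketch below picks the least-loaded of several equal-cost paths using per-path queue telemetry. It is a generic, simplified model, not Spectrum-X’s actual algorithm, whose details NVIDIA has not published; the path names and the queue-depth telemetry are hypothetical.

```python
# Generic illustration of telemetry-driven adaptive routing across equal-cost
# paths. This is NOT Spectrum-X's actual algorithm; path names and the
# queue-depth telemetry model are hypothetical simplifications.
import random

class Path:
    def __init__(self, name: str):
        self.name = name
        self.queue_depth = 0   # telemetry: queue occupancy reported by the switch

def pick_path(paths: list[Path]) -> Path:
    """Send the next flowlet on the least-congested path.
    Static ECMP hashing would instead pin an entire flow to one path,
    which is how large AI flows collide and create hotspots."""
    return min(paths, key=lambda p: p.queue_depth)

random.seed(1)
paths = [Path(f"spine-{i}") for i in range(4)]
for _ in range(12):
    chosen = pick_path(paths)          # route based on live telemetry
    chosen.queue_depth += 1            # the new flowlet adds load
    drained = random.choice(paths)     # queues drain as traffic completes
    drained.queue_depth = max(0, drained.queue_depth - 1)

print({p.name: p.queue_depth for p in paths})
```

Because each routing decision reacts to current congestion rather than a fixed hash, bursts spread evenly across the fabric instead of piling onto a single link.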
Interaction of hardware and software
Although NVIDIA often focuses on hardware, DeLaere said software optimization is just as important. The company continues to improve performance through co-design, which coordinates hardware and software development to maximize the efficiency of AI systems.
NVIDIA is investing in optimizations such as FP4 kernels, frameworks like Dynamo and TensorRT-LLM, and techniques such as speculative decoding to improve the throughput and performance of AI models. He said these updates ensure that systems like Blackwell continue to deliver better results over time for hyperscalers like Meta, which rely on consistent AI performance.
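Speculative decoding, one of the techniques mentioned above, pairs a cheap draft model with the expensive target model: the draft proposes several tokens at once and the target verifies them, so several tokens can be accepted for roughly the cost of one full-model step. The toy sketch below shows only the accept/reject logic; it is a generic illustration with made-up stand-in models, not the TensorRT-LLM or Dynamo implementation, and a real system verifies all drafted tokens in a single batched forward pass.

```python
# Toy sketch of speculative decoding: a cheap draft model proposes tokens and
# the expensive target model verifies them. Generic illustration only; both
# "models" here are stand-ins, not any real inference stack.
import random

random.seed(0)
VOCAB = list("abcde")

def draft_model(prefix: list[str], k: int) -> list[str]:
    """Cheap model: quickly propose k candidate tokens."""
    return [random.choice(VOCAB) for _ in range(k)]

def target_model(prefix: list[str]) -> str:
    """Expensive model: the token it would actually emit next (deterministic toy)."""
    return VOCAB[sum(map(ord, prefix)) % len(VOCAB)]

def speculative_step(prefix: list[str], k: int = 4) -> list[str]:
    accepted: list[str] = []
    for token in draft_model(prefix, k):
        if token == target_model(prefix + accepted):          # draft guess matches: accept
            accepted.append(token)
        else:                                                  # mismatch: take the target's
            accepted.append(target_model(prefix + accepted))   # token and stop drafting
            break
    return accepted

tokens: list[str] = []
for _ in range(5):
    tokens.extend(speculative_step(tokens))
print("generated:", "".join(tokens))
```

When the draft model’s guesses are usually right, several tokens clear verification per expensive step, raising throughput without changing the target model’s outputs.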
Networking in the era of trillion parameters
The Spectrum-X platform (including Ethernet switches and SuperNICs) is NVIDIA’s first Ethernet system purpose-built for AI workloads. It is designed to link millions of GPUs efficiently while maintaining predictable performance across AI data centers.
Spectrum-X uses congestion control technology to achieve up to 95 percent data throughput, significantly outperforming standard Ethernet, which typically only reaches about 60 percent due to flow collisions. The company’s XGS technology also supports long-range AI data center links, connecting facilities across regions into an integrated “AI super factory.”
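To put those percentages in perspective, a quick back-of-the-envelope calculation shows how much the jump from roughly 60% to 95% effective bandwidth shortens the time to move a fixed amount of training traffic. The link speed and data volume below are hypothetical assumptions; only the efficiency figures come from the article.

```python
# Back-of-the-envelope effect of effective bandwidth on collective traffic.
# The 95% and ~60% efficiency figures come from the article; the 800 Gb/s link
# speed and 100 GB per-step data volume are hypothetical assumptions.
LINK_GBPS = 800      # assumed per-GPU link speed
DATA_GB = 100        # assumed data exchanged per training step

for label, efficiency in (("standard Ethernet", 0.60), ("Spectrum-X", 0.95)):
    effective_gbps = LINK_GBPS * efficiency
    seconds = DATA_GB * 8 / effective_gbps   # GB -> Gb, then divide by rate
    print(f"{label:17s}: {effective_gbps:5.0f} Gb/s effective, "
          f"{seconds:.2f} s to move {DATA_GB} GB")
```

On these assumed numbers, the same transfer finishes in about 1.05 s instead of 1.67 s, roughly 37% less time that GPUs spend waiting on the network.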
Spectrum-X brings together NVIDIA’s full stack (GPU, CPU, NVLink, software) to deliver the consistent performance needed to support the next wave of trillion-parameter models and generative AI workloads.
(Photo provided by NVIDIA)
SEE ALSO: OpenAI and Nvidia plan $100 billion chip deal for the future of AI


