Meta Unveils 24k GPU AI Infrastructure Design

Meta recently announced the design of two new AI computing clusters, each containing 24,576 GPUs. The clusters are based on Meta's Grand Teton hardware platform, and one cluster is currently used by Meta for training their next-generation Llama 3 model.

Meta designed the clusters to support their generative AI efforts. The two cluster variants differ in their networking fabric: the Llama 3 cluster uses remote direct memory access (RDMA) over converged Ethernet (RoCE), while the other uses NVIDIA's Quantum-2 InfiniBand. The storage layer is based on Meta's custom-built Tectonic filesystem, which supports the synchronized I/O needed to handle checkpoints from thousands of GPUs. According to Meta,

These two AI training cluster designs are a part of our larger roadmap for the future of AI. By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.
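To illustrate the kind of synchronized checkpoint I/O described above, the following minimal sketch uses the torch.distributed.checkpoint API available in recent PyTorch releases to have every rank participate in a coordinated checkpoint write to a shared filesystem path. The model, optimizer, and path are placeholders, and this is not Meta's internal implementation.

import torch
import torch.distributed as dist
import torch.distributed.checkpoint as dcp

# Illustrative sketch only: the model, optimizer, and shared path below are
# placeholders standing in for a real training job and a store like Tectonic.
dist.init_process_group(backend="nccl")
torch.cuda.set_device(dist.get_rank() % torch.cuda.device_count())

model = torch.nn.Linear(4096, 4096).cuda()
optimizer = torch.optim.AdamW(model.parameters())
state = {"model": model.state_dict(), "optim": optimizer.state_dict()}

# All ranks call save() at the same training step, so the storage layer sees
# a burst of highly synchronized writes.
dcp.save(state, storage_writer=dcp.FileSystemWriter("/mnt/shared/ckpt/step_1000"))

dist.destroy_process_group()

At cluster scale this pattern means thousands of processes hit the storage system at the same moment, which is the workload the Tectonic-backed storage layer is designed to absorb.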

Meta has a history of open-sourcing their hardware platform and rack designs. In 2021, InfoQ covered Meta's ZionEX cluster, and in 2022 the development of the Grand Teton platform and Meta's Open Rack design. As part of that effort, Meta contributed their work to the Open Compute Project, which Meta founded in 2011. In late 2023, Meta and IBM launched the AI Alliance "to support open innovation and open science in AI."

One of the big challenges Meta faced with the new clusters was the difficulty of debugging at that scale. Meta worked with Hammerspace to build interactive debugging tools for their storage system. Meta also worked on a "distributed collective flight recorder" for troubleshooting distributed training.

While developing the new clusters, Meta ran several simulations to predict their inter-node communication performance. However, "out of the box" the clusters did not perform as well as smaller, optimized clusters: bandwidth utilization during benchmarking was highly variable. After tuning job schedulers and optimizing network routing in the cluster, bandwidth utilization consistently exceeded 90%.
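As a rough illustration of how inter-node bandwidth utilization is commonly measured, the sketch below times an NCCL all-reduce with torch.distributed and derives an approximate bus bandwidth. It is a simplified stand-in for dedicated tools such as NVIDIA's nccl-tests, not Meta's benchmark.

import time
import torch
import torch.distributed as dist

# Minimal all-reduce bandwidth probe; launch with e.g.
#   torchrun --nproc_per_node=8 allreduce_bench.py
dist.init_process_group(backend="nccl")
rank = dist.get_rank()
world = dist.get_world_size()
torch.cuda.set_device(rank % torch.cuda.device_count())

x = torch.ones(256 * 1024 * 1024, device="cuda")  # 1 GiB of fp32 per rank

for _ in range(5):  # warm-up
    dist.all_reduce(x)
torch.cuda.synchronize()

iters = 20
start = time.time()
for _ in range(iters):
    dist.all_reduce(x)
torch.cuda.synchronize()
avg = (time.time() - start) / iters

# A ring all-reduce moves roughly 2*(N-1)/N of the buffer per rank.
bus_bw = (x.numel() * 4 / avg) * (2 * (world - 1) / world) / 1e9
if rank == 0:
    print(f"avg all-reduce: {avg * 1e3:.1f} ms, bus bandwidth ~{bus_bw:.1f} GB/s")

dist.destroy_process_group()

Comparing the measured figure against the hardware's theoretical link bandwidth gives the utilization percentage that Meta reports tuning past 90%.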

Meta also tuned their PyTorch-based training stack to better utilize the cluster hardware. For example, the H100 GPUs support 8-bit floating point (FP8) operations, which can be used to accelerate training. Meta also improved their parallelization algorithms and addressed initialization bottlenecks, reducing init time from "sometimes hours down to minutes."
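As an example of the FP8 path that H100 GPUs expose, the sketch below uses NVIDIA's Transformer Engine bindings for PyTorch to run a linear layer's forward and backward pass with FP8 GEMMs. The layer sizes and recipe settings are arbitrary, and this is not Meta's training code.

import torch
import transformer_engine.pytorch as te
from transformer_engine.common.recipe import DelayedScaling, Format

# HYBRID uses E4M3 for activations/weights and E5M2 for gradients.
fp8_recipe = DelayedScaling(fp8_format=Format.HYBRID, amax_history_len=16)

layer = te.Linear(4096, 4096, bias=True).cuda()
inp = torch.randn(8, 4096, device="cuda", dtype=torch.bfloat16, requires_grad=True)

with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = layer(inp)          # GEMM runs on the H100's FP8 tensor cores
out.float().sum().backward()  # gradients also flow through FP8 GEMMs

On the initialization side, one common technique (though the article does not specify that Meta used it) is to construct large models on PyTorch's meta device and materialize weights lazily, which avoids redundant work at job startup.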

In a Hacker News discussion about the Meta clusters, several users lamented that hardware costs can make it difficult to compete in the AI space with "hyper-scale" companies like Meta. AI developer Daniel Han-Chen remarked:

Another way to compete with the big tech incumbents is instead of hardware, try maths and software hacks to level the playing field! Training models is still black magic, so making it faster on the software side can solve the capital cost issue somewhat!

Besides Meta, other AI players have also released details of their large compute clusters. Google recently announced their AI Hypercomputer, based on their new Cloud TPU v5p accelerator hardware. Microsoft Azure's Eagle supercomputer, which contains 14,400 NVIDIA H100 GPUs, recently placed third on the TOP500 list of supercomputers.
