Meta Unveils Plans for AI Training Infrastructure with Nvidia H100 GPUs

Miguel Ángel Delgado Escobar

Meta has shared details of its AI training infrastructure, revealing that it currently relies on nearly 50,000 Nvidia H100 GPUs to train its open-source Llama 3 large language model. By the end of 2024, Meta expects to have deployed over 350,000 Nvidia H100 GPUs, supplemented by additional compute from other sources. The expansion aligns with Meta's push to broaden its AI capabilities by building its own AI chips, such as Artemis, and producing custom RISC-V silicon. Meta is also optimizing storage for AI training, developing a Linux Filesystem in Userspace (FUSE) API backed by a version of its 'Tectonic' distributed storage system. These disclosures offer a clear view of Meta's roadmap for AI training and for hardware development more broadly.
