Inflection

Announcing our collaboration with NVIDIA and CoreWeave on MLPerf

Palo Alto, CA – June 27, 2023

Along with our partners, Inflection AI is building one of the largest computing clusters in the world, today comprising thousands of NVIDIA H100 Tensor Core GPUs.

We’re excited to announce that this cluster has delivered state-of-the-art performance on the open-source MLPerf benchmark, completing the reference training task in just 11 minutes.

In a joint submission with CoreWeave and NVIDIA, the Inflection AI cluster, which today comprises over 3,500 NVIDIA H100 Tensor Core GPUs, was shown to be the fastest at training large language models on this benchmark. We plan to expand this computing infrastructure dramatically over the next few months.

We worked closely with NVIDIA and our partner CoreWeave to run the MLPerf tests and to tune and optimize the cluster. MLPerf is the industry-standard benchmark for both model training and inference, providing fair and useful insights into workloads that represent the state of the art in AI.

This follows our unveiling of Inflection-1, our in-house LLM, as the best model in its compute class, outperforming GPT-3.5, LLaMA, Chinchilla, and PaLM-540B on a wide range of benchmarks commonly used to compare LLMs. Inflection-1 enables our users to interact with Pi, our first personal AI, in a simple, natural way and to receive fast, relevant, and helpful information and advice. This means anyone can experience the power of a personal AI today.

At Inflection, we are deeply proud of these achievements, having started the company just over a year ago. We expect to have further milestones to announce in the coming weeks as we continue to deliver on our mission to build the most capable and safe AI products and make them accessible to millions of users.