AI is coming. Language models will transform our everyday lives. Society will be irrevocably changed by the greatest proliferation of intelligence and raw power in history.
At some point over the next decade, without real guardrails and new technical advances, an AI could quickly get better than us at every conceivable cognitive task. The great challenge before us is to contain the power of systems like this, so that they always remain under meaningful human control and always operate within the boundaries we set.
Speaking plainly, there's a huge amount of safety work ahead. So far AI safety has been stuck in the space of ideas and meetings. Stuck in bureaucratic wrangling. Stuck in academic journals. Stuck in breathless op-eds and Twitter threads. Tangible progress has been scarce relative to the hype and panic. At Inflection we find this both concerning and frustrating. That's why safety is at the heart of our mission, why we founded the company in the first place, and why we are working flat out to make it happen.
Today we are proud to announce that, working alongside the White House, we are partnering with the other major AI labs (Amazon, Microsoft, Google, OpenAI, Meta, and Anthropic) to take a proactive and precautionary approach to the development of the very largest AI models.
This announcement is a small but positive first step. Let's also be honest: the project of making truly safe and trustworthy AI is still in its earliest phase. Concrete progress on technical safety and societal oversight needs to keep pace. We see this as a springboard and catalyst for doing more.
Today’s commitments include the following:
Challenge and red-team everything. Our internal Safety team continuously pressure-tests our models for risks and works with outside experts on comprehensive red-teaming of our technologies.
Go public, be honest. We maintain a public list of our product's known limitations, and we have a forthcoming paper exploring the safety of our systems in more detail.
Share information. We will share key findings regarding trust and safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards with companies and governments.
A full list of commitments is available here.
We will soon publish further work on safety and announce a number of significant collaborations. One example of where we need to go further and faster is elections. At Inflection, we believe it is vital that AI be kept out of the democratic process. This powerful but still nascent technology has no place in electioneering; we should legislate to ban the use of AIs and chatbots around the ballot box. Next year is an election year in America, and it will be the first in the era of generative AI. Preserving the stability and integrity of our electoral system means starting work right now.
These voluntary measures are the right next step, and in time we hope they pave the way for comprehensive regulation.