AI is progressing fast. In the past few months alone we have seen new and larger models giving rise to impressive new capabilities. Governments and policymakers have rightly taken a keen interest, and we welcome their proactive steps to understand and manage the arrival of this new wave of powerful AI technologies.
In July we signed up to the White House voluntary commitments, and today we welcome President Biden’s Executive Order on AI. We are attending the UK’s AI Safety Summit, and we support its efforts to establish an “IPCC-style” body focused on building scientific consensus around the current and future capabilities of frontier AI models. Our CEO, Mustafa Suleyman, outlined the case for this approach in a recent Financial Times article, and made a similar argument in Foreign Affairs in August.
Today the G7 Hiroshima Process on Generative Artificial Intelligence, a major international collaboration to develop standards for safe, secure, ethical AI, reaches a critical milestone. Japan has used its presidency of the G7 this year to instigate a wide-ranging review of what commitments are needed for this to happen.
We at Inflection, alongside G7 governments and a group of leading AI companies, support the International Code of Conduct for Organizations Developing Advanced AI Systems that has emerged from the process. Crucially, it builds on existing commitments, like those made at the White House in July, extending them beyond the US to apply globally and harmonizing standards into a clear, actionable set of international principles.
The commitments involve evaluating the safety of models both before and after launch, making clear that identifying and mitigating risks is a task for every stage of the AI lifecycle. And that is only the beginning of what the Code of Conduct specifies. It extends to everything from mechanisms for reporting harms and vulnerabilities to security measures that prevent sensitive material or powerful systems from leaking or being stolen.
It requires companies to draft and disclose policies on AI governance and risk management, including in areas like privacy. It commits us to establishing and upholding the highest standards across the board while also seeking to use AI for good, addressing our most urgent challenges. You can read more about Inflection’s safety work in the reports we and other technology companies published ahead of the AI Safety Summit.
There is much further to go, but efforts like the Hiroshima Process show how consensus can be reached when governments, companies, academics, and civil society come together, working with urgency and purpose. The Code of Conduct offers a crucial foundation: a set of clear commitments, each contributing to a better, safer rollout of AI.
We look forward to doing our part in making it happen.
For the record, the Code of Conduct calls on organizations to take the following actions, all of which we support: