We welcome the G7 Hiroshima Code of Conduct for developing advanced AI systems

AI is progressing fast. In the past few months alone we have seen new and larger models giving rise to impressive new capabilities. Governments and policymakers have rightly taken a keen interest, and we welcome their proactive steps to understand and manage the arrival of this new wave of powerful AI technologies.

In July we signed up to the White House voluntary commitments, and today we welcome President Biden’s Executive Order on AI. We are attending the UK’s AI Safety Summit and support its efforts to establish an “IPCC-style” body focused on building scientific consensus around the current and future capabilities of frontier AI models. Our CEO, Mustafa Suleyman, recently made the case for this approach in the Financial Times, as he did in Foreign Affairs magazine in August.

Today the G7 Hiroshima Process on Generative Artificial Intelligence, a major international collaboration to develop standards for safe, secure, and ethical AI, reaches a critical milestone. Japan has used its presidency of the G7 this year to instigate a wide-ranging review of the commitments needed to make that a reality.

We at Inflection, alongside G7 governments and a group of leading AI companies, support the International Code of Conduct for Organizations Developing Advanced AI Systems that has emerged from the process. Crucially, it builds on existing commitments, like those made at the White House in July, extending them beyond the US to operate globally and harmonizing standards into a clear set of actionable international principles.

The principles require evaluating the safety of models both before and after launch, making clear that identifying and mitigating risks is a task for every stage of the AI lifecycle. And that is only the beginning of what the Code of Conduct specifies: it extends from mechanisms for sharing and reporting harms and vulnerabilities to security controls that prevent sensitive material or powerful systems from leaking or being stolen.

It requires companies to draft and disclose policies on AI governance and risk management, including in areas like privacy. It commits us to establishing and upholding the highest standards across the board while also seeking to use AI for good, addressing our most urgent challenges. You can read more about Inflection’s safety work in the reports published, alongside those of other technology companies, ahead of the AI Safety Summit.

There’s much further to go, but efforts like the Hiroshima Process show how consensus can be reached when governments, companies, academics, and civil society come together, working with urgency and purpose. The Code offers a crucial foundation: a set of clear commitments that each contribute to a better, safer rollout of AI.

We look forward to doing our part in making it happen.

For the record, the Code of Conduct calls on organizations to take the following actions, all of which we support:

  • Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle.

  • Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment, including placement on the market.

  • Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to help ensure sufficient transparency, thereby contributing to increased accountability.

  • Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems, including with industry, governments, civil society, and academia.

  • Develop, implement, and disclose AI governance and risk management policies, grounded in a risk-based approach, including privacy policies and mitigation measures.

  • Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  • Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  • Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  • Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  • Advance the development of and, where appropriate, adoption of international technical standards.

  • Implement appropriate data input measures and protections for personal data and intellectual property.