A G7-led initiative comprising the International Guiding Principles for AI (building on the OECD AI Principles) and the International Code of Conduct for Organizations Developing Advanced AI Systems. The voluntary Code covers risk assessments and mitigations, transparency, privacy and security, and governance, and addresses foundation models and generative AI.

The G7 Hiroshima AI Process marked a turning point in international AI cooperation. When the world's major democracies and largest economies agree on principles and practical guidance for advanced AI, it carries real weight. This wasn't just another principles document: alongside the Guiding Principles, it included a detailed Code of Conduct specifically for organisations developing frontier AI systems, the kind that can have significant societal impacts.
The Code covers the critical areas: comprehensive risk assessment and mitigation, robust security controls, transparency about capabilities and limitations, privacy and data protection, governance processes, and incident reporting and information sharing. It is specifically designed for foundation models and generative AI systems, recognising that these technologies pose challenges that narrow AI applications do not.
For you, this matters in two ways. First, if you're developing advanced AI systems, the Code represents what major governments expect from responsible AI developers. Second, if you're procuring or deploying frontier AI systems from vendors, you should expect those vendors to follow it. The Code is becoming the baseline for what 'responsible AI development' means in the context of powerful general-purpose models.

That the G7 and the EU agreed on this shows genuine international alignment on safety for advanced AI systems, despite other geopolitical tensions. This is the direction of travel, and organisations developing or deploying cutting-edge AI need to understand these expectations.