Guideline

OECD AI Principles

First intergovernmental standard on AI. Five values-based principles: inclusive growth, human rights and democratic values, transparency, robustness/security/safety, accountability. Five recommendations for policymakers.

May 2024
OECD.AI

Our take on this

The OECD AI Principles are the foundation stone of global AI governance. When 47 countries agree on something, you know it matters. These principles established the first intergovernmental consensus on how AI should be developed and used, and they've become the template that everyone else builds on. You'll see the same ideas reflected in the EU AI Act, UNESCO's framework, our Australian AI Ethics Principles and just about every other major AI governance initiative.

The five principles are straightforward: AI should foster inclusive growth, respect human rights and democratic values, be transparent and understandable, work robustly and securely, and have clear accountability. They're deliberately high-level because they need to work across different legal systems and cultural contexts. But don't let that fool you—these principles carry real weight in how governments and international bodies think about AI regulation.

For you, these principles matter as the common language of global AI governance. When you're dealing with international partners, investors or regulators, they provide the shared baseline everyone understands. They're particularly important if you work in sectors governed by international standards or have cross-border operations. The OECD regularly updates the principles—the 2024 refresh specifically addressed generative AI—so they remain relevant. While you wouldn't build your entire AI governance program on these alone, they're essential for understanding the global policy environment your business operates in.