- First legally binding international treaty on AI
- Covers the entire AI lifecycle with a risk-based approach
- Protects human rights, democracy and the rule of law
- Applies to public authorities and private actors
This is a big deal: the world's first legally binding international treaty on AI. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law creates baseline legal obligations that countries must enforce, covering both public and private sector AI use. Unlike voluntary guidelines or principles, this has teeth. If your country ratifies it, these become real legal requirements.
What's particularly significant is who's involved: not just European countries. The United States, United Kingdom and Israel were among the first signatories, and non-European democracies including Japan and Canada participated in the drafting. That's a strong signal that democracies globally are aligning on AI governance expectations centred on human rights, democracy and the rule of law. It's the international community saying 'here's the minimum standard for rights-respecting AI'.
For Australian businesses, this matters even though Australia hasn't signed yet. Many of your international clients, partners and competitors are operating under these obligations. If you're working with European or North American organisations, they'll increasingly expect you to meet these standards as part of supply chain due diligence. The treaty reinforces principles we already recognise—transparency, accountability, human oversight—but elevates them from 'best practice' to legal obligation in major markets. It's worth understanding what's in it, because these obligations will flow through contractual requirements even if Australia never formally joins.