Guideline

Australia's AI Ethics Framework

Eight voluntary ethics principles to guide responsible AI. Principles: wellbeing, human-centred values, fairness, privacy, reliability, transparency, contestability, accountability. Developed by CSIRO Data61. Updated 2024.

November 7, 2019
Australian Government - Department of Industry, Science and Resources

Our take on this

This is Australia's starting point for AI governance. Published in 2019, these eight ethics principles—wellbeing, human-centred values, fairness, privacy, reliability, transparency, contestability and accountability—set out what the Australian government believes responsible AI should look like. They're voluntary, they're principles-based, and they've been the foundation for everything Australia has done on AI governance since.

Here's the thing about these principles: they're deliberately broad and non-prescriptive. That's both their strength and their limitation. On one hand, they work across any sector and any type of AI use. On the other, they don't tell you specifically what to do. That's why the Voluntary AI Safety Standard now exists—to provide more concrete guidance on implementation.

For you, these principles matter because they signal government expectations and align with international frameworks (particularly the OECD AI Principles). If you're working with Australian government agencies, they'll expect you to understand and apply these. They're also referenced in various industry codes and guidelines. But here's our advice: don't stop at the principles. Use them as your philosophical foundation, but pair them with more practical frameworks like the NIST AI RMF or ISO/IEC 42001 for actual implementation. Think of these as the 'why' and use other frameworks for the 'how'.