Algorithmic Accountability
The obligation to explain and be answerable for the outcomes and impacts of algorithmic decision-making systems, including their design, deployment and ongoing use.
In Plain Language
If an AI makes a bad decision, such as wrongly denying someone a loan, someone needs to be responsible for it. This means there's always a person or team who can explain why the AI did what it did.
Why This Matters
Regulators and the public increasingly demand that organisations be able to explain and justify AI decisions. Establishing clear accountability structures is essential for compliance, particularly under frameworks like the EU AI Act and Australia's proposed AI regulations.
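In practice, being answerable for an algorithmic decision usually starts with recording each decision alongside a human-readable rationale and a named owner who can be held responsible. A minimal sketch of such an audit record, where every name (`DecisionRecord`, `log_decision`, and the fields) is illustrative rather than drawn from any specific regulation or library:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One automated decision, captured so it can later be explained.

    Hypothetical schema for illustration only.
    """
    model_version: str       # which model produced the outcome
    inputs: dict             # the data the decision was based on
    outcome: str             # what the system decided
    rationale: str           # human-readable explanation
    accountable_owner: str   # person or team answerable for this system
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_decision(record: DecisionRecord, audit_log: list) -> DecisionRecord:
    """Append the record so the decision can be reviewed and justified later."""
    audit_log.append(record)
    return record

audit_log = []
record = log_decision(
    DecisionRecord(
        model_version="credit-model-2.3",
        inputs={"income": 52000, "credit_history_years": 4},
        outcome="declined",
        rationale="Credit history below the 5-year policy threshold.",
        accountable_owner="Lending Risk Team",
    ),
    audit_log,
)
```

The key design point is that the rationale and owner are recorded at decision time, not reconstructed after a complaint arrives.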