
NIST AI RMF, ISO 42001, the EU AI Act and OWASP controls are not compliance programmes to run alongside your architecture. They are architecture requirements in disguise. This post maps each framework directly to the codified EA artefacts and enforcement mechanisms that make AI governance real rather than theoretical.

Most organisations approach AI governance frameworks the way they approach compliance programmes: assign someone to own it, run a gap analysis, produce a report and present it to the board. The framework sits on the shelf. The AI systems run in production. The two rarely connect.
This is the fundamental misunderstanding that keeps AI governance theoretical. NIST AI RMF, ISO 42001, the EU AI Act and OWASP controls are not external standards to be documented and filed. They are requirements that can only be satisfied through architecture: codified artefacts, runtime enforcement and the change governance processes we have been examining across this series. When you treat them as architecture requirements, they become achievable. When you treat them as compliance exercises, they become expensive and ineffective.
The natural organisational response to AI governance is to create a committee. An AI Ethics Committee, an AI Steering Group, a Centre of Excellence. These bodies have genuine value for strategic direction and policy development. But they cannot govern AI behaviour at runtime. A committee that meets monthly cannot prevent an AI agent from taking an out-of-scope action on a Tuesday afternoon. A policy that lives in a document cannot stop a model from accessing data it should not access.
Governance that lives only in committees and documents is governance theatre. It satisfies the visible aspects of compliance (e.g. board reporting, policy documentation, framework attestation) while leaving the actual AI systems largely ungoverned at the point of operation.
Real AI governance is embedded in architecture: codified in artefacts, enforced at runtime and surfaced through change workflows when boundaries are tested. The governance frameworks examined in this post all presuppose this. The NIST AI RMF's Govern function is not a committee charter; it is a set of organisational practices and technical controls that must be operational. ISO 42001's management system requirements are not attestable through documentation alone; they require evidence that controls are operating. The EU AI Act's requirements for high-risk AI systems are not satisfied by a risk register; they require demonstrable technical measures.
The NIST AI Risk Management Framework organises AI governance into four functions: Govern, Map, Measure and Manage. Each function maps to specific architectural components.
Govern establishes the organisational context for AI risk management: policies, roles, responsibilities and accountability structures. In architectural terms, this is the governance and standards layer of the AI platform: the codified policy objects, the defined approval workflows and the ownership assignments that make governance operational rather than nominal. Govern does not just define the rules. It establishes the mechanisms through which rules are enforced and exceptions are handled.
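To make "codified policy object" concrete, here is a minimal sketch of what such an artefact might look like. Everything here is illustrative: the field names, the example values and the workflow identifier are assumptions, not a reference to any particular platform.

```python
from dataclasses import dataclass

@dataclass
class PolicyObject:
    """A governance rule as a versioned, ownable artefact rather than a
    paragraph in a document. All field names here are illustrative."""
    policy_id: str
    statement: str            # the rule, in plain language
    owner: str                # an accountable role, not a committee
    enforcement_point: str    # where the rule is checked at runtime
    exception_workflow: str   # workflow invoked when a deviation is requested
    version: int = 1

# A rule that is enforceable, owned and versioned (example values invented):
no_pii_to_external_models = PolicyObject(
    policy_id="POL-017",
    statement="No personally identifiable information may be sent to externally hosted models.",
    owner="Head of Data Governance",
    enforcement_point="gateway.outbound_request_filter",
    exception_workflow="deviation-request-standard",
)
```

The point of the structure is not the code itself but the properties it forces: an owner, an enforcement point and a defined route for exceptions, none of which a policy PDF provides.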
Map identifies and categorises AI risks in context: what the AI system does, who it affects, what could go wrong and how the risk compares to the organisation's risk appetite. In architectural terms, this is the AI use case register, a structured record of every AI deployment with its risk classification, data dependencies, model provenance and applicable controls. When a new use case is registered, the Map function is the intake assessment that classifies it before it is approved to proceed.
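A minimal sketch of one register entry, assuming a structure along the lines described above; the field names are illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class UseCaseEntry:
    """One entry in the AI use case register; field names are illustrative."""
    use_case_id: str
    description: str
    risk_class: str                # e.g. "high", "limited", "minimal"
    data_dependencies: list[str]   # datasets the system reads or writes
    model_provenance: str          # model source and version lineage
    applicable_controls: list[str] = field(default_factory=list)  # control catalogue IDs
    approved: bool = False         # set by the intake workflow, not by the delivery team
```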
Measure analyses and tracks AI risks over time. In architectural terms, this is the monitoring and audit layer: the evaluation metrics, drift detection, cost telemetry and audit trail that provide the evidence base for risk assessment. Measure is not a point-in-time exercise. It is continuous instrumentation that generates the data the Manage function acts on.
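As an illustration of continuous instrumentation feeding the Manage function, here is a deliberately simplified drift signal. Production drift detection would use proper statistical tests; the threshold and the shape of the metric history are assumptions for the sketch.

```python
def check_drift(metric_history: list[float], threshold: float = 0.15) -> dict:
    """Flag when the latest evaluation score deviates from the trailing
    baseline by more than `threshold` (relative). A simplified signal;
    real drift detection would use proper statistical tests."""
    if len(metric_history) < 2:
        raise ValueError("Need at least two observations to assess drift")
    baseline = sum(metric_history[:-1]) / len(metric_history[:-1])
    latest = metric_history[-1]
    drifted = abs(latest - baseline) > threshold * abs(baseline)
    return {"baseline": round(baseline, 3), "latest": latest, "drifted": drifted}

# check_drift([0.91, 0.90, 0.92, 0.74]) -> {"baseline": 0.91, "latest": 0.74, "drifted": True}
# A drifted=True result feeds the Manage workflow, not a dashboard no one reads.
```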
Manage responds to identified risks: prioritising treatments, implementing controls and tracking residual risk. In architectural terms, this is the change governance workflow: the structured process through which boundary violations, deviation requests and risk findings are assessed, decided and resolved. The NIST AI RMF's March 2025 update strengthened the connection between model provenance and the Manage function, requiring organisations to demonstrate that the lineage of models used in production is documented and that changes to those models are governed.
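One way to codify that workflow is as an explicit state machine, so that a finding cannot be closed informally. This is a sketch under assumed state names, not a prescribed process.

```python
from enum import Enum

class FindingState(Enum):
    RAISED = "raised"        # boundary violation or deviation request logged
    ASSESSED = "assessed"    # risk owner has classified the impact
    DECIDED = "decided"      # approved, rejected or escalated
    RESOLVED = "resolved"    # control updated or exception expired

ALLOWED_TRANSITIONS = {
    FindingState.RAISED: {FindingState.ASSESSED},
    FindingState.ASSESSED: {FindingState.DECIDED},
    FindingState.DECIDED: {FindingState.RESOLVED},
    FindingState.RESOLVED: set(),
}

def advance(current: FindingState, target: FindingState) -> FindingState:
    """Refuse transitions the workflow does not allow, so every finding
    leaves an assessable trail rather than being closed by a side channel."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move {current.value} -> {target.value}")
    return target
```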
ISO/IEC 42001:2023 is the international standard for AI management systems. It provides a Plan-Do-Check-Act structure for establishing, implementing, maintaining and continually improving an organisation's approach to AI governance. For EA functions, it is most usefully understood as a codification requirement: the controls ISO 42001 requires must exist as operational, evidenceable mechanisms, not as documented intentions.
The standard's requirements for risk assessment, impact assessment for AI-affected parties and third-party AI supply chain governance all translate directly into EA artefacts. Risk assessments are not spreadsheets completed annually; they are structured objects updated when new use cases are registered or when existing deployments change. Impact assessments for high-risk AI systems are not consultant reports; they are codified artefacts produced through the intake process, linked to the relevant governance approvals and updated when the system or its operating context changes materially.
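A hedged sketch of what "structured object, updated on change" might mean in practice: the assessment is stale the moment a material property of the deployment changes, with the annual cycle as a backstop rather than the trigger. The set of material changes here is an illustrative assumption.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative: deployment properties whose change makes an assessment
# stale immediately, rather than at the next annual review.
MATERIAL_CHANGES = {"model_version", "data_source", "operating_context"}

@dataclass
class RiskAssessment:
    use_case_id: str
    residual_risk: str             # e.g. "low", "medium", "high"
    last_reviewed: date
    linked_approvals: list[str]    # governance approvals this assessment supports

def needs_review(assessment: RiskAssessment, changed_fields: set[str]) -> bool:
    """Stale on material change, with an annual review as the backstop."""
    overdue = (date.today() - assessment.last_reviewed).days > 365
    return overdue or bool(changed_fields & MATERIAL_CHANGES)
```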
ISO 42001 certification is increasingly sought by organisations that need to demonstrate AI governance maturity to regulators, enterprise customers and boards. The path to certification is considerably shorter for organisations that have already built the codified artefact and enforcement architecture described in this series. The certification audit is largely an evidence-collection exercise, and when the artefacts exist as operational objects rather than documents, the evidence is already there.
The EU AI Act classifies AI systems by risk level: unacceptable risk (prohibited), high risk (requiring conformity assessment, technical documentation and human oversight), limited risk (transparency requirements) and minimal risk (no specific requirements). For Australian organisations, the Act is directly relevant if they operate in, sell to or process data from EU markets and it is increasingly influential on regulatory expectations globally, including in Australia's own AI policy development.
The high-risk classification is the one with the most significant architectural implications. High-risk AI systems, which include systems used in employment, credit, education, critical infrastructure and several other domains, must meet requirements that are fundamentally architectural: technical documentation of the system's purpose, capabilities and limitations; data governance measures ensuring training and operational data is relevant, representative and bias-mitigated; logging and audit trail requirements that enable post-hoc review of system decisions; and human oversight measures that enable overriding or stopping the system.
Each of these requirements maps to a codified artefact in the governance layer. The technical documentation requirement is satisfied by the use case register entry and its associated architecture artefacts. The data governance requirement is satisfied by the AI-readiness data standard and its lineage and permissions objects. The logging requirement is satisfied by the monitoring and audit layer's tracing and retention configuration. The human oversight requirement is satisfied by the boundary artefact's human-in-the-loop checkpoint definitions.
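That mapping can itself be codified, which is what makes gaps detectable in advance rather than discoverable during an audit. The identifiers below are hypothetical stand-ins for the artefacts named above.

```python
# Hypothetical identifiers mapping each EU AI Act high-risk requirement
# to the artefact that satisfies it.
HIGH_RISK_REQUIREMENT_MAP = {
    "technical_documentation": "use_case_register_entry",
    "data_governance": "data_standard_lineage_and_permissions",
    "logging_and_audit": "monitoring_layer_tracing_config",
    "human_oversight": "boundary_artefact_hitl_checkpoints",
}

def unmet_requirements(built_artefacts: set[str]) -> list[str]:
    """Requirements whose implementing artefact has not been built yet."""
    return [requirement
            for requirement, artefact in HIGH_RISK_REQUIREMENT_MAP.items()
            if artefact not in built_artefacts]
```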
When organisations treat the EU AI Act as an architecture requirement, they discover that compliance is a natural consequence of building AI systems on the foundations described in this series. When they treat it as a separate compliance programme, they discover that retrofitting those foundations to existing deployments is expensive and disruptive.
The OWASP LLM Top 10 and the OWASP Agentic AI Top 10 (examined in detail in Part 4) provide the security-specific risk taxonomy that complements the governance frameworks above. For enterprise architects, the key insight is that the architectural mitigations for OWASP risks are the same artefacts and enforcement mechanisms that satisfy NIST, ISO and EU AI Act requirements. There is not a separate security architecture and a separate governance architecture. They are the same architecture, serving multiple frameworks simultaneously.
Prompt injection mitigations (OWASP LLM01, OWASP Agentic goal hijacking) are enforced through the evaluation and safety layer, the same layer that provides the Measure function in NIST AI RMF. Excessive agency mitigations (OWASP LLM08) are enforced through the agent boundary artefact, the same artefact that satisfies the human oversight requirement of the EU AI Act's high-risk provisions. Sensitive information disclosure mitigations (OWASP LLM02) are enforced through the data access policy objects, the same objects that satisfy ISO 42001's data governance requirements.
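To make the "same artefact, multiple frameworks" point concrete, here is a minimal sketch of a boundary check on an agent's tool calls. The tool names and decision strings are invented for illustration.

```python
# Invented tool names and decision strings, for illustration only.
ALLOWED_TOOLS = {"search_knowledge_base", "draft_reply"}   # the agent's approved scope
HITL_REQUIRED = {"send_external_email"}                    # allowed only with human sign-off

def authorise(tool_call: str) -> str:
    """One boundary check serving two frameworks at once: it limits
    excessive agency (OWASP LLM08) and implements a human oversight
    measure of the kind the EU AI Act expects for high-risk systems."""
    if tool_call in ALLOWED_TOOLS:
        return "allow"
    if tool_call in HITL_REQUIRED:
        return "pause_for_human_approval"   # the HITL checkpoint from the boundary artefact
    return "deny_and_log"                   # denials are surfaced through the change workflow
```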
The practical tool that ties these frameworks together is an AI control catalogue: a structured register of every control required by the applicable governance frameworks, the architecture component that implements each control, the artefact that codifies it, the evidence that demonstrates it is operating and the owner responsible for its maintenance.
An AI control catalogue does three things. It makes the coverage of governance frameworks visible: you can see at a glance which controls are implemented, which are partially implemented and which are gaps. It makes accountability clear: every control has an owner who is responsible for the artefact and the evidence. And it makes audit straightforward: when a regulator, certifier or board asks for evidence that a specific governance requirement is met, the control catalogue points to the artefact and the evidence trail.
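A minimal sketch of a catalogue entry and the coverage view it enables, with illustrative field names; a real catalogue would carry richer evidence pointers, review dates and framework cross-references.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Control:
    """One row of the AI control catalogue; field names are illustrative."""
    control_id: str
    frameworks: list[str]      # e.g. ["NIST AI RMF: Manage", "EU AI Act: human oversight"]
    component: str             # the architecture component implementing the control
    artefact: str              # the codified artefact
    evidence: Optional[str]    # pointer to operating evidence; None marks a gap
    owner: str                 # accountable for the artefact and the evidence

def coverage_report(catalogue: list[Control]) -> dict[str, int]:
    """The at-a-glance view: how many controls are evidenced, how many are gaps."""
    implemented = sum(1 for control in catalogue if control.evidence)
    return {"implemented": implemented, "gaps": len(catalogue) - implemented}
```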
For Australian organisations pursuing ISO 42001 certification or preparing for regulatory scrutiny under APRA's AI-related guidance, the control catalogue is the bridge between the governance frameworks they need to satisfy and the architecture that satisfies them.
We have worked with organisations that approached AI governance as a compliance programme and organisations that approached it as architecture. The difference in outcomes is stark. Compliance programme organisations produce documentation that passes initial review but fails to prevent the incidents it was designed to prevent. Architecture organisations produce systems that govern AI behaviour at runtime, generate the evidence that auditors and regulators require and adapt to new frameworks and requirements through governed changes to codified artefacts rather than one-off documentation exercises.
The frameworks examined in this post (NIST AI RMF, ISO 42001, the EU AI Act and OWASP) are not in competition with each other. They are complementary lenses on the same underlying requirement: AI systems must be governed at the point of operation, with codified controls, continuous monitoring and structured human oversight. Treat them as architecture requirements, and satisfying multiple frameworks simultaneously becomes a natural outcome of building AI systems the right way.
In Part 6, Mark Miller examines the EA operating model that makes everything in this series sustainable at scale: moving from Architecture Review Board-centric governance to EA as a product team maintaining codified "golden paths." We will cover the three operating model patterns (centralised, federated and hybrid), how to measure EA's contribution to AI outcomes and the metrics that tell you whether your governance architecture is working.
AI Governance Foundations is the structured engagement for organisations building their first codified AI governance architecture. It establishes the control catalogue, maps applicable frameworks to architecture artefacts and produces the initial set of codified governance objects that the AI platform's governance layer requires.
AI Governance Maturity Uplift is for organisations that have existing governance documentation but need to operationalise it: moving from policies and principles that live in documents to controls that are codified as artefacts, enforced at runtime and evidenceable on demand. The engagement directly addresses the gap between governance theatre and governance that works.
