Enterprise Architecture

The New EA Operating Model: From Architecture Review to Enablement at Scale

The ARB-centric EA operating model cannot keep pace with AI delivery. This final post in the series examines how EA must evolve from a gatekeeping function to a product team maintaining codified "golden paths" and the metrics that tell you whether your governance architecture is actually working.

March 15, 2026
9 min read

There is an honest conversation that most EA leaders need to have with themselves. The Architecture Review Board, in its traditional form, was designed for a world where significant technology changes happened a few times a year and weeks of review time were acceptable. In an organisation deploying AI use cases at scale, that model does not just slow things down; it becomes irrelevant. Teams route around it. Shadow AI proliferates. The ARB reviews the deployments that were polite enough to ask for permission, while the ones that moved fastest skip the queue entirely.

This is not a critique of architecture governance. It is a critique of a specific operating model that was never designed for the pace and volume of AI delivery. The principles examined across this series (e.g. codified artefacts, paved roads, runtime enforcement, human-in-the-loop workflows) are the foundation of a different operating model. One where EA's value comes from building the infrastructure that makes safe delivery fast, not from being the gatekeeper that makes it slow.

The Three EA Operating Models

There are three patterns for EA operating models in the AI era, and the right choice depends on the organisation's size, AI maturity and the degree to which AI delivery is centralised or distributed across business units.

Centralised

A centralised EA model maintains all governance artefacts, platform standards and "golden path" patterns in a single function. All AI use cases are assessed against centrally maintained standards. Deviations are reviewed and approved by a central team. This model works well for organisations with a single delivery organisation, a relatively homogeneous technology landscape and a lower volume of AI deployments, where the centralised team is not a bottleneck.

The risk of centralised EA in a high-volume AI environment is exactly what it sounds like: the central team becomes the constraint. Every deviation request, every new use case, every platform change routes through the same group of people. The "golden path" pattern mitigates this significantly: most deployments that follow the path do not require central review. But the central team must be staffed to handle the deviation volume without creating backlogs.

Federated

A federated EA model distributes governance accountability to domain or business unit architects, with a lightweight central function responsible for enterprise-level standards and cross-cutting concerns. Domain architects maintain the codified artefacts relevant to their domains, run the intake and deviation workflows for their area and report into the central function for enterprise-level changes.

Federation scales well and aligns governance accountability with the people who understand each domain's business context. The risk is inconsistency: if federated architects interpret enterprise standards differently, the codified artefacts across domains diverge and the enterprise loses the unified governance layer it needs. A federated model requires strong central standards, a shared artefact repository that all domains contribute to and regular cross-domain review to catch divergence before it becomes technical debt.

Hybrid

Most large organisations operating at meaningful AI scale should move toward a hybrid model: a central EA function responsible for enterprise platform architecture, the governance and standards layer and the "golden path" patterns, with domain or capability architects embedded in delivery teams who operate within the centrally defined boundaries and contribute to the evolution of those boundaries through the change governance process.

The hybrid model is how you get both consistency and speed. The central function ensures the paved roads exist and are well-maintained. The embedded architects help delivery teams use those roads effectively and surface the places where new paths are needed. The deviation workflow connects the two: when an embedded architect encounters a legitimate need to go down a new path, the central function reviews it, makes the governance decision and, if the deviation is approved, considers whether it warrants becoming a new "golden path."

EA as a Product Team

The operating model shift that matters most is not the centralised-versus-federated question. It is the move from EA as a review function to EA as a product team.

A review function's job is to assess proposals and issue approvals or rejections. A product team's job is to build and maintain products that make other teams more effective. In the AI era, EA's primary product is the paved road: the set of codified patterns, pre-approved configurations and governance workflows that delivery teams can use to move from idea to production safely and quickly.

Product management for EA assets means: tracking adoption of each "golden path" pattern (which teams are using it, what friction they are experiencing, where they are deviating and why), soliciting feedback from delivery teams and incorporating it into path improvements, releasing new versions of patterns as the technology landscape or regulatory requirements change and deprecating old patterns with sufficient notice and migration support that teams can transition without disruption.

This is a fundamentally different mindset from the ARB model. The ARB is reactive: it responds to proposals. The EA product team is proactive: it anticipates what delivery teams will need and builds the infrastructure for it before they need it. The ARB measures its success by the quality of its decisions. The EA product team measures its success by the adoption rate of its paved roads and the reduction in time-to-safe-deployment for AI use cases.

Kill Criteria as Architectural Artefacts

One element of the new EA operating model that is consistently underemphasised is portfolio discipline. Organisations accumulate AI deployments the same way they accumulate application portfolio debt: use cases get stood up, deliver initial value, fall out of active maintenance and persist in production long after they should have been retired.

Kill criteria, the conditions under which an AI deployment should be decommissioned, are architectural artefacts. They should be codified at intake, when the use case is registered, not decided ad hoc when someone eventually notices that a system is underperforming. Kill criteria might include: evaluation metrics falling below threshold for a sustained period, cost-per-output exceeding a defined ceiling, data dependencies becoming non-compliant with updated standards, model provenance no longer meeting current requirements or business context changing such that the use case no longer addresses a relevant need.

When kill criteria are codified artefacts, portfolio discipline becomes an automated alert rather than a manual governance conversation. The monitoring layer surfaces the signal. The governance workflow routes it to the right owner. The owner makes the decommission decision with full context. The artefacts are retired in an orderly way and the dependencies are documented.
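A minimal sketch of what a codified kill criterion could look like, assuming the monitoring layer can supply daily metric values. The names and fields are hypothetical illustrations of the idea, not a reference to any specific platform.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical sketch: a kill criterion captured as data at intake and
# evaluated automatically against monitoring output, so portfolio
# discipline becomes an alert rather than a manual conversation.

@dataclass(frozen=True)
class KillCriterion:
    metric: str            # e.g. "eval_score" or "cost_per_output_usd"
    threshold: float       # boundary value agreed at intake
    direction: str         # "below" or "above" counts as a breach
    sustained_days: int    # how long the breach must persist

def breached(criterion: KillCriterion, value: float) -> bool:
    if criterion.direction == "below":
        return value < criterion.threshold
    return value > criterion.threshold

def triggered(criterion: KillCriterion,
              daily_values: dict[date, float],
              today: date) -> bool:
    """True only if the metric breached on every day of the sustained window."""
    window = [today - timedelta(days=d) for d in range(criterion.sustained_days)]
    return all(
        day in daily_values and breached(criterion, daily_values[day])
        for day in window
    )
```

The sustained-window check matters: a single bad day should not page an owner, but three consecutive days below the evaluation threshold agreed at intake should route a decommission decision to them with full context.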

EA OKRs: Measuring What Actually Matters

How do you know whether the new EA operating model is working? Not by counting documents produced or ARB meetings held. By measuring the outcomes that the operating model is designed to produce.

The metrics that matter for EA in the AI era are:

Time to first safe deployment. How long does it take a new AI use case to move from registration to production, following the "golden path"? This measures whether the "golden paths" are actually usable, fast enough and clear enough that teams choose them over shadow alternatives.

"Golden Path" adoption rate. What proportion of AI deployments follow an existing paved road pattern versus requiring a deviation workflow? High adoption rates mean the roads cover the actual deployment landscape. Low adoption rates mean either the roads do not fit the organisation's use cases or teams are not aware they exist.

Change workflow cycle time. How long does the deviation and change workflow take from submission to decision? This measures whether governance is a bottleneck. If cycle time is measured in weeks, the workflow is not fit for purpose in an AI delivery environment. If it is measured in days, it is working.

Policy compliance rate. What proportion of AI deployments are operating within their approved boundaries at any given time? Drift from this metric is the first signal that the enforcement mechanisms are degrading or that the standards have not kept pace with delivery team needs.

AI incident rate. How frequently do AI deployments produce incidents: boundary violations, output quality failures, security events or compliance breaches? This is the lagging indicator that validates whether the leading indicators above are measuring the right things.

The Trusenta Perspective

The EA functions we see succeeding in the AI era have made one decisive shift: they stopped trying to review everything and started building the infrastructure that makes review unnecessary for in-boundary work. They invested in codifying their standards as paved roads, standing up the governance and standards layer of the AI platform and measuring their success by delivery team adoption rather than ARB output.

That shift is not about reducing rigour. It is about applying rigour at the right point, in the design of the paths that can be taken, not in the friction of every individual journey. When the paths are well-built, safe delivery is not the exception that requires special approval. It is the default that requires no approval at all.

That is what EA in the AI era looks like at its best. Not a gatekeeper. A builder. Not a reviewer. A product team. Not a constraint on AI delivery. The infrastructure that makes AI delivery fast, safe and governed at scale.

Key Takeaways

  • The ARB-centric EA operating model cannot keep pace with AI delivery at scale. Teams route around it and shadow AI proliferates. The operating model must change.
  • Most large organisations should move toward a hybrid EA model: a central function maintaining enterprise platform standards and "golden paths," with embedded architects helping delivery teams use them effectively.
  • EA must operate as a product team maintaining codified artefacts (tracking adoption, soliciting feedback, releasing improvements and deprecating outdated patterns), not as a review function issuing approvals.
  • Kill criteria for AI deployments should be codified artefacts defined at intake, not ad hoc conversations held when underperformance becomes undeniable.
  • EA's effectiveness in the AI era is measured by time to first safe deployment, paved road adoption rate, change workflow cycle time, policy compliance rate and AI incident rate, not by documents produced or meetings held.

How Trusenta Can Help

Enterprise Architecture in the AI Era includes operating model design as a core deliverable: assessing the current EA operating model, identifying the gaps between it and what AI delivery at scale requires and producing a transition roadmap that moves from ARB-centric review to paved road-based enablement at a pace the organisation can absorb.

Fractional Enterprise Architect provides ongoing EA leadership for organisations that need the new operating model established and maintained but do not have a full-time EA leader to drive it. A fractional EA brings the product team mindset, the codified artefact framework and the governance layer design, embedded in your organisation at the pace and commitment level that matches your current AI maturity and investment capacity.

Conclusion

Across six posts, we have examined a single thesis from multiple angles: enterprise architecture in the AI era is not a documentation and review discipline, it is a governance engine. The artefacts it produces are not documents but objects. The standards it maintains are not principles but codified constraints. The oversight it provides is not periodic review but continuous enforcement, with human-in-the-loop workflows that surface the decisions that genuinely warrant human judgement.

This is not a vision of AI governance that slows down delivery. It is a vision of AI governance that makes fast delivery safe. The paved roads are built to be fast. The enforcement is built to be invisible to teams operating within boundaries. The human-in-the-loop is reserved for the moments when boundaries are genuinely tested, not applied to every routine deployment as a tax on progress.

The organisations that build this infrastructure now will compound its value with every AI deployment that follows. The ones that keep treating EA as a documentation function in a world that has moved on will keep losing the race between AI adoption and AI governance. The gap between those two outcomes is the operating model.

Author

Mark Miller
Mark brings a rare blend of C-suite leadership and hands-on consulting experience to Trusenta. As a former SVP of Services, SVP of Business Operations, Managing Director and CIO, he brings a breadth of experience, with a specialty in guiding organisations through AI strategy, governance and adoption, bridging ambition with practical execution. His focus is on helping clients embed AI responsibly, at scale and in service of real business outcomes.
https://www.linkedin.com/in/consult-mmiller/