ADG | Adopt | Defend | Govern

AI Security Governance Framework • April 2026

Why ADG

Enterprise AI is shipping into production faster than governance can absorb. Engineering teams are deploying agents that act on tools, consume retrieved context, run multi-step plans, and chain to other agents - often before the people accountable for risk have seen the system, let alone approved it. Boards are asking questions control departments cannot answer. Regulators are landing - the EU AI Act first, others to follow. And the standards every organization has invested in - ISO/IEC 42001, NIST AI RMF, the EU AI Act itself - tell you what to think about, but not who decides, who builds, who breaks, who signs, or what evidence belongs in front of a regulator on Monday morning.

That is the gap ADG closes. ADG is not another standard. It is the operating model that sits underneath the standards you already have and makes them executable - written by a practitioner advisory board across financial services, healthcare, manufacturing, telecommunications, energy, and technology, in regulated and less-regulated jurisdictions across North America, Europe, and Asia-Pacific. Credentials and curricula are downstream of the framework, not the reason for it.

The architecture is three pillars - ADOPT (execute and deliver), DEFEND (secure and validate), and GOVERN (oversee, assure, decide) - that scale unchanged from the board to the engineer. The same three words frame a board paper, organize a release gate, label a red-team report, and structure an incident playbook. Beneath those pillars sit nine governance surfaces (the points where engineers instrument and attackers act); twelve Minimum Controls (MC-1 through MC-12), each with a named evidence artefact your auditor can hold; nine deployment overlays for the patterns AI actually ships in today - agentic orchestration, tools and MCP, multi-agent interoperability, multi-modal and composite stacks, long-context architectures; three autonomy tiers (HITL / HOTL / HOOTL) tied directly to the controls; and a four-phase roadmap that takes an organization from inventory to continuous assurance in twenty-four months.

ADG is built for alignment, not competition. Every Minimum Control maps cleanly into NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) and ISO/IEC 42001 Annex A - the framework's §11.2 crosswalk does it explicitly. MC-1 produces the AI inventory ISO 42001 A.6 demands and EU AI Act Art. 49 will register. MC-9 produces the post-market log Art. 73 inspectors will request. MC-11 produces the bias evidence Art. 10 requires. Organizations chasing 42001 certification or EU AI Act conformity use ADG to operationalize what those frameworks describe abstractly. The work is the same; ADG removes the translation layer.

ADG also goes where the standards do not. Agentic systems, MCP-connected tools, multi-agent interoperability, long-context architectures, and composite multi-modal stacks each carry an additive deployment overlay that composes with the same twelve Minimum Controls. Move from one copilot to a fleet of agents without rewriting a single control.

One element of ADG sits where peer frameworks generally do not: the AI Governance Council. Every framework tells you to govern AI. ADG tells you who breaks the tie when delivery and safety disagree - the Council resolves ADOPT-DEFEND tension, sets go / no-go thresholds, owns the exception register, and ties decision rights to MC-3 (Separation of Duties) and MC-10 (Periodic Governance Review). Peer frameworks state the abstract requirement; they rarely name the mechanism - and the mechanism is where most production AI risk actually lives.

ADG is built for boards, CISOs and AI red teams, platform and AI engineers, risk and compliance, and procurement. Each audience reads a different path; the artefacts are the same. The framework was forged with senior AI, security, and governance leaders running production AI inside top Fortune 500, Fortune Global 500, and Big Four firms - including Salesforce, Microsoft, Citi, JPMorgan Chase, NTT DATA, KPMG, ServiceNow, BNP Paribas, Prudential, GE Healthcare, BASF, and Jio - the practitioners who own the failure modes this framework governs.

ADG takes the ADOPT-DEFEND-GOVERN operating model down to twelve auditable controls, nine technical surfaces, and additive overlays for agentic and multi-agent deployments - composable across homegrown, foundation-model-API, and SaaS-embedded AI.
Pick your path

Who's reading this?

ADG is dense by design. Pick the role that matches you and jump to the path written for it. Each path is 4-5 sections out of 13 - you don't have to read everything.

1. Background

1.1 Origin and Positioning

ADG began with a mission at EC-Council: the AI governance space was fragmenting into point standards, vendor playbooks, and academic frameworks - none of which gave enterprise boards and engineering teams a single, practitioner-tested operating model for deploying AI safely at scale. EC-Council set out to build that missing operating model: a framework built around three enduring functions - ADOPT, DEFEND, and GOVERN - that scales from the board to the engineer.

To avoid building in a vacuum, EC-Council convened an Advisory Board of practitioners from mature enterprises that have actually deployed AI in production. The initial draft was circulated to senior AI, security, and governance leaders across global organizations in financial services, technology, manufacturing, healthcare, telecommunications, energy, and consulting - covering both regulated and less-regulated sectors, and spanning North America, Europe, and Asia-Pacific.

Advisory Board Impact

ADG was not drafted in a conference room. It was forged with the senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - across regulated and less-regulated sectors, from design through implementation - the practitioners who own the failure modes this framework governs.

Their input drove eight structural enhancement clusters: an expanded harm taxonomy with Responsible AI integration, coverage beyond LLMs (diffusion, multi-modal, composite systems), a shared responsibility model for vendor/SaaS AI, deeper strategic treatment of the GOVERN pillar, a measurable metrics & evidence framework, multi-agent interoperability governance, explicit regulatory mapping (EU AI Act, NIST AI RMF, ISO 42001), and post-deployment continuous governance.
Framework Leadership
Jay Bavisi
Chairman & CEO, EC-Council Group

Karthik S.
Framework Architect & Lead Author
Practice Head, SecureAI, EC-Council Global Services

Mayank Tandon
Global Outreach & Partner Experience
EC-Council
With Gratitude

Thank you to the Advisory Board

The practitioners below gave their time, scrutiny, and hard-won experience to review ADG. Their comments, corrections, and counterpoints are the reason this framework is field-tested rather than aspirational.

Adam Spearing
VP of AI GTM EMEA
ServiceNow

Andrei Son
Head of AI Transformation
AUMOVIO

Anish Mitra
Director
KPMG

Anita Lacea
Head of AI Transformation, Azure Hardware Infrastructure
Microsoft

Dinesh Bhogle
Head of AI/ML Platform
Black & Veatch

Dr. Sayed Peerzade
Executive Vice President - Cloud, AI & Government Initiatives
Jio

Edoardo Tealdi
Executive Head of AI Transformation - Business Engagement and Growth Units
NTT DATA, Inc.

Kathy Baxter
VP / Principal Architect, Responsible AI & Tech
Salesforce

Lewis V. Adams
Vice President, Enterprise AI & Capital Productivity Transformation
Citi

Lily Rachmawati
Director, Head of Applied AI
BNP Paribas

Malik Hussain
AI Enablement Lead, Data & AI Academy
BASF

Mark Ritcey
Vice President, AI & Automation Delivery
Latent Bridge

Naveen Upadhyay
Vice President, AI/ML Product Management - Machine Learning & Intelligence Operations
JPMorgan Chase & Co.

Oscar Jarabo
Global Head of AI Product & Strategy
TKE

Pavan Kristipati
Head of AI Engineering & Transformation (SVP Scope), Enterprise AI Adoption, Governance & Platform
Huntington Bank

Raghunandan Mishra
AI & Data Engineering Leader

Raji Bhimireddy
Vice President, Cloud, AI, Architecture, FinOps & Business Value
Prudential

Sanjoy K. Saha
Head of AI Portfolio and Governance & Chief of Staff CDAO
GE Healthcare

ShanShan Pa
Global Head of AI & Data Governance
GlobalLogic

Sophia Katrenko
VP of AI/ML
EcoVadis

Sruthi Pakanati
Head of AI & Data Transformation, National Quality & Risk
Deloitte Australia

Sudarson Roy Pratihar
Founder and Principal
A2IQ

Yashwinder Chhikara
Sr. Vice President - AI, Analytics, and Product Management
iSON Xperiences

This version is the result - the iterative output of that collaboration. ADG is not a one-and-done document. It is a living framework, designed to be enhanced as AI deployment patterns evolve and as new practitioners join the conversation. The scope has grown from its original LLM-centric framing to cover modern deployment patterns including LLMOps, retrieval-augmented systems, tool-using agents, MCP-connected ecosystems, long-context architectures, multi-agent systems, and model lifecycle controls spanning pre-training through runtime.

Framework Objective

Provide a board-to-engineering operating model that separates decision rights, assigns control ownership, and defines the minimum governance required for safe, responsible, and scalable AI deployment - and to continue evolving it through ongoing practitioner review.

Practitioners who want to contribute to the next version can join the Advisory Board here.

1.2 Framework Purpose: The RE³ Trust Model

ADG is purpose-built to operationalize four non-negotiable properties of trustworthy enterprise AI - captured in the RE³ Trust Model:

RE³ = Responsible, Ethical, Efficient, Explainable AI

These are not checkboxes. They are governance outcomes that must be engineered into every AI deployment - from an internal copilot through a production multi-agent system. ADG translates each outcome into concrete controls across all nine governance surfaces (Model, Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry, Learning Loop) and applies them to every deployment pattern (LLMOps, Agentic, Tools & MCP, Context, Training, Multi-Agent, Multi-Modal, and composite systems) across all three deployment classes (Homegrown, Foundation Model API, SaaS AI).

Responsible AI
Clear Accountability, Every Layer

Every AI action traces back to a human decision-maker. Delivered by: MC-1 (AI System Inventory), MC-3 (Separation of Duties), Agent Authority Statements, Shared Responsibility Model (Section 7), and the AI Governance Council escalation path. Each of the 9 deployment overlays explicitly aligns activities to ADOPT/DEFEND/GOVERN.

Ethical AI
Fairness, Safety, and Human Oversight

Protection from bias, manipulation, and harm across four classes (Technical, Societal, Operational, Systemic). Delivered by: the Four-Class Harm Taxonomy, Seven RAI Principles embedded in every surface, MC-11 (Fairness & Bias Evaluation), the Safety Layer surface, and Autonomy Tier controls (HITL/HOTL/HOOTL) scaled to risk.

Efficient AI
Reliable Delivery at Enterprise Scale

Governance that enables velocity instead of grinding it down. Delivered by: the 4-Phase Implementation Roadmap (Foundation through Maturity), Tier-1 Operational Metrics, the LLMOps Overlay for cost/performance monitoring, reusable Operating Artifacts, and one unified control set that spans Homegrown, API, and SaaS deployments - eliminating governance fragmentation.

Explainable AI
Traceable, Replayable, Defensible

Every output can be justified to a regulator, a board, or a customer. Delivered by: the Telemetry surface, MC-6 (Context Policy with provenance tracking), MC-7 (Tool/MCP audit logging), MC-8 (Runtime Monitoring), and MC-9 (Incident Response with forensic replay). Architecture-neutral - works for LLMs, diffusion, classical ML, and composite systems.

The outcome: RE³ AI becomes measurable. Each trust property maps to specific controls, metrics, and evidence requirements, so organizations can demonstrate trustworthy AI to boards, regulators, and customers - not just claim it.

1.3 Core Thesis

Traditional cyber and software governance frameworks assume deterministic execution, finite input spaces, stable functionality, and visible logic. AI systems violate those assumptions. Organizations need a framework that governs:

  • Systems that can generate novel behavior
  • Systems that act through external tools and APIs
  • Systems that consume dynamic context from internal and external sources
  • Systems whose behavior can change through model updates, tuning, retrieval changes, and runtime interaction
  • Systems composed of multiple models and agents that interact with each other, creating emergent risks not present in any individual component

ADG addresses this by combining role separation, surface-based control ownership, deployment-pattern overlays, lifecycle governance, and a structured harm taxonomy that extends beyond security to encompass responsible AI.

1.4 Scope and System Coverage

ADG governs all AI systems as defined by the OECD AI Policy Observatory and the EU AI Act: machine-based systems that, for explicit or implicit objectives, infer from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. This explicitly includes:

  • Large language models (LLMs) and transformer-based generative AI
  • Diffusion models, diffusion-transformer hybrids, and image/video generation systems
  • Multi-modal systems combining text, image, audio, and code generation
  • Classical machine learning models (supervised, unsupervised, reinforcement learning)
  • Composite AI systems that orchestrate multiple model types within a single workflow
  • Emerging architectures including world models, neuromorphic systems, and foundation models beyond text

The governance surfaces, overlays, and controls in this framework are architecture-neutral by design. Where a deployment pattern creates unique governance requirements (e.g., agentic orchestration, diffusion-model provenance), a dedicated overlay provides additional controls. The framework is not limited to LLM-centric systems.

Scope Coverage - AI System Architecture with Agentic Harness

[Diagram] An agentic harness and governance layer (tool use & MCP, orchestration, multi-agent, context & memory, safety guardrails, identity & authorization, telemetry & replay) wraps every model class in scope: LLMs and transformer-based generative AI, diffusion models (image & video generation), multi-modal systems (text + image + audio + code), classical ML (supervised, unsupervised, reinforcement learning), composite AI systems (multiple model types in a single workflow), and emerging architectures (world models, neuromorphic, foundation models beyond text). ADG governs all layers.

1.5 Distinction: Standards vs. Regulations

ADG is a voluntary governance framework designed to be compatible with, but distinct from, regulatory requirements. Throughout this document:

  • Regulatory requirements refer to legally binding obligations (EU AI Act, sector-specific regulations, data protection laws such as GDPR)
  • Standards refer to voluntary frameworks and best practices (NIST AI RMF, ISO/IEC 42001, OWASP Top 10 for LLM Applications, OWASP Top 10 for Agentic Applications, MITRE ATT&CK, MITRE ATLAS)
  • Threat catalogues describe what attackers do. ADG consumes four canonical sources - MITRE ATT&CK, MITRE ATLAS, OWASP Top 10 for LLM, OWASP Top 10 for Agentic - and maps each technique to the Minimum Control Set and onward to NIST / ISO. See the §11.2.1 interactive flow.

ADG controls are designed to satisfy both regulatory and standards-based requirements, but organizations must independently verify compliance with applicable legal obligations in their jurisdictions.

2. Executive Summary

ADG is an enterprise AI security and responsible AI governance framework structured around three pillars - ADOPT DEFEND GOVERN - that provide a board-to-engineering operating model for safe, responsible, and scalable AI deployment.

Key framework components:

Pillars
The ADG Triad

ADOPT executes and delivers AI capabilities. DEFEND secures and validates against all harm classes. GOVERN oversees, assures, and resolves tensions between the other two pillars.

Harm Classes
Four-Class Harm Taxonomy

Technical, Societal, Operational, and Systemic harms - ensuring governance extends beyond security to cover bias, fairness, reliability, and emergent multi-agent risks.

RAI Principles
Responsible AI Principles

Fairness & non-discrimination, Transparency & explainability, Privacy & data protection, Accountability, Human oversight, Robustness & safety, and Sustainability & societal well-being - embedded across all three pillars.

Governance Surfaces
What Must Be Governed

Model, Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry, and Learning Loop - defining what must be governed regardless of architecture.

Foundational Principles
The Durable Standards

Separation of powers, Explicit authority, Context as attack surface, Tool use as highest-risk control plane, Graduated oversight, Lifecycle governance, Mandatory evidence, Embedded RAI, and Defined shared responsibility.

Minimum Controls
MC-1 through MC-12

Each control includes an evidence requirement to ensure measurability. Together they form the baseline every production AI system must meet.

Deployment Overlays
Pattern-Specific Controls

LLMOps, Agentic Orchestration, Agent, Agentic Hardening, Tools & MCP, Context & Long-Window, Pre/Post-training, Multi-Agent Interop, and Multi-Modal/Composite systems.

Autonomy Tiers
HITL / HOTL / HOOTL

Assistive (Human-in-the-Loop), Conditional (Human-on-the-Loop), and Autonomous (Human-out-of-the-Loop) - determining the minimum governance required per system.

Implementation Phases
Roadmap to Maturity

Foundation (0-3 months), Control Deployment (3-9), Agentic Readiness (9-15), and Maturity (15-24) - a staged path from inventory to continuous assurance.

This version was forged with senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - practitioners working from design through implementation across regulated and less-regulated sectors. It addresses eight major gap clusters identified through systematic review and extends the framework to cover multi-agent systems, multi-modal architectures, shared responsibility, responsible AI, and measurable governance. It is suitable as the basis for consulting assessments, enterprise AI governance programs, certification architecture, board-facing assurance discussions, regulatory preparation, and vendor due diligence.

Framework Architecture - How the Layers Connect

[Diagram] The architecture stacks eight layers:

  • 3 Pillars - Adopt (Execute & Deliver), Defend (Secure & Validate), Govern (Oversee & Decide)
  • 9 Governance Surfaces - Model, Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry, Learning Loop
  • 12 Minimum Controls - MC-1 through MC-12
  • 4 Harm Classes - Technical, Societal, Operational, Systemic
  • 4 Industry Threat Catalogues - MITRE ATT&CK, MITRE ATLAS, OWASP LLM Top 10, OWASP Agentic Top 10 (threat input, consumed by Defend & Govern; see the §11.2.1 interactive flow: threat → ADG control → NIST + ISO)
  • 9 Deployment Pattern Overlays - LLMOps, Agentic, Agent, Hardening, Tools/MCP, Context, Training, Multi-Agent, Multi-Modal
  • 3 Autonomy Tiers - HITL (Assistive), HOTL (Conditional), HOOTL (Autonomous)
  • 4-Phase Roadmap - Foundation, Controls, Agentic, Maturity

Layer-by-Layer Reference

Concrete definitions and one example per element in the Framework Architecture diagram above. Use this as the quick-reference key when reading the rest of the document.

3 Pillars

Operating model
Pillar | Definition | Example
ADOPT | Execute and deliver AI capabilities. Owns selection, integration, deployment, operations. | An LLMOps team rolling out a customer-support copilot to production.
DEFEND | Secure and validate. Owns adversarial testing, guardrails, runtime detection, fairness evaluation, AI incident response. | A red team running prompt-injection scenarios against the copilot before launch.
GOVERN | Oversee, assure, decide. Owns policy, decision rights, regulatory alignment, board accountability. | An AI Governance Council approving the copilot for high-risk customer interactions and setting escalation thresholds.

9 Governance Surfaces

What must be governed
Surface | Definition | Example
Model | Foundation models, fine-tuned variants, adapters, routers, model versions. | Switching the copilot from Claude 3.7 to Claude 4.7 - a model-surface change requiring change-control evidence.
Prompt | System prompts, templates, agent instructions, prompt libraries. | Updating the copilot's system prompt to refuse PII requests - versioned and approved.
Context | Retrieval sources, session state, memory, hidden context, user metadata. | A RAG pipeline pulling from SharePoint - context provenance and retention rules apply.
Tools | APIs, plugins, actions, code execution, MCP capabilities. | Granting the copilot a refund.process() tool - tool register entry, trust tier, audit logging required.
Orchestration | Planners, workflow graphs, retry logic, multi-step flows, agent routing. | An agent that retries up to 3 times then escalates to a human - retry budget defined here.
Identity | Credentials, service accounts, delegated authority, agent identity, secrets. | The agent runs under its own scoped service account, not a user's personal credentials.
Safety Layer | Guardrails, policy engines, semantic filters, classifiers, circuit breakers. | A classifier that blocks the copilot from generating regulated financial advice.
Telemetry | Logs, traces, evaluations, replay data, fairness metrics. | Every prompt, retrieved context, and tool call captured for forensic replay.
Learning Loop | Pre-training sources, post-training alignment, feedback loops, RLHF data, retraining inputs. | Customer thumbs-down feedback feeding retraining data - provenance and consent tracked.

12 Minimum Controls

Baseline every production AI system must meet
Control | Definition | Example
MC-1 | AI System Inventory - published inventory with named owners, reviewed quarterly. | A Confluence page listing every AI system with owner, risk tier, and last review date.
MC-2 | Risk Classification - rate each system by data sensitivity, autonomy, exposure, harm potential, criticality. | The copilot rated "high autonomy + external-facing + financial harm potential" → high-risk tier.
MC-3 | Separation of Duties - split deployment, validation, and approval across ADOPT, DEFEND, GOVERN. | The product team builds; DEFEND validates; GOVERN approves. No single function holds all three.
MC-4 | Pre-Production Evaluation - quality, safety, security, fairness, failure-mode tests before go-live. | A signed test report covering jailbreak resistance, bias evaluation, and accuracy benchmarks.
MC-5 | Change Control - governed change process for prompts, tools, models, retrieval sources. | A pull request to update the system prompt routed through DEFEND review and GOVERN approval.
MC-6 | Context Policy - provenance, retention, access, trust ordering for context inputs. | A policy stating "SharePoint sources only, 30-day retention, redact PII at retrieval."
MC-7 | Tool & MCP Register - catalog of tools and MCP capabilities with trust tiering and invocation controls. | A register of all approved MCP servers with risk tier and last assurance review date.
MC-8 | Runtime Monitoring - continuous detection of abuse, drift, data leakage, unsafe actions, bias emergence. | Dashboards alerting when refusal rate drops below 95% (potential jailbreak signal).
MC-9 | AI Incident Response - AI-specific playbook with replayable evidence; tabletop at least annually. | A runbook for "agent leaks data" plus a quarterly tabletop exercise with the response team.
MC-10 | Periodic Governance Review - exception handling and board reporting for high-risk systems. | Quarterly review of all high-risk AI systems with the AI Governance Council.
MC-11 | Fairness & Bias Evaluation - discriminatory outcome testing using representative data. | A signed report showing the copilot's response quality is consistent across demographic segments.
MC-12 | Shared Responsibility Documentation - accountability boundaries for vendor / SaaS AI. | A signed matrix with the model vendor showing what they own (model + training) vs what we own (prompts, context, monitoring).
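
To make MC-1 and MC-2 concrete, here is a minimal sketch of what a machine-checkable inventory record could look like, so the quarterly-review evidence is testable rather than aspirational. Every name and field below is illustrative - ADG does not prescribe a schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """Illustrative MC-1 inventory entry; field names are assumptions."""
    system_id: str
    name: str
    accountable_owner: str            # named human owner (MC-1)
    deployment_class: str             # "homegrown" | "fm-api" | "saas"
    autonomy_tier: str                # "hitl" | "hotl" | "hootl"
    risk_tier: str                    # output of MC-2 classification
    last_review: date                 # MC-1 requires quarterly review
    overlays: list[str] = field(default_factory=list)

    def review_overdue(self, today: date, max_days: int = 92) -> bool:
        """Flag records whose quarterly review (MC-1) has lapsed."""
        return (today - self.last_review).days > max_days

copilot = AISystemRecord(
    system_id="ai-0042",
    name="customer-support-copilot",
    accountable_owner="jane.doe",
    deployment_class="fm-api",
    autonomy_tier="hotl",
    risk_tier="high",
    last_review=date(2026, 1, 15),
    overlays=["llmops", "tools-mcp", "context-long-window"],
)
assert copilot.review_overdue(date(2026, 6, 1))  # more than one quarter old
```

A record like this makes the MC-1 evidence artefact queryable: "every system, with owner, risk tier, and last review date" becomes a one-line filter instead of a manual audit.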

4 Harm Classes

What can go wrong
Class | Definition | Example
Technical | Security, reliability, and system integrity failures. | Prompt injection causing the copilot to leak its system prompt or run unintended tool calls.
Societal | Bias, discrimination, fairness, and human-rights impacts. | A loan-decision agent denying credit at a higher rate to one demographic group.
Operational | Reliability, accuracy, and business-impact failures. | The copilot hallucinating a refund policy that does not exist, leading to disputes.
Systemic | Emergent risks from AI-to-AI interaction and scaled deployment. | Two negotiation agents from different vendors colluding on an unfavorable deal because their guardrails do not catch joint behavior.

4 Industry Threat Catalogues

Threat input · how Defend & Govern stay current

ADG does not invent its own threat list - it consumes the four canonical industry catalogues as input and maps each threat into the Minimum Control Set and the deployment overlays. The interactive §11.2.1 flow shows every link from threat → ADG control → NIST AI RMF & ISO/IEC 42001 clause.

Catalogue | What it provides | How ADG consumes it
MITRE ATT&CK | Enterprise adversary techniques abused via or against AI-enabled systems - phishing, credential theft, lateral movement, exfiltration, persistence. | MC-8 (Runtime Monitoring) detection rules, MC-9 (Incident Response) playbooks, and red-team scenarios in MC-4.
MITRE ATLAS | 84+ adversarial ML/AI techniques and real-world case studies - model evasion, ML supply chain, model extraction, prompt injection at the model layer. | MC-4 pre-production adversarial eval suites, MC-8 detection signatures, MC-9 forensic replay test cases.
OWASP Top 10 for LLM (2025) | LLM01 Prompt Injection, LLM02 Sensitive Info Disclosure, LLM03 Supply Chain, LLM04 Data & Model Poisoning, LLM05 Improper Output Handling, LLM06 Excessive Agency, LLM07 System Prompt Leakage, LLM08 Vector & Embedding Weaknesses, LLM09 Misinformation, LLM10 Unbounded Consumption. | MC-4 evaluation checklist, MC-6 (Context Policy) for LLM03/04/08, MC-8 runtime guardrails, MC-11 fairness checks for LLM09.
OWASP Top 10 for Agentic (2026) | T1 Memory Poisoning, T2 Tool Misuse, T3 Privilege Compromise, T4 Resource Overload, T5 Cascading Hallucinations, T6 Intent Breaking, T7 Misaligned/Deceptive Behavior, T8 Repudiation, T9 Identity Spoofing, T10 HITL Overwhelm. | Agentic Hardening overlay (§10.4) consumes this directly. MC-7 (Tool & MCP Register), MC-8 (Runtime Monitoring), MC-9 (Incident Response).

9 Deployment Pattern Overlays

Additive - select all that apply
Overlay | Definition | Example
LLMOps | Versioning, evaluation, release management, rollback, cost control, performance monitoring for LLM-based services. | A production RAG chatbot with rollback, eval suite, per-token cost tracking, and a release gate.
Agentic Orchestration | The planner / loop layer of a single agent - multi-step reasoning, retries, subtask decomposition, stop conditions. | An agent that decomposes "book my trip" into flights → hotel → calendar with retry budgets.
Agent | The agent as a class of system - business mission, action boundaries, liability model, segregation of duties. | A travel-booking agent authorized to spend up to $2,500 per booking on behalf of the user.
Agentic Hardening | Defensive controls bolted onto agents - authority bounds, action isolation, kill switches, forensic replay, semantic firewalls. | Dry-run mode for irreversible actions with human approval required before the booking is committed.
Tools & MCP | Tool discovery, capability registration, trust tiering, per-invocation policy for tools and MCP servers. | An MCP server registered as "trust tier 2" - read-only access to corporate wiki, no write actions allowed.
Context & Long-Window | Session history, retrieved enterprise knowledge, persistent memory, multi-turn poisoning resistance. | A copilot with persistent memory that flags conflicting facts injected mid-conversation.
Pre / Post-training | Model provenance, fine-tuning, alignment, RLHF data, retraining governance. | A LoRA adapter trained on internal docs - training data licensed, evaluated for leakage, version-controlled.
Multi-Agent Interop | Emergent risks when multiple agents interact - agent-to-agent trust, value alignment, collusion detection, cascading failure prevention. | A purchasing agent talking to a vendor's quoting agent, with circuit breakers if they loop or agree on out-of-policy terms.
Multi-Modal / Composite | Cross-model risks when LLM + diffusion + classifier + retrieval combine in one product. | An assistant that takes a screenshot, classifies it, retrieves docs, and generates a response - system-level eval required, not per-model.
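
To show how the Tools & MCP overlay and MC-7 compose at runtime, here is a minimal per-invocation policy sketch. The tier semantics, register shape, and tool names are assumptions for illustration, not an ADG-mandated interface.

```python
# Hedged sketch of an MC-7 tool register with per-invocation policy.
# Assumed tier semantics: 1 = read-only, 2 = reversible writes,
# 3 = irreversible / financial actions that need human approval.
TOOL_REGISTER = {
    "wiki.search":     {"trust_tier": 1, "side_effects": False},
    "crm.update_note": {"trust_tier": 2, "side_effects": True},
    "refund.process":  {"trust_tier": 3, "side_effects": True},
}

def authorize_invocation(tool: str, agent_tier: int, human_approved: bool) -> bool:
    """Allow a call only if the tool is registered, within the agent's
    authority bound, and - for tier-3 tools - explicitly human-approved."""
    entry = TOOL_REGISTER.get(tool)
    if entry is None:                     # unregistered tool: always deny
        return False
    if entry["trust_tier"] > agent_tier:  # beyond this agent's authority
        return False
    if entry["trust_tier"] >= 3 and not human_approved:
        return False                      # irreversible-action gate
    return True

assert authorize_invocation("wiki.search", agent_tier=2, human_approved=False)
assert not authorize_invocation("refund.process", agent_tier=3, human_approved=False)
```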

3 Autonomy Tiers

Determines minimum governance per system
Tier | Definition | Example
HITL - Assistive | AI recommends or drafts; a human approves every output before it takes effect. | A copilot that drafts emails for the user to review and send.
HOTL - Conditional | AI acts within pre-approved limits; a human monitors and can intervene. | A trading agent executing within position limits, with humans monitoring exposure dashboards.
HOOTL - Autonomous | AI executes multi-step goals with limited or delayed human review. | An overnight data-pipeline agent that processes batches and reports results in the morning.

4-Phase Roadmap

From inventory to continuous assurance, 0 – 24 months
Phase | Definition | Example
Foundation (0-3 mo) | Inventory, owners, ADG roles, criticality classification, AI Governance Council established. | Quarter 1: publish AI inventory, name owners, charter the Governance Council. Controls: MC-1, MC-2, MC-3.
Controls (3-9 mo) | Release gates, change control, context policies, monitoring, adversarial testing, fairness evals, vendor due diligence. | Quarter 2-3: implement MC-4 through MC-9 plus MC-11 / MC-12 across all high-risk systems.
Agentic (9-15 mo) | Authority statements, tool trust tiering, MCP governance, circuit breakers, multi-agent governance, forensic replay. | Quarter 4-5: agentic hardening overlay deployed for all production agents.
Maturity (15-24 mo) | Continuous evaluation, persistent adversarial monitoring, drift governance, RAI measurement, board-level reporting. | Quarter 6-8: automated weekly red-team runs, quarterly board AI risk review, formal assurance review.

3. Document Usage

This section provides guidance on how to read, apply, and implement the ADG framework.

3.1 Target Audience

Each audience has a tailored reading path:

  • Board & Executive Sponsors
  • Security & AI Red Teams
  • Engineering & Platform Teams
  • Compliance & Legal
  • All Stakeholders

3.2 Foundational Principles

ADG is built on nine foundational principles. Each principle is a durable standard that applies across industries, deployment patterns, and regulatory regimes.

Foundational Principles - Nine Durable Standards

  1. Separation of Powers - independent validation is mandatory
  2. Authority Must Be Explicit - implicit authority is a control gap
  3. Context Is Attack Surface - governed assets need provenance & access control
  4. Tool Use Is Highest Risk - risk escalates from output to real-world action
  5. Graduated Oversight - higher autonomy needs stronger controls
  6. Lifecycle Governance - no single gate is sufficient
  7. Evidence Is Mandatory - if it can't be evidenced, it can't be relied upon
  8. RAI Embedded, Not Appended - structural requirements across all three pillars
  9. Shared Responsibility - accountability must be documented & enforced

3.3 Autonomy Tiers

ADG distinguishes three operational autonomy tiers. The tier determines the minimum governance requirements for deployment.

Note on terminology: HITL/HOTL/HOOTL originated in DoD doctrine (DoD 5000.59-M, 1998) and remain widely used, though their boundaries are debated in current AI-safety literature. ADG uses the tighter operational definitions below to make each tier directly testable against the Minimum Control Set.

Tier | Description | Oversight | Minimum Governance
Assistive | Recommends or drafts; does not act | HITL | Output review, bounded context, no direct tool execution
Conditional | Acts within pre-approved limits and constrained tools | HOTL | Authority statement, guardrails, audit logs, kill switch, exception thresholds, bias monitoring
Autonomous | Executes multi-step goals with limited or delayed human review | HOOTL | Formal approval, strong telemetry, circuit breakers, forensic replay, periodic governance review, mandatory fairness evaluation
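
One way to make the tier table executable is to encode each tier's minimum governance as data and block deployment while any artefact is missing. The artefact names below are shorthand assumptions taken from the table, not formal ADG identifiers.

```python
# Hedged sketch: autonomy tier -> minimum governance artefacts (per the
# table above); a release gate diffs required artefacts against evidence.
TIER_MINIMUMS = {
    "assistive":   {"output_review", "bounded_context"},
    "conditional": {"authority_statement", "guardrails", "audit_logs",
                    "kill_switch", "exception_thresholds", "bias_monitoring"},
    "autonomous":  {"formal_approval", "telemetry", "circuit_breakers",
                    "forensic_replay", "governance_review", "fairness_eval"},
}

def missing_governance(tier: str, evidence: set[str]) -> set[str]:
    """Return the artefacts still required before this tier may deploy."""
    return TIER_MINIMUMS[tier] - evidence

gaps = missing_governance(
    "conditional", {"authority_statement", "guardrails", "audit_logs"}
)
print(sorted(gaps))  # ['bias_monitoring', 'exception_thresholds', 'kill_switch']
```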

3.4 Implementation Roadmap Overview

Phase | Focus | Key Activities
Phase 1 - Foundation | Inventory and Classification | Establish AI system inventory, assign accountable owners, define ADG roles, classify systems by criticality and autonomy, establish AI Governance Council, conduct baseline assessment and gap analysis
Phase 2 - Control Deployment | Operational Controls | Implement release gates, prompt and tool change control, context policies, logging, monitoring, basic adversarial testing, initial fairness evaluations, vendor AI due diligence
Phase 3 - Agentic Readiness | Advanced Governance | Add authority statements, tool trust tiers, MCP governance, circuit breakers, multi-agent governance controls, forensic replay capability
Phase 4 - Maturity | Continuous Assurance | Continuous evaluation, persistent adversarial monitoring, drift governance, responsible AI measurement, board-level reporting, formal assurance review

4. The ADG Triad and Operating Model

The ADG Triad - Three Pillars of AI Governance

[Diagram] ADOPT ("Deliver business value safely") builds and operates; its stakeholders are the AI Product Owner, ML Engineers, Prompt Engineers, and DevSecOps. DEFEND ("Identify, prevent, detect harmful behavior") breaks and protects; its stakeholders are the AI Red Team, Security Engineers, Bias Auditors, and Incident Response. GOVERN ("Justify, approve, evidence AI use at board level") authorizes and oversees; its stakeholders are the CAIO / Ethics Lead, Legal & Compliance, Risk Officers, and the Board / C-Suite. At the center sits the AI Governance Council: it resolves ADOPT-DEFEND tension, defines go / no-go criteria, and owns exception policy.
4.1 - Pillar 1

ADOPT

Execute and Deliver

"Deliver business value safely"

Primary Question
Can we deploy it effectively?
Decision Focus
Capability, performance, delivery, reliability
Stakeholders
AI Product Owner · ML Engineers · Prompt Engineers · DevSecOps · App Architects · Enterprise Architects
Core Outputs
  • Deployed service
  • Runbooks
  • Baselines
  • Releases
Success Measure
Value delivery with controlled operations
Escalation: Escalates blockers to GOVERN
4.2 - Pillar 2

DEFEND

Secure and Validate

"Identify, prevent, detect harmful behavior"

Primary Question
Can it be abused, fail dangerously, or cause harm?
Decision Focus
Security, resilience, fairness, abuse resistance, containment
Stakeholders
AI Red Team · Security Engineers · Bias Auditors · Incident Response · Guardrail Engineers · Detection Engineers
Core Outputs
  • Test results & detections
  • Guardrails
  • Fairness evaluations
  • Incident playbooks
Success Measure
Risk reduced across all four harm classes
Escalation: Escalates unresolved risks to GOVERN
4.3 - Pillar 3

GOVERN

Oversee, Assure, and Decide

"Justify, approve, evidence AI use at board level"

Primary Question
Should we approve it, under what conditions, and at what risk?
Decision Focus
Risk appetite, legality, accountability, ethics, oversight
Stakeholders
CAIO / Ethics Lead · Legal & Compliance · Risk Officers · Board / C-Suite · Privacy Counsel · Compliance Lead
Core Outputs
  • Policies, approvals, risk thresholds
  • Exceptions & evidence
  • Board reports
Strategic Functions
  • Define organizational AI risk appetite
  • Establish decision rights and escalation paths
  • Own the AI Governance Council charter
  • Define board-level reporting requirements
  • Set investment justification criteria
  • Maintain regulatory mapping
Success Measure
Defensible use with auditable, measurable governance
Tension resolution: Resolves ADOPT-DEFEND tension; defines go/no-go and exception policy

4.4 Operating Model

ADG uses a simple rule: ADOPT builds and operates, DEFEND breaks and protects, GOVERN authorizes and oversees.

Dimension | ADOPT | DEFEND | GOVERN
Primary question | Can we deploy it effectively? | Can it be abused, fail dangerously, or cause harm? | Should we approve it, under what conditions, and at what risk?
Decision focus | Capability, performance, delivery, reliability | Security, resilience, fairness, abuse resistance, containment | Risk appetite, legality, accountability, ethics, oversight
Core outputs | Deployed service, runbooks, baselines, releases | Test results, detections, guardrails, fairness evaluations, incident playbooks | Policies, approvals, risk thresholds, exceptions, evidence, board reports
Success measure | Value delivery with controlled operations | Risk reduced across all four harm classes | Defensible use with auditable, measurable governance
Tension resolution | Escalates blockers to GOVERN | Escalates unresolved risks to GOVERN | Resolves ADOPT-DEFEND tension; defines go/no-go and exception policy

5. Governance Surfaces


ADG organizes governance into nine surfaces. These surfaces define what must be governed regardless of model vendor, model architecture, or deployment pattern.

Governance Surfaces - Where Engineers Instrument and Attackers Act

[Diagram] Request flow, left to right: INPUT → PROMPT (A/D) → CONTEXT (G/D) → MODEL (A/G) → TOOLS (D) → ORCHESTRATION (A/D) → OUTPUT. Four cross-cutting bands apply across the whole flow:

  • Identity (G/D) - credentials, service accounts, agent identity, delegated authority. Touches every node above; traces actions back to a human authority.
  • Safety Layer (D) - guardrails, policy engines, classifiers, circuit breakers. Sits between Model, Tools, and Output; blocks unsafe content and unauthorized actions before impact.
  • Telemetry (D/G) - logs, traces, evaluations, replay data, fairness metrics. Underneath everything; makes behavior observable, reviewable, provable.
  • Learning Loop (G/A) - pre-training sources, post-training alignment, RLHF data, retraining inputs. Feeds back into Model; controls drift and undocumented behavior change.

(A = Adopt, D = Defend, G = Govern)
Surface | Scope | ADG Alignment | Control Objective
Model | Foundation models, fine-tuned models, adapters, routers, versions, diffusion models, composite model chains | A+G | Use only approved models with known risk posture, provenance, and change traceability
Prompt | System prompts, templates, policies, agent instructions, prompt libraries, multi-modal input validation | A+D | Prevent unmanaged behavior changes and unsafe instruction patterns
Context | Retrieval sources, session state, memory, hidden context, user metadata, cross-session data | G+D | Prevent poisoning, leakage, cross-session contamination, and privacy violations
Tools | APIs, plugins, actions, code execution, file access, transactional endpoints, MCP capabilities | D | Enforce least privilege, strong validation, sandboxing, and full audit logging
Orchestration | Planners, workflow graphs, retry logic, multi-agent flows, model routing, agent-to-agent communication | A+D | Bound agent behavior, prevent cascading failures, ensure deterministic control
Identity | Credentials, service accounts, delegated authority, secrets, trust relationships, agent identity | G+D | Prevent privilege misuse, preserve accountability, trace agent actions to human authority
Safety Layer | Guardrails, policy engines, semantic filters, classifiers, circuit breakers, harm detectors | D | Block unsafe content, unfair outputs, and unauthorized actions before impact
Telemetry | Logs, traces, evaluations, replay data, alerts, governance evidence, fairness metrics | D+G | Make behavior observable, reviewable, provable, and measurable
Learning Loop | Pre-training sources, post-training alignment, feedback loops, retraining updates, RLHF data | G+A | Control data provenance, drift, alignment stability, and undocumented behavior change
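
Because the Telemetry surface underwrites forensic replay (MC-8, MC-9), it helps to see what one replayable event might look like. The sketch below simply fingerprints each event with a content digest; the field names and event shape are assumptions, not part of the ADG specification.

```python
import hashlib
import json
import time

def telemetry_event(session: str, surface: str, payload: dict) -> dict:
    """Illustrative replay record: every prompt, retrieved context, and
    tool call becomes one append-only event with a content digest, so an
    incident can be reconstructed step by step (MC-8 / MC-9)."""
    body = json.dumps(payload, sort_keys=True)
    return {
        "ts": time.time(),
        "session": session,
        "surface": surface,   # one of the nine governance surfaces
        "payload": body,
        "digest": hashlib.sha256(body.encode()).hexdigest(),  # tamper evidence
    }

trace = [
    telemetry_event("s-123", "prompt",  {"system_prompt_version": "v14"}),
    telemetry_event("s-123", "context", {"source": "sharepoint://hr-policies"}),
    telemetry_event("s-123", "tools",   {"tool": "refund.process", "amount": 40}),
]
print(len(trace), "replayable events captured")
```
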
Technical Layer · Industry Threat Catalogues

Where ADG meets MITRE and OWASP

The nine surfaces above describe what must be governed. The four industry threat catalogues describe what attackers do. ADG consumes them as input - every surface inherits its detection rules, eval scenarios, and runtime guardrails from these sources, then maps the resulting controls to NIST AI RMF and ISO/IEC 42001. See the interactive §11.2.1 flow for the threat-to-control-to-standard chain.

MITRE ATT&CK - enterprise adversary techniques
Surfaces: Identity · Tools · Telemetry
Drives MC-8 detection rules and MC-9 incident playbooks for AI systems integrated with the corporate estate.

MITRE ATLAS - adversarial ML / AI techniques
Surfaces: Model · Context · Learning Loop
Drives MC-4 pre-production red-team scenarios and MC-8 detection signatures for model-layer attacks.

OWASP Top 10 for LLM (2025) - LLM application risks
Surfaces: Prompt · Context · Safety Layer · Telemetry
Drives MC-4 evaluation suites, MC-6 Context Policy, MC-8 guardrails, and MC-11 bias checks.

OWASP Top 10 for Agentic (2026) - agent & multi-agent risks
Surfaces: Tools · Orchestration · Identity · Safety Layer
Drives the Agentic Hardening overlay (§10.4), MC-7 Tool & MCP Register, MC-8, MC-9.

6. Harm Taxonomy and Responsible AI Integration

TL;DR: Four classes of AI harm (Technical, Societal, Operational, Systemic) plus seven Responsible AI principles - so governance covers attacks, bias, reliability, and emergent multi-agent risk in one taxonomy.

Read this if you're: 👔 Board · 🛡 Security · ⚖ Compliance
ADG Alignment: Adopt · Defend · Govern

ADG recognizes that AI governance must address harms beyond security vulnerabilities. The framework adopts a four-class harm taxonomy that maps into every governance surface, lifecycle stage, and deployment overlay.

6.1 Harm Classification

The four harm classes plotted by detection difficulty and impact scope. Each quadrant carries the threats it covers, the pillars that own it, and the Minimum Controls that detect it.

Harm Classification - Four-Class Matrix

[Matrix] Axes: detection difficulty (easy → hard) × impact scope (individual → systemic).

  • Operational (easy detect · individual scope) - reliability, accuracy, business-impact failures. Threats: hallucination, model drift, grounding failure, data quality degradation, lack of explainability. Owned by ADOPT + DEFEND. Detected by MC-4, MC-8, MC-5.
  • Technical (easy detect · systemic scope) - security, integrity, system reliability failures. Threats: prompt injection, data exfiltration, model theft, jailbreaking, denial of wallet. Owned by DEFEND. Detected by MC-7, MC-8, MC-9.
  • Societal (hard detect · individual scope) - bias, discrimination, human-rights impacts. Threats: algorithmic discrimination, stereotyping, opportunity denial, manipulation, surveillance. Owned by GOVERN + DEFEND. Detected by MC-11, MC-10, MC-2.
  • Systemic (hard detect · systemic scope) - emergent risk from AI-to-AI interaction. Threats: agent collusion, value misalignment amplification, cascading failures, sycophancy loops. Owned by GOVERN + DEFEND. Detected by MC-8, MC-10, MC-9.

6.2 Responsible AI Principles

Every governance surface in ADG must account for seven responsible AI dimensions, embedded across all three pillars rather than appended as a separate track.

Responsible AI Principles - Seven Embedded Standards

  1. Fairness & Non-Discrimination - outputs must not disadvantage groups
  2. Transparency & Explainability - explainable to its risk tier
  3. Privacy & Data Protection - minimize; comply with data law
  4. Accountability - operator to executive sponsor
  5. Human Oversight - scaled to autonomy and harm potential
  6. Robustness & Safety - tested under adversarial conditions
  7. Sustainability & Well-Being - environmental & societal impact
Integration rule: Responsible AI is not a separate track. It is embedded into every ADG pillar - ADOPT, DEFEND, and GOVERN - and must be reflected in all certification curricula, operating artifacts, and governance reviews.

7. Shared Responsibility Model

When organizations deploy AI, responsibility for governance controls varies based on how the AI is provisioned. ADG defines three deployment responsibility classes and maps control ownership for each.

Shared Responsibility Model - Control Ownership by Deployment Class

[Diagram] Who owns which of the 9 governance surfaces, by deployment class. Homegrown (self-hosted full stack): the organization owns all 9 surfaces and the full lifecycle; the vendor provides infrastructure SLAs only. Foundation Model API (OpenAI, Anthropic, Google): the organization owns 7 surfaces (Prompt, Context, Tools, Orchestration, Identity, Safety, Telemetry); model governance and the Learning Loop are shared; the vendor owns the model, training, and API. SaaS / Embedded AI (Copilot, Einstein, ServiceNow): the organization owns 4 surfaces (Context, Identity, Telemetry, GOVERN policy); Safety and Prompt customization are shared; the vendor owns 5+ surfaces (Model, Orchestration, Tools, Runtime, Learning Loop). Ownership shifts toward the vendor as you move to embedded AI - but accountability for use never fully transfers. MC-12 (Shared Responsibility Documentation) requires every deployment to map control boundaries explicitly.

7.1 Deployment Responsibility Classes

Class | Description | Organization Controls | Shared | Vendor Controls
Homegrown / Self-Hosted | Organization trains, hosts, and operates the full AI stack | All 9 surfaces, full lifecycle | None (full ownership) | Infrastructure SLAs only
Foundation Model API | Consumes a foundation model via API (e.g., OpenAI, Anthropic, Google) | Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry | Model governance, Learning Loop | Model training, alignment, infrastructure, API availability
SaaS AI / Embedded AI | AI embedded in a vendor product (e.g., Copilot, Einstein, ServiceNow) | Context, Identity, Telemetry, GOVERN policy | Safety Layer, Prompt customization | Model, Orchestration, Tools, Runtime stack, Learning Loop
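
MC-12 asks every deployment to make this table explicit for itself. A minimal sketch, assuming a Foundation Model API deployment and three ownership labels, might expand per-surface overrides into a complete matrix so no surface is left with an implicit owner:

```python
# Illustrative MC-12 shared-responsibility matrix. Surface and label
# names follow the table above; the data shape is an assumption.
SURFACES = ["model", "prompt", "context", "tools", "orchestration",
            "identity", "safety_layer", "telemetry", "learning_loop"]

# Foundation Model API class: model governance and the learning loop are
# shared; training and infrastructure beneath them sit with the vendor.
FM_API_OVERRIDES = {"model": "shared", "learning_loop": "shared"}

def ownership_matrix(overrides: dict[str, str]) -> dict[str, str]:
    """Expand overrides into a complete matrix - every surface gets an
    explicit owner, which is the whole point of MC-12."""
    return {s: overrides.get(s, "organization") for s in SURFACES}

matrix = ownership_matrix(FM_API_OVERRIDES)
assert all(v in {"organization", "shared", "vendor"} for v in matrix.values())
print(matrix["tools"])  # organization - consumer-side control even on an API model
```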

7.2 Vendor AI Due Diligence Requirements

For Foundation Model API and SaaS AI deployments, organizations must:

  1. Require vendor disclosure of model provenance, training data policies, alignment methods, and known limitations
  2. Establish contractual AI governance clauses covering incident notification, data handling, model change management, and liability allocation
  3. Conduct independent evaluation of vendor-provided safety controls and not rely solely on vendor claims
  4. Maintain consumer-side telemetry and monitoring regardless of vendor monitoring capabilities
  5. Include AI governance questions in procurement processes and vendor risk assessments
  6. Define rollback and exit strategies for vendor AI dependencies

8. People, Process, Technology, and Data


8.1 People Layer

ADG requires explicit capability ownership in addition to functional ownership. The people model includes dedicated responsible AI expertise across all pillars.

  • ADOPT roles: AI product owner, accountable AI service owner, LLMOps lead, orchestration engineer, prompt designer, platform engineer, enterprise architect, DevSecOps engineer
  • DEFEND roles: AI red team lead, guardrail engineer, detection engineer, AI security architect, incident commander, forensic analyst, responsible AI scientist, bias/fairness auditor
  • GOVERN roles: model risk officer, privacy counsel, compliance lead, data steward, AI ethics and sociotechnical oversight lead, executive approver, procurement governance lead

AI Governance Council: Every organization deploying AI systems should establish a cross-functional AI Governance Council (or equivalent steering body) with representation from ADOPT, DEFEND, and GOVERN. This council serves as the escalation path and tension-resolution mechanism described in Section 4.4.

8.2 Process Layer

ADG formalizes the minimum process backbone required for enterprise implementation:

Process Backbone - Nine Steps Across the Three Pillars

  1. AI system inventory and criticality classification (ADOPT)
  2. Deployment gate and approval workflow, including pre-deployment fairness and safety evaluation (DEFEND)
  3. Prompt, model, and tool change management (ADOPT)
  4. Continuous evaluation, drift review, and fairness monitoring (ADOPT)
  5. Human oversight model selection: HITL, HOTL, or HOOTL with documented rationale (GOVERN)
  6. AI-specific incident response and evidence retention (DEFEND)
  7. Post-incident remediation with retest and governance review (DEFEND)
  8. Vendor AI due diligence and ongoing assurance (ADOPT)
  9. Board-level reporting cycle for high-risk AI systems (GOVERN)

The sequence flows left to right across the three pillar lanes; a lane crossing marks the primary pillar handoff.

8.3 Technology Layer

ADG treats the following as governed technology layers:

  1. Model gateway and routing layer
  2. Retrieval, memory, and context management layer
  3. Tool registry and MCP trust layer
  4. Policy engine, guardrail layer, and harm detection layer
  5. Runtime telemetry, replay, and fairness measurement layer
  6. Training, tuning, and evaluation pipeline layer
  7. Input validation and sanitization layer (covering prompts, RAG inputs, and multi-modal inputs)

8.4 Data Layer

Traditional enterprise data architecture separates data into distinct tiers - structured databases, data warehouses, data lakes, unstructured file stores, and APIs - each with its own governance model, access controls, and tooling. AI systems fundamentally disrupt this separation. When an AI agent consumes enterprise data through retrieval pipelines, MCP connections, or tool invocations, it does not distinguish between a SQL database record, a SharePoint document, a Slack thread, or a PDF. All data is converted into text tokens within a context window, collapsing traditional tier boundaries into a single consumption surface.

This convergence means that AI systems can reassemble sensitive information from fragments scattered across sources that were never intended to be combined, infer personal data from context that contains no explicit PII, and traverse entire enterprise file systems where content is readable as text. The governance implication is profound: access control at the storage layer alone is no longer sufficient. Organizations must govern the full pipeline from source data through retrieval, embedding, context assembly, and AI consumption.
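
As one illustration of pipeline-level enforcement, the sketch below carries each source's classification label through retrieval into context assembly and drops chunks the caller is not cleared to see. The labels, clearance levels, and sources are assumptions, not ADG requirements.

```python
# Hedged sketch: classification labels travel with every chunk from
# source to context window, so enforcement happens at assembly time,
# not only at the storage layer.
CLEARANCE = {"public": 0, "internal": 1, "confidential": 2}

def assemble_context(chunks: list[dict], caller_clearance: str) -> list[str]:
    """Admit only chunks at or below the caller's clearance level."""
    limit = CLEARANCE[caller_clearance]
    return [c["text"] for c in chunks if CLEARANCE[c["label"]] <= limit]

chunks = [
    {"text": "Refund policy: 30 days.", "label": "public",
     "source": "wiki"},
    {"text": "Q3 margin by region ...", "label": "confidential",
     "source": "finance-db"},
]
print(assemble_context(chunks, caller_clearance="internal"))
# ['Refund policy: 30 days.'] - the confidential chunk never reaches the window
```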

ADG requires organizations to govern data across four dimensions:

8.4.1 Data People

  • Data Stewards for AI: extending traditional data stewardship to govern the full text-based data surface that AI systems can access
  • AI Data Engineers: bridging data engineering and AI operations, responsible for RAG indexing, embedding generation, knowledge base curation
  • Context Architects: designing what data flows into AI context windows, in what priority order, with what trust ranking
  • Knowledge Base Curators: responsible for freshness, accuracy, deduplication, and retirement of enterprise knowledge assets

8.4.2 Data Process

  • AI-aware data classification: extending beyond storage-tier access control to include AI-readability rules
  • Provenance tracking across the text pipeline: maintaining a verifiable chain from any AI output back through the retrieval step to the source document (see the sketch after this list)
  • Cross-source inference governance: policies governing when AI systems may combine information from multiple data sources
  • Text-based policy management: AI system configurations governed as controlled documents with versioning, approval workflows, and rollback
  • Knowledge base lifecycle management: content ingestion, quality validation, freshness review, conflict resolution, deduplication, and retirement
  • Data minimization for AI: ensuring context windows contain only data necessary for the task
  • Consent and lawful basis tracking: maintaining records of lawful basis for processing each data category
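
A minimal sketch of that provenance chain, assuming four pipeline stages and illustrative IDs, could link each artefact to its parent so any output can be walked back to its source document:

```python
from dataclasses import dataclass

@dataclass
class ProvenanceLink:
    """One hop in the text pipeline: source -> chunk -> context -> output."""
    stage: str              # "source" | "retrieval" | "context_assembly" | "output"
    artefact_id: str
    parent_id: str | None   # previous hop; None for the original source

def chain_to_source(links: list[ProvenanceLink], artefact_id: str) -> list[str]:
    """Walk an AI output back to its source document (illustrative only)."""
    by_id = {link.artefact_id: link for link in links}
    path, current = [], by_id.get(artefact_id)
    while current:
        path.append(f"{current.stage}:{current.artefact_id}")
        current = by_id.get(current.parent_id) if current.parent_id else None
    return path

pipeline = [
    ProvenanceLink("source", "doc-77", None),              # SharePoint policy PDF
    ProvenanceLink("retrieval", "chunk-77-3", "doc-77"),   # embedded chunk
    ProvenanceLink("context_assembly", "ctx-9", "chunk-77-3"),
    ProvenanceLink("output", "answer-5521", "ctx-9"),
]
print(chain_to_source(pipeline, "answer-5521"))
# ['output:answer-5521', 'context_assembly:ctx-9',
#  'retrieval:chunk-77-3', 'source:doc-77']
```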

8.4.3 Data Technology

  • Vector stores and embedding infrastructure: governed data infrastructure requiring access controls, encryption, backup and recovery
  • Enterprise knowledge graphs: structured representations enabling context assembly with awareness of entity relationships
  • Context assembly engines: systems that select, rank, truncate, filter, and compose text from multiple sources
  • MCP and tool registries as data access layers: governed as data access infrastructure with the same rigor as database connections
  • Text-based configuration stores: GitOps-style repositories for all AI system configurations
  • Data lineage and output attribution: technology to trace which source documents contributed to a specific AI output
  • Embedding pipeline governance: validation testing, drift monitoring, and re-indexing governance

8.4.4 Data Governance Integration with ADG Surfaces

Surface | Data Layer Intersection
Model | Training data provenance, fine-tuning data governance, model card data documentation
Prompt | System prompt versioning and change control as governed text artifacts
Context | Retrieval source classification, context assembly governance, cross-source inference controls
Tools | MCP servers as data access gateways; tool-retrieved data classified and logged
Orchestration | Data flow governance across multi-step agent workflows; inter-agent data sharing rules
Identity | Data access tied to agent identity and delegated authority; no implicit data access
Safety Layer | Guardrail configurations as governed text; data-driven harm detection models governed as data assets
Telemetry | Logs and traces as sensitive data requiring retention, redaction, and access governance
Learning Loop | Feedback data, RLHF inputs, and retraining datasets governed as controlled data assets with provenance

9. Lifecycle Governance


ADG applies controls across six lifecycle stages. The lifecycle is explicitly circular: lessons from Retire/Learn feed back into Pre-training/Sourcing decisions.

Lifecycle Governance - Six-Stage Circular Flow

[Diagram] 1 Pre-training / Sourcing → 2 Post-training / Alignment → 3 Build / Integrate → 4 Deploy / Authorize → 5 Run / Monitor → 6 Retire / Learn → ↻ lessons feed back into sourcing - a continuous governance loop. Each stage carries ADOPT, DEFEND, and GOVERN duties, detailed in the table below.
Stage | ADOPT | DEFEND | GOVERN
Pre-training | Select suppliers & datasets fit for purpose; assess training data for representation and bias | Assess provenance abuse, contamination risk, and training data bias | Approve sourcing constraints, licensing, jurisdictional requirements, and data ethics
Post-training | Tune for use-case quality and operational fit | Test for regressions, bypasses, safety degradation, and fairness drift | Review alignment objectives, documentation sufficiency, and RAI criteria
Build | Assemble workflows, prompts, tools, retrieval; integrate DevSecOps controls | Validate interfaces, secrets, attack surfaces, and input validation coverage | Classify use case, approve controls, define oversight requirements and risk tier
Deploy | Release through controlled change process with rollback readiness | Confirm pre-production testing, monitoring readiness, and fairness evaluation | Grant formal deployment approval or exception with documented conditions
Run | Operate service, maintain SLAs, track quality and cost | Detect abuse, failures, drift, unsafe actions, and bias emergence; continuous red teaming | Review incidents, exceptions, compliance posture, and configuration drift
Retire | Decommission services and roll forward lessons | Preserve evidence, investigate failures, validate closure | Update policy, records, accountability decisions; feed lessons into sourcing cycle

9.1 Post-Deployment Continuous Governance

The Run/Monitor stage requires specific continuous governance controls that go beyond traditional operational monitoring:

  1. Configuration drift detection: verify that the system in production matches what was approved (models, prompts, tools, context sources) - a minimal sketch follows this list
  2. Usage authorization monitoring: confirm that approved users are using the system for approved purposes within approved boundaries
  3. Feature and capability change governance: new model versions, prompt updates, tool additions, and retrieval source changes post-deployment must go through change control
  4. Continuous automated evaluation: scheduled adversarial testing, fairness benchmarking, and accuracy regression testing on production systems
  5. Ongoing governance of product roadmap: feature additions to deployed AI systems require re-evaluation against the original risk classification and approval conditions
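
As an illustration of control 1, the sketch below compares a running system's governed configuration against its approval record; the governed fields, baseline format, and function names are assumptions, not ADG requirements.

```python
# Minimal drift-detection sketch. Which fields count as "governed" is an assumption here.
import hashlib
import json

GOVERNED_FIELDS = ("model_version", "system_prompt_hash", "tool_ids", "context_sources")

def config_fingerprint(config: dict) -> str:
    """Deterministic hash of the governed configuration surface; usable as MC-5 evidence."""
    canonical = json.dumps({k: config.get(k) for k in GOVERNED_FIELDS}, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def detect_drift(running: dict, approved: dict) -> list:
    """Return the governed elements whose running state differs from the approval record."""
    return [k for k in GOVERNED_FIELDS if running.get(k) != approved.get(k)]

approved = {"model_version": "m-2026.01", "system_prompt_hash": "ab12cd",
            "tool_ids": ["crm-read"], "context_sources": ["kb-prod"]}
running = dict(approved, tool_ids=["crm-read", "crm-write"])  # an uncontrolled tool addition

drifted = detect_drift(running, approved)
if drifted:
    print(f"ALERT: configuration drift on {drifted}; raise an MC-5 change-control exception")
```

Running this comparison on a schedule, rather than trusting the deployment pipeline alone, is what turns drift detection into a continuous control.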

10. Deployment Pattern Overlays

Overlay families:

  • Operational (LLMOps, Pre/Post-training)
  • Security (Agent, Agentic Orchestration)
  • Hardening (Agentic Hardening, Multi-Agent Interop)
  • Integration (Tools/MCP, Multi-Modal)
  • Context (Long-Window)

Deployment Pattern Applicability - Which Overlays Apply When

Each of the nine overlays is rated Required, Recommended, Optional, or Not applicable against the deployment class (Homegrown, FM API, SaaS) and the autonomy tier (HITL, HOTL, HOOTL). The LLMOps and Context & Long-Window overlays apply across every deployment class and autonomy tier; the agentic, multi-agent, and multi-modal overlays generally escalate toward Required as autonomy increases.

ADG adds deployment-specific overlays so that governance matches the actual architecture being deployed. Overlays are additive - select all that apply.

10.1 LLMOps Overlay

Scope: versioning, evaluation, release management, rollback, cost control, and performance monitoring for LLM-based services.

  • ADOPT structures model onboarding, baseline evaluation, config versioning, rollback readiness, and operational SLOs
  • DEFEND structures adversarial testing, abuse simulation, leakage testing, denial-of-wallet controls, monitoring coverage, and bias evaluation
  • GOVERN structures use-case approval, vendor due diligence, deployment thresholds, exception management, and LLMOps cost governance

10.2 Agentic Orchestration Overlay

Scope: planners, loops, retries, multi-step reasoning, subtask decomposition, and multi-agent coordination.

  • ADOPT defines mission scope, stop conditions, retry budgets, and workflow boundaries
  • DEFEND validates loop abuse resistance, prompt chaining resistance, recursion limits, and kill-switch behavior
  • GOVERN approves autonomy tier, escalation path, and legal accountability for delegated decisions

10.3 Agent Overlay

Scope: AI systems that act on behalf of a user, team, or enterprise process rather than merely generating content.

  • ADOPT defines business mission, action boundaries, and acceptable failure modes
  • DEFEND tests transaction safety, impersonation resistance, and unsafe action prevention
  • GOVERN defines liability model, mandatory approvals, record retention, and segregation-of-duty requirements

10.4 Agentic Hardening Overlay

Required control themes:

  1. Delegated authority bounds
  2. Action-layer isolation and dry-run modes (sketched after this list)
  3. Semantic firewalls before tool use and before response release
  4. Memory hygiene and state reset controls
  5. Human override and emergency stop pathways
  6. Forensic replay of prompts, retrieved context, policy decisions, and tool calls
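
A minimal sketch of themes 1 and 2 - delegated authority bounds enforced through an isolated action layer that defaults to dry-run - follows. The class names, limits, and parameters are illustrative assumptions, not ADG specifications.

```python
# Illustrative gate between an agent's plan and any real side effect.
from dataclasses import dataclass

@dataclass
class Action:
    tool: str
    params: dict

def invoke_tool(action: Action) -> str:
    raise NotImplementedError("wire to the real, isolated tool layer")

class ActionGate:
    def __init__(self, allowed_tools: set, spend_limit: float, dry_run: bool = True):
        self.allowed_tools = allowed_tools
        self.spend_limit = spend_limit
        self.dry_run = dry_run                             # theme 2: default to no side effects

    def execute(self, action: Action) -> str:
        if action.tool not in self.allowed_tools:          # theme 1: delegated authority bounds
            raise PermissionError(f"{action.tool} exceeds delegated authority")
        if action.params.get("amount", 0) > self.spend_limit:
            raise PermissionError("over spend limit; route to human override (theme 5)")
        if self.dry_run:
            return f"DRY-RUN: would invoke {action.tool} with {action.params}"
        return invoke_tool(action)

gate = ActionGate(allowed_tools={"create_invoice"}, spend_limit=500.0)
print(gate.execute(Action("create_invoice", {"amount": 120})))
```

Defaulting to dry-run means a misconfigured deployment fails safe: nothing executes until the gate is explicitly switched live.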

10.5 Tools and MCP Overlay

Scope: tool discovery, capability registration, trust mediation, and policy-controlled invocation of tools and context providers.

  • ADOPT justifies why each tool or MCP capability is needed and what business task boundary it serves
  • DEFEND enforces authentication, authorization, parameter validation, rate limiting, sandboxing, and audit logging
  • GOVERN maintains approval policy, third-party assurance criteria, trust tiers, and data-sharing restrictions

Minimum MCP requirements:

  1. Approved server inventory
  2. Capability trust tiering
  3. Per-invocation policy evaluation (sketched after this list)
  4. Auditable request and response traces
  5. Explicit rejection of implicit trust in tool-supplied instructions
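
The sketch below shows how the five requirements might compose in practice: a register-backed, default-deny policy check evaluated on every invocation, with an audit trace. The register schema and decision logic are assumptions for illustration.

```python
AUDIT_LOG = []   # requirement 4: auditable request and response traces

TOOL_REGISTER = {   # requirements 1 and 2: approved inventory with trust tiers
    "jira.create_ticket": {"trust_tier": "internal", "approved": True},
    "web.fetch":          {"trust_tier": "untrusted", "approved": True},
}

def evaluate_invocation(tool_id: str, caller: str, params: dict) -> str:
    """Requirement 3: every call is evaluated against policy; the default is deny."""
    entry = TOOL_REGISTER.get(tool_id)
    if entry is None or not entry["approved"]:
        decision = "deny"
    elif entry["trust_tier"] == "untrusted" and params.get("carries_instructions"):
        decision = "deny"   # requirement 5: no implicit trust in tool-supplied instructions
    else:
        decision = "allow"
    AUDIT_LOG.append({"tool": tool_id, "caller": caller, "params": params, "decision": decision})
    return decision

print(evaluate_invocation("web.fetch", "agent-7", {"url": "https://example.com"}))   # allow
print(evaluate_invocation("shell.exec", "agent-7", {"cmd": "ls"}))                   # deny
```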

10.6 Context and Long-Window Overlay

Scope: session history, retrieved enterprise knowledge, hidden orchestration instructions, persistent memory, and context overflow handling.

  • ADOPT defines the minimum context required for task quality
  • DEFEND tests for context poisoning, leakage, overflow abuse, cross-session contamination, and multi-turn attack patterns
  • GOVERN defines provenance rules, retention, data-class handling, trust ranking, and lawful-use constraints

Minimum context policy: every production system must define approved sources, retention periods, redaction rules, persistence rules, and trust ordering across context inputs.
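
One way to make that policy machine-readable is sketched below. ADG fixes the required elements; the schema, field names, and example values here are illustrative assumptions.

```python
CONTEXT_POLICY = {
    "system_id": "claims-copilot",
    "approved_sources": ["kb-claims-prod", "policy-docs"],   # anything else is rejected
    "trust_ordering": ["system_prompt", "kb-claims-prod", "policy-docs", "user_input"],
    "retention_days": {"session_history": 30, "retrieved_chunks": 0},
    "redaction_rules": ["pii.email", "pii.national_id"],
    "persistence": {"cross_session_memory": False},
}

def admit_context(source_id: str, policy: dict = CONTEXT_POLICY) -> bool:
    """Gate every context input against the approved-source list."""
    return source_id in policy["approved_sources"]

assert admit_context("kb-claims-prod")
assert not admit_context("untracked-web-scrape")
```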

10.7 Pre-training and Post-training Overlay

Scope: model provenance, fine-tuning, adapters, preference alignment, safety tuning, feedback loops, and retraining.

  • ADOPT structures fine-tuning objectives, experiment tracking, rollback, and operational performance baselines
  • DEFEND structures regression testing, jailbreak retesting, alignment failure analysis, fairness drift monitoring, and bias evaluation
  • GOVERN structures provenance review, licensing, cross-border constraints, GDPR compliance for training data, documentation, and approval of learning-loop inputs

10.8 Multi-Agent and Agent Interoperability Overlay

Scope: systems where multiple AI agents interact, coordinate, negotiate, or delegate tasks to each other.

Multi-agent systems create emergent governance challenges that single-agent controls do not address:

  • Agent-to-agent trust boundaries: each agent interaction must enforce explicit trust verification; no agent should implicitly trust another agent's outputs or instructions
  • Value alignment verification: agents with different guardrail configurations, different training, or different vendors may have incompatible safety policies. Cross-agent interactions must be tested for value alignment conflicts
  • Collusion and deception detection: monitoring for patterns where agents coordinate to bypass controls, produce misleading outputs, or exploit gaps between their respective guardrails
  • Cascading failure and amplification risk: a failure or bias in one agent can be amplified through a chain of dependent agents. Circuit breakers must exist at agent-to-agent boundaries (a minimal sketch follows this list)
  • Cross-agent audit trails: every agent-to-agent interaction must be logged with sufficient detail for forensic replay
  • Sycophancy degradation prevention: in agent-agent interactions, sycophantic behavior can devolve into harmful feedback loops. Detection and interruption mechanisms are required
  • Interoperability governance: when agents from different organizations or vendors interact, a shared governance protocol must define minimum safety, logging, and accountability requirements
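
As an illustration of the circuit-breaker point above, here is a minimal sketch of a breaker guarding one agent-to-agent boundary; the thresholds, cooldown, and names are assumptions, not ADG specifications.

```python
import time

class AgentBoundaryBreaker:
    """Trips when a downstream agent keeps failing, stopping cascade amplification."""
    def __init__(self, failure_threshold: int = 3, cooldown_s: float = 60.0):
        self.failure_threshold = failure_threshold
        self.cooldown_s = cooldown_s
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.cooldown_s:
            self.opened_at, self.failures = None, 0   # half-open: permit one retry
            return True
        return False                                  # open: delegation blocked

    def record(self, ok: bool) -> None:
        self.failures = 0 if ok else self.failures + 1
        if self.failures >= self.failure_threshold:
            self.opened_at = time.monotonic()         # trip the breaker

breaker = AgentBoundaryBreaker()
for outcome in (False, False, False):    # three consecutive downstream failures
    breaker.record(outcome)
assert not breaker.allow()               # further delegation to this agent is blocked
```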

10.9 Multi-Modal and Composite AI Systems Overlay

Scope: systems combining multiple model architectures (LLM + diffusion + classifier + retrieval) within a single product or workflow.

  • ADOPT defines the system-level architecture, ensuring each model component has a clear purpose and integration specification
  • DEFEND tests for cross-model interaction risks: outputs from one model becoming adversarial inputs to another, and amplification of individual model weaknesses through the chain
  • GOVERN requires system-level risk assessment, not just per-model evaluation, and defines composite system approval criteria

11. Controls, Artifacts, and Measurement

11.1 Minimum Control Set

The following controls form the minimum baseline for ADG implementation. Each control includes an evidence requirement to ensure measurability.

  • MC-1 AI System Inventory: maintain an inventory of all AI systems with an accountable owner, risk classification, and autonomy tier. Evidence: published inventory, reviewed quarterly, with a named owner per system.
  • MC-2 Risk Classification: classify each AI system by data sensitivity, autonomy, external exposure, harm potential, and business criticality. Evidence: documented classification per system using a standardized risk taxonomy.
  • MC-3 Separation of Duties: separate deployment ownership, security validation, and approval authority across ADOPT, DEFEND, GOVERN. Evidence: RACI matrix per AI system; no single function holding all three roles.
  • MC-4 Pre-Production Evaluation: complete quality, safety, security, fairness, and failure-mode testing before any production deployment. Evidence: signed evaluation report covering all four harm classes before go-live.
  • MC-5 Change Control: manage changes to prompts, tools, models, and retrieval sources through a governed change process. Evidence: change log with approval records; no uncontrolled production changes.
  • MC-6 Context Policy: define provenance, retention, access restrictions, and trust ordering for all context inputs. Evidence: published context policy per system; annual review.
  • MC-7 Tool and MCP Register: maintain a register of all tools and MCP capabilities with trust tiering and invocation controls. Evidence: published register with per-tool risk assessment and approval status.
  • MC-8 Runtime Monitoring: monitor for abuse, drift, data leakage, unsafe actions, bias emergence, and configuration drift in production. Evidence: active monitoring with defined alert thresholds and response SLAs.
  • MC-9 AI Incident Response: maintain AI-specific incident response procedures with replayable evidence capture. Evidence: documented playbook; at least one tabletop exercise per year.
  • MC-10 Periodic Governance Review: conduct governance reviews with exception handling and board reporting for high-risk systems. Evidence: review records with findings, decisions, and exception dispositions.
  • MC-11 Fairness and Bias Evaluation: evaluate AI systems for discriminatory outcomes using representative test data and established fairness metrics. Evidence: fairness evaluation report; re-evaluation after model or data changes.
  • MC-12 Shared Responsibility Documentation: for vendor or SaaS AI, document accountability boundaries, contractual obligations, and assurance requirements. Evidence: signed responsibility matrix; vendor due diligence records.
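
As one concrete reading of MC-1's evidence requirement, a minimal sketch of an inventory record follows; the schema and field names are illustrative assumptions, not part of the control text.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    system_id: str
    name: str
    accountable_owner: str     # a named individual, not a team alias
    risk_classification: str   # per the MC-2 taxonomy
    autonomy_tier: str         # HITL | HOTL | HOOTL
    last_reviewed: str         # supports the quarterly-review evidence requirement

INVENTORY = [
    AISystemRecord("sys-001", "claims-copilot", "j.doe", "high", "HITL", "2026-03-31"),
    AISystemRecord("sys-002", "support-agent", "a.roe", "limited", "HOTL", "2026-02-12"),
]
```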

11.2 Control Crosswalk - NIST AI RMF & ISO/IEC 42001

Each Minimum Control maps to one or more NIST AI RMF functions (GOVERN, MAP, MEASURE, MANAGE) and ISO/IEC 42001 Annex A groups.

11.2.1 Threat → Control → Standard Flow

The crosswalk above maps ADG controls to NIST and ISO. The threat flow adds the technical layer on top: every threat catalogued by MITRE ATT&CK, MITRE ATLAS, OWASP Top 10 for LLM (2025), and OWASP Top 10 for Agentic (2026) flows into one or more ADG Minimum Controls - and from there to the NIST AI RMF function and ISO/IEC 42001 Annex A clause that recognises the same control objective.

11.3 Required Operating Artifacts

  • AI System Profile (ADOPT): documents system architecture, risk classification, autonomy tier, harm classes, and deployment pattern
  • ADG RACI Matrix (GOVERN): maps control ownership across ADOPT, DEFEND, GOVERN for each AI system
  • Agent Authority Statement (GOVERN+ADOPT): defines what an agent may access, decide, execute, and escalate
  • Context Policy (GOVERN): specifies approved sources, retention, redaction, trust ordering, and privacy controls
  • Tool and MCP Register (DEFEND): catalogues tools, trust tiers, invocation policies, and third-party assurance status
  • AI Release Gate Checklist (ADOPT+DEFEND): pre-deployment verification covering security, fairness, safety, and governance approval
  • AI Incident Evidence Pack (DEFEND): forensic capture package for AI-specific incidents
  • Model and Prompt Change Log (ADOPT): tracks all changes to models, prompts, and system configurations with approval records
  • Governance Exception Register (GOVERN): records all governance exceptions with justification, risk acceptance, and expiration
  • Vendor AI Due Diligence Record (GOVERN): documents vendor AI assessments, contractual AI clauses, and shared responsibility boundaries
  • Fairness Evaluation Report (DEFEND+GOVERN): records fairness and bias testing methodology, results, and remediation actions

11.4 Measurement and Evidence Framework

ADG requires measurable governance. Three measurement tiers ensure controls are not just documented but demonstrably effective.

Tier 1 - Control Effectiveness Metrics (per-control operational KPIs):

  • AI systems with completed risk classification (%)
  • AI systems with documented RACI matrix (%)
  • Mean time from model change to governance approval (time)
  • Tool/MCP capabilities with current trust assessment (%)
  • Production AI systems with active runtime monitoring (%)
  • Uncontrolled production changes detected per quarter (count)
  • High-risk AI systems with completed fairness evaluation (%)

Tier 2 - Outcome Metrics (cross-surface governance effectiveness):

  • Mean time to detect AI-specific incidents (MTTD)
  • Mean time to contain AI-specific incidents (MTTC)
  • AI governance exceptions open beyond expiration date (count)
  • AI systems operating within approved configuration, i.e. no drift (%)
  • AI-related compliance findings from internal/external audit (count)
  • Fairness metric trends across production systems (quarter over quarter)

Tier 3 - Board-Level Indicators (executive reporting, aggregated):

  • AI risk posture score (composite across all surfaces)
  • AI compliance coverage: systems with current governance review (%)
  • Exception backlog trend (open, aging, risk-weighted)
  • AI incident trend (frequency, severity, resolution time)
  • Vendor AI risk exposure: third-party model dependencies (count)
  • Responsible AI compliance rate: fairness, transparency, accountability (%)
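
To show how Tier 1 percentages can be produced as evidence rather than asserted, here is a minimal sketch computing two of them from an MC-1-style inventory; the field names follow the illustrative schema in §11.1 and are assumptions, not ADG requirements.

```python
def coverage_pct(inventory: list, field_name: str) -> float:
    """Percentage of AI systems whose governed field is populated."""
    if not inventory:
        return 0.0
    populated = sum(1 for system in inventory if system.get(field_name))
    return 100.0 * populated / len(inventory)

INVENTORY = [
    {"system_id": "sys-001", "risk_classification": "high", "raci_documented": True},
    {"system_id": "sys-002", "risk_classification": None,   "raci_documented": True},
]

print(f"risk classification coverage: {coverage_pct(INVENTORY, 'risk_classification'):.0f}%")  # 50%
print(f"RACI documentation coverage: {coverage_pct(INVENTORY, 'raci_documented'):.0f}%")       # 100%
```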

12. Regulatory Alignment and Roadmap

12.1 Regulatory and Standards Alignment

ADG is designed to be compatible with major AI governance regulations and standards. The following matrix maps ADG components to key external frameworks.

Note: This mapping is indicative, not exhaustive. Organizations must conduct their own regulatory compliance assessment. ADG provides a governance backbone that facilitates compliance but does not guarantee it.

For the full 12-control Minimum Control Set mapped to NIST AI RMF and ISO/IEC 42001, see §11.2 Control Crosswalk. The table below covers EU AI Act alignment and cross-cutting ADG components.

  • Risk Classification (MC-2): EU AI Act Art. 6-7 (risk categorization); NIST AI RMF Map (risk identification and analysis); ISO/IEC 42001 6.1.2 (AI risk assessment)
  • Pre-Production Evaluation (MC-4): EU AI Act Art. 9 (risk management system); NIST AI RMF Measure (AI risk measurement); ISO/IEC 42001 8.1 (operational planning and control)
  • Runtime Monitoring (MC-8): EU AI Act Art. 72 (post-market monitoring); NIST AI RMF Manage (continuous monitoring); ISO/IEC 42001 9.1 (monitoring, measurement, analysis and evaluation)
  • Fairness Evaluation (MC-11): EU AI Act Art. 10 (data governance and bias prevention); NIST AI RMF Map (bias identification); ISO/IEC 42001 A.7 (data for AI systems) + A.5 (impact assessment)
  • Incident Response (MC-9): EU AI Act Art. 73 (serious incident reporting); NIST AI RMF Manage (incident response); ISO/IEC 42001 10.2 (nonconformity and corrective action)
  • AI System Inventory (MC-1): EU AI Act Art. 49 + Art. 71 (registration in EU database); NIST AI RMF Govern (inventory and categorization); ISO/IEC 42001 7.5 (documented information)
  • Human Oversight (autonomy tiers): EU AI Act Art. 14 (human oversight requirements); NIST AI RMF Govern (human-AI teaming); ISO/IEC 42001 A.9 (use of AI systems) + A.5 (impact assessment)
  • Transparency (Telemetry): EU AI Act Art. 13 (transparency requirements); NIST AI RMF Govern (transparency and documentation); ISO/IEC 42001 A.8 (information for interested parties)

12.2 Implementation Roadmap

Implementation Roadmap - Four-Phase Rollout

Phase 1 - Foundation (Months 0-3; maturity level: Foundational)
  • AI system inventory
  • Assign owners
  • Define ADG roles
  • Classify by criticality
  • Stand up the Governance Council
  • Baseline gap analysis
  Controls: MC-1, MC-2, MC-3

Phase 2 - Control Deployment (Months 3-9; maturity level: Operational)
  • Release gates
  • Change control
  • Context policies
  • Adversarial testing
  • Fairness evals
  • Vendor due diligence
  Controls: MC-4 through MC-9, MC-11, MC-12

Phase 3 - Agentic Readiness (Months 9-15; maturity level: Advanced)
  • Authority statements
  • Tool trust tiering
  • MCP governance
  • Circuit breakers
  • Multi-agent governance
  • Forensic replay
  Controls: MC-5, MC-7, MC-8, MC-9

Phase 4 - Maturity (Months 15-24; maturity level: Mature)
  • Continuous evaluation
  • Adversarial monitoring
  • Drift governance
  • RAI measurement
  • Board reporting
  • Formal assurance
  Controls: MC-8, MC-10, MC-11

13. Workforce Capability

Closing the AI Skills Gap

13.1 The AI Skills Gap

The defining constraint on enterprise AI is not the technology - it is the workforce. AI is scaling into production faster than the people who must run, secure, and govern it can be trained, credentialed, and deployed. IDC projects that by 2026 more than 90% of organizations will feel the IT skills crisis, at a cost of $5.5 trillion in delays, quality issues, and lost competitiveness (IDC, Enterprise Resilience: IT Skilling Strategies, 2024). Without workforce capability, every control in this framework remains theoretical.

"AI is moving from experimentation to infrastructure, and the workforce has to move with it. Security leaders are now accountable for systems that learn, adapt, and influence outcomes at speed."
- Jay Bavisi, Chairman & CEO, EC-Council Group

The ADG framework defines three pillars - ADOPT, DEFEND, and GOVERN - and each surfaces a distinct capability gap showing up across enterprises today:

  • The Adopt gap: program and transformation leaders cannot translate AI strategy into measurable, governed delivery - stalling pilots before they reach production.
  • The Defend gap: security organisations lack adversarial AI expertise - prompt injection, model evasion, data poisoning, and multi-agent exploitation are not yet standard tradecraft.
  • The Govern gap: risk, compliance, and board leaders lack the vocabulary, evidence protocols, and regulatory mapping (EU AI Act, NIST AI RMF, ISO/IEC 42001) to hold AI systems to account.

13.2 The Capability Bridge

EC-Council's answer is the Enterprise AI Credential Suite - a portfolio of role-aligned credentials mapped directly to the ADG framework. Launched February 2026 as the largest single portfolio expansion in EC-Council's 25-year history, the suite is structured as a four-part capability ladder:

  1. Literacy baseline - a shared AI vocabulary for every practitioner, technical or not, so governance conversations can happen at enterprise speed. Anchored by the Artificial Intelligence Essentials (AIE) programme.
  2. Role-aligned capability - three flagship credentials mapped 1:1 to the ADG pillars: C|AIPM for Adopt, C|OASP for Defend, C|RAGE for Govern. Each credential certifies execution-grade competence for the people accountable for that pillar.
  3. Extended core skills - CEH v13 restructured to integrate AI modules across all five phases of ethical hacking, extending existing security practitioners into the AI attack surface without a role change.
  4. Operational readiness - each credential crosswalks to the framework's Minimum Control Set (MC-1 through MC-12), turning certification into deployable evidence that auditors, regulators, and boards can accept.
EC-Council's Capability Bridge - Enterprise AI Credential Suite, role-aligned uplift across the ADG framework: the workforce-readiness gap (can't ADOPT safely, can't DEFEND against attack, can't GOVERN responsibly; $5.5T projected skills-gap loss) is bridged to enterprise AI capability by a deck of credentials - AIE (literacy), C|AIPM (Adopt), C|OASP (Defend), C|RAGE (Govern), CEH v13 (extended) - resting on the ADG framework foundation (Adopt - Defend - Govern).
How to read it. The cliffs represent where enterprises are and where they need to be. The deck is the ADG-aligned credential ladder. The foundation slab is the ADG framework itself - the bedrock that gives every credential shared meaning, shared evidence protocols, and a shared control set.

13.3 Credential Portfolio

The five credentials - AIE, C|AIPM, C|OASP, C|RAGE, and CEH v13 - map to the four-part capability ladder in Section 13.2. Syllabus, prerequisites, and enrolment details are available on each programme's official EC-Council page.

The full EC-Council AI credential catalogue is maintained at eccouncil.org/ai-courses.

Appendix A: Definitions

  • AI System: a machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions (aligned with OECD/EU AI Act).
  • AI Agent: an AI system that can autonomously plan, execute multi-step tasks, invoke tools, and take actions in physical or digital environments on behalf of a user or organization.
  • Foundation Model: a general-purpose AI model trained on broad data that can be adapted to a wide range of downstream tasks (e.g., GPT, Claude, Gemini, Llama, Stable Diffusion).
  • Model: any machine learning artifact - including LLMs, diffusion models, classifiers, regressors, and reinforcement learning agents - used within an AI system.
  • High-Risk AI System: an AI system whose failure or misuse could cause significant harm to health, safety, fundamental rights, or critical infrastructure (aligned with EU AI Act Article 6).
  • Composite AI System: an AI system that orchestrates multiple models (potentially of different architectures) within a single workflow or product.
  • Agentic AI: AI systems exhibiting autonomous behavior: planning, tool use, multi-step execution, and environmental interaction with limited or delayed human review.
  • HITL: Human-in-the-Loop - a human reviews and approves every AI output before it takes effect.
  • HOTL: Human-on-the-Loop - a human monitors AI operations and can intervene, but does not approve each individual output.
  • HOOTL: Human-out-of-the-Loop - AI operates autonomously with periodic governance review rather than real-time human oversight.

Executive Conclusion

This edition turns the original ADG concept into a full enterprise AI security and responsible AI governance framework. It keeps the simplicity of ADOPT, DEFEND, and GOVERN, but adds the structure required to govern contemporary AI systems - across people, process, technology, deployment pattern, lifecycle, harm class, and regulatory environment.

This version was forged with senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - practitioners working from design through implementation across regulated and less-regulated sectors. It addresses eight major gap clusters identified through systematic review, and extends the framework to cover multi-agent systems, multi-modal architectures, shared responsibility, responsible AI, and measurable governance.

From a framework standpoint, this version is suitable as the basis for:

  • Consulting assessments and maturity reviews
  • Enterprise AI governance programs
  • Certification architecture and training design across all three ADG pillars
  • Board-facing AI security, responsible AI, and assurance discussions
  • Regulatory preparation and compliance gap analysis
  • Vendor AI due diligence and procurement governance

Living framework · Open call

Shape what comes next.

ADG is not a finished product. It is iteratively shaped by the practitioners who actually deploy AI in production - across Fortune 500, Fortune Global 500, and Big Four firms. Every comment, every red-team finding, and every regulatory shift feeds the next version.

Market pulse

What we’re tracking.

Curated regulatory, standards, and threat-landscape signals shaping ADG’s next version.

  • EU regulation - EU AI Act: high-risk obligations now in effect. Conformity assessments, post-market monitoring (Art. 72), and serious-incident reporting (Art. 73) are live across the EU. ADG MC-1, MC-4, MC-8, and MC-9 map directly.
  • Attestation - CSA STAR for AI: attestation pathway live since October 2025. The first auditable attestation regime for AI controls maps the CSA AI Controls Matrix (243 control objectives) against ISO 42001, NIST AI RMF, and the EU AI Act. ADG composes underneath it.
  • Standard - Singapore IMDA: Agentic AI MGF released January 2026. The first state-backed framework with first-class agentic coverage; ADG's Multi-Agent Interop overlay aligns to the IMDA principles, and AI Verify is the testing toolkit underneath.
  • Threat catalogue - OWASP Top 10 for Agentic Applications 2026: first-class agentic threats, including goal hijack, tool misuse, identity abuse, and multi-agent collusion. ADG's Agentic Hardening overlay (10.4) consumes this catalogue as input.
  • NIST update - NIST AI 600-1 GenAI Profile in active use: the companion profile to the AI RMF for generative-AI risks. ADG's §11.2 crosswalk maps every Minimum Control to the NIST RMF GOVERN / MAP / MEASURE / MANAGE functions.
  • Threat intel - MITRE ATLAS v5.4: agentic case studies expanded. ATLAS now catalogues 84+ adversarial techniques with real-world AI incident case studies; ADG MC-8 (Runtime Monitoring) and MC-9 (Incident Response) cite ATLAS as the threat-catalogue input.
