Enterprise AI is shipping into production faster than governance can absorb. Engineering teams are deploying agents that act on tools, consume retrieved context, run multi-step plans, and chain to other agents - often before the people accountable for risk have seen the system, let alone approved it. Boards are asking questions control departments cannot answer. Regulators are landing - the EU AI Act first, others to follow. And the standards every organization has invested in - ISO/IEC 42001, NIST AI RMF, the EU AI Act itself - tell you what to think about, but not who decides, who builds, who breaks, who signs, or what evidence belongs in front of a regulator on Monday morning.
The architecture is three pillars - ADOPT (execute and deliver), DEFEND (secure and validate), and GOVERN (oversee, assure, decide) - that scale unchanged from the board to the engineer. The same three words frame a board paper, organize a release gate, label a red-team report, and structure an incident playbook. Beneath those pillars sit nine governance surfaces (the points where engineers instrument and attackers act); twelve Minimum Controls (MC-1 through MC-12), each with a named evidence artifact your auditor can hold; nine deployment overlays for the patterns AI actually ships in today, including agentic orchestration, tools and MCP, multi-agent interoperability, multi-modal and composite stacks, and long-context architectures; three autonomy tiers (HITL / HOTL / HOOTL) tied directly to the controls; and a four-phase roadmap that takes an organization from inventory to continuous assurance in twenty-four months.
ADG also goes where the standards do not. Agentic systems, MCP-connected tools, multi-agent interoperability, long-context architectures, and composite multi-modal stacks each carry an additive deployment overlay that composes with the same twelve Minimum Controls. Move from one copilot to a fleet of agents without rewriting a single control.
One element of ADG sits where peer frameworks generally do not: the AI Governance Council. Every framework tells you to govern AI. ADG tells you who breaks the tie when delivery and safety disagree - the Council resolves ADOPT-DEFEND tension, sets go / no-go thresholds, owns the exception register, and ties decision rights to MC-3 (Separation of Duties) and MC-10 (Periodic Governance Review). Peer frameworks state the abstract requirement; they rarely name the mechanism - and the mechanism is where most production AI risk actually lives.
ADG is built for boards, CISOs and AI red teams, platform and AI engineers, risk and compliance, and procurement. Each audience reads a different path; the artifacts are the same. The framework was forged with senior AI, security, and governance leaders running production AI inside top Fortune 500, Fortune Global 500, and Big Four firms - including Salesforce, Microsoft, Citi, JPMorgan Chase, NTT DATA, KPMG, ServiceNow, BNP Paribas, Prudential, GE Healthcare, BASF, and Jio - the practitioners who own the failure modes this framework governs.
ADG is dense by design. Pick the role that matches you and jump to the path written for it. Each path is 4-5 sections out of 13 - you don't have to read everything.
"Can we sign off this AI system, and what evidence will the auditor want?"
"Where are the attack surfaces, and which controls fire when?"
"Which controls bind to my CI/CD, my MCP servers, my prompt repo?"
"How does this map to ISO 42001, NIST AI RMF, EU AI Act?"
ADG began with a mission at EC-Council: the AI governance space was fragmenting into point standards, vendor playbooks, and academic frameworks - none of which gave enterprise boards and engineering teams a single, practitioner-tested operating model for deploying AI safely at scale. EC-Council set out to build that missing standard: a framework built around three enduring functions - ADOPT DEFEND GOVERN - that scales from the board to the engineer.
To avoid building in a vacuum, EC-Council convened an Advisory Board of practitioners from mature enterprises that have actually deployed AI in production. The initial draft was circulated to senior AI, security, and governance leaders across global organizations in financial services, technology, manufacturing, healthcare, telecommunications, energy, and consulting - covering both regulated and less-regulated sectors, and spanning North America, Europe, and Asia-Pacific.
ADG was not drafted in a conference room. It was forged with the senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - across regulated and less-regulated sectors, from design through implementation - the practitioners who own the failure modes this framework governs.
The practitioners of this Advisory Board gave their time, scrutiny, and hard-won experience to review ADG. Their comments, corrections, and counterpoints are the reason this framework is field-tested rather than aspirational.
This version is the result - the iterative output of that collaboration. ADG is not a one-and-done document. It is a living framework, designed to be enhanced as AI deployment patterns evolve and as new practitioners join the conversation. The scope has grown from its original LLM-centric framing to cover modern deployment patterns including LLMOps, retrieval-augmented systems, tool-using agents, MCP-connected ecosystems, long-context architectures, multi-agent systems, and model lifecycle controls spanning pre-training through runtime.
ADG's mission: provide a board-to-engineering operating model that separates decision rights, assigns control ownership, and defines the minimum governance required for safe, responsible, and scalable AI deployment - and continue evolving it through ongoing practitioner review.
Practitioners who want to contribute to the next version can join the Advisory Board here.
ADG is purpose-built to operationalize four non-negotiable properties of trustworthy enterprise AI - captured in the RE3 Trust Model:

- Every AI action traces back to a human decision-maker. Delivered by: MC-1 (AI System Inventory), MC-3 (Separation of Duties), Agent Authority Statements, the Shared Responsibility Model (Section 7), and the AI Governance Council escalation path. Each of the 9 deployment overlays explicitly aligns activities to ADOPT/DEFEND/GOVERN.
- Protection from bias, manipulation, and harm across four classes (Technical, Societal, Operational, Systemic). Delivered by: the Four-Class Harm Taxonomy, the Seven RAI Principles embedded in every surface, MC-11 (Fairness & Bias Evaluation), the Safety Layer surface, and Autonomy Tier controls (HITL/HOTL/HOOTL) scaled to risk.
- Governance that enables velocity instead of grinding it to a halt. Delivered by: the 4-Phase Implementation Roadmap (Foundation through Maturity), Tier-1 Operational Metrics, the LLMOps Overlay for cost/performance monitoring, reusable Operating Artifacts, and one unified control set that spans Homegrown, API, and SaaS deployments - eliminating governance fragmentation.
- Every output can be justified to a regulator, a board, or a customer. Delivered by: the Telemetry surface, MC-6 (Context Policy with provenance tracking), MC-7 (Tool/MCP audit logging), MC-8 (Runtime Monitoring), and MC-9 (Incident Response with forensic replay). Architecture-neutral - works for LLMs, diffusion, classical ML, and composite systems.

These are not checkboxes. They are governance outcomes that must be engineered into every AI deployment - from an internal copilot through a production multi-agent system. ADG translates each outcome into concrete controls across all nine governance surfaces (Model, Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry, Learning Loop) and applies them to every deployment pattern (LLMOps, Agentic, Tools & MCP, Context, Training, Multi-Agent, Multi-Modal, and composite systems) across all three deployment classes (Homegrown, Foundation Model API, SaaS AI).
The outcome: RE3 AI becomes measurable. Each trust property maps to specific controls, metrics, and evidence requirements, so organizations can demonstrate trustworthy AI to boards, regulators, and customers - not just claim it.
Traditional cyber and software governance frameworks assume deterministic execution, finite input spaces, stable functionality, and visible logic. AI systems violate every one of those assumptions, so organizations need a framework built to govern non-deterministic, adaptive systems whose behavior can shift without a code change.
ADG addresses this by combining role separation, surface-based control ownership, deployment-pattern overlays, lifecycle governance, and a structured harm taxonomy that extends beyond security to encompass responsible AI.
ADG governs all AI systems as defined by the OECD AI Policy Observatory and the EU AI Act: machine-based systems that, for explicit or implicit objectives, infer from input how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. The scope is deliberately broad: LLM applications, diffusion models, classical ML, and agentic and composite systems all fall inside it.
The governance surfaces, overlays, and controls in this framework are architecture-neutral by design. Where a deployment pattern creates unique governance requirements (e.g., agentic orchestration, diffusion-model provenance), a dedicated overlay provides additional controls. The framework is not limited to LLM-centric systems.
ADG is a voluntary governance framework designed to be compatible with, but distinct from, regulatory requirements.
ADG controls are designed to satisfy both regulatory and standards-based requirements, but organizations must independently verify compliance with applicable legal obligations in their jurisdictions.
ADG is an enterprise AI security and responsible AI governance framework structured around three pillars - ADOPT DEFEND GOVERN - that provide a board-to-engineering operating model for safe, responsible, and scalable AI deployment.
Key framework components:

- **Three Pillars:** ADOPT executes and delivers AI capabilities. DEFEND secures and validates against all harm classes. GOVERN oversees, assures, and resolves tensions between the other two pillars.
- **Four Harm Classes:** Technical, Societal, Operational, and Systemic harms - ensuring governance extends beyond security to cover bias, fairness, reliability, and emergent multi-agent risks.
- **Seven Responsible AI Principles:** Fairness & non-discrimination, Transparency & explainability, Privacy & data protection, Accountability, Human oversight, Robustness & safety, and Sustainability & societal well-being - embedded across all three pillars.
- **Nine Governance Surfaces:** Model, Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry, and Learning Loop - defining what must be governed regardless of architecture.
- **Nine Foundational Principles:** Separation of powers, Explicit authority, Context as attack surface, Tool use as highest-risk control plane, Graduated oversight, Lifecycle governance, Mandatory evidence, Embedded RAI, and Defined shared responsibility.
- **Twelve Minimum Controls:** Each control includes an evidence requirement to ensure measurability. Together they form the baseline every production AI system must meet.
- **Nine Deployment Overlays:** LLMOps, Agentic Orchestration, Agent, Agentic Hardening, Tools & MCP, Context & Long-Window, Pre/Post-training, Multi-Agent Interop, and Multi-Modal/Composite systems.
- **Three Autonomy Tiers:** Assistive (Human-in-the-Loop), Conditional (Human-on-the-Loop), and Autonomous (Human-out-of-the-Loop) - determining the minimum governance required per system.
- **Four Implementation Phases:** Foundation (0-3 months), Control Deployment (3-9), Agentic Readiness (9-15), and Maturity (15-24) - a staged path from inventory to continuous assurance.
This version was forged with senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - practitioners working from design through implementation across regulated and less-regulated sectors. It addresses eight major gap clusters identified through systematic review and extends the framework to cover multi-agent systems, multi-modal architectures, shared responsibility, responsible AI, and measurable governance. It is suitable as the basis for consulting assessments, enterprise AI governance programs, certification architecture, board-facing assurance discussions, regulatory preparation, and vendor due diligence.
Concrete definitions and one example per element in the Framework Architecture diagram above. Use this as the quick-reference key when reading the rest of the document.
| Pillar | Definition | Example |
|---|---|---|
| ADOPT | Execute and deliver AI capabilities. Owns selection, integration, deployment, operations. | An LLMOps team rolling out a customer-support copilot to production. |
| DEFEND | Secure and validate. Owns adversarial testing, guardrails, runtime detection, fairness evaluation, AI incident response. | A red team running prompt-injection scenarios against the copilot before launch. |
| GOVERN | Oversee, assure, decide. Owns policy, decision rights, regulatory alignment, board accountability. | An AI Governance Council approving the copilot for high-risk customer interactions and setting escalation thresholds. |
| Surface | Definition | Example |
|---|---|---|
| Model | Foundation models, fine-tuned variants, adapters, routers, model versions. | Switching the copilot from Claude 3.7 to Claude 4.7 - a model-surface change requiring change-control evidence. |
| Prompt | System prompts, templates, agent instructions, prompt libraries. | Updating the copilot's system prompt to refuse PII requests - versioned and approved. |
| Context | Retrieval sources, session state, memory, hidden context, user metadata. | A RAG pipeline pulling from SharePoint - context provenance and retention rules apply. |
| Tools | APIs, plugins, actions, code execution, MCP capabilities. | Granting the copilot a refund.process() tool - tool register entry, trust tier, audit logging required. |
| Orchestration | Planners, workflow graphs, retry logic, multi-step flows, agent routing. | An agent that retries up to 3 times then escalates to a human - retry budget defined here. |
| Identity | Credentials, service accounts, delegated authority, agent identity, secrets. | The agent runs under its own scoped service account, not a user's personal credentials. |
| Safety Layer | Guardrails, policy engines, semantic filters, classifiers, circuit breakers. | A classifier that blocks the copilot from generating regulated financial advice. |
| Telemetry | Logs, traces, evaluations, replay data, fairness metrics. | Every prompt, retrieved context, and tool call captured for forensic replay. |
| Learning Loop | Pre-training sources, post-training alignment, feedback loops, RLHF data, retraining inputs. | Customer thumbs-down feedback feeding retraining data - provenance and consent tracked. |
| Control | Definition | Example |
|---|---|---|
| MC-1 | AI System Inventory - published inventory with named owners, reviewed quarterly. | A Confluence page listing every AI system with owner, risk tier, and last review date. |
| MC-2 | Risk Classification - rate each system by data sensitivity, autonomy, exposure, harm potential, criticality. | The copilot rated "high autonomy + external-facing + financial harm potential" → high-risk tier. |
| MC-3 | Separation of Duties - split deployment, validation, and approval across ADOPT, DEFEND, GOVERN. | The product team builds; DEFEND validates; GOVERN approves. No single function holds all three. |
| MC-4 | Pre-Production Evaluation - quality, safety, security, fairness, failure-mode tests before go-live. | A signed test report covering jailbreak resistance, bias evaluation, and accuracy benchmarks. |
| MC-5 | Change Control - governed change process for prompts, tools, models, retrieval sources. | A pull request to update the system prompt routed through DEFEND review and GOVERN approval. |
| MC-6 | Context Policy - provenance, retention, access, trust ordering for context inputs. | A policy stating "SharePoint sources only, 30-day retention, redact PII at retrieval." |
| MC-7 | Tool & MCP Register - catalog of tools and MCP capabilities with trust tiering and invocation controls. | A register of all approved MCP servers with risk tier and last assurance review date. |
| MC-8 | Runtime Monitoring - continuous detection of abuse, drift, data leakage, unsafe actions, bias emergence. | Dashboards alerting when refusal rate drops below 95% (potential jailbreak signal). |
| MC-9 | AI Incident Response - AI-specific playbook with replayable evidence; tabletop at least annually. | A runbook for "agent leaks data" plus a quarterly tabletop exercise with the response team. |
| MC-10 | Periodic Governance Review - exception handling and board reporting for high-risk systems. | Quarterly review of all high-risk AI systems with the AI Governance Council. |
| MC-11 | Fairness & Bias Evaluation - discriminatory outcome testing using representative data. | A signed report showing the copilot's response quality is consistent across demographic segments. |
| MC-12 | Shared Responsibility Documentation - accountability boundaries for vendor / SaaS AI. | A signed matrix with the model vendor showing what they own (model + training) vs what we own (prompts, context, monitoring). |
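To show how MC-1 and MC-2 compose in practice, here is a minimal sketch of an inventory record rated into a risk tier. The field names, scoring rule, and thresholds are illustrative assumptions, not part of the ADG specification - a real program would calibrate its own taxonomy.

```python
from dataclasses import dataclass

# Hypothetical illustration of an MC-1 inventory record and an MC-2 risk
# rating; field names and thresholds are assumptions, not ADG-normative.
@dataclass
class AISystemRecord:
    name: str
    owner: str                 # named accountable owner (MC-1)
    data_sensitivity: int      # 1 (public) .. 3 (regulated / PII)
    autonomy: int              # 1 HITL, 2 HOTL, 3 HOOTL
    external_exposure: bool    # customer- or internet-facing
    harm_potential: int        # 1 low .. 3 severe (financial, safety, rights)
    business_criticality: int  # 1 low .. 3 critical

def risk_tier(s: AISystemRecord) -> str:
    """Rate a system across the five MC-2 dimensions into a tier."""
    score = (s.data_sensitivity + s.autonomy + s.harm_potential
             + s.business_criticality + (2 if s.external_exposure else 0))
    if score >= 10 or (s.external_exposure and s.harm_potential == 3):
        return "high"
    return "medium" if score >= 7 else "low"

copilot = AISystemRecord("support-copilot", "jane.doe", 3, 2, True, 3, 2)
print(risk_tier(copilot))  # -> "high", matching the quick-reference example
```

The point is mechanical repeatability: the same five MC-2 dimensions rate every system, so the tier is auditable rather than negotiated.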
| Class | Definition | Example |
|---|---|---|
| Technical | Security, reliability, and system integrity failures. | Prompt injection causing the copilot to leak its system prompt or run unintended tool calls. |
| Societal | Bias, discrimination, fairness, and human-rights impacts. | A loan-decision agent denying credit at a higher rate to one demographic group. |
| Operational | Reliability, accuracy, and business-impact failures. | The copilot hallucinating a refund policy that does not exist, leading to disputes. |
| Systemic | Emergent risks from AI-to-AI interaction and scaled deployment. | Two negotiation agents from different vendors colluding on an unfavorable deal because their guardrails do not catch joint behavior. |
ADG does not invent its own threat list - it consumes the four canonical industry catalogues as input and maps each threat into the Minimum Control Set and the deployment overlays. Section 11.2.1 traces every link from threat → ADG control → NIST AI RMF & ISO/IEC 42001 clause.
| Catalogue | What it provides | How ADG consumes it |
|---|---|---|
| MITRE ATT&CK | Enterprise adversary techniques abused via or against AI-enabled systems - phishing, credential theft, lateral movement, exfiltration, persistence. | MC-8 (Runtime Monitoring) detection rules, MC-9 (Incident Response) playbooks, and red-team scenarios in MC-4. |
| MITRE ATLAS | 84+ adversarial ML/AI techniques and real-world case studies - model evasion, ML supply chain, model extraction, prompt injection at the model layer. | MC-4 pre-production adversarial eval suites, MC-8 detection signatures, MC-9 forensic replay test cases. |
| OWASP Top 10 for LLM (2025) | LLM01 Prompt Injection, LLM02 Sensitive Info Disclosure, LLM03 Supply Chain, LLM04 Data & Model Poisoning, LLM05 Improper Output Handling, LLM06 Excessive Agency, LLM07 System Prompt Leakage, LLM08 Vector & Embedding Weaknesses, LLM09 Misinformation, LLM10 Unbounded Consumption. | MC-4 evaluation checklist, MC-6 (Context Policy) for LLM03/04/08, MC-8 runtime guardrails, MC-11 fairness checks for LLM09. |
| OWASP Top 10 for Agentic (2026) | T1 Memory Poisoning, T2 Tool Misuse, T3 Privilege Compromise, T4 Resource Overload, T5 Cascading Hallucinations, T6 Intent Breaking, T7 Misaligned/Deceptive Behavior, T8 Repudiation, T9 Identity Spoofing, T10 HITL Overwhelm. | Agentic Hardening overlay (§10.4) consumes this directly. MC-7 (Tool & MCP Register), MC-8 (Runtime Monitoring), MC-9 (Incident Response). |
| Overlay | Definition | Example |
|---|---|---|
| LLMOps | Versioning, evaluation, release management, rollback, cost control, performance monitoring for LLM-based services. | A production RAG chatbot with rollback, eval suite, per-token cost tracking, and a release gate. |
| Agentic Orchestration | The planner / loop layer of a single agent - multi-step reasoning, retries, subtask decomposition, stop conditions. | An agent that decomposes "book my trip" into flights → hotel → calendar with retry budgets. |
| Agent | The agent as a class of system - business mission, action boundaries, liability model, segregation of duties. | A travel-booking agent authorized to spend up to $2,500 per booking on behalf of the user. |
| Agentic Hardening | Defensive controls bolted onto agents - authority bounds, action isolation, kill switches, forensic replay, semantic firewalls. | Dry-run mode for irreversible actions with human approval required before the booking is committed. |
| Tools & MCP | Tool discovery, capability registration, trust tiering, per-invocation policy for tools and MCP servers. | An MCP server registered as "trust tier 2" - read-only access to corporate wiki, no write actions allowed. |
| Context & Long-Window | Session history, retrieved enterprise knowledge, persistent memory, multi-turn poisoning resistance. | A copilot with persistent memory that flags conflicting facts injected mid-conversation. |
| Pre / Post-training | Model provenance, fine-tuning, alignment, RLHF data, retraining governance. | A LoRA adapter trained on internal docs - training data licensed, evaluated for leakage, version-controlled. |
| Multi-Agent Interop | Emergent risks when multiple agents interact - agent-to-agent trust, value alignment, collusion detection, cascading failure prevention. | A purchasing agent talking to a vendor's quoting agent, with circuit breakers if they loop or agree on out-of-policy terms. |
| Multi-Modal / Composite | Cross-model risks when LLM + diffusion + classifier + retrieval combine in one product. | An assistant that takes a screenshot, classifies it, retrieves docs, and generates a response - system-level eval required, not per-model. |
| Tier | Definition | Example |
|---|---|---|
| HITL - Assistive | AI recommends or drafts; a human approves every output before it takes effect. | A copilot that drafts emails for the user to review and send. |
| HOTL - Conditional | AI acts within pre-approved limits; a human monitors and can intervene. | A trading agent executing within position limits, with humans monitoring exposure dashboards. |
| HOOTL - Autonomous | AI executes multi-step goals with limited or delayed human review. | An overnight data-pipeline agent that processes batches and reports results in the morning. |
| Phase | Definition | Example |
|---|---|---|
| Foundation (0-3 mo) | Inventory, owners, ADG roles, criticality classification, AI Governance Council established. | Quarter 1: publish AI inventory, name owners, charter the Governance Council. Controls: MC-1, MC-2, MC-3. |
| Controls (3-9 mo) | Release gates, change control, context policies, monitoring, adversarial testing, fairness evals, vendor due diligence. | Quarters 2-3: implement MC-4 through MC-9 plus MC-11 / MC-12 across all high-risk systems. |
| Agentic (9-15 mo) | Authority statements, tool trust tiering, MCP governance, circuit breakers, multi-agent governance, forensic replay. | Quarters 4-5: agentic hardening overlay deployed for all production agents. |
| Maturity (15-24 mo) | Continuous evaluation, persistent adversarial monitoring, drift governance, RAI measurement, board-level reporting. | Quarters 6-8: automated weekly red-team runs, quarterly board AI risk review, formal assurance review. |
This section provides guidance on how to read, apply, and implement the ADG framework.
Each audience follows a tailored reading path of 4-5 sections: Board & Executive Sponsors, Security & AI Red Teams, Engineering & Platform Teams, Compliance & Legal, and All Stakeholders each have a role-specific route through the document.
ADG is built on nine foundational principles. Each principle is a durable standard that applies across industries, deployment patterns, and regulatory regimes.
ADG distinguishes three operational autonomy tiers. The tier determines the minimum governance requirements for deployment.
Note on terminology: HITL/HOTL/HOOTL originated in DoD doctrine (DoD 5000.59-M, 1998) and remain widely used, though their boundaries are debated in current AI-safety literature. ADG uses the tighter operational definitions below to make each tier directly testable against the Minimum Control Set.
| Tier | Description | Oversight | Minimum Governance |
|---|---|---|---|
| Assistive | Recommends or drafts; does not act | HITL | Output review, bounded context, no direct tool execution |
| Conditional | Acts within pre-approved limits and constrained tools | HOTL | Authority statement, guardrails, audit logs, kill switch, exception thresholds, bias monitoring |
| Autonomous | Executes multi-step goals with limited or delayed human review | HOOTL | Formal approval, strong telemetry, circuit breakers, forensic replay, periodic governance review, mandatory fairness evaluation |
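A governance gate can treat the tier table above as data. The sketch below shows how a release pipeline might verify that a system's implemented controls cover its tier's minimum governance; the names and structure are assumptions, not an ADG schema, and the control lists paraphrase the table.

```python
# Sketch: binding autonomy tiers to minimum governance so a release gate
# can verify coverage mechanically. Control names paraphrase the table
# above; the structure and function are illustrative assumptions.
TIER_REQUIREMENTS = {
    "HITL":  {"output_review", "bounded_context", "no_direct_tool_execution"},
    "HOTL":  {"authority_statement", "guardrails", "audit_logs",
              "kill_switch", "exception_thresholds", "bias_monitoring"},
    "HOOTL": {"formal_approval", "strong_telemetry", "circuit_breakers",
              "forensic_replay", "periodic_governance_review",
              "fairness_evaluation"},
}

def missing_governance(tier: str, implemented: set[str]) -> set[str]:
    """Return the tier's required controls that are not yet in place."""
    return TIER_REQUIREMENTS[tier] - implemented

# A HOTL system with only guardrails and audit logs fails the gate:
print(missing_governance("HOTL", {"guardrails", "audit_logs"}))
```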
| Phase | Focus | Key Activities |
|---|---|---|
| Phase 1 - Foundation | Inventory and Classification | Establish AI system inventory, assign accountable owners, define ADG roles, classify systems by criticality and autonomy, establish AI Governance Council, conduct baseline assessment and gap analysis |
| Phase 2 - Control Deployment | Operational Controls | Implement release gates, prompt and tool change control, context policies, logging, monitoring, basic adversarial testing, initial fairness evaluations, vendor AI due diligence |
| Phase 3 - Agentic Readiness | Advanced Governance | Add authority statements, tool trust tiers, MCP governance, circuit breakers, multi-agent governance controls, forensic replay capability |
| Phase 4 - Maturity | Continuous Assurance | Continuous evaluation, persistent adversarial monitoring, drift governance, responsible AI measurement, board-level reporting, formal assurance review |
"Deliver business value safely"
"Identify, prevent, detect harmful behavior"
"Justify, approve, evidence AI use at board level"
"Deliver business value safely"
"Identify, prevent, detect harmful behavior"
"Justify, approve, evidence AI use at board level"
ADG uses a simple rule: ADOPT builds and operates, DEFEND breaks and protects, GOVERN authorizes and oversees.
| Dimension | ADOPT | DEFEND | GOVERN |
|---|---|---|---|
| Primary question | Can we deploy it effectively? | Can it be abused, fail dangerously, or cause harm? | Should we approve it, under what conditions, and at what risk? |
| Decision focus | Capability, performance, delivery, reliability | Security, resilience, fairness, abuse resistance, containment | Risk appetite, legality, accountability, ethics, oversight |
| Core outputs | Deployed service, runbooks, baselines, releases | Test results, detections, guardrails, fairness evaluations, incident playbooks | Policies, approvals, risk thresholds, exceptions, evidence, board reports |
| Success measure | Value delivery with controlled operations | Risk reduced across all four harm classes | Defensible use with auditable, measurable governance |
| Tension resolution | Escalates blockers to GOVERN | Escalates unresolved risks to GOVERN | Resolves ADOPT-DEFEND tension; defines go/no-go and exception policy |
ADG organizes governance into nine surfaces. These surfaces define what must be governed regardless of model vendor, model architecture, or deployment pattern.
| Surface | Scope | ADG Alignment | Control Objective |
|---|---|---|---|
| Model | Foundation models, fine-tuned models, adapters, routers, versions, diffusion models, composite model chains | A+G | Use only approved models with known risk posture, provenance, and change traceability |
| Prompt | System prompts, templates, policies, agent instructions, prompt libraries, multi-modal input validation | A+D | Prevent unmanaged behavior changes and unsafe instruction patterns |
| Context | Retrieval sources, session state, memory, hidden context, user metadata, cross-session data | G+D | Prevent poisoning, leakage, cross-session contamination, and privacy violations |
| Tools | APIs, plugins, actions, code execution, file access, transactional endpoints, MCP capabilities | D | Enforce least privilege, strong validation, sandboxing, and full audit logging |
| Orchestration | Planners, workflow graphs, retry logic, multi-agent flows, model routing, agent-to-agent communication | A+D | Bound agent behavior, prevent cascading failures, ensure deterministic control |
| Identity | Credentials, service accounts, delegated authority, secrets, trust relationships, agent identity | G+D | Prevent privilege misuse, preserve accountability, trace agent actions to human authority |
| Safety Layer | Guardrails, policy engines, semantic filters, classifiers, circuit breakers, harm detectors | D | Block unsafe content, unfair outputs, and unauthorized actions before impact |
| Telemetry | Logs, traces, evaluations, replay data, alerts, governance evidence, fairness metrics | D+G | Make behavior observable, reviewable, provable, and measurable |
| Learning Loop | Pre-training sources, post-training alignment, feedback loops, retraining updates, RLHF data | G+A | Control data provenance, drift, alignment stability, and undocumented behavior change |
The nine surfaces above describe what must be governed. The four industry threat catalogues describe what attackers do. ADG consumes them as input - every surface inherits its detection rules, eval scenarios, and runtime guardrails from these sources, then maps the resulting controls to NIST AI RMF and ISO/IEC 42001. See §11.2.1 for the threat-to-control-to-standard chain.
ADG recognizes that AI governance must address harms beyond security vulnerabilities. The framework adopts a four-class harm taxonomy that maps into every governance surface, lifecycle stage, and deployment overlay.
The four harm classes plotted by detection difficulty and impact scope. Each quadrant carries the threats it covers, the pillars that own it, and the Minimum Controls that detect it.
Every governance surface in ADG must account for seven responsible AI dimensions, embedded across all three pillars rather than appended as a separate track.
Integration rule: Responsible AI is not a separate track. It is embedded into every ADG pillar - ADOPT, DEFEND, and GOVERN - and must be reflected in all certification curricula, operating artifacts, and governance reviews.
When organizations deploy AI, responsibility for governance controls varies based on how the AI is provisioned. ADG defines three deployment responsibility classes and maps control ownership for each.
| Class | Description | Organization Controls | Shared | Vendor Controls |
|---|---|---|---|---|
| Homegrown / Self-Hosted | Organization trains, hosts, and operates the full AI stack | All 9 surfaces, full lifecycle | None (full ownership) | Infrastructure SLAs only |
| Foundation Model API | Consumes a foundation model via API (e.g., OpenAI, Anthropic, Google) | Prompt, Context, Tools, Orchestration, Identity, Safety Layer, Telemetry | Model governance, Learning Loop | Model training, alignment, infrastructure, API availability |
| SaaS AI / Embedded AI | AI embedded in a vendor product (e.g., Copilot, Einstein, ServiceNow) | Context, Identity, Telemetry, GOVERN policy | Safety Layer, Prompt customization | Model, Orchestration, Tools, Runtime stack, Learning Loop |
For Foundation Model API and SaaS AI deployments, organizations must document the accountability boundary in a signed shared responsibility matrix, secure contractual AI clauses, and maintain vendor due diligence records (MC-12).
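One way to keep those MC-12 boundaries machine-checkable is to encode the class-to-surface ownership from the table above as data. This is a hypothetical condensation - the surface list is abbreviated and the ownership labels are assumptions; the authoritative artifact remains the signed responsibility matrix.

```python
# Illustrative MC-12 shared-responsibility matrix for the three deployment
# classes, condensed from the table above. Surface list abbreviated;
# ownership labels are assumptions for demonstration.
RESPONSIBILITY = {
    "homegrown": {"model": "org",    "prompt": "org",    "tools": "org",
                  "learning_loop": "org"},
    "api":       {"model": "shared", "prompt": "org",    "tools": "org",
                  "learning_loop": "shared"},
    "saas":      {"model": "vendor", "prompt": "shared", "tools": "vendor",
                  "learning_loop": "vendor"},
}

def org_owned_surfaces(deployment_class: str) -> list[str]:
    """Surfaces the organization must itself control for this class."""
    return [surface for surface, owner in RESPONSIBILITY[deployment_class].items()
            if owner == "org"]

print(org_owned_surfaces("api"))  # -> ['prompt', 'tools']
```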
ADG requires explicit capability ownership in addition to functional ownership. The people model includes dedicated responsible AI expertise across all pillars.
AI Governance Council: Every organization deploying AI systems should establish a cross-functional AI Governance Council (or equivalent steering body) with representation from ADOPT, DEFEND, and GOVERN. This council serves as the escalation path and tension-resolution mechanism described in Section 4.4.
ADG formalizes the minimum process backbone required for enterprise implementation.
ADG treats the nine governance surfaces - models, prompts, context pipelines, tools, orchestration, identity, safety layers, telemetry, and the learning loop - as governed technology layers.
Traditional enterprise data architecture separates data into distinct tiers - structured databases, data warehouses, data lakes, unstructured file stores, and APIs - each with its own governance model, access controls, and tooling. AI systems fundamentally disrupt this separation. When an AI agent consumes enterprise data through retrieval pipelines, MCP connections, or tool invocations, it does not distinguish between a SQL database record, a SharePoint document, a Slack thread, or a PDF. All data is converted into text tokens within a context window, collapsing traditional tier boundaries into a single consumption surface.
This convergence means that AI systems can reassemble sensitive information from fragments scattered across sources that were never intended to be combined, infer personal data from context that contains no explicit PII, and traverse entire enterprise file systems where content is readable as text. The governance implication is profound: access control at the storage layer alone is no longer sufficient. Organizations must govern the full pipeline from source data through retrieval, embedding, context assembly, and AI consumption.
ADG requires organizations to govern data across the full pipeline - source, retrieval, embedding, context assembly, and consumption. The table below maps this data layer onto each of the nine governance surfaces:
| Surface | Data Layer Intersection |
|---|---|
| Model | Training data provenance, fine-tuning data governance, model card data documentation |
| Prompt | System prompt versioning and change control as governed text artifacts |
| Context | Retrieval source classification, context assembly governance, cross-source inference controls |
| Tools | MCP servers as data access gateways; tool-retrieved data classified and logged |
| Orchestration | Data flow governance across multi-step agent workflows; inter-agent data sharing rules |
| Identity | Data access tied to agent identity and delegated authority; no implicit data access |
| Safety Layer | Guardrail configurations as governed text; data-driven harm detection models governed as data assets |
| Telemetry | Logs and traces as sensitive data requiring retention, redaction, and access governance |
| Learning Loop | Feedback data, RLHF inputs, and retraining datasets governed as controlled data assets with provenance |
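The cross-source inference risk described above is easiest to control when provenance travels with every context chunk. A minimal sketch of provenance-aware context assembly follows; the field names, classification labels, and function are hypothetical, not an ADG-specified interface.

```python
from dataclasses import dataclass

# Sketch: carrying provenance through retrieval so the Context surface can
# enforce source classification and leave an auditable trail for Telemetry.
@dataclass
class ContextChunk:
    text: str
    source: str          # e.g. "sharepoint://policies/refunds.docx"
    classification: str  # "public", "internal", or "confidential"

def assemble_context(chunks: list[ContextChunk], max_classification: str) -> str:
    """Drop chunks above the session's allowed classification and log
    provenance for every chunk that enters the context window."""
    order = {"public": 0, "internal": 1, "confidential": 2}
    allowed = [c for c in chunks
               if order[c.classification] <= order[max_classification]]
    for c in allowed:  # provenance trail for Telemetry / forensic replay
        print(f"[provenance] {c.source} ({c.classification})")
    return "\n".join(c.text for c in allowed)

chunks = [
    ContextChunk("Refunds within 30 days.", "sharepoint://policies/refunds.docx", "internal"),
    ContextChunk("Q3 board minutes.", "sharepoint://board/minutes.docx", "confidential"),
]
window = assemble_context(chunks, max_classification="internal")  # drops the confidential chunk
```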
ADG applies controls across six lifecycle stages. The lifecycle is explicitly circular: lessons from Retire/Learn feed back into Pre-training/Sourcing decisions.
| Stage | ADOPT | DEFEND | GOVERN |
|---|---|---|---|
| Pre-training | Select suppliers & datasets fit for purpose; assess training data for representation and bias | Assess provenance abuse, contamination risk, and training data bias | Approve sourcing constraints, licensing, jurisdictional requirements, and data ethics |
| Post-training | Tune for use-case quality and operational fit | Test for regressions, bypasses, safety degradation, and fairness drift | Review alignment objectives, documentation sufficiency, and RAI criteria |
| Build | Assemble workflows, prompts, tools, retrieval; integrate DevSecOps controls | Validate interfaces, secrets, attack surfaces, and input validation coverage | Classify use case, approve controls, define oversight requirements and risk tier |
| Deploy | Release through controlled change process with rollback readiness | Confirm pre-production testing, monitoring readiness, and fairness evaluation | Grant formal deployment approval or exception with documented conditions |
| Run | Operate service, maintain SLAs, track quality and cost | Detect abuse, failures, drift, unsafe actions, and bias emergence; continuous red teaming | Review incidents, exceptions, compliance posture, and configuration drift |
| Retire | Decommission services and roll forward lessons | Preserve evidence, investigate failures, validate closure | Update policy, records, accountability decisions; feed lessons into sourcing cycle |
The Run/Monitor stage requires continuous governance controls that go beyond traditional operational monitoring - abuse detection, drift governance, bias-emergence monitoring, and continuous red teaming (see the Run row of the lifecycle table and MC-8). One such control is sketched below.
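The sketch assumes a hypothetical refusal-rate signal, echoing the MC-8 example in the quick reference ("alert when refusal rate drops below 95%"); the window size, threshold, and class design are illustrative.

```python
from collections import deque

# Sketch of one MC-8 runtime-monitoring rule: alert when the refusal rate
# over a sliding window drops below threshold (a potential jailbreak
# signal). Window size and threshold are illustrative assumptions.
class RefusalRateMonitor:
    def __init__(self, window: int = 1000, threshold: float = 0.95):
        self.events = deque(maxlen=window)  # True = request was refused
        self.threshold = threshold

    def record(self, refused: bool) -> None:
        self.events.append(refused)
        if len(self.events) == self.events.maxlen and self.rate() < self.threshold:
            self.alert()

    def rate(self) -> float:
        return sum(self.events) / len(self.events)

    def alert(self) -> None:
        # In production: page the DEFEND on-call within the response SLA.
        print(f"ALERT MC-8: refusal rate {self.rate():.2%} below threshold")
```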
ADG adds deployment-specific overlays so that governance matches the actual architecture being deployed. Overlays are additive - select all that apply.
Scope: versioning, evaluation, release management, rollback, cost control, and performance monitoring for LLM-based services.
Scope: planners, loops, retries, multi-step reasoning, subtask decomposition, and multi-agent coordination.
Scope: AI systems that act on behalf of a user, team, or enterprise process rather than merely generating content.
Required control themes: a defined business mission, explicit action boundaries, a liability model, and segregation of duties - anchored by an Agent Authority Statement (see the sketch below).
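As a sketch of how an Agent Authority Statement becomes enforceable rather than merely documentary - the agent name, spend limit, and field names are hypothetical, drawn from the quick-reference travel-booking example:

```python
# Minimal sketch of an Agent Authority Statement as a machine-checkable
# artifact: what the agent may access, execute, and must escalate.
# Names and limits are illustrative assumptions.
AUTHORITY = {
    "agent": "travel-booking-agent",
    "may_access": ["corporate-travel-api", "calendar"],
    "may_execute": {"book_flight": {"max_spend_usd": 2500}},
    "must_escalate": ["refund", "itinerary_change_after_ticketing"],
}

def authorize(action: str, params: dict) -> bool:
    """Allow only actions inside the statement; everything else escalates."""
    if action in AUTHORITY["must_escalate"]:
        return False  # route to a human approver
    bounds = AUTHORITY["may_execute"].get(action)
    if bounds is None:
        return False  # undeclared actions are denied by default
    return params.get("spend_usd", 0) <= bounds["max_spend_usd"]

print(authorize("book_flight", {"spend_usd": 1800}))  # True: within bounds
print(authorize("refund", {}))                        # False: escalate to human
```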
Scope: tool discovery, capability registration, trust mediation, and policy-controlled invocation of tools and context providers.
Minimum MCP requirements: every MCP server must be registered in the Tool and MCP Register (MC-7), assigned a trust tier, and subject to per-invocation policy checks with full audit logging - see the sketch below.
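A minimal sketch of the register-plus-policy-check pattern, assuming hypothetical server and capability names; the tier semantics follow the quick-reference example ("trust tier 2 - read-only, no write actions").

```python
# Sketch of an MC-7 tool/MCP register entry with trust tiering and a
# per-invocation policy check. Field names are assumptions.
MCP_REGISTER = {
    "corp-wiki-mcp": {
        "trust_tier": 2,  # 2 = read-only, no write actions
        "approved": True,
        "allowed_capabilities": {"search_pages", "read_page"},
        "last_assurance_review": "2025-11-01",
    },
}

def allow_invocation(server: str, capability: str) -> bool:
    """Deny unregistered servers; log every decision for the audit trail."""
    entry = MCP_REGISTER.get(server)
    if entry is None or not entry["approved"]:
        return False  # unregistered or unapproved servers never run
    allowed = capability in entry["allowed_capabilities"]
    print(f"[audit] {server}.{capability} -> {allowed}")
    return allowed

allow_invocation("corp-wiki-mcp", "read_page")   # permitted, logged
allow_invocation("corp-wiki-mcp", "write_page")  # denied, logged
```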
Scope: session history, retrieved enterprise knowledge, hidden orchestration instructions, persistent memory, and context overflow handling.
Minimum context policy: every production system must define approved sources, retention periods, redaction rules, persistence rules, and trust ordering across context inputs.
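Expressed as configuration, such a policy might look like the following sketch. The source prefix, retention value, and redaction rule mirror the quick-reference example ("SharePoint sources only, 30-day retention, redact PII at retrieval"); the field names and check function are otherwise assumptions.

```python
# A minimal MC-6 context policy expressed as data, mirroring the fields the
# overlay requires: approved sources, retention, redaction, persistence,
# and trust ordering. Values are illustrative.
CONTEXT_POLICY = {
    "approved_sources": ["sharepoint://"],
    "retention_days": 30,
    "redact": ["pii"],
    "persistence": "session-only",
    # Trust ordering: lower index wins when context inputs conflict.
    "trust_order": ["system_prompt", "sharepoint://", "user_input"],
}

def source_allowed(uri: str) -> bool:
    """Admit a retrieval source only if it matches an approved prefix."""
    return any(uri.startswith(prefix)
               for prefix in CONTEXT_POLICY["approved_sources"])

print(source_allowed("sharepoint://hr/handbook.docx"))  # True
print(source_allowed("https://random.example.com"))     # False
```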
Scope: model provenance, fine-tuning, adapters, preference alignment, safety tuning, feedback loops, and retraining.
Scope: systems where multiple AI agents interact, coordinate, negotiate, or delegate tasks to each other.
Multi-agent systems create emergent governance challenges that single-agent controls do not address: agent-to-agent trust, value alignment, collusion detection, and cascading failure prevention.
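The collusion-and-looping example from the quick reference suggests the shape of one such control: a circuit breaker on the agent-to-agent exchange. A minimal sketch with illustrative limits - the class, round cap, and price bound are assumptions, not ADG-prescribed values:

```python
# Sketch of a circuit breaker for agent-to-agent negotiation, per the
# Multi-Agent Interop example: break the exchange if the agents loop or
# converge on out-of-policy terms. Limits are illustrative.
class NegotiationBreaker:
    def __init__(self, max_rounds: int = 20, max_price: float = 10_000.0):
        self.max_rounds = max_rounds
        self.max_price = max_price
        self.rounds = 0

    def check(self, proposed_price: float) -> None:
        """Call once per negotiation round before accepting a proposal."""
        self.rounds += 1
        if self.rounds > self.max_rounds:
            raise RuntimeError("circuit breaker: negotiation loop detected")
        if proposed_price > self.max_price:
            raise RuntimeError("circuit breaker: out-of-policy terms")

breaker = NegotiationBreaker()
breaker.check(8_500.0)      # within policy, round 1
# breaker.check(12_000.0)   # would trip: out-of-policy terms
```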
Scope: systems combining multiple model architectures (LLM + diffusion + classifier + retrieval) within a single product or workflow.
The following controls form the minimum baseline for ADG implementation. Each control includes an evidence requirement to ensure measurability.
| Control | Description | Evidence Requirement |
|---|---|---|
| MC-1 | AI System Inventory - Maintain an inventory of all AI systems with an accountable owner, risk classification, and autonomy tier | Published inventory, reviewed quarterly, with named owner per system |
| MC-2 | Risk Classification - Classify each AI system by data sensitivity, autonomy, external exposure, harm potential, and business criticality | Documented classification per system using a standardized risk taxonomy |
| MC-3 | Separation of Duties - Separate deployment ownership, security validation, and approval authority across ADOPT, DEFEND, GOVERN | RACI matrix per AI system; no single function holding all three roles |
| MC-4 | Pre-Production Evaluation - Complete quality, safety, security, fairness, and failure-mode testing before any production deployment | Signed evaluation report covering all four harm classes before go-live |
| MC-5 | Change Control - Manage changes to prompts, tools, models, and retrieval sources through a governed change process | Change log with approval records; no uncontrolled production changes |
| MC-6 | Context Policy - Define provenance, retention, access restrictions, and trust ordering for all context inputs | Published context policy per system; annual review |
| MC-7 | Tool and MCP Register - Maintain a register of all tools and MCP capabilities with trust tiering and invocation controls | Published register with per-tool risk assessment and approval status |
| MC-8 | Runtime Monitoring - Monitor for abuse, drift, data leakage, unsafe actions, bias emergence, and configuration drift in production | Active monitoring with defined alert thresholds and response SLAs |
| MC-9 | AI Incident Response - Maintain AI-specific incident response procedures with replayable evidence capture | Documented playbook; at least one tabletop exercise per year |
| MC-10 | Periodic Governance Review - Conduct governance reviews with exception handling and board reporting for high-risk systems | Review records with findings, decisions, and exception dispositions |
| MC-11 | Fairness and Bias Evaluation - Evaluate AI systems for discriminatory outcomes using representative test data and established fairness metrics | Fairness evaluation report; re-evaluation after model or data changes |
| MC-12 | Shared Responsibility Documentation - For vendor or SaaS AI, document accountability boundaries, contractual obligations, and assurance requirements | Signed responsibility matrix; vendor due diligence records |
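Several of these controls converge at the release gate. The sketch below shows how evidence requirements can block deployment mechanically; the control-to-artifact mapping is condensed from the table above, while the gate logic and artifact identifiers are assumptions.

```python
# Sketch: a release gate that refuses deployment unless each applicable
# Minimum Control has its named evidence artifact on file. Mapping
# condensed from the table above; identifiers are illustrative.
EVIDENCE_REQUIRED = {
    "MC-1": "published_inventory",
    "MC-2": "risk_classification",
    "MC-4": "signed_evaluation_report",
    "MC-5": "change_log_with_approvals",
    "MC-11": "fairness_evaluation_report",
}

def release_gate(evidence: dict[str, str]) -> bool:
    """Pass only if every required artifact is present (GOVERN sign-off)."""
    missing = [mc for mc, artifact in EVIDENCE_REQUIRED.items()
               if evidence.get(mc) != artifact]
    if missing:
        print(f"BLOCKED: missing evidence for {missing}")
        return False
    return True

print(release_gate({"MC-1": "published_inventory"}))  # BLOCKED: four gaps
```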
Each Minimum Control maps to one or more NIST AI RMF functions and ISO/IEC 42001 Annex A groups.
The crosswalk above shows ADG controls mapping to NIST and ISO. The §11.2.1 threat flow adds the technical layer on top: every threat catalogued by MITRE ATT&CK, MITRE ATLAS, OWASP Top 10 for LLM (2025), and OWASP Top 10 for Agentic (2026) flows into one or more ADG Minimum Controls - and from there to the NIST AI RMF function and ISO/IEC 42001 Annex A clause that recognizes the same control objective.
| Artifact | Purpose | ADG Alignment |
|---|---|---|
| AI System Profile | Documents system architecture, risk classification, autonomy tier, harm classes, and deployment pattern | ADOPT |
| ADG RACI Matrix | Maps control ownership across ADOPT, DEFEND, GOVERN for each AI system | GOVERN |
| Agent Authority Statement | Defines what an agent may access, decide, execute, and escalate | GOVERN+ADOPT |
| Context Policy | Specifies approved sources, retention, redaction, trust ordering, and privacy controls | GOVERN |
| Tool and MCP Register | Catalogues tools, trust tiers, invocation policies, and third-party assurance status | DEFEND |
| AI Release Gate Checklist | Pre-deployment verification covering security, fairness, safety, and governance approval | ADOPT+DEFEND |
| AI Incident Evidence Pack | Forensic capture package for AI-specific incidents | DEFEND |
| Model and Prompt Change Log | Tracks all changes to models, prompts, and system configurations with approval records | ADOPT |
| Governance Exception Register | Records all governance exceptions with justification, risk acceptance, and expiration | GOVERN |
| Vendor AI Due Diligence Record | Documents vendor AI assessments, contractual AI clauses, and shared responsibility boundaries | GOVERN |
| Fairness Evaluation Report | Records fairness and bias testing methodology, results, and remediation actions | DEFEND+GOVERN |
ADG requires measurable governance. Three measurement tiers ensure controls are not just documented but demonstrably effective.
ADG is designed to be compatible with major AI governance regulations and standards. The following matrix maps ADG components to key external frameworks.
Note: This mapping is indicative, not exhaustive. Organizations must conduct their own regulatory compliance assessment. ADG provides a governance backbone that facilitates compliance but does not guarantee it.
For the full 12-control Minimum Control Set mapped to NIST AI RMF and ISO/IEC 42001, see §11.2 Control Crosswalk. The table below covers EU AI Act alignment and cross-cutting ADG components.
| ADG Component | EU AI Act (High-Risk Focus) | NIST AI RMF | ISO/IEC 42001 |
|---|---|---|---|
| Risk Classification (MC-2) | Art. 6-7: Risk categorization | Map: Risk identification and analysis | 6.1.2: AI risk assessment |
| Pre-Production Eval (MC-4) | Art. 9: Risk management system | Measure: AI risk measurement | 8.1: Operational planning and control |
| Runtime Monitoring (MC-8) | Art. 72: Post-market monitoring | Manage: Continuous monitoring | 9.1: Monitoring, measurement, analysis and evaluation |
| Fairness Eval (MC-11) | Art. 10: Data governance and bias prevention | Map: Bias identification | A.7: Data for AI systems + A.5: Impact assessment |
| Incident Response (MC-9) | Art. 73: Serious incident reporting | Manage: Incident response | 10.2: Nonconformity and corrective action |
| AI System Inventory (MC-1) | Art. 49 + Art. 71: Registration in EU database | Govern: Inventory and categorization | 7.5: Documented information |
| Human Oversight (Tiers) | Art. 14: Human oversight requirements | Govern: Human-AI teaming | A.9: Use of AI systems + A.5: Impact assessment |
| Transparency (Telemetry) | Art. 13: Transparency requirements | Govern: Transparency and documentation | A.8: Information for interested parties |
The defining constraint on enterprise AI is not the technology - it is the workforce. AI is scaling into production faster than the people who must run, secure, and govern it can be trained, credentialed, and deployed. IDC projects that by 2026 more than 90% of organizations will feel the IT skills crisis, at a cost of $5.5 trillion in delays, quality issues, and lost competitiveness (IDC, Enterprise Resilience: IT Skilling Strategies, 2024). Without workforce capability, every control in this framework remains theoretical.
"AI is moving from experimentation to infrastructure, and the workforce has to move with it. Security leaders are now accountable for systems that learn, adapt, and influence outcomes at speed."
- Jay Bavisi, Chairman & CEO, EC-Council Group
The ADG framework defines three pillars - ADOPT DEFEND GOVERN - and each one surfaces a distinct capability gap that is showing up across enterprises today.
EC-Council's answer is the Enterprise AI Credential Suite - a portfolio of role-aligned credentials mapped directly to the ADG framework. Launched February 2026 as the largest single portfolio expansion in EC-Council's 25-year history, the suite is structured as a four-part capability ladder.
The five credentials below map to the four-part capability ladder in Section 13.2. Each links to its official EC-Council programme page for syllabus, prerequisites, and enrolment.
- Non-technical AI literacy covering core principles, prompt engineering, AI ethics, and tool fluency. Five hands-on modules, no coding prerequisites.
- Translates AI strategy into execution - strategy, governance, and enterprise AI lifecycle delivery for program leaders accountable for ROI.
- End-to-end AI red-teaming - prompt injection, model evasion, data poisoning, model exploitation, agentic / model-to-model attacks, and AI supply-chain attacks. For security teams defending production AI.
- AI risk management, NIST AI RMF / ISO/IEC 42001 alignment, and accountability across the AI lifecycle for governance and compliance leaders.
- Ethical hacking restructured to integrate AI across the full ethical-hacking lifecycle (recon, scanning, gaining access, maintaining access, covering tracks) - closing the AI chasm for existing security practitioners without a role change.
The full EC-Council AI credential catalogue is maintained at eccouncil.org/ai-courses.
| Term | Definition |
|---|---|
| AI System | A machine-based system that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions (aligned with OECD/EU AI Act). |
| AI Agent | An AI system that can autonomously plan, execute multi-step tasks, invoke tools, and take actions in physical or digital environments on behalf of a user or organization. |
| Foundation Model | A general-purpose AI model trained on broad data that can be adapted to a wide range of downstream tasks (e.g., GPT, Claude, Gemini, Llama, Stable Diffusion). |
| Model | Any machine learning artifact - including LLMs, diffusion models, classifiers, regressors, and reinforcement learning agents - used within an AI system. |
| High-Risk AI System | An AI system whose failure or misuse could cause significant harm to health, safety, fundamental rights, or critical infrastructure (aligned with EU AI Act Article 6). |
| Composite AI System | An AI system that orchestrates multiple models (potentially of different architectures) within a single workflow or product. |
| Agentic AI | AI systems exhibiting autonomous behavior: planning, tool use, multi-step execution, and environmental interaction with limited or delayed human review. |
| HITL | Human-in-the-Loop: a human reviews and approves every AI output before it takes effect. |
| HOTL | Human-on-the-Loop: a human monitors AI operations and can intervene, but does not approve each individual output. |
| HOOTL | Human-out-of-the-Loop: AI operates autonomously with periodic governance review rather than real-time human oversight. |
ADG turns the original ADG concept into a full enterprise AI security and responsible AI governance framework. It keeps the simplicity of ADOPT, DEFEND, and GOVERN, but adds the structure required to govern contemporary AI systems - across people, process, technology, deployment pattern, lifecycle, harm class, and regulatory environment.
This version was forged with senior AI, security, and governance leaders running production AI inside Fortune 500, Fortune Global 500, and Big Four firms - practitioners working from design through implementation across regulated and less-regulated sectors. It addresses eight major gap clusters identified through systematic review, and extends the framework to cover multi-agent systems, multi-modal architectures, shared responsibility, responsible AI, and measurable governance.
From a framework standpoint, this version is suitable as the basis for consulting assessments, enterprise AI governance programs, certification architecture, board-facing assurance discussions, regulatory preparation, and vendor due diligence.
ADG is not a finished product. It is iteratively shaped by the practitioners who actually deploy AI in production - across Fortune 500, Fortune Global 500, and Big Four firms. Every comment, every red-team finding, every regulatory shift becomes the next version. Here is how to engage.
Complete a short ADG readiness workflow and receive a neutral 30/60/90-day action roadmap with control gaps, evidence priorities, and NIST AI RMF / ISO 42001 mapping coverage.
Build your roadmap →

**Shape the framework** - Got expertise in AI governance, security, or deployment? Get your perspective into the next version. Become a contributor →

**Resources** - Take the full framework offline as Markdown. Print, share, or fork into your own governance program. Download .md →

**Talk to us** - For boards, CISOs, and governance leads evaluating ADG for an enterprise programme. Email the team →

**EU AI Act** - Status: conformity assessments, post-market monitoring (Art. 72), and serious-incident reporting (Art. 73) live across the EU. ADG MC-1, MC-4, MC-8, and MC-9 map directly. Read the Act →

**CSA AI Controls Matrix (AICM)** - What it means: the first auditable attestation regime for AI controls maps the CSA AI Controls Matrix (243 control objectives) against ISO 42001, NIST AI RMF, and the EU AI Act. ADG composes underneath it. View AICM →

**Singapore Model AI Governance Framework (MGF)** - Why it matters: first state-backed framework with first-class agentic coverage. ADG's Multi-Agent Interop overlay aligns to the IMDA principles; AI Verify is the testing toolkit underneath. Read MGF →

**OWASP Top 10 for Agentic** - What changed: first-class agentic threats - goal hijack, tool misuse, identity abuse, multi-agent collusion. ADG's Agentic Hardening overlay (10.4) consumes this as input. View OWASP →

**NIST Generative AI Profile** - Status: companion profile to the AI RMF for generative-AI risks. ADG's §11.2 crosswalk maps every Minimum Control to NIST RMF GOVERN / MAP / MEASURE / MANAGE. Read NIST profile →

**MITRE ATLAS** - Why it matters: 84+ adversarial techniques with real-world AI incident case studies. ADG MC-8 (Runtime Monitoring) and MC-9 (Incident Response) cite ATLAS as the threat catalogue input. View ATLAS →

Quarterly updates on framework versions, regulatory landings, and the standards we're tracking. No marketing. Practitioner signal only.