Practical governance patterns for AI and GenAI systems in regulated industries
Governance is enablement, not gatekeeping. The goal is not to slow down AI adoption — it is to make AI adoption durable, defensible, and scalable across a regulated organization.
This framework provides opinionated, field-tested patterns for governing both traditional ML and GenAI/LLM systems in financial services, healthcare, and other industries where model risk, data privacy, and regulatory compliance are non-negotiable.
- Chief Risk Officers who need confidence that AI systems meet regulatory expectations
- CTOs and Engineering VPs who need repeatable standards for AI productionization
- AI/ML leads who need clear guardrails without bureaucratic overhead
- Compliance teams who need to map AI controls to regulatory requirements
- CISOs who need to assess the security surface of LLM and agentic AI systems
- Board members who need to understand AI risk posture without operational detail
Define how AI systems are tiered by risk and what governance intensity each tier requires.
- Risk Matrix (Traditional ML) — tiering criteria for classical ML models
- GenAI Risk Matrix — risk dimensions specific to LLMs and generative AI
- Agentic AI Risk — risk taxonomy for autonomous AI agents (single and multi-agent)
- Multimodal AI Risk Matrix — risk dimensions for vision, audio, video, and mixed-modality systems
- Assessment Template — structured risk assessment
Standards and gates across the full model lifecycle — from development through production monitoring.
Governance standards specific to large language models and generative AI systems.
- Foundation Model Selection — evaluation criteria for choosing LLMs
- Prompt Engineering Standards — version control, review, testing
- Fine-Tuning Governance — when and how to fine-tune safely
- RAG Governance — retrieval quality, access controls, document lifecycle
- GenAI Deployment Gates — what must pass before GenAI goes live
- LLM Production Monitoring — output quality, drift, cost, safety
- Evaluation Governance — what to evaluate, thresholds by tier, release criteria, dataset governance
- Multimodal Governance — input/output governance for non-text modalities
- Multi-Agent Governance — orchestration, coordination, and accountability for multi-agent systems
- Open-Source Model Governance — evaluation, licensing, and maintenance for open-weight models
- Model Deprecation Governance — sunsetting, migration, and archival
Standards for securing AI systems against adversarial threats.
- Red-Teaming Protocol — structured adversarial testing methodology
- Adversarial Robustness — defenses against jailbreaking, prompt injection, model extraction, data poisoning
- Supply Chain Security — model provenance, open-source risk, AI bill of materials
How framework controls map to specific regulatory requirements.
- EU AI Act Mapping — traditional AI system requirements (with enforcement status)
- EU AI Act — GenAI Mapping — GPAI model obligations, Article 50 transparency
- NIST AI RMF Mapping — Govern, Map, Measure, Manage function alignment
- ISO/IEC 42001 Mapping — AI management system certification backbone
- Multi-Jurisdictional Guide — EU, US, Singapore, UAE, Brazil, Canada, Australia, Japan
- Cross-Border Data Governance — data flows, sovereignty, conflict resolution
- Regulatory Change Monitoring — horizon scanning and impact assessment
- Prompt Audit Trail — logging, retention, reconstruction
- Data Residency for LLMs — data flows in LLM architectures
- Third-Party Model Risk — vendor assessment and ongoing monitoring
- Responsible AI Checklist
- Audit Trail Requirements
Standards for ethical and responsible deployment of AI systems.
- Bias Detection for LLMs — counterfactual testing, metrics, monitoring
- Hallucination Policy — tolerance levels, measurement, mitigation
- Transparency Standards — disclosure, labeling, explainability
- Human-in-the-Loop Patterns — when and how humans review AI output
Who does what, when, and how decisions escalate.
- AI CoE Design & Evolution — from centralized CoE to distributed operating model
- GenAI Roles & Responsibilities — prompt engineers, AI risk analysts, LLMOps
- Roles & Responsibilities (RACI)
- Review Cadence
- Escalation Paths
- Board Reporting — quarterly risk report structure, material incident criteria, risk appetite
- Governance Metrics & KPIs — measuring governance program health
- Cost Governance — budgeting, attribution, optimization governance
- GRC Integration — COSO, ISO 31000, three lines of defense
- Incident Forensics — post-incident investigation and evidence preservation
The operational backbone — controls, evidence, and the end-to-end process.
- Control Register — master mapping of every control to its evidence, owner, and escalation
- Governance Workflow — end-to-end process from idea intake to retirement, with gates and evidence packs
Ready-to-use templates for immediate adoption:
Traditional ML:
GenAI / LLM:
- LLM Model Card Template — model card for LLM-based systems
- GenAI Use Case Assessment — risk assessment for new GenAI projects
- Prompt Review Checklist — pre-deployment prompt review
- GenAI Incident Report — GenAI-specific incident template
- Board AI Risk Report — quarterly board reporting template
- Red-Team Report — adversarial testing findings template
See the framework applied to real-world use cases:
Traditional ML:
- Credit Scoring Model — T1 (Critical), heavily regulated, full validation
- Fraud Detection System — T2 (High), real-time, high-autonomy
GenAI / LLM:
- Customer Service Chatbot — T1 (Critical), customer-facing, RAG-based, complete governance case file
- Document Summarization — T2 (High), compliance docs, human-reviewed
- Agentic Research Assistant — T1 (Critical), agentic AI with tool access
- Internal Knowledge Search — T3 (Medium), internal RAG, lighter governance
This framework is designed for phased rollout:
- Phase 1 (Week 1–2): Adopt the risk classification matrix. Tier your existing AI and GenAI systems.
- Phase 2 (Month 1): Apply lifecycle standards to one high-risk system as a pilot.
- Phase 3 (Month 2–3): Roll out templates (model cards, impact assessments, prompt review checklists) org-wide.
- Phase 4 (Ongoing): Establish the operating model — governance cadence, escalation paths, RACI.
- Phase 5 (GenAI): Apply LLM lifecycle standards — model selection, prompt governance, RAG governance, deployment gates.
- Phase 6 (Security): Establish red-teaming program, adversarial robustness standards, supply chain governance.
- Phase 7 (Global): Extend compliance mapping to all operating jurisdictions; implement regulatory change monitoring.
Start where the risk is highest. Expand as the organization builds muscle memory.
This framework has been designed with explicit alignment to:
| Regulation | Coverage |
|---|---|
| EU AI Act | High-risk system requirements (Articles 6–15), GPAI model obligations (Articles 53–55), transparency (Article 50), enforcement status tracking |
| NIST AI RMF | Govern, Map, Measure, Manage functions — full category-level mapping |
| ISO/IEC 42001 | AI management system — clause-by-clause implementation mapping for certification |
| SR 11-7 (Fed/OCC) | Model risk management — development, validation, governance |
| MAS FEAT | Fairness, Ethics, Accountability, Transparency principles for AI in financial services |
| MAS Model Governance Framework | Singapore AI governance — model risk, fairness, explainability |
| BCBS 239 | Risk data aggregation and reporting — data lineage, quality, timeliness |
| DORA | Digital operational resilience — ICT risk management for AI systems |
| GDPR | Data protection, automated decision-making (Article 22), right to explanation |
| LGPD (Brazil) | Data protection for AI systems — consent, data subject rights |
| MiFID II | Investment research requirements, communication recording |
| Consumer Duty (FCA) | Fair customer outcomes, clear AI communications |
Contributions are welcome. If you work in regulated industries and have governance patterns worth sharing, open a PR. Practical experience matters more than theoretical frameworks.
- Your ML Risk Framework Wasn't Built for GenAI
- The Year LLMs Met Compliance — And Compliance Wasn't Ready
- AI Security: Added red-teaming protocol, adversarial robustness standards, supply chain security
- Regulatory breadth: Added NIST AI RMF mapping, ISO/IEC 42001 mapping, multi-jurisdictional guide (8 jurisdictions), regulatory change monitoring, cross-border data governance
- Frontier AI coverage: Added multimodal AI risk matrix and governance, multi-agent governance, open-source model governance, model deprecation governance
- Enterprise operations: Added board reporting, governance metrics/KPIs, cost governance, GRC integration, incident forensics
- Templates: Added board risk report and red-team report templates
- Examples: Added T3 internal knowledge search (lighter governance demonstration)
- Updated: EU AI Act mappings with enforcement status and compliance evidence requirements; agentic AI risk taxonomy with multi-agent, persistent memory, and tool chain risk dimensions
Apache 2.0 — see LICENSE.