
AI Governance Framework

Practical governance patterns for AI and GenAI systems in regulated industries


Philosophy

Governance is enablement, not gatekeeping. The goal is not to slow down AI adoption — it is to make AI adoption durable, defensible, and scalable across a regulated organization.

This framework provides opinionated, field-tested patterns for governing both traditional ML and GenAI/LLM systems in financial services, healthcare, and other industries where model risk, data privacy, and regulatory compliance are non-negotiable.

Who This Is For

  • Chief Risk Officers who need confidence that AI systems meet regulatory expectations
  • CTOs and Engineering VPs who need repeatable standards for AI productionization
  • AI/ML leads who need clear guardrails without bureaucratic overhead
  • Compliance teams who need to map AI controls to regulatory requirements
  • CISOs who need to assess the security surface of LLM and agentic AI systems
  • Board members who need to understand AI risk posture without operational detail

Framework Components

Risk Classification

Define how AI systems are tiered by risk and what governance intensity each tier requires.
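As an illustration, a tiering decision can be reduced to a small rubric that maps risk drivers to a governance tier. The tier labels below echo the T3 tier mentioned later in the changelog, but the specific drivers and thresholds are assumptions for demonstration, not the framework's actual matrix:

```python
# Illustrative tiering rubric (a sketch, not the framework's actual
# classification matrix). Drivers and thresholds are assumptions.
TIER_GOVERNANCE = {
    "T1": "full lifecycle gates, independent validation, board visibility",
    "T2": "standard gates, peer review, periodic monitoring",
    "T3": "lightweight checklist, self-attestation",
}

def classify_tier(customer_facing: bool,
                  autonomous_decisions: bool,
                  sensitive_data: bool) -> str:
    """Return a governance tier for an AI system from three risk drivers."""
    score = sum([customer_facing, autonomous_decisions, sensitive_data])
    if score >= 2:
        return "T1"  # high risk: maximum governance intensity
    if score == 1:
        return "T2"  # medium risk
    return "T3"      # low risk

# A customer-facing system making autonomous decisions lands in T1:
print(classify_tier(True, True, False))  # T1
```

The point of encoding the rubric is repeatability: two teams tiering the same system should reach the same answer, and the governance intensity for each tier is looked up, not negotiated.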

Model Lifecycle (Traditional ML)

Standards and gates across the full model lifecycle — from development through production monitoring.

LLM Lifecycle (GenAI)

Governance standards specific to large language models and generative AI systems.

AI Security

Standards for securing AI systems against adversarial threats.

Compliance Mapping

How framework controls map to specific regulatory requirements.
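A compliance mapping is, at its core, a lookup from each control to the regulatory citations it satisfies, queryable in both directions for audits. The control IDs and citations below are hypothetical placeholders sketching the shape of such a mapping, not the framework's actual register:

```python
# Hypothetical control-to-regulation mapping; IDs and citations are
# placeholders illustrating the structure only.
CONTROL_TO_REGULATIONS = {
    "CTRL-07-model-validation": ["SR 11-7", "EU AI Act Art. 15"],
    "CTRL-12-adm-disclosure":   ["GDPR Art. 22", "EU AI Act Art. 50"],
}

def regulations_for(control_id: str) -> list[str]:
    """Forward lookup: which requirements does a control evidence?"""
    return CONTROL_TO_REGULATIONS.get(control_id, [])

def controls_for(regulation: str) -> list[str]:
    """Reverse lookup: which controls support a given regulation?"""
    return [cid for cid, regs in CONTROL_TO_REGULATIONS.items()
            if any(regulation in r for r in regs)]

print(controls_for("GDPR"))  # ['CTRL-12-adm-disclosure']
```

The reverse lookup is what a regulator-facing audit needs: given a requirement, enumerate the controls (and their evidence) that demonstrate compliance.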

Responsible AI

Standards for ethical and responsible deployment of AI systems.

Operating Model

Who does what, when, and how decisions escalate.

Governance Operations

The operational backbone — controls, evidence, and the end-to-end process.

  • Control Register — master mapping of every control to its evidence, owner, and escalation
  • Governance Workflow — end-to-end process from idea intake to retirement, with gates and evidence packs
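A control register row like the one described above can be sketched as a record type; the field names here are assumptions about what such an entry holds, not the framework's actual schema:

```python
from dataclasses import dataclass

@dataclass
class ControlEntry:
    """One row of a control register (field names are illustrative)."""
    control_id: str
    description: str
    evidence: str     # artifact that proves the control was executed
    owner: str        # accountable role
    escalation: str   # where failures are routed

register = [
    ControlEntry(
        control_id="CTRL-01",
        description="Risk tier assigned at intake",
        evidence="signed tiering record",
        owner="AI/ML lead",
        escalation="model risk committee",
    ),
]

# An evidence pack for a gate review is a filtered view of the register:
evidence_pack = [(c.control_id, c.evidence) for c in register]
```

Keeping evidence, owner, and escalation on the same record means every control is auditable by construction: nothing enters the register without someone accountable for it and a defined path when it fails.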

Templates

Ready-to-use templates for immediate adoption:

Traditional ML:

GenAI / LLM:

Worked Examples

See the framework applied to real-world use cases:

Traditional ML:

GenAI / LLM:

Adoption Approach

This framework is designed for phased rollout:

  1. Phase 1 (Week 1–2): Adopt the risk classification matrix. Tier your existing AI and GenAI systems.
  2. Phase 2 (Month 1): Apply lifecycle standards to one high-risk system as a pilot.
  3. Phase 3 (Month 2–3): Roll out templates (model cards, impact assessments, prompt review checklists) org-wide.
  4. Phase 4 (Ongoing): Establish the operating model — governance cadence, escalation paths, RACI.
  5. Phase 5 (GenAI): Apply LLM lifecycle standards — model selection, prompt governance, RAG governance, deployment gates.
  6. Phase 6 (Security): Establish red-teaming program, adversarial robustness standards, supply chain governance.
  7. Phase 7 (Global): Extend compliance mapping to all operating jurisdictions; implement regulatory change monitoring.

Start where the risk is highest. Expand as the organization builds muscle memory.

Regulatory Alignment

This framework has been designed with explicit alignment to:

| Regulation | Coverage |
| --- | --- |
| EU AI Act | High-risk system requirements (Articles 6–15), GPAI model obligations (Articles 53–55), transparency (Article 50), enforcement status tracking |
| NIST AI RMF | Govern, Map, Measure, Manage functions — full category-level mapping |
| ISO/IEC 42001 | AI management system — clause-by-clause implementation mapping for certification |
| SR 11-7 (Fed/OCC) | Model risk management — development, validation, governance |
| MAS FEAT | Fairness, Ethics, Accountability, Transparency principles for AI in financial services |
| MAS Model Governance Framework | Singapore AI governance — model risk, fairness, explainability |
| BCBS 239 | Risk data aggregation and reporting — data lineage, quality, timeliness |
| DORA | Digital operational resilience — ICT risk management for AI systems |
| GDPR | Data protection, automated decision-making (Article 22), right to explanation |
| LGPD (Brazil) | Data protection for AI systems — consent, data subject rights |
| MiFID II | Investment research requirements, communication recording |
| Consumer Duty (FCA) | Fair customer outcomes, clear AI communications |

Contributing

Contributions are welcome. If you work in regulated industries and have governance patterns worth sharing, open a PR. Practical experience matters more than theoretical frameworks.

Related Writing

Changelog

March 2026 — Comprehensive Update

  • AI Security: Added red-teaming protocol, adversarial robustness standards, supply chain security
  • Regulatory breadth: Added NIST AI RMF mapping, ISO/IEC 42001 mapping, multi-jurisdictional guide (8 jurisdictions), regulatory change monitoring, cross-border data governance
  • Frontier AI coverage: Added multimodal AI risk matrix and governance, multi-agent governance, open-source model governance, model deprecation governance
  • Enterprise operations: Added board reporting, governance metrics/KPIs, cost governance, GRC integration, incident forensics
  • Templates: Added board risk report and red-team report templates
  • Examples: Added T3 internal knowledge search (lighter governance demonstration)
  • Updated: EU AI Act mappings with enforcement status and compliance evidence requirements; agentic AI risk taxonomy with multi-agent, persistent memory, and tool chain risk dimensions

License

Apache 2.0 — see LICENSE.
