Skip to content
#

nist-ai-rmf

Here are 84 public repositories matching this topic...

verifywise

Complete AI governance and LLM Evals platform with support for EU AI Act, ISO 42001, NIST AI RMF and 20+ more AI frameworks and regulations. Join our Discord channel: https://discord.com/invite/d3k3E4uEpR

  • Updated May 5, 2026
  • TypeScript

The open-source diagnostic for AI misalignment. 32 tests across fabrication, manipulation, deception, unpredictability, and opacity. Provider-agnostic. Runs against OpenAI, Anthropic, Bedrock, Azure, Gemini, and more. Letter grade in under 5 minutes, content-addressed manifest for bit-identical replay. Built by iMe.

  • Updated May 5, 2026
  • Python

The Universal Governance, Risk, Compliance (GRC) Operating System with Integrated Security for Agentic AI, Non-Human Identities, and Swarm Governance. AI SAFE² + AI Sovereignty Maturity Model (AISM) [Dual License: MIT + CC-BY-SA]

  • Updated May 4, 2026
  • Python

Static security scanner for AI agents. Catches prompt injection, runaway loops, missing oversight, and compliance gaps across 21 frameworks. Use from Claude Code, Cursor, ChatGPT (MCP), the CLI, or GitHub Actions.

  • Updated May 5, 2026
  • Go
governance

Zero-dependency TypeScript SDK for AI agent governance: policy enforcement, injection detection, tamper-evident audit, and standards mapping (EU AI Act, OWASP, NIST, ISO 42001)

  • Updated Apr 30, 2026
  • TypeScript

The most comprehensive open-source mapping of OWASP GenAI risks to industry frameworks — 37 files, 16 frameworks, 3 source lists: LLM Top 10, Agentic Top 10, DSGAI 2026. OT/ICS, EU AI Act, NIST, ISO 27001, ISO 42001, CIS, SAMM, ENISA, NHI, AIVSS.

  • Updated May 4, 2026
  • JavaScript

Operationalizes PM insights through working agents grounded in GRC best practices. Provides prompt libraries and tools to identify governance and compliance risks before scaling programs, analytics initiatives, or AI systems.

  • Updated May 1, 2026
  • Jupyter Notebook

Model evaluations for Responsible AI: We evaluate frontier LLMs for responsible AI safety and robustness, map results to the NIST AI Risk Management Framework, and publish the findings to a public dashboard. We run a structured adversarial benchmark against models from Anthropic, OpenAI, and Google, scores them using a deterministic judge (Haiku).

  • Updated May 2, 2026
  • Python

A browser-based Microsoft Defender for Endpoint audit tracker for MSSP security engineers, mapping ~270 tasks across multiple frameworks including — NIST CSF 2.0, Cyber Essentials, SOC 2, and NIST 800-53. Features per-task status, notes, live progress metrics, framework switching, dark/light mode, and CSV, HTML, and JSON export.

  • Updated Apr 1, 2026
  • HTML

Improve this page

Add a description, image, and links to the nist-ai-rmf topic page so that developers can more easily learn about it.

Curate this topic

Add this topic to your repo

To associate your repository with the nist-ai-rmf topic, visit your repo's landing page and select "manage topics."

Learn more