1 change: 1 addition & 0 deletions site/guide/_sidebar.yaml
Expand Up @@ -133,6 +133,7 @@ website:
contents:
- guide/model-validation/review-model-documentation.qmd
- guide/model-validation/assess-compliance.qmd
- guide/model-validation/map-and-assess-evidence.qmd
- guide/model-documentation/work-with-content-blocks.qmd
- guide/model-documentation/work-with-document-versions.qmd
- text: "Check for compliance"
Expand Down
187 changes: 187 additions & 0 deletions site/guide/model-validation/map-and-assess-evidence.qmd
@@ -0,0 +1,187 @@
---
# Copyright © 2023-2026 ValidMind Inc. All rights reserved.
# Refer to the LICENSE file in the root of this repository for details.
# SPDX-License-Identifier: AGPL-3.0 AND ValidMind Commercial
title: "Map and assess evidence"
date: last-modified
listing:
- id: whats-next
type: grid
grid-columns: 2
max-description-length: 250
sort: false
fields: [title, description]
contents:
- assess-compliance.qmd
---

Use AI-assisted tools to map relevant evidence to validation guidelines and generate compliance assessments based on linked evidence. These features streamline the validation workflow by reducing manual effort while maintaining quality.

::: {.attn}

## Prerequisites

- [x] {{< var link.login >}}
- [x] The model you are validating is registered in the model inventory.[^1]
- [x] A model developer has submitted their model documentation for validation.[^2]
- [x] You are a [{{< fa circle-check >}} Validator]{.bubble} or assigned another role with sufficient permissions.[^3]

:::

## How do evidence mapping and assessment work?

Validation reports require you to link supporting evidence to each guideline and write compliance assessments, a process that can be time-consuming when done manually across dozens of guidelines.

Map evidence
: Scans all available evidence from developers and validators, then suggests which items are relevant to each guideline. Instead of searching through evidence blocks yourself, you review AI-suggested matches and approve the ones that apply. Each suggestion includes a relevance score and explanation so you can make informed decisions.

Assess evidence
: Analyzes the linked evidence for a guideline and drafts a structured compliance assessment. The generated assessment includes a compliance conclusion, specific observations about gaps or issues, and a technical review of what the evidence demonstrates. You review and approve the draft, and then make edits if needed — saving time while maintaining control over the final content.

Both features are designed to accelerate validation without replacing your judgment. You always review and approve suggestions before they become part of the report.

## Map evidence to guidelines

:::: {.column-margin}
![Map Evidence panel](map-evidence-panel.png){fig-alt="Map Evidence panel showing evidence type toggles for Developer Evidence and Validator Evidence, and a Relevance Threshold slider set to 0.7." .screenshot}
::::

Map Evidence uses AI to suggest relevant evidence for each validation guideline, helping you find and link supporting documentation from both developers and validators.

1. In the left sidebar, click **{{< fa cubes >}} Inventory**.

2. Select the model you are validating.[^4]

3. In the left sidebar, click **Validation** to open the validation report.

4. Navigate to a section and expand the **Evidence** panel.

5. Click **{{< fa wand-magic-sparkles >}} Map Evidence**.

6. Configure the mapping options:
- Toggle **Developer Evidence** to include evidence logged via the {{< var validmind.developer >}}.
- Toggle **Validator Evidence** to include evidence uploaded or created by validators.
- Adjust the **Relevance Threshold** slider — lower values return more results while higher values show only the most relevant matches.

7. Click **Map Evidence** to run the AI mapping.

The panel displays how many evidence items are available to review for each guideline in the section.
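Conceptually, the relevance threshold is a cutoff applied to the AI's relevance scores. The sketch below illustrates the effect with hypothetical data; the record fields and scores are illustrative only, not the platform's API:

```python
# Illustrative only: hypothetical suggestion records, not ValidMind's API.
suggestions = [
    {"evidence": "Data Quality Tests", "score": 0.91},
    {"evidence": "Feature Importance Plot", "score": 0.74},
    {"evidence": "Training Log Excerpt", "score": 0.52},
]

def filter_by_threshold(suggestions, threshold):
    """Keep only suggestions whose relevance score meets the threshold."""
    return [s for s in suggestions if s["score"] >= threshold]

# A lower threshold surfaces more matches; a higher one keeps only the best.
print(len(filter_by_threshold(suggestions, 0.5)))  # 3 matches
print(len(filter_by_threshold(suggestions, 0.7)))  # 2 matches
```

This is why lowering the slider produces more suggestions to review, at the cost of weaker matches.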

### Review and approve mapped evidence

After running Map Evidence, you can review and approve suggestions at three levels:

**For the entire report:**

1. Open the validation report and look at the right sidebar.

2. The **Map Evidence** panel shows how many items need review across the entire report.

3. Use **Approve All** to link all suggested evidence across all guidelines, or **Reject All** to dismiss all suggestions.

4. To re-run mapping with different settings, click **Remap Evidence**. This lets you adjust the relevance threshold or change which evidence types to include, then generate new suggestions.

**For an entire section:**

1. Navigate to a specific section in the validation report.

2. In the section header, click **{{< fa wand-magic-sparkles >}} Map Evidence** to open the mapping panel.

3. Use **Approve All** to link all suggested evidence for guidelines in that section, or **Reject All** to dismiss all section suggestions.

4. To re-run mapping with different settings, click **Remap Evidence**.

**For individual guidelines:**

1. Navigate to a specific section in the validation report.

2. Expand the **Evidence** panel for a guideline.

3. Click **{{< fa wand-magic-sparkles >}} Map Evidence** to open the mapping panel for that guideline.

4. Review individual evidence suggestions:

- Each item shows the evidence block name and a relevance score.
- Click **See Relevance Analysis** to view why the evidence was suggested.
- Click **Approve** to link an individual item to the guideline.
- Click **Reject** to dismiss an individual suggestion.

5. Or use **Approve All** / **Reject All** to handle all suggestions for that guideline at once.

Approved evidence appears in the Evidence panel for that guideline, organized by evidence type (Developer Evidence or Validator Evidence).
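The resulting grouping can be pictured as a simple partition of approved items by their source; this is a hypothetical sketch of that organization, with made-up item names:

```python
from collections import defaultdict

# Hypothetical approved items; "source" mirrors the two panel groups.
approved = [
    {"name": "Model Test Results", "source": "Developer Evidence"},
    {"name": "Independent Benchmark", "source": "Validator Evidence"},
    {"name": "Data Lineage Report", "source": "Developer Evidence"},
]

# Partition the flat list into the per-type groups shown in the panel.
panel = defaultdict(list)
for item in approved:
    panel[item["source"]].append(item["name"])
```

Each guideline's panel shows only the items approved for that guideline, grouped this way.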

## Assess evidence for compliance

:::: {.column-margin}
![Assess Evidence panel](assess-evidence-panel.png){fig-alt="Assess Evidence panel showing option to identify potential risks and compliance gaps based on linked evidence." .screenshot}
::::

Assess Evidence analyzes the linked evidence and generates a structured compliance assessment, identifying potential risks and compliance gaps.

1. Navigate to a section that has linked evidence.

2. Expand the **Evidence** panel.

3. Click **{{< fa wand-magic-sparkles >}} Assess Evidence**.

4. The AI analyzes the linked evidence and generates an **Evidence Assessment** containing:

- **Guideline Assessment** — A compliance conclusion indicating whether the guideline requirements are fully met, partially met, or not met, with an explanation of the evidence quality.

- **Validation Observations** — Specific findings about gaps or issues in the evidence, with each observation covering a single concern and suggesting actions for developers.

- **Evidence Review** — A detailed analysis of what the evidence demonstrates, including references to specific test outputs, documentation, and any limitations.

The panel displays how many assessments are available to review.
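The three parts of a generated assessment can be thought of as one structured record per guideline. The sketch below is a hypothetical shape for that record, not the platform's actual data model:

```python
from dataclasses import dataclass

@dataclass
class EvidenceAssessment:
    """Hypothetical shape of a generated assessment (illustrative only)."""
    guideline_assessment: str      # conclusion: fully, partially, or not met
    validation_observations: list  # one entry per gap or issue found
    evidence_review: str           # technical analysis of what the evidence shows

# Example draft a validator might review and then edit or approve.
draft = EvidenceAssessment(
    guideline_assessment="Partially met: evidence covers accuracy but not stability.",
    validation_observations=["No out-of-time test results; request from developer."],
    evidence_review="Linked test outputs demonstrate in-sample performance only.",
)
```

Keeping each observation to a single concern, as above, makes it easier to turn gaps into actionable requests for developers.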

### Review and approve evidence assessments

After running Assess Evidence, you can review and approve assessments at three levels:

**For the entire report:**

1. Open the validation report and look at the right sidebar.

2. The **Assess Evidence** panel shows how many assessments need review across the entire report.

3. Use **Approve All** to accept all generated assessments, or **Reject All** to dismiss all assessments.

**For an entire section:**

1. Navigate to a specific section in the validation report.

2. In the section header, click **{{< fa wand-magic-sparkles >}} Assess Evidence** to open the assessment panel.

3. Use **Approve All** to accept all generated assessments for that section, or **Reject All** to dismiss them.

4. To regenerate assessments with updated evidence, click **Re-assess Evidence**.

**For individual guidelines:**

1. Navigate to a specific section in the validation report.

2. Expand the **Evidence Assessment** panel for a guideline. Assessments pending review show a [Review]{.bubble} status.

3. Review the generated assessment content.

4. Click **Approve** to accept the assessment, or **Reject** to dismiss it.

5. After approving, you can edit the assessment content as needed — changes are auto-saved.

6. To regenerate an assessment, click **{{< fa wand-magic-sparkles >}} Reassess Evidence** to run the AI analysis again with any updated evidence.

## What's next

:::{#whats-next}
:::


<!-- FOOTNOTES -->

[^1]: [Register records in the inventory](/guide/inventory/register-records-in-inventory.qmd)

[^2]: [Submit for approval](/guide/model-documentation/submit-for-approval.qmd)

[^3]: [Manage permissions](/guide/configuration/manage-permissions.qmd)

[^4]: [Working with the inventory](/guide/inventory/working-with-the-inventory.qmd#search-filter-and-sort-records)