Conversation
Document the AI-powered features for validators:
- Auto-Link Evidence for single/bulk guideline evidence linking
- Risk Assessment Generation for single/bulk assessments
- Understanding the structured AI-generated assessment format

Closes sc-14812.
Pull requests must include at least one of the required labels:
- Rename file from autogenerate-validation-reports.qmd to map-and-assess-evidence.qmd
- Update title to "Map and assess evidence"
- Fix UI element names to match actual product (Map Evidence, Assess Evidence)
- Describe Evidence Assessment output structure accurately
- Add relevance threshold slider documentation
- Add map-evidence-panel.png showing evidence type toggles and relevance threshold
- Add assess-evidence-panel.png showing compliance assessment option
- Display both in column-margin divs alongside their respective sections
- Update Map Evidence steps to describe approve/reject workflow
- Update Assess Evidence steps to describe approve/reject workflow
- Clarify that editing is available after approving assessments
- Add individual Approve/Reject buttons for mapped evidence
- Add See Relevance Analysis option with relevance score explanation
- Add individual Approve/Reject buttons for evidence assessments
- Add Reassess Evidence option to regenerate assessments
- Add new "Understand mapped evidence" section explaining relevance scores and analysis
- Report overview: Approve All / Reject All for the entire report
- Section level: individual Approve / Reject per evidence item or assessment
- Clarify the two-level review structure for both Map Evidence and Assess Evidence
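The relevance-threshold filtering these commits document can be sketched in miniature. The `EvidenceMatch` structure, its field names, and the 0.0 to 1.0 score scale below are illustrative assumptions, not the product's actual data model:

```python
from dataclasses import dataclass

@dataclass
class EvidenceMatch:
    """Hypothetical record for one piece of evidence mapped to a guideline."""
    guideline: str
    evidence_id: str
    relevance: float  # assumed 0.0-1.0 relevance score

def filter_by_threshold(matches, threshold):
    """Keep only matches at or above the relevance threshold (the slider value)."""
    return [m for m in matches if m.relevance >= threshold]

matches = [
    EvidenceMatch("G1.1", "ev-001", 0.92),
    EvidenceMatch("G1.1", "ev-002", 0.41),
    EvidenceMatch("G1.2", "ev-003", 0.78),
]

# Raising the slider narrows the set of mapped evidence a validator reviews.
print(len(filter_by_threshold(matches, 0.5)))  # 2
print(len(filter_by_threshold(matches, 0.8)))  # 1
```

Under this model, the report-level Approve All / Reject All acts on the whole filtered list, while section-level Approve / Reject acts on one `EvidenceMatch` at a time.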
Replace detailed "Understand mapped evidence" and "Understand evidence assessments" sections with a concise "How do evidence mapping and assessment work?" section that explains the benefits for new users.
Links to the manual compliance assessment workflow for users who prefer traditional evidence linking and assessment selection.
Changed 'From the report overview' to 'For the entire report' and removed references to non-existent 'Overview page' label.
Lighthouse check results
Folder depth level checked: 0
Commit SHA: 1ac5f88
sc-14812/documentation---autogenerate-val
PR Summary

This PR introduces a new guide page titled "Map and assess evidence" in the model validation section. The new documentation provides comprehensive details on how to use AI-assisted tools for mapping evidence to validation guidelines and generating compliance assessments. It outlines the following key functionalities:
Additionally, the sidebar configuration has been updated to include a link to this new guide, ensuring easy navigation for users. The page also references screenshots to illustrate UI components such as the Map Evidence and Assess Evidence panels.

Test Suggestions
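For context, a Quarto site registers sidebar links in its project configuration; a sketch of the kind of change described, assuming a `_quarto.yml` with a standard `website.sidebar` layout (the section name and file paths are assumptions, not the repository's actual structure):

```yaml
website:
  sidebar:
    contents:
      - section: "Model validation"
        contents:
          - guide/model-validation/assess-compliance.qmd
          - guide/model-validation/map-and-assess-evidence.qmd
```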
|
Validate docs site
✓ INFO: A live preview of the docs site is available — Open the preview
* docs: Map and assess evidence (#1299)
  * docs: Add autogenerate validation reports documentation (closes sc-14812)
  * fix: Rename to map-and-assess-evidence with accurate UI terminology
  * Add screenshots for Map Evidence and Assess Evidence panels
  * fix: Add Approve All / Reject All workflow for Map and Assess Evidence
  * Add individual approve/reject, relevance analysis, and reassess options
  * Distinguish report-level vs section-level approve/reject workflows
  * Consolidate intro into benefits-focused overview section
  * Format overview section as definition list
  * Remove What's next section
  * Remove unused evidence-assessment-example.png
  * Add What's next section with listing to assess-compliance
  * Document Remap Evidence button in report overview
  * Replace 'report overview' with visible UI terminology
  * Simplify navigation: click Validation link directly
  * Address review comments from Fernanda
  * Fix link warning
* docs: Add Databricks integration documentation (#1301) (#1316)
  * docs: Add Databricks Unity Catalog integration documentation

    Add documentation for the new Databricks integration that enables linking ValidMind inventory records to Databricks Unity Catalog resources including models, datasets, and agents.

    - Add Databricks to supported connections in configure-connections.qmd
    - Add Databricks to model inventory integrations in managing-integrations.qmd
    - Add step-by-step connection and linking examples in integrations-examples.qmd

    Closes sc-14813
  * Add placeholder columns to integrations grid layout
  * Fix Databricks connection field names to match UI
    - databricks host → workspace url
    - client id → sql warehouse id
    - client secret → personal access token
  * Fix Databricks link model steps to match UI
  * docs: Fix Databricks connection field descriptions
    - client id: Update to describe SQL Warehouse ID
    - client secret: Update to describe PAT token secret
  * docs: Add developer workflow for running validations on Databricks data

    Add a new section explaining how to run validation notebooks against Databricks-hosted data, with a link to the quickstart notebook and a mermaid diagram showing the data flow between platforms.
  * Split examples up into separate files
  * File rename
  * Remove terminology ambiguity
  * Edits
  * Add missing headings for steps
  * Add configuration screenshot

Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
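The "run validations on Databricks data" workflow can be illustrated with a minimal, self-contained sketch. This is not the ValidMind SDK; the check below, the table name, and the field names are hypothetical, and the Databricks fetch is shown only as comments so the sketch runs standalone:

```python
# In a real notebook you would first fetch rows from a Databricks SQL
# warehouse, e.g. with the databricks-sql-connector package:
#   from databricks import sql
#   with sql.connect(server_hostname=..., http_path=..., access_token=...) as conn:
#       rows = conn.cursor().execute("SELECT * FROM catalog.schema.loans").fetchall()
# Here we use an inline sample instead.

def validate_no_nulls(rows, required_fields):
    """Return the names of required fields that are missing or None in any row."""
    failures = set()
    for row in rows:
        for field in required_fields:
            if row.get(field) is None:
                failures.add(field)
    return sorted(failures)

rows = [
    {"loan_id": 1, "amount": 1200.0},
    {"loan_id": 2, "amount": None},
]

print(validate_no_nulls(rows, ["loan_id", "amount"]))  # ['amount']
```

The results of checks like this are what a validation notebook would log back to the platform as evidence; see the quickstart notebook referenced in the commit for the actual end-to-end flow.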
What and why?
Documents the Map Evidence and Assess Evidence features (sc-14812). These AI-assisted tools help validators link relevant evidence to validation guidelines and generate structured compliance assessments.
Changes:
- New guide page map-and-assess-evidence.qmd covering the Map Evidence and Assess Evidence workflows, with a What's next link to the manual workflow (assess-compliance.qmd)

How to test
Try the live preview:
What needs special review?
Dependencies, breaking changes, and deployment notes
Release notes
Added documentation for AI-assisted evidence mapping and compliance assessment features in validation reports, including the Map Evidence and Assess Evidence workflows.
Checklist