Add comprehensive evaluation infrastructure for spec and plan templates #1479
Open
kfinkels wants to merge 109 commits into github:main from tikalk:adding_eval
Conversation
…stitution handling
- Added logic to setup-plan.ps1 to handle constitution and team directives file paths, ensuring they are set in the environment.
- Implemented sync_team_ai_directives function in specify_cli to clone or update the team-ai-directives repository (a sketch of the clone-or-update logic follows this list).
- Updated init command in specify_cli to accept a team-ai-directives repository URL and sync it during project initialization.
- Enhanced command templates (implement.md, levelup.md, plan.md, specify.md, tasks.md) to incorporate checks for constitution and team directives.
- Created new levelup command to capture learnings and draft knowledge assets post-implementation.
- Improved task generation to include execution modes (SYNC/ASYNC) based on the implementation plan.
- Added tests for new functionality, including syncing team directives and validating outputs from setup and levelup scripts.
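A minimal sketch of the clone-or-update behavior named above. The function name and repository come from the commit message; the body is an assumption, not the PR's actual implementation:

```python
import subprocess
from pathlib import Path

def sync_team_ai_directives(repo_url: str, dest: Path) -> Path:
    """Clone the team-ai-directives repo, or fast-forward an existing checkout."""
    if (dest / ".git").exists():
        # Existing checkout: pull the latest directives.
        subprocess.run(["git", "-C", str(dest), "pull", "--ff-only"], check=True)
    else:
        # First run: clone fresh into the destination.
        dest.parent.mkdir(parents=True, exist_ok=True)
        subprocess.run(["git", "clone", repo_url, str(dest)], check=True)
    return dest
```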
# Conflicts:
#   .github/workflows/scripts/create-github-release.sh
Co-authored-by: factory-droid[bot] <138933559+factory-droid[bot]@users.noreply.github.com>
This reverts commit 952c676.
This reverts commit ba69392.
…optimized testing, GitHub issues integration, and code quality automation
…ized testing infrastructure, GitHub issues integration, and code quality automation
… placeholders
- Update /specify command template to include context population instructions
- Modify create-new-feature.sh to intelligently populate context.md fields
- Add mode-aware context population (build vs spec modes; see the sketch after this message)
- Update PowerShell equivalent script
- Fix bash syntax error in check-prerequisites.sh
- Ensure context.md passes validation without [NEEDS INPUT] markers

Closes the context.md population bug that was blocking the basic workflow.
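A minimal sketch of mode-aware placeholder population, assuming hypothetical field names; the PR does this in create-new-feature.sh and its PowerShell equivalent rather than in Python:

```python
def populate_context(template: str, mode: str, feature: str) -> str:
    """Replace [NEEDS INPUT: ...] markers so context.md passes validation.

    Field names here are hypothetical; the real script derives its values
    from the repository state.
    """
    fields = {
        "FEATURE_NAME": feature,
        "MODE": mode,
        # In build mode the context references an implementation plan;
        # in spec mode only the specification exists yet.
        "PLAN_PATH": f"specs/{feature}/plan.md" if mode == "build" else "n/a",
    }
    for key, value in fields.items():
        template = template.replace(f"[NEEDS INPUT: {key}]", value)
    return template
```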
…sure, and session persistence in roadmap
- Updated quickstart guide to clarify the new automation scripts in Bash and PowerShell, including step-by-step instructions for project initialization and specification creation.
- Revised upgrade documentation to improve clarity on handling existing directories and agent setup.
- Refactored Bash and PowerShell scripts for creating new features to streamline branch number retrieval and improve error handling.
- Added support for new agents (Qoder CLI and IBM Bob) in context update scripts and CLI initialization.
- Improved checklist and requirement templates to ensure clarity and completeness in specifications.
- Enhanced agent configuration to include new agents with appropriate metadata.
- Added cautionary notes in task-to-issues template to prevent issue creation in incorrect repositories.
…ity in complex systems
… of spec and plan template outputs using PromptFoo with Claude Sonnet 4.5.
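For orientation, a hedged sketch of driving such an evaluation programmatically, assuming the stock promptfoo CLI with its -c (config) and -o (output) flags and a config that declares the Claude Sonnet 4.5 provider; the PR itself wraps the run in run-promptfoo-eval.sh:

```python
import json
import subprocess

def run_promptfoo_eval(config: str = "promptfooconfig.yaml") -> dict:
    """Run a PromptFoo evaluation and return the parsed results."""
    subprocess.run(
        ["promptfoo", "eval", "-c", config, "-o", "results.json"],
        check=True,
    )
    with open("results.json") as fh:
        return json.load(fh)
```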
- Create run-auto-error-analysis.sh script for automated spec evaluation
- Add run-automated-error-analysis.py with Claude-powered categorization (a hedged sketch of the categorization call follows this message)
- Evaluate specs with binary pass/fail and failure categorization
- Generate detailed CSV reports and summary files
- Update .gitignore to exclude analysis results
- Document automated and manual error analysis workflows in README
- Mark Week 1 (Error Analysis Foundation) as completed in workplan

Provides two error analysis options:
1. Automated (Claude API) - fast, batch evaluation
2. Manual (Jupyter) - deep investigation and exploration
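A minimal sketch of the Claude-powered categorization step, using the official anthropic Python SDK; the category taxonomy and model ID are assumptions drawn from the description above, not the script's actual values:

```python
import json
import anthropic

# Illustrative taxonomy; the real script defines its own categories.
CATEGORIES = ["missing-requirement", "ambiguous-scope", "format-violation", "other"]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def categorize_spec(spec_text: str) -> dict:
    """Ask Claude for a binary pass/fail verdict plus a failure category."""
    prompt = (
        "Evaluate the following generated spec. Reply with JSON only, shaped "
        'like {"verdict": "pass", "category": "...", "reason": "..."}. '
        f"Category must be one of: {', '.join(CATEGORIES)}.\n\n{spec_text}"
    )
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # model ID assumed from "Claude Sonnet 4.5"
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    )
    # A sketch: production code would guard against non-JSON replies.
    return json.loads(msg.content[0].text)
```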
Implement a keyboard-driven web interface for reviewing generated specs, providing a 10x-faster review workflow. Includes auto-save, progress tracking, and JSON export capabilities. Update documentation with a complete annotation tool guide and usage instructions. (A minimal server-side sketch follows.)
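A minimal server-side sketch of the auto-save and export flow, assuming Flask and a hypothetical annotations.json store; the actual tool's routes, storage, and keyboard handling (which lives in the browser) may differ:

```python
import json
from pathlib import Path
from flask import Flask, jsonify, request

app = Flask(__name__)
STORE = Path("annotations.json")  # hypothetical storage location

def _load() -> dict:
    return json.loads(STORE.read_text()) if STORE.exists() else {}

@app.post("/annotations/<spec_id>")
def save_annotation(spec_id: str):
    """Auto-save endpoint: the front end posts each keystroke-driven verdict."""
    data = _load()
    data[spec_id] = request.get_json()
    STORE.write_text(json.dumps(data, indent=2))
    return jsonify(saved=spec_id, total=len(data))  # enables progress tracking

@app.get("/export")
def export_annotations():
    """JSON export of everything annotated so far."""
    return jsonify(_load())
```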
…or improved navigation and maintainability
Summary
Key Features
Test plan
- evals/scripts/run-promptfoo-eval.sh
- evals/scripts/run-annotation-tool.sh