An AI-powered system that autonomously evaluates whether a GitHub Pull Request satisfies the requirements in a Jira ticket using a multi-agent LangGraph workflow.
- Multi-Agent Evaluation: Utilizes chained specialized agents (Parser, Analyzer, Reasoner, TestGen, Synthesizer).
- MCP Native: Integrates with the Model Context Protocol to securely surface Jira and GitHub data to agents.
- Explainable AI: Maps PR diff snippets back to specific Jira Acceptance Criteria.
- Confidence Scoring: Heuristic confidence score based on the pass/fail ratio of requirements.
- Premium Dashboard: Neon-dark aesthetic built with Next.js and Tailwind.
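The confidence heuristic above can be sketched as a simple pass/fail ratio. This is a minimal illustration, not the backend's actual implementation — the function name and signature are hypothetical, and the real scorer may weight requirements differently:

```python
def confidence_score(results: list[bool]) -> float:
    """Hypothetical sketch of the heuristic: the fraction of Jira
    acceptance criteria the PR satisfies, as judged by the agents."""
    if not results:
        return 0.0  # no requirements extracted -> no confidence
    return sum(results) / len(results)

# e.g. 3 of 4 acceptance criteria passed
print(confidence_score([True, True, True, False]))  # 0.75
```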
- `backend/`: FastAPI API server and LangGraph agents.
- `frontend/`: Next.js web dashboard.
- `docs/`: Architecture diagrams and deeper documentation.
- Python 3.10+
- Node.js 18+
- OpenAI API Key
- Change to the backend directory: `cd backend`
- Create a virtual environment: `python -m venv venv`
- Activate it: `.\venv\Scripts\activate` (Windows) or `source venv/bin/activate` (Mac/Linux)
- Install requirements: `pip install -r requirements.txt` (if the file is not present, install the packages listed in the script manually)
- Create a `.env` file in `backend/` from the provided snippet and insert your `OPENAI_API_KEY`. (Jira/GitHub keys are optional for mock testing.)
- Run the server: `uvicorn main:app --reload`
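A minimal `.env` for mock testing might look like the following. Only `OPENAI_API_KEY` is named in this README; the Jira/GitHub variable names shown here are assumptions — check the snippet in `backend/` for the exact keys:

```env
OPENAI_API_KEY=your-openai-key-here
# Optional — only needed for live Jira/GitHub data (names assumed):
# JIRA_API_TOKEN=...
# GITHUB_TOKEN=...
```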
- Change to the frontend directory: `cd frontend`
- Install dependencies: `npm install`
- Install explicit dependencies if missing: `npm i lucide-react`
- Run the dev server: `npm run dev`
Open http://localhost:3000, enter a valid PR URL and Jira ticket (or use the mock default), and click Evaluate.
Note: For hackathon demo purposes, querying SPEC-123 without Jira tokens triggers a mocked requirement-response flow.