# Home
Welcome to the official Wiki for AI-Hackathon-Judge — an AI-powered platform designed to evaluate hackathon projects automatically using repository analysis and demo video evaluation.
## Contents
- Introduction
- Problem Statement
- Solution Overview
- Features
- System Architecture
- Tech Stack
- Installation Guide
- Usage Guide
- Evaluation Criteria
- Security Considerations
- Limitations
- Future Enhancements
- Contribution Guidelines
- License
## Introduction
AI-Hackathon-Judge is an intelligent judging system that analyzes GitHub repositories and project demo videos to provide instant scores and structured feedback for hackathon submissions.
It minimizes human bias, saves judging time, and ensures consistent evaluation across all participants.
## Problem Statement
Traditional hackathon judging faces multiple challenges:
- Manual evaluation is time-consuming
- Human bias affects fairness
- Judges may not fully review large codebases
- Feedback is delayed or insufficient
- Technical depth is often overlooked
## Solution Overview
AI-Hackathon-Judge automates the judging process by:
- Fetching and analyzing GitHub repositories
- Evaluating demo videos (link-based)
- Using AI models to assess code quality, innovation, and documentation
- Generating structured scores and feedback
- Providing instant results via a web interface
## Features
- 🔍 GitHub Repository Analysis
- 🎥 Demo Video Evaluation
- 🤖 AI-Based Scoring Engine
- 📊 Instant Results & Feedback
- 🧠 Consistent, Bias-Reduced Judging
- ⚡ Fast & Scalable Architecture
- 🌐 Web-Based Interface
- 💻 Optional Desktop Executable Support
## System Architecture

```
Frontend (React + Vite)
        ↓
FastAPI Backend (Python)
        ↓
AI Model (Gemini API)
        ↓
Scoring & Feedback Engine
```
- Frontend – User interface for input and result visualization
- Backend API – Handles logic, processing, and AI interaction
- AI Engine – Performs analysis and scoring
- GitHub API – Fetches repository data
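The flow through these components can be sketched end to end. This is an illustrative sketch only: the function names, score fields, and return values are assumptions, and the GitHub and Gemini calls are stubbed rather than real API requests.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Evaluation:
    score: int      # aggregate score on a 0-100 scale (assumed shape)
    feedback: str   # AI-generated summary feedback

def fetch_repo_summary(repo_url: str) -> str:
    """Stub for the GitHub API step: a real version would fetch the README,
    file tree, and language breakdown via the GitHub REST API."""
    return f"Summary of {repo_url}: README present, 12 source files"

def ask_ai(prompt: str) -> Evaluation:
    """Stub for the AI engine step: a real version would send the prompt to
    the Gemini API and parse its structured reply."""
    return Evaluation(score=78, feedback="Clear structure; docs could go deeper.")

def evaluate(repo_url: str, video_url: Optional[str] = None) -> Evaluation:
    """Backend logic: gather context, then delegate scoring to the AI engine."""
    context = fetch_repo_summary(repo_url)
    if video_url:
        context += f"\nDemo video: {video_url}"
    return ask_ai(f"Evaluate this hackathon submission:\n{context}")

result = evaluate("https://github.com/example/project")
print(result.score, result.feedback)
```

The separation mirrors the diagram above: the backend only assembles context and forwards it, so the AI engine and data sources can be swapped independently.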
## Tech Stack

### Frontend

- React
- Vite
- Tailwind CSS
- Axios
### Backend

- Python
- FastAPI
- Uvicorn
- Google Gemini API
- GitHub REST API
- PyInstaller (for local builds)
## Installation Guide

Clone the repository and install the backend dependencies:

```bash
git clone https://github.com/Daku3011/AI-Hackathon-Judge
cd backend
pip install -r requirements.txt
```

Create a `.env` file:

```
GEMINI_API_KEY=your_api_key_here
```

Run the backend server:

```bash
uvicorn main:app --reload
```

Install and start the frontend:

```bash
cd frontend
npm install
npm run dev
```

## Usage Guide

- Open the web interface
- Enter:
  - GitHub repository URL
  - Demo video link (optional)
- Click Evaluate
- AI processes the submission
- View:
  - Final score
  - Category-wise analysis
  - Strengths & weaknesses
  - AI-generated feedback
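The submission and result described in the steps above might look like the following. Note that the field names and the idea of a single JSON exchange are assumptions for illustration; the project's actual API shape is not documented here.

```python
import json

# Hypothetical request body the frontend might POST to the backend.
request_body = {
    "repo_url": "https://github.com/example/project",
    "video_url": "https://youtu.be/example",  # optional
}

# Hypothetical response shape matching the "View" items above:
# a final score, category-wise analysis, strengths/weaknesses, and feedback.
response_body = {
    "final_score": 82,
    "categories": {
        "Code Quality": 17,
        "Innovation": 16,
        "Documentation": 18,
        "Feasibility": 15,
        "Presentation": 16,
    },
    "strengths": ["Clean README", "Working demo"],
    "weaknesses": ["No automated tests"],
    "feedback": "Solid submission with room for better test coverage.",
}

# In this sketch the final score is simply the sum of the category scores.
assert response_body["final_score"] == sum(response_body["categories"].values())
print(json.dumps(response_body, indent=2))
```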
## Evaluation Criteria

| Category | Description |
|---|---|
| Code Quality | Readability, structure, best practices |
| Innovation | Creativity and uniqueness |
| Documentation | README clarity and setup instructions |
| Feasibility | Practicality and real-world usability |
| Presentation | Demo explanation and clarity |
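One way the five categories could be combined into a single score is a weighted average. The weights below are purely illustrative; the project does not document its actual weighting.

```python
# Illustrative aggregation of per-category scores (0-10 each) into a 0-100 total.
# These weights are assumptions, not the project's documented values.
WEIGHTS = {
    "Code Quality": 0.25,
    "Innovation": 0.25,
    "Documentation": 0.20,
    "Feasibility": 0.15,
    "Presentation": 0.15,
}

def aggregate(scores: dict) -> float:
    """Weighted average of category scores (each 0-10), scaled to 0-100."""
    return round(sum(WEIGHTS[c] * s for c, s in scores.items()) * 10, 1)

print(aggregate({
    "Code Quality": 8,
    "Innovation": 7,
    "Documentation": 9,
    "Feasibility": 6,
    "Presentation": 7,
}))  # → 75.0
```

Keeping the weights in one dictionary makes the rubric easy to tune without touching the scoring logic.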
## Security Considerations

- API keys are stored using environment variables
- No sensitive user data is stored
- Repositories are accessed in read-only mode
- Public deployment should include rate limiting
- Refer to `SECURITY.md` for vulnerability reporting
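Reading the key from the environment rather than hard-coding it might look like this; the variable name comes from the installation step, but the helper function itself is a sketch, not project code.

```python
import os

def load_api_key(env=os.environ) -> str:
    """Fetch the Gemini key from the environment; fail fast if it is missing."""
    key = env.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError("GEMINI_API_KEY is not set; add it to your .env file")
    return key

# Demonstrated with an explicit mapping so the check is easy to see:
print(load_api_key({"GEMINI_API_KEY": "your_api_key_here"}))
```

Failing fast at startup gives a clear error instead of an opaque authentication failure on the first AI request.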
## Limitations

- AI judgment depends on prompt and model behavior
- Video evaluation is limited to metadata and transcripts
- Private repositories require user-provided access
- Incomplete documentation may affect scoring accuracy
## Future Enhancements

- Admin & judge dashboards
- Multi-AI consensus scoring
- Plagiarism detection
- Team-based evaluation
- PDF scorecard export
- Authentication & role management
- Cloud deployment templates
## Contribution Guidelines

Contributions are welcome!
- Fork the repository
- Create a feature branch
- Write clean, documented code
- Commit with clear messages
- Submit a pull request
## License

This project is licensed under the MIT License.
You are free to use, modify, and distribute it with proper attribution.
AI-Hackathon-Judge aims to modernize hackathon judging by making it faster, smarter, and fairer using AI.