Live Demo: https://main.d3it84bzc6jetr.amplifyapp.com
Bug Fix Arena is a developer-focused web app for discovering beginner-friendly open source issues, turning them into structured challenges, and building the workflow needed to ship a real fix. The current product stops short of full in-browser code execution on purpose. The MVP focuses on issue discovery, challenge shaping, progress tracking, submissions, scoring, and GitHub ingestion.
This README is written for three audiences:
- Recruiters who need to understand the product and why it is interesting
- Developers who want to run the project locally
- Collaborators who want to contribute without reverse-engineering the repo
Open Source Bug Fix Arena takes noisy raw GitHub issue data and turns it into a more intentional contributor experience:
- discover real issues tagged with `good first issue` or `help wanted`
- inspect a normalized challenge detail page with repository context and suggested approach guidance
- save, start, complete, and submit work against a challenge
- track points, leaderboard position, and submission history
- sync live GitHub issues into Postgres with manual admin control
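The discovery step above boils down to querying the GitHub issue search API by label. A hypothetical helper like the one below shows how such a query might be built; the repo's real client lives in `lib/github/`, and the function name here is illustrative:

```typescript
// Builds a GitHub search API URL for open issues carrying the given labels.
// Illustrative only; the project's actual query logic may differ.
function buildIssueSearchUrl(labels: string[], perPage = 20): string {
  const query = [
    "is:issue",
    "state:open",
    ...labels.map((label) => `label:"${label}"`),
  ].join(" ");
  return `https://api.github.com/search/issues?q=${encodeURIComponent(query)}&per_page=${perPage}`;
}
```

Authenticated requests (via `GITHUB_TOKEN`) get a substantially higher rate limit than anonymous ones, which is why the token is recommended but not required.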
The product direction is not “build another issue board.” It is “make bug-fix work feel structured, game-like, and easier to start.”
- Home page with product framing and featured challenges
- Browse page with search, filtering, sorting, pagination, and shareable URLs
- Challenge details page with:
- issue summary
- labels
- repository metadata
- reward and difficulty
- suggested skills
- related issues from the same repository
- practical workflow guidance
- Lightweight demo auth scaffold using an HTTP-only cookie
- Dashboard for saved, started, completed challenges, and submission history
- Manual PR-based submission flow with draft and review-ready statuses
- First-pass scoring and leaderboard
- Internal admin sync page for GitHub ingestion
- Mock fallback catalog so the product still works when GitHub is unavailable
- High-value automated tests around normalization, catalog state, scoring, and server-side workflows
The codebase stays intentionally simple and production-shaped:
- `app/`: Next.js App Router routes, route-level loading states, and error boundaries
- `components/`: reusable UI, layout, challenge, dashboard, leaderboard, and submission modules
- `lib/auth/`: lightweight demo session handling and admin gating
- `lib/challenges/`: normalization, catalog URL state, persistence helpers, and view models
- `lib/config/`: challenge labels, scoring rules, language metadata, and other centralized constants
- `lib/data/`: catalog assembly, filtering, sorting, recommendations, and mock fallback data
- `lib/db/`: Prisma client setup and enum-safe domain/database mappers
- `lib/engagement/`: save/start/complete flows, dashboard queries, scoring logic, leaderboard queries
- `lib/github/`: GitHub API client, normalization, constants, and live challenge discovery
- `lib/submissions/`: submission lifecycle, URL normalization, persistence, and server actions
- `lib/sync/`: manual ingestion workflow, deduplication, archival, and sync audit queries
- `prisma/`: database schema, migrations, and seed script
- `tests/`: Vitest fixtures and high-value unit/integration-style coverage
The main runtime pattern is:
- fetch or read challenge data
- normalize it into internal domain records
- render server-first pages from stable internal types
- mutate state through server actions or server-only services
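The normalize-at-the-boundary step can be sketched in a few lines of TypeScript. The shapes and field names here are illustrative stand-ins, not the repo's actual types:

```typescript
// Raw shape as it arrives from the GitHub API (subset, illustrative).
interface GitHubIssue {
  number: number;
  title: string;
  labels: { name: string }[];
  html_url: string;
}

// Stable internal shape that pages render from.
interface ChallengeRecord {
  issueNumber: number;
  title: string;
  labels: string[];
  url: string;
}

// External data is normalized once, at the boundary; everything
// downstream only ever sees ChallengeRecord.
function normalizeIssue(issue: GitHubIssue): ChallengeRecord {
  return {
    issueNumber: issue.number,
    title: issue.title.trim(),
    labels: issue.labels.map((label) => label.name.toLowerCase()),
    url: issue.html_url,
  };
}
```

Keeping the raw GitHub shape out of components means a GitHub API change only touches the normalization layer.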
- Next.js 16 App Router
- TypeScript
- React 19
- Tailwind CSS 4
- PostgreSQL
- Prisma ORM
- Vitest for focused automated tests
- GitHub REST API
The current Prisma schema models the first real product loop:
- `User`: contributor profile, demo-auth identity, related scores, engagements, submissions, and sync runs
- `Repository`: normalized GitHub repository metadata
- `Challenge`: normalized GitHub issue or mock challenge, plus sync metadata and repository relationship
- `ChallengeEngagement`: one record per user per challenge for saved, started, and completed states
- `Submission`: one record per user per challenge for PR-based submission state
- `Score`: persisted score summary for leaderboard reads
- `ChallengeSyncRun`: audit log for manual GitHub ingestion jobs
Relationship summary:
- one `Repository` has many `Challenge` records
- one `User` can have many `ChallengeEngagement` and `Submission` records
- one `Challenge` can have many `ChallengeEngagement` and `Submission` records
- one `User` has one `Score`
This structure is intentionally ready for verified submissions later without needing to redesign the core tables.
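A minimal Prisma sketch of the `Repository`-to-`Challenge` relationship, including the compound uniqueness key that the sync layer deduplicates on. This is an illustrative fragment, not the repo's actual schema; see `prisma/schema.prisma` for the real models:

```prisma
// Illustrative fragment only.
model Repository {
  id         String      @id @default(cuid())
  fullName   String      @unique
  challenges Challenge[]
}

model Challenge {
  id           String     @id @default(cuid())
  issueNumber  Int
  repositoryId String
  repository   Repository @relation(fields: [repositoryId], references: [id])

  // One challenge per issue per repository; sync upserts key off this.
  @@unique([repositoryId, issueNumber])
}
```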
The GitHub layer is split so each concern stays reusable:
- `lib/github/client.ts`: low-level API requests, auth headers, timeout handling, and rate-limit-aware errors
- `lib/github/normalize.ts`: maps GitHub repositories and issues into internal `RepositoryRecord` and `ChallengeRecord` shapes
- `lib/github/service.ts`: searches GitHub for qualifying issues and enriches them with repository data
- `lib/sync/service.ts`: upserts normalized GitHub challenges into Postgres, deduplicates by GitHub identity and `(repositoryId, issueNumber)`, and archives stale issues when it is safe to do so
- `lib/data/catalog.ts`: prefers persisted synced GitHub challenges, then falls back to live GitHub, then falls back to mock data
The integration is deliberately resilient:
- missing token -> use mock fallback
- GitHub failure -> use mock fallback
- rate limit -> use mock fallback with a user-facing notice
- partial repository enrichment -> keep usable data without silently losing the whole catalog
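The fallback chain can be sketched as a simple three-tier loader. The function and type names below are illustrative, not the actual exports of `lib/data/catalog.ts`:

```typescript
type Challenge = { id: string; title: string };
type CatalogSource = "database" | "github" | "mock";

// Prefer persisted synced challenges, then live GitHub, then mock data.
// Any failure (missing token, rate limit, network error) falls through
// instead of taking down the whole catalog.
async function loadCatalog(
  fromDb: () => Promise<Challenge[]>,
  fromGitHub: () => Promise<Challenge[]>,
  mock: Challenge[],
): Promise<{ source: CatalogSource; challenges: Challenge[] }> {
  try {
    const persisted = await fromDb();
    if (persisted.length > 0) return { source: "database", challenges: persisted };
  } catch {
    // Database unavailable: fall through to live GitHub.
  }
  try {
    return { source: "github", challenges: await fromGitHub() };
  } catch {
    // GitHub unavailable or rate limited: keep the product usable.
    return { source: "mock", challenges: mock };
  }
}
```

Returning the source alongside the data is what makes the user-facing rate-limit notice possible.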
Install dependencies:

```bash
npm install
```

Copy `.env.example` to `.env.local` and fill in the values, then prepare the database and start the dev server:

```bash
npm run db:generate
npx prisma db push
npm run db:seed
npm run dev
```

Open http://localhost:3000.
Useful local flows:
- sign in through the demo auth button to unlock dashboard features
- visit `/admin/sync` to trigger a manual GitHub ingestion run
- use the leaderboard and submissions pages to inspect score and workflow state
| Variable | Required | Purpose |
|---|---|---|
| `DATABASE_URL` | Yes | Primary runtime database connection string |
| `DIRECT_URL` | Recommended | Direct database connection for Prisma CLI operations |
| `GITHUB_TOKEN` | Recommended | Authenticated GitHub API access for live challenge discovery |
| `GITHUB_API_TIMEOUT_MS` | Optional | Timeout for GitHub requests before falling back |
| `ADMIN_GITHUB_USERNAMES` | Optional | Comma-separated allowlist for `/admin/sync` |
Example:

```
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/bug_fix_arena
DIRECT_URL=postgresql://postgres:postgres@localhost:5432/bug_fix_arena
GITHUB_TOKEN=github_pat_replace_me
GITHUB_API_TIMEOUT_MS=8000
ADMIN_GITHUB_USERNAMES=morganlee
```

Notes:

- `DIRECT_URL` is especially useful for hosted Postgres providers such as Supabase, where pooled runtime access and direct migration access differ
- if `GITHUB_TOKEN` is missing, the app still runs with the seeded mock catalog
- in non-production environments, the admin sync page can fall back to any signed-in user if `ADMIN_GITHUB_USERNAMES` is left blank
- Amplify SSR deployments also need those server-side variables forwarded into `.env.production` during the build. This repo includes an `amplify.yml` file that does that automatically for the runtime variables the app needs.
- `npm run dev`: start the development server
- `npm run build`: create a production build using Webpack
- `npm run start`: run the production server
- `npm run lint`: run ESLint
- `npm run test`: run Vitest once
- `npm run test:watch`: run Vitest in watch mode
- `npm run db:generate`: generate the Prisma client
- `npm run db:seed`: seed the database with mock MVP data
The current test suite is intentionally concentrated on the logic with the highest product risk:
- GitHub normalization
- catalog URL parsing and link generation
- score calculation and rank thresholds
- submission action behavior
- engagement and submission service flows
Run tests with:

```bash
npm run test
```

What is still intentionally light:
- presentational component coverage
- full route rendering tests
- real database-backed integration tests
- live GitHub networking tests
Those areas are lower leverage right now than the normalization and state-transition layer.
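The scoring and rank-threshold logic that the tests concentrate on can be illustrated with a small sketch. The point values, rank names, and function names below are hypothetical, not the values in `lib/config/`:

```typescript
// Hypothetical scoring model: flat points per difficulty, rank by threshold.
type Difficulty = "easy" | "medium" | "hard";

const POINTS: Record<Difficulty, number> = { easy: 10, medium: 25, hard: 50 };

// Ordered highest threshold first so the first match wins.
const RANKS: [number, string][] = [
  [200, "Gold"],
  [75, "Silver"],
  [0, "Bronze"],
];

function scoreFor(completed: Difficulty[]): number {
  return completed.reduce((sum, difficulty) => sum + POINTS[difficulty], 0);
}

function rankFor(points: number): string {
  return RANKS.find(([threshold]) => points >= threshold)![1];
}
```

Pure functions like these are exactly why this layer is cheap to test and high-leverage to cover.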
If you want to contribute:
- start with the internal domain types in `types/`
- keep external data normalized at the boundary
- prefer server-rendered flows unless interactivity clearly requires client state
- update tests when touching catalog shaping, scoring, sync, or submission logic
- preserve the fallback path so the app stays usable without GitHub
Additional contributor-oriented documentation lives in docs/developer-guide.md.
Near-term roadmap:
- move GitHub sync from manual admin action to scheduled background execution
- replace manual completion with verified submission review
- add richer maintainer context to challenge details
- improve submission review states and score event history
- expand catalog persistence and search depth
- authentication is a lightweight demo scaffold, not production identity
- challenge completion is still partly manual
- PR verification is not implemented yet
- GitHub discovery still depends on public issue quality and API rate limits
- there is no sandboxed execution or automated patch validation yet
- the scoring model is intentionally simple and not anti-cheat hardened
- search and filtering are still optimized for MVP-scale catalogs
The long-term product direction is a more serious contribution workflow:
- Verified submissions accept and score work based on real PR state, review outcomes, and repository matching
- AI hints offer scoped debugging guidance, onboarding help, and repository-specific learning support without replacing the contributor’s judgment
- Sandbox execution runs validation workflows or lightweight test execution in a safe, isolated environment
- Tournaments package groups of issues into time-boxed competitions or themed events
- Maintainer workflows let maintainers curate issues, review submissions, and use the product as a structured contributor funnel
For deeper implementation notes, contribution patterns, and key runtime flows, see docs/developer-guide.md.