This guide is for collaborators working on Open Source Bug Fix Arena after the initial MVP.
The current product loop is:
- ingest or fetch GitHub issues
- normalize them into internal challenge records
- let users browse and inspect those challenges
- let users save, start, complete, and submit work
- reflect that engagement in the dashboard and leaderboard
The codebase is intentionally server-first. Most pages render from server-side queries and pure domain helpers, with client components used only where user interaction requires them.
| Directory | Responsibility |
| --- | --- |
| `app/` | route entry points, loading states, and error boundaries |
| `components/` | reusable interface modules grouped by domain |
| `lib/auth/` | demo session auth and admin access checks |
| `lib/challenges/` | normalization helpers, persistence helpers, catalog URL state, and view models |
| `lib/config/` | shared constants and product rules |
| `lib/data/` | catalog shaping, filtering, sorting, mock data, and recommendations |
| `lib/db/` | Prisma client setup and mappers between Prisma and domain types |
| `lib/engagement/` | dashboard reads, engagement mutations, scoring, and leaderboard queries |
| `lib/github/` | GitHub client, normalization, and live discovery |
| `lib/submissions/` | submission normalization, lifecycle, persistence, and actions |
| `lib/sync/` | GitHub ingestion and sync audit logic |
| `prisma/` | schema, migrations, and seed flow |
| `tests/` | high-value unit and integration-style tests |
`lib/data/catalog.ts` is the main catalog orchestrator:
- it prefers persisted synced GitHub challenges first
- if there are no persisted GitHub challenges, it attempts a live GitHub fetch
- if GitHub is unavailable or incomplete, it falls back to seeded mock data
- all challenge data is normalized into internal `ChallengeRecord` types before rendering
This fallback behavior is a product requirement, not just a developer convenience. Do not remove it casually.
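A minimal sketch of that fallback chain, assuming hypothetical helper names (`loadPersistedGithubChallenges`, `fetchLiveGithubChallenges`, `loadSeededMockChallenges`) and a simplified record shape rather than the module's real exports:

```ts
// Illustrative only: the helper names and ChallengeRecord shape below are
// assumptions, not the actual exports of lib/data/catalog.ts.
export type ChallengeRecord = { id: string; title: string; source: "github" | "mock" };

declare function loadPersistedGithubChallenges(): Promise<ChallengeRecord[]>;
declare function fetchLiveGithubChallenges(): Promise<ChallengeRecord[]>;
declare function loadSeededMockChallenges(): Promise<ChallengeRecord[]>;

export async function loadCatalog(): Promise<ChallengeRecord[]> {
  // 1. Prefer persisted, synced GitHub challenges.
  const persisted = await loadPersistedGithubChallenges();
  if (persisted.length > 0) return persisted;

  // 2. Otherwise attempt a live GitHub fetch.
  try {
    const live = await fetchLiveGithubChallenges();
    if (live.length > 0) return live;
  } catch {
    // Live discovery failed; fall through to seeded data.
  }

  // 3. Fall back to seeded mock data (a product requirement, per above).
  return loadSeededMockChallenges();
}
```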
- `/admin/sync` triggers a manual sync action
- `lib/sync/service.ts` fetches qualifying GitHub issues and normalizes them
- repository and challenge records are upserted into Postgres
- existing GitHub-sourced challenges can be marked inactive when a sync window is complete enough to trust archival
- each run creates a `ChallengeSyncRun` audit record
If you change sync behavior, preserve:
- deduplication by GitHub identity and repository issue number
- explicit sync logs
- safe archival behavior
- clear failure states
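For illustration, a dedup-preserving upsert could look roughly like the sketch below; the `challenge` model name, its fields, and the `repositoryId_issueNumber` compound unique key are assumptions about the Prisma schema, not its confirmed shape:

```ts
// Hedged sketch only: model and field names are assumed, not taken from the
// real schema. The compound unique key (@@unique([repositoryId, issueNumber]))
// is what keeps re-running a sync idempotent.
import { PrismaClient } from "@prisma/client";

const prisma = new PrismaClient();

export async function upsertChallenge(input: {
  repositoryId: string;
  issueNumber: number;
  title: string;
}) {
  return prisma.challenge.upsert({
    where: {
      repositoryId_issueNumber: {
        repositoryId: input.repositoryId,
        issueNumber: input.issueNumber,
      },
    },
    update: { title: input.title, isActive: true },
    create: { ...input, isActive: true },
  });
}
```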
- engagement state lives in `ChallengeEngagement`
- score summaries live in `Score`
- `lib/engagement/service.ts` owns the main read/write behavior
- `lib/engagement/scoring.ts` holds point calculations and rank derivation
Current scoring is intentionally simple and manually triggered. Future verified submissions should build on this rather than bypass it.
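To show the intended shape rather than the real rules, pure scoring helpers in the spirit of `lib/engagement/scoring.ts` might look like this; the point values and rank thresholds are invented examples:

```ts
// Illustrative only: point values and rank thresholds are placeholders,
// not the product's actual scoring rules.
export type Difficulty = "easy" | "medium" | "hard";

const POINTS_BY_DIFFICULTY: Record<Difficulty, number> = {
  easy: 10,
  medium: 25,
  hard: 50,
};

export function pointsForCompletion(difficulty: Difficulty): number {
  return POINTS_BY_DIFFICULTY[difficulty];
}

export function rankForTotal(total: number): "Newcomer" | "Contributor" | "Expert" {
  if (total >= 500) return "Expert";
  if (total >= 100) return "Contributor";
  return "Newcomer";
}
```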
- submissions are stored separately from engagement state
- one user currently has one submission record per challenge
- `lib/submissions/service.ts` owns persistence
- `lib/submissions/lifecycle.ts` holds pure lifecycle rules
- the challenge page exposes the current submission form
This separation is deliberate:
- engagement answers “what is the user doing with the challenge?”
- submission answers “what artifact did the user submit?”
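A hedged sketch of a pure lifecycle rule in the spirit of `lib/submissions/lifecycle.ts`; the status names and transitions are assumptions, not the module's real rules:

```ts
// Hypothetical statuses and transitions; the real lifecycle in
// lib/submissions/lifecycle.ts may differ.
export type SubmissionStatus = "draft" | "submitted" | "verified";

const ALLOWED_TRANSITIONS: Record<SubmissionStatus, SubmissionStatus[]> = {
  draft: ["submitted"],
  submitted: ["verified"],
  verified: [], // terminal state in this sketch
};

export function canTransition(from: SubmissionStatus, to: SubmissionStatus): boolean {
  return ALLOWED_TRANSITIONS[from].includes(to);
}
```

Keeping the rules pure like this lets the persistence layer and the tests share one source of truth for what a valid transition is.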
When external data enters the app, normalize it once and keep the rest of the app working with internal domain types. Avoid letting raw GitHub or Prisma shapes spread through page components.
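As an example of normalizing once at the boundary, a mapper from a raw GitHub issue payload into the internal record might look like this; the GitHub fields come from the public issues API, while the normalized shape is illustrative rather than the project's actual `ChallengeRecord` definition:

```ts
// The raw fields mirror GitHub's REST issues payload; the normalized shape
// is an illustrative stand-in for the real internal type.
export interface RawGithubIssue {
  number: number;
  title: string;
  labels: { name: string }[];
  html_url: string;
}

export interface ChallengeRecord {
  issueNumber: number;
  title: string;
  labels: string[];
  sourceUrl: string;
}

export function normalizeIssue(raw: RawGithubIssue): ChallengeRecord {
  return {
    issueNumber: raw.number,
    title: raw.title,
    labels: raw.labels.map((label) => label.name),
    sourceUrl: raw.html_url,
  };
}
```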
If you need to change:
- labels
- difficulty mapping
- language metadata
- score thresholds
put the rule in `lib/config/` or another narrow domain module instead of repeating literals across the app.
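For example, a label-to-difficulty mapping belongs in one place; the module path, label names, and tiers here are placeholders:

```ts
// lib/config/difficulty.ts (hypothetical module): entries are placeholders,
// not the product's real mapping.
export type Difficulty = "easy" | "medium" | "hard";

export const DIFFICULTY_BY_LABEL: Record<string, Difficulty> = {
  "good first issue": "easy",
  "help wanted": "medium",
};

// Callers resolve labels through this single table instead of re-deriving
// difficulty with ad hoc literals in page components.
export function difficultyForLabels(labels: string[]): Difficulty | undefined {
  for (const label of labels) {
    const hit = DIFFICULTY_BY_LABEL[label.toLowerCase()];
    if (hit) return hit;
  }
  return undefined;
}
```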
Prefer:
- server components for data reads
- server actions for mutations that belong to a page flow
- small client components only where form state or browser interactivity is needed
Avoid moving catalog shaping or persistence behavior into client code.
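A minimal server-action sketch under those preferences; `saveChallenge` stands in for whatever `lib/engagement/service.ts` actually exports, and the revalidated path is an assumption:

```ts
"use server";

// Sketch only: saveChallenge is a hypothetical stand-in for the real
// engagement service export, and "/dashboard" is an assumed route.
import { revalidatePath } from "next/cache";

declare function saveChallenge(challengeId: string): Promise<void>;

export async function saveChallengeAction(challengeId: string) {
  // The mutation stays on the server; the client component only submits
  // the form that invokes this action.
  await saveChallenge(challengeId);
  revalidatePath("/dashboard");
}
```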
When you touch:
- normalization
- catalog filtering/sorting/state helpers
- scoring
- sync logic
- submission lifecycle
add or update tests. These are the areas most likely to cause product-level regressions.
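As one example shape (assuming a Vitest-style runner and the hypothetical `canTransition` helper sketched earlier, not a confirmed export), a lifecycle test can stay this small:

```ts
import { describe, expect, it } from "vitest";
// Assumed import path; refers to the lifecycle sketch above.
import { canTransition } from "./lifecycle";

describe("submission lifecycle", () => {
  it("allows draft -> submitted", () => {
    expect(canTransition("draft", "submitted")).toBe(true);
  });

  it("rejects skipping straight from draft to verified", () => {
    expect(canTransition("draft", "verified")).toBe(false);
  });
});
```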
Local setup:

```bash
npm install
npm run db:generate
npx prisma db push
npm run db:seed
npm run dev
```

Quality checks:
```bash
npm run lint
npm run test
npm run build
```

When implementing verified submissions, do not couple score awarding directly to the raw PR URL field. Introduce a verification step or score event layer so leaderboard totals can be audited and recomputed.
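One possible shape for that layer is sketched below; every name in it is a proposal, not existing code:

```ts
// Proposal sketch: an append-only ScoreEvent log makes leaderboard totals
// auditable and recomputable. None of these names exist in the codebase yet.
export interface ScoreEvent {
  id: string;
  userId: string;
  challengeId: string;
  points: number;
  reason: "submission_verified"; // extend as new award reasons appear
  createdAt: Date;
}

// Totals derive from the event log, so they can be recomputed at any time
// instead of trusting a mutable counter tied to the raw PR URL field.
export function totalPoints(events: ScoreEvent[], userId: string): number {
  return events
    .filter((event) => event.userId === userId)
    .reduce((sum, event) => sum + event.points, 0);
}
```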
If AI hints are added, keep them advisory. They should enrich the challenge brief, not replace repository context or hide the source issue.
If sandbox execution is introduced later, keep it outside the main page request path. Validation jobs should be asynchronous and explicitly tied to submission state.
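A hedged sketch of that boundary; `enqueueValidation` and `markSubmission` are placeholders for whichever job infrastructure is eventually chosen:

```ts
// Placeholder names only: no queue or worker exists in the project today.
declare function enqueueValidation(job: { submissionId: string }): Promise<void>;
declare function markSubmission(
  submissionId: string,
  status: "validating"
): Promise<void>;

export async function requestSandboxValidation(submissionId: string) {
  // The page flow only records intent; a background worker runs the sandbox
  // and writes the outcome back onto the submission record.
  await enqueueValidation({ submissionId });
  await markSubmission(submissionId, "validating");
}
```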
If maintainer tooling lands, keep the contributor-facing experience separate from internal review operations. The current admin sync page is intentionally minimal and should not become a dumping ground for unrelated admin features.