My solutions for Advent of Code challenges.
Quick version:
- Clone the repository
- Copy and configure environment: `cp .env.example .env` (update paths)
- Run `make new` to create fresh boilerplate (archives my solutions)
- Add your session cookie to the `.session` file (see Auto-Submission Setup)
- Start the auto-fetch server: `./launchers/start_server_daemon.command`
- Write your solution in the `solve()` function
- Run it: `make run-dXpY` (just runs your code)
- Test it: `make test-dXpY` (runs tests, auto-submits if they pass)

Where X is the day number and Y is the part (1 or 2). Example: `make test-d1p1`
These solutions are preserved exactly as they were when I first solved each puzzle. I stopped working on them the moment I got the correct answer. They are unoptimised and almost certainly inefficient, because I want to document my progress authentically as I grow as an engineer.
I'll come back to these later when I'm more experienced and refactor them to see how much I've improved.
Each day follows a consistent pattern:
```
2025/
└── day01/
    ├── part1.go      # Part 1 solution
    ├── part2.go      # Part 2 solution
    ├── input.txt     # Your puzzle input
    ├── testcases.txt # Test cases + expected outputs
    └── problem.md    # Problem description (auto-converted from HTML)
```
Note: `benchmark_test.go` is generated on-demand when running benchmarks, not stored in the repo.
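The exact layout of `testcases.txt` is whatever the server's Claude extraction step emits; purely as an illustration (the field names and separators below are assumptions, not the real format), it pairs each example input with its expected answer:

```text
# hypothetical testcases.txt — layout is illustrative only
input:
3 4
9 10
expect: 11
```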
The auto-fetch server handles everything automatically.
First time setup: See SETUP.md for configuration instructions.
Start the server:

```bash
# Daemon mode (runs in background, auto-restarts)
./launchers/start_server_daemon.command

# Foreground mode (legacy)
./start_server.command

# Or run manually
cd server && node server.js
```

Server management:

```bash
./launchers/server_status.command  # Check if running
./launchers/stop_server.command    # Stop daemon
./launchers/server_logs.command    # View logs
```

The server will:
- ⏰ Auto-fetch at 4:00:05 PM AEDT when puzzles unlock
- 📥 Download problem description → `dayN/problem.md`
- 📥 Download your personal input → `dayN/input.txt`
- 🤖 Use Claude SDK to extract and populate test cases → `testcases.txt`
- 🔄 Auto-fetch Part 2 when you complete Part 1
- 💤 Run continuously in the background
Manual fetch (server must be running):

```bash
make fetch        # Fetch today's puzzle
make fetch-<day>  # Fetch specific day (e.g., make fetch-4)
```

Note: Manual fetch commands require the server to be running for Claude-powered test case extraction.
- Wait for auto-fetch (or run `make fetch-<day>`)
- Implement your solution in the `solve()` function (`part1.go` or `part2.go`)
- Iterate with `make run-dXpY` to see your answer without running tests
- Submit with `make test-dXpY` when ready (runs tests, auto-submits if they pass)
```bash
# Just run your solution (no tests, no submission)
make run-dXpY

# Test + auto-submit (runs tests, submits if they pass)
make test-dXpY
```

Where X is the day number and Y is the part (1 or 2). Examples: `make run-d1p1`, `make test-d12p2`
Use `make run-dXpY` when developing. Use `make test-dXpY` when ready to submit.
Note: Auto-submission is controlled by the `autoSubmit` constant in each solution file (default: `true`).
The test harness will:
- ✅ Run your solution against test cases from `testcases.txt`
- ✅ If tests pass, run against your real input from `input.txt`
- ✅ Auto-submit the answer to Advent of Code
To enable auto-submission:
- Log in to adventofcode.com
- Open DevTools (F12) → Application/Storage → Cookies
- Copy the `session` cookie value
- Create a `.session` file in the project root: `echo "your_session_cookie_here" > .session`
The `.session` file is git-ignored for security.
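Under the hood, auto-submission means POSTing the answer with that cookie attached. As a hedged sketch (the endpoint and form fields mirror the public adventofcode.com answer form; `newSubmitRequest` is a hypothetical helper, not code from this repo):

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
	"os"
	"strings"
)

// newSubmitRequest builds (but does not send) an answer submission,
// attaching the session cookie value read from the .session file.
// Hypothetical helper for illustration only.
func newSubmitRequest(session string, year, day, part int, answer string) (*http.Request, error) {
	endpoint := fmt.Sprintf("https://adventofcode.com/%d/day/%d/answer", year, day)
	form := url.Values{
		"level":  {fmt.Sprint(part)}, // 1 or 2
		"answer": {answer},
	}
	req, err := http.NewRequest("POST", endpoint, strings.NewReader(form.Encode()))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
	req.AddCookie(&http.Cookie{Name: "session", Value: session})
	return req, nil
}

func main() {
	raw, err := os.ReadFile(".session")
	if err != nil {
		fmt.Println("no .session file:", err)
		return
	}
	req, _ := newSubmitRequest(strings.TrimSpace(string(raw)), 2025, 1, 1, "42")
	fmt.Println(req.URL, req.Header.Get("Cookie"))
}
```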
The `make new` command archives existing solutions and creates fresh boilerplate:

```bash
make new
```

This will:
- Archive current solutions to `archive/2025_TIMESTAMP.tar.gz`
- Delete and recreate the `2025/` directory with fresh boilerplate for all 12 days
- Create a `.session.example` file if `.session` doesn't exist
Perfect for:
- Starting a new year
- Letting others use your setup
- Resetting to a clean slate
Your old solutions are safely archived!
```bash
# Benchmark individual solutions (where X is day, Y is part 1 or 2)
make bench-dXpY  # Examples: make bench-d1p1, make bench-d3p2

# Run all benchmarks and update README table
make bench-all
```

Or run directly:

```bash
cd 2025/day01 && go test -tags=part1 -bench=BenchmarkPart1 -benchmem
cd benchmark/cmd && go run main.go  # Run all and update README
```

| Day | Part 1 | Part 2 |
|---|---|---|
| 1 | ⭐ 175.28 µs/op | ⭐ 175.14 µs/op |
| 2 | ⭐ 99.46 ms/op | ⭐ 604.75 ms/op |
| 3 | ⭐ 53.32 µs/op | ⭐ 66.25 µs/op |
| 4 | ⭐ 69.66 µs/op | ⭐ 800.97 µs/op |
| 5 | ⭐ 1.16 ms/op | ⭐ 27.79 µs/op |
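For reference, an on-demand `benchmark_test.go` is roughly shaped like the sketch below; the real generated file benchmarks the day's actual `solve()` against `input.txt`, whereas the stub and sample input here are illustrative.

```go
package main

import "testing"

// solve is a stand-in for the real solution in part1.go; the generated
// file benchmarks the actual puzzle logic instead.
func solve(input string) string { return input }

// BenchmarkPart1 matches the -bench=BenchmarkPart1 invocation above.
func BenchmarkPart1(b *testing.B) {
	input := "1\n2\n3\n" // the real file would load input.txt once
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		solve(input)
	}
}
```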
Solutions are written in Go.