What's happening right now
Right now, every PR into `main` merges on good faith: if a change looks fine but breaks behavior upstream, nothing catches it before it causes havoc. We do have two Copilot review runs in the Actions history from March, but that's static analysis only; it doesn't execute anything and can't catch behavioral regressions.
This bit us recently: an accidental commit slipped through, tests were failing, and it took me a good while to debug and fix (#182). A check like this would have caught it well before it landed.
What I think would be better
We already have proper testing infrastructure: over 200 tests spanning database and unit tests. A GitHub Actions workflow that triggers on every PR to `main`, spins up the stack with `docker compose -f dev_docker-compose.yml up -d`, installs `requirements-test.txt`, and runs `pytest -v` would be the gate. If a merge breaks something, it shows up in the run, and the reviewer can decide what to do with it.
Once that's solid, we can layer on API endpoint validation and broader regression checks, but we can start with the test suite.
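To make the scope concrete, here's a rough sketch of the kind of workflow I have in mind. This is not a final draft: the job name, Python version, and action versions are my assumptions; `dev_docker-compose.yml` and `requirements-test.txt` are the files described above.

```yaml
name: PR Test Suite

# Run on every PR targeting main, per the proposal above
on:
  pull_request:
    branches: [main]

jobs:
  tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      # Bring up the dev stack the tests run against
      - name: Start stack
        run: docker compose -f dev_docker-compose.yml up -d

      # Python version is an assumption; pin to whatever the project uses
      - name: Set up Python
        uses: actions/setup-python@v5
        with:
          python-version: "3.11"

      - name: Install test dependencies
        run: pip install -r requirements-test.txt

      - name: Run test suite
        run: pytest -v

      # Tear the stack down even if the tests fail
      - name: Stop stack
        if: always()
        run: docker compose -f dev_docker-compose.yml down
```

Whether this blocks merges or runs as an advisory check is just a branch-protection setting, so the non-blocking-first question below doesn't change the workflow itself.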
Questions before I draft anything
- Are we okay spinning up Docker Compose in CI, or do we need a different approach?
- Do we want to block merges on failures from the start, or run non-blocking first to build confidence?
Happy to draft the initial workflow YAML once we're aligned on scope.