This guide explains how to run tests for the Open Orchestrator project.
## Quick Start

```bash
make test
make test-fast
make test-cov
```

## Docker Test Environment

The project includes a Docker-based test environment for isolated, reproducible testing across platforms.
### Prerequisites

- Docker and Docker Compose installed
- Docker daemon running
```bash
# Run automated test suite
make test-docker

# Or using docker compose directly
docker compose -f docker-compose.test.yml up --build
```

For manual test execution and debugging:
```bash
# Start interactive shell
make test-docker-interactive

# Or using docker compose directly
docker compose -f docker-compose.test.yml run --rm test-interactive

# Inside the container, run specific tests
pytest tests/test_skill_installer.py -v
pytest -m gh_cli  # Run only tests requiring GitHub CLI
```

### Services

**test-runner**

- Automatically runs the full test suite with coverage
- Generates coverage reports in `htmlcov/` and `coverage.xml`
- Mounts project directory for live code changes
- Injects `GITHUB_TOKEN` environment variable for PR tests

**test-interactive**

- Provides interactive bash shell for manual testing
- Same environment as `test-runner`
- Useful for debugging test failures
The environment is defined by two files:

- `Dockerfile.test`: Python 3.11-slim with git, tmux, curl, and all project dependencies
- `docker-compose.test.yml`: Service definitions and environment configuration
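The real compose file is not reproduced in this guide; as a rough, hypothetical sketch of what `docker-compose.test.yml` might contain for the two services described above (service names come from this guide, but every option here is an assumption):

```yaml
# Hypothetical sketch -- the actual file's options may differ.
services:
  test-runner:
    build:
      context: .
      dockerfile: Dockerfile.test
    environment:
      - GITHUB_TOKEN          # passed through from the host for PR tests
    volumes:
      - .:/app                # mount project directory for live code changes
    command: pytest --cov=src/open_orchestrator

  test-interactive:
    build:
      context: .
      dockerfile: Dockerfile.test
    environment:
      - GITHUB_TOKEN
    volumes:
      - .:/app
    command: bash
    stdin_open: true          # keep stdin open for the interactive shell
    tty: true
```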
## Test Markers

Tests are categorized using pytest markers:
```bash
# Run only tests requiring GitHub CLI
pytest -m gh_cli

# Run only tmux-dependent tests
pytest -m tmux

# Run only Textual TUI tests
pytest -m textual

# Run only slow tests (>1 second)
pytest -m slow

# Exclude slow tests
pytest -m "not slow"
```

## Coverage

- Target: 90% coverage for new modules
- Source: `src/open_orchestrator/`
- Branch coverage enabled
- Reports: Terminal, HTML, and XML formats
```bash
# Terminal report with missing lines
pytest --cov=src/open_orchestrator --cov-report=term-missing

# HTML report (opens in browser)
make test-cov

# All report formats
make test
```

- HTML: `htmlcov/index.html`
- XML: `coverage.xml`
- Terminal: displayed after the test run
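Because the XML report is machine-readable, a CI step can check the 90% target directly. A minimal stdlib-only sketch (the `coverage.xml` path matches the report location above; the threshold-check function itself is an illustration, not part of the project):

```python
import xml.etree.ElementTree as ET

def coverage_meets_target(xml_path: str, target: float = 0.90) -> bool:
    """Read the overall line rate from a Cobertura-style coverage.xml.

    coverage.py writes the aggregate `line-rate` attribute (0.0-1.0)
    on the root <coverage> element.
    """
    root = ET.parse(xml_path).getroot()
    line_rate = float(root.get("line-rate"))
    return line_rate >= target
```

A CI job could call `coverage_meets_target("coverage.xml")` after `make test` and fail the build when it returns `False`.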
## Test Structure

```
tests/
├── conftest.py                     # Shared fixtures (30+ fixtures)
├── test_docker_infrastructure.py   # Docker & pytest configuration tests
├── test_skill_installer.py         # SkillInstaller unit tests
├── test_dashboard.py               # Dashboard TUI integration tests
├── test_process_manager.py         # ProcessManager unit & CLI tests
├── test_hooks.py                   # HookService unit tests
├── test_session.py                 # SessionManager unit tests
├── test_pr_linker.py               # PRLinker unit tests
├── test_status.py                  # StatusTracker & TokenUsage tests
├── test_worktree.py                # WorktreeManager unit tests
├── test_tmux_manager.py            # TmuxManager unit tests
├── test_cli.py                     # CLI integration tests
├── test_cleanup.py                 # CleanupService tests
└── test_sync.py                    # SyncService tests
```
## Fixtures

`tests/conftest.py` provides 30+ reusable fixtures, including:
- `temp_directory`: Temporary directory for file operations
- `git_repo`: Initialized git repository
- `git_worktree`: Git worktree setup
- `cli_runner`: Click CLI test runner
- `mock_libtmux_server`: Mocked tmux operations
- `mock_subprocess`: Mocked subprocess calls
- `skills_source_dir`: Temporary skills directory
- `hooks_config`: Mock hook configuration
- And many more...
## Linting and Formatting

```bash
make lint
make format
```

- `ruff`: Fast Python linter (replaces flake8, isort, etc.)
- `mypy`: Static type checker
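Both tools are usually configured in `pyproject.toml`. The project's real settings are not reproduced in this guide; a hypothetical minimal configuration might look like:

```toml
# Hypothetical sketch -- adjust to the project's actual settings.
[tool.ruff]
line-length = 100
src = ["src", "tests"]

[tool.mypy]
python_version = "3.11"
strict = true
```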
## CI/CD

The Docker test environment is designed for use in CI/CD pipelines:
```yaml
# Example GitHub Actions workflow step
- name: Run tests in Docker
  run: docker compose -f docker-compose.test.yml up --build --exit-code-from test-runner
```

## Troubleshooting

If the Docker build fails, try:
```bash
# Rebuild without cache
docker compose -f docker-compose.test.yml build --no-cache
```

If you encounter permission issues with volume mounts:
```bash
# Check Docker user permissions
docker compose -f docker-compose.test.yml run --rm test-interactive whoami
```

For debugging test failures:
```bash
# Run specific test with verbose output
pytest tests/test_skill_installer.py::TestSkillInstaller::test_symlink_installation -vv

# Run with pdb debugger on failure
pytest --pdb tests/test_skill_installer.py

# Show local variables on failure
pytest -l tests/test_skill_installer.py
```

## Development Workflow
1. Write test first:

   ```bash
   # Create/edit test file
   vim tests/test_new_feature.py
   ```

2. Run test (should fail):

   ```bash
   pytest tests/test_new_feature.py -v
   ```

3. Implement feature:

   ```bash
   vim src/open_orchestrator/core/new_feature.py
   ```

4. Run test again (should pass):

   ```bash
   pytest tests/test_new_feature.py -v
   ```

5. Check coverage:

   ```bash
   pytest tests/test_new_feature.py --cov=src/open_orchestrator/core/new_feature
   ```

6. Run linting:

   ```bash
   make lint
   ```

7. Format code:

   ```bash
   make format
   ```

8. Run full test suite:

   ```bash
   make test
   ```
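To make the loop concrete, here is a hypothetical first iteration. `slugify` is an invented example feature, and for brevity the sketch keeps the implementation next to the test; in the real layout the function would live in `src/open_orchestrator/core/new_feature.py` and be imported by `tests/test_new_feature.py`.

```python
# Sketch of tests/test_new_feature.py -- the test is written first,
# so the initial `pytest tests/test_new_feature.py -v` run fails until
# the feature below exists.

def slugify(name: str) -> str:
    """Hypothetical feature: normalize a name for use in file-safe paths."""
    return name.strip().lower().replace(" ", "-")

def test_slugify_normalizes_spacing_and_case():
    assert slugify("  My Feature ") == "my-feature"
```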
## Cleanup

Remove test artifacts and caches:

```bash
make clean
```

This removes:

- `.pytest_cache/`, `htmlcov/`, `.mypy_cache/`, `.ruff_cache/`, and `__pycache__/` directories
- `.coverage` and `coverage.xml`
- `*.pyc` files
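A rough Python equivalent of that target, shown only to make the artifact list above concrete (the real `make clean` recipe is not reproduced here and likely uses plain `rm -rf`):

```python
import shutil
from pathlib import Path

# Artifact names taken from the `make clean` description above.
DIR_ARTIFACTS = [".pytest_cache", "htmlcov", ".mypy_cache", ".ruff_cache", "__pycache__"]
FILE_ARTIFACTS = [".coverage", "coverage.xml"]

def clean(root: Path) -> None:
    """Remove test artifacts and caches under `root`."""
    for name in DIR_ARTIFACTS:
        # Materialize the generator first: deleting while iterating rglob is unsafe.
        for path in list(root.rglob(name)):
            shutil.rmtree(path, ignore_errors=True)
    for name in FILE_ARTIFACTS:
        target = root / name
        if target.exists():
            target.unlink()
    for pyc in list(root.rglob("*.pyc")):
        pyc.unlink()
```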