This project uses uv for dependency management.

```bash
# Install uv if you haven't already
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/Use-Tusk/drift-python-sdk.git
cd drift-python-sdk

# Create virtual environment and install dependencies
uv sync --all-extras
```

This project uses ruff for linting/formatting and ty for type checking.
```bash
uv run ruff check drift/ tests/ --fix  # Lint and auto-fix
uv run ruff format drift/ tests/       # Format
uv run ty check drift/ tests/          # Type check
```

Run the unit tests:

```bash
uv run pytest tests/unit/ -v

# Run with coverage
uv run pytest tests/unit/ -v --cov=drift --cov-report=term-missing

# Run a specific test file
uv run pytest tests/unit/test_json_schema_helper.py -v
uv run pytest tests/unit/test_adapters.py -v

# Run a specific test class or function
uv run pytest tests/unit/test_metrics.py::TestMetricsCollector -v
uv run pytest tests/unit/test_metrics.py::TestMetricsCollector::test_record_spans_exported -v

# Flask/FastAPI integration tests
timeout 30 uv run pytest tests/integration/ -v
```

E2E tests validate full instrumentation workflows using Docker containers. They record real API interactions and verify replay behavior using the Tusk CLI.
Prerequisites:

- Docker and Docker Compose must be installed.
- Build the base Docker image (required before running any e2e test):

```bash
docker build -t python-e2e-base:latest -f drift/instrumentation/e2e_common/Dockerfile.base .
```
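A quick preflight check can confirm both are available before kicking off a test run (an optional convenience sketch, not part of the repo's tooling; it assumes the Compose v2 plugin):

```shell
# Verify Docker and the Compose v2 plugin are on PATH.
docker --version || echo "Docker not found"
docker compose version || echo "Docker Compose v2 not found"
```

Both commands print a version string on success, so missing tools surface immediately instead of midway through a test.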
Run all tests (e2e + stack):

```bash
./run-all-e2e-tests.sh                        # Run all tests sequentially
./run-all-e2e-tests.sh -c 2                   # Run 2 tests concurrently
./run-all-e2e-tests.sh -c 0                   # Run all tests in parallel
./run-all-e2e-tests.sh --instrumentation-only # Run only e2e tests
./run-all-e2e-tests.sh --stack-only           # Run only stack tests
```

Run a single instrumentation's e2e test:
```bash
cd drift/instrumentation/flask/e2e-tests
./run.sh

# Or with a custom port:
./run.sh 8001
```

| Instrumentation | Location | Services |
|---|---|---|
| Flask | `drift/instrumentation/flask/e2e-tests/` | None (external APIs) |
| FastAPI | `drift/instrumentation/fastapi/e2e-tests/` | None (external APIs) |
| Django | `drift/instrumentation/django/e2e-tests/` | None (external APIs) |
| Redis | `drift/instrumentation/redis/e2e-tests/` | Redis 7 |
| Psycopg | `drift/instrumentation/psycopg/e2e-tests/` | PostgreSQL 13 |
| Psycopg2 | `drift/instrumentation/psycopg2/e2e-tests/` | PostgreSQL 13 |
Each e2e test directory contains:

```
e2e-tests/
├── Dockerfile           # Builds on python-e2e-base
├── docker-compose.yml   # Service orchestration
├── run.sh               # External runner script
├── entrypoint.py        # Test orchestrator (setup → record → test)
├── requirements.txt     # Python dependencies
├── .tusk/config.yaml    # Tusk CLI configuration
└── src/
    ├── app.py           # Test application
    └── test_requests.py # HTTP request script
```
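When adding a new instrumentation, this layout can be scaffolded in one go (a convenience sketch derived from the tree above; every placeholder file still needs real contents):

```shell
# Create the standard e2e-tests skeleton with empty placeholder files.
mkdir -p e2e-tests/src e2e-tests/.tusk
touch e2e-tests/Dockerfile \
      e2e-tests/docker-compose.yml \
      e2e-tests/run.sh \
      e2e-tests/entrypoint.py \
      e2e-tests/requirements.txt \
      e2e-tests/.tusk/config.yaml \
      e2e-tests/src/app.py \
      e2e-tests/src/test_requests.py
chmod +x e2e-tests/run.sh   # run.sh is invoked directly
```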
View traces after a test:

```bash
cd drift/instrumentation/flask/e2e-tests

# JSONL files contain one JSON object per line; use jq to format them
cat .tusk/traces/*.jsonl | jq .
```
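If `jq` isn't installed, each JSONL line can be pretty-printed with Python's stdlib `json.tool` instead (a stand-in file is used here for illustration; point the loop at `.tusk/traces/*.jsonl` in practice):

```shell
# Stand-in JSONL file; in practice this would be .tusk/traces/*.jsonl
printf '%s\n' '{"name":"GET /","spanId":"abc123"}' > sample.jsonl

# Pretty-print each JSON object with Python's built-in json.tool
while IFS= read -r line; do
  printf '%s' "$line" | python3 -m json.tool
done < sample.jsonl
```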
View service logs:

```bash
cat .tusk/logs/*
```

Run the test app locally (outside Docker):
```bash
cd drift/instrumentation/flask/e2e-tests
pip install -r requirements.txt
TUSK_DRIFT_MODE=RECORD python src/app.py

# In another terminal:
python src/test_requests.py
```

For more details, see drift/instrumentation/README-e2e-tests.md.
Stack tests validate multiple instrumentations working together in realistic application architectures (e.g., Django + PostgreSQL, FastAPI + Redis). They catch bugs at integration points that don't surface in isolated e2e testing.
```bash
# Run a specific stack test
cd drift/stack-tests/django-postgres
./run.sh

# Or run all tests (including stack tests) from the root
./run-all-e2e-tests.sh
```

For available tests and details, see drift/stack-tests/README.md.
| Document | Description |
|---|---|
| `docs/context-propagation.md` | Context propagation behavior, edge cases, and patterns |
| `drift/instrumentation/README-e2e-tests.md` | E2E test architecture and debugging |
| `drift/stack-tests/README.md` | Stack tests for multi-instrumentation scenarios |
Releases are automated using GitHub Actions. When a GitHub Release is created, the package is automatically built and published to PyPI using trusted publishing.
Prerequisites:

- GitHub CLI (`gh`) installed and authenticated
- On the `main` branch with no uncommitted changes
- Local branch up to date with remote
Use the release script to bump the version, create a tag, and publish a GitHub Release:

```bash
# Patch release (0.1.5 → 0.1.6)
./scripts/release.sh patch

# Minor release (0.1.5 → 0.2.0)
./scripts/release.sh minor
```

The script will:
- Run preflight checks (lint, format, tests)
- Calculate the next version from `pyproject.toml`
- Update the version in `pyproject.toml`
- Commit and tag the version bump
- Push to origin and create a GitHub Release
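The version-calculation step amounts to simple semver arithmetic on the `version` field. Roughly (a simplified illustration, not the actual script; a stand-in file name is used to avoid clobbering a real `pyproject.toml`):

```shell
# Stand-in for pyproject.toml with just the field we need.
cat > sample-pyproject.toml <<'EOF'
[project]
name = "demo"
version = "0.1.5"
EOF

# Extract the current version and compute patch/minor bumps.
current=$(grep -m1 '^version' sample-pyproject.toml | sed 's/.*"\(.*\)"/\1/')
major=${current%%.*}
rest=${current#*.}
minor=${rest%%.*}
patch=${rest#*.}

echo "patch bump: $major.$minor.$((patch + 1))"   # 0.1.6
echo "minor bump: $major.$((minor + 1)).0"        # 0.2.0
```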
Once the release is created, GitHub Actions will automatically:

- Build the Python package with `uv build`
- Publish to PyPI using trusted publishing (no API tokens needed)
For testing purposes, you can also trigger the publish workflow manually from the Actions tab with an optional version override (e.g., 0.1.6-test). This is useful for testing the publishing process without affecting the main version.