Thanks for your interest in TIDE! Here's how to contribute.
Clone the repository and install in editable mode:

```bash
git clone https://github.com/RightNowAI/TIDE.git
cd TIDE
pip install -e ".[test]"  # installs with CUDA kernels if a GPU is available
```

No GPU locally? Use `TIDE_NO_CUDA=1 pip install -e ".[test]"` for a CPU-only install.
```bash
# CPU-only tests (fast, no GPU)
pytest tests/ -k "not cuda and not kernels" -v

# Full suite (requires a CUDA GPU)
pytest tests/ -v

# Cloud GPU tests
modal run modal_setup/ci_app.py
```

We use ruff for linting:
```bash
pip install ruff
ruff check python/ tests/
```

Project layout:

- `python/TIDE/` -- Python package (runtime, calibration, adapters)
- `csrc/` -- CUDA kernels (C++/CUDA)
- `tests/` -- Test suite
- `examples/` -- Example scripts
- `modal_setup/` -- Cloud GPU infrastructure
- `benchmarks/` -- Performance benchmarks
Most models work automatically via `UniversalAdapter`. If your model doesn't,
add a built-in adapter:

- Create `python/TIDE/adapters/mymodel.py` implementing `BaseAdapter`
- Register it in `python/TIDE/adapters/auto.py` under `ADAPTER_REGISTRY`
- Add a test in `tests/test_adapters.py`
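The steps above can be sketched as follows. This is an illustrative outline only: `BaseAdapter`'s real interface lives in `python/TIDE/adapters/`, and the method names and registry usage here are assumptions, not the actual API.

```python
# Hypothetical sketch of a built-in adapter. The real BaseAdapter interface
# is defined in python/TIDE/adapters/ -- method names below are assumptions.

class BaseAdapter:  # stand-in for TIDE's actual base class
    def match(self, model) -> bool:
        """Return True if this adapter handles the given model."""
        raise NotImplementedError

    def get_layers(self, model):
        """Return the transformer layers to instrument."""
        raise NotImplementedError


class MyModelAdapter(BaseAdapter):
    def match(self, model) -> bool:
        # Hypothetical dispatch on the model class name.
        return type(model).__name__ == "MyModelForCausalLM"

    def get_layers(self, model):
        return model.model.layers


# Registration in python/TIDE/adapters/auto.py would then look roughly like:
# ADAPTER_REGISTRY["mymodel"] = MyModelAdapter
```

The matching test in `tests/test_adapters.py` would exercise `match` against a model instance of the type the adapter claims to handle.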
Kernels are in `csrc/kernels/`. All kernels:

- Are templated on `scalar_t` (float, __half, __nv_bfloat16)
- Use `dtype_utils.cuh` for load/store helpers
- Expose named C entry points (not templates) for linking
- Accumulate in float32 for numerical stability

After modifying kernels, rebuild with `pip install -e .`
- Fork and create a branch
- Make your changes
- Run `pytest tests/ -v` (at least the CPU tests)
- Run `ruff check python/ tests/`
- Open a PR with a clear description
Please include:
- Model name and size
- GPU type
- PyTorch and transformers versions
- Full error traceback
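To gather most of these details in one go, a small helper like the following can be pasted into a report. This is a hypothetical snippet (stdlib only, not part of TIDE); the GPU type still needs to be added by hand, e.g. from `nvidia-smi`.

```python
# Collect Python and package versions for a bug report (hypothetical helper).
import platform
from importlib import metadata


def env_report() -> str:
    lines = [f"python: {platform.python_version()}"]
    for pkg in ("torch", "transformers"):
        try:
            lines.append(f"{pkg}: {metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            lines.append(f"{pkg}: not installed")
    return "\n".join(lines)


print(env_report())
```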