Fix installation across all platforms: ONNX builds, MPS float64, Docker CPU
- Install chatterbox with --no-deps on ALL paths (CPU, NVIDIA, cu128, ROCm)
to prevent ONNX source build failures and torch version conflicts.
Chatterbox dependencies (conformer, diffusers, transformers, s3tokenizer,
omegaconf, resampy) are now listed explicitly in all requirements files.
onnx==1.16.0 is pinned to guarantee pre-built wheels.
- Fix Apple Silicon Turbo model crash ("Cannot convert a MPS Tensor to
float64 dtype") by forcing float32 in s3tokenizer and voice_encoder.
Applied in chatterbox-v2 fork (cc03573) and as automatic post-install
patch in start.py for users of other chatterbox versions.
- New lightweight Dockerfile.cpu based on python:3.10-slim instead of
the 4GB+ nvidia/cuda base image. docker-compose-cpu.yml updated.
- Default config.yaml device changed from "cuda" to "auto" for correct
auto-detection on all hardware (CUDA, MPS, CPU).
- Removed deprecated docker-compose version tags from all compose files.
- Updated README: Python 3.10 recommended (3.13+ not supported),
manual install instructions include --no-deps step, new troubleshooting
entries for ONNX builds and torch version errors.
Fixes #23, #44, #79, #93, #105, #107, #113, #121
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
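A slim CPU image of the kind this commit describes could look roughly like the sketch below. The base image `python:3.10-slim`, the `libsndfile1` system package, and the `--no-deps` chatterbox step all come from this commit and README; the entry point, file layout, and package name are assumptions for illustration only.

```
# Sketch of a lightweight Dockerfile.cpu -- not the actual file from the repo
FROM python:3.10-slim

WORKDIR /app

# libsndfile is required by the audio stack on Linux (see troubleshooting notes)
RUN apt-get update && apt-get install -y --no-install-recommends libsndfile1 \
    && rm -rf /var/lib/apt/lists/*

# Step 1: pinned requirements (torch CPU wheels, onnx==1.16.0, chatterbox deps)
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Step 2: chatterbox without its metadata deps, so the pins above survive
RUN pip install --no-cache-dir --no-deps chatterbox-tts

COPY . .
# Entry point is an assumption; the project is normally launched via its launcher
CMD ["python", "server.py"]
```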
@@ -81,11 +81,16 @@ This server is based on the architecture and UI of our [Dia-TTS-Server](https://
 
 **Switching models is effortless:** Simply select your preferred model from the engine selector dropdown at the top of the Web UI. No restarts, no configuration changes required—just instant hot-swapping to test quality, speed, and language support across the complete Chatterbox family.
 
-### 🖥️ Fixed NVIDIA Blackwell / CUDA 12.8 and AMD ROCm installation
+### 🖥️ Installation fixes across all platforms
 
+- **All platforms:** Chatterbox is now installed with `--no-deps` across all installation paths (CPU, NVIDIA, cu128, ROCm). This eliminates ONNX source build failures, torch version conflicts, and CMake errors that affected many users. Chatterbox's dependencies (conformer, diffusers, transformers, s3tokenizer, etc.) are now listed explicitly in each requirements file with `onnx==1.16.0` pinned to guarantee pre-built wheels.
+- **Apple Silicon / MPS:** Fixed Turbo model crash ("Cannot convert a MPS Tensor to float64 dtype") by forcing float32 in s3tokenizer and voice_encoder. Fix applied in the chatterbox-v2 fork and also as an automatic post-install patch in `start.py` for users of other chatterbox versions. Thanks to @jonas3245 (#93).
+- **Docker CPU:** New lightweight `Dockerfile.cpu` based on `python:3.10-slim` instead of the 4GB+ NVIDIA CUDA base image. `docker-compose-cpu.yml` now uses this smaller image. Removed deprecated `version` tags from all docker-compose files.
+- **config.yaml:** Default device changed from `cuda` to `auto` for correct auto-detection on all hardware (CUDA, MPS, CPU).
+- **Python version:** Clarified that **Python 3.10 is recommended**. Python 3.13+ is not supported due to missing wheels for torch and ONNX. The Windows launcher's Portable Mode handles this automatically.
 - **Blackwell (CUDA 12.8):** Fixed `requirements-nvidia-cu128.txt` to properly install PyTorch 2.9.0 with CUDA 12.8 (`sm_120` support) for RTX 5060 Ti, 5070, 5070 Ti, 5080, and 5090 GPUs. The `Dockerfile.cu128` now correctly installs chatterbox with `--no-deps` to prevent PyTorch downgrade.
 - **AMD ROCm:** Fixed ROCm installation by switching to PyTorch's official ROCm 6.1 wheel index (`torch==2.5.1+rocm6.1`), which resolves the previous `torch==2.6.0` / `torchaudio==2.5.1` version conflict. A new `requirements-rocm-init.txt` installs the ROCm PyTorch stack before other dependencies. Both `Dockerfile.rocm` and `start.py` now use a two-step install to prevent pip from replacing ROCm torch wheels with CPU-only versions.
-- Thanks to community contributors in issues #20, #58, #64, #89, #92, #98, #109, #114, and #122 for testing and reporting solutions.
+- Thanks to community contributors in issues #20, #23, #44, #58, #64, #79, #89, #92, #93, #98, #105, #107, #109, #113, #114, #121, and #122 for testing and reporting solutions.
 
 
 ### 🧰 Automated launcher + easy updates
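The MPS fix described in the notes above boils down to never asking the MPS backend for float64. A minimal sketch of the idea, using plain dtype strings so it stands alone (the function name and string-based dtypes are illustrative; the actual chatterbox patch works on torch tensors inside s3tokenizer and voice_encoder):

```python
def mps_safe_dtype(dtype: str, device: str) -> str:
    """Pick a dtype the target device actually supports.

    Apple's MPS backend has no float64 kernels, which is what produces the
    "Cannot convert a MPS Tensor to float64 dtype" crash. Downcasting doubles
    to float32 before they reach the MPS device avoids it.
    """
    if device == "mps" and dtype == "float64":
        return "float32"
    return dtype
```

On CUDA and CPU the dtype passes through unchanged, so the downcast only applies where float64 is unsupported.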
@@ -225,7 +230,7 @@ This server application enhances the underlying `chatterbox-tts` engine with the
 ## 🔩 System Prerequisites
 
 * **Operating System:** Windows 10/11 (64-bit) or Linux (Debian/Ubuntu recommended).
-* **Python:** Version 3.10 or later ([Download](https://www.python.org/downloads/)). *When using Portable Mode on Windows, Python is only needed on the machine where you first set up the application. The target machine (where you copy/share the folder to) does not need Python installed at all.*
+* **Python:** Version **3.10 recommended** ([Download](https://www.python.org/downloads/release/python-31011/)). Python 3.11 and 3.12 also work but may require building some dependencies from source. **Python 3.13+ is not supported** — several key dependencies (torch, ONNX) lack pre-built wheels for 3.13. On Windows, the launcher's Portable Mode automatically uses Python 3.10 regardless of your system Python version. *When using Portable Mode, Python is only needed on the machine where you first set up the application.*
 * **Git:** For cloning the repository ([Download](https://git-scm.com/downloads)).
 * **Internet:** For downloading dependencies and models from Hugging Face Hub.
 * **Disk Space:** 10GB+ recommended (for dependencies and model cache).
@@ -480,11 +485,12 @@ This is the most straightforward option and works on any machine without a compa
 <summary><strong>💡 How This Works</strong></summary>
-The `requirements.txt` file is specially crafted for CPU users. It tells `pip` to use PyTorch's CPU-specific package repository and pins compatible versions of `torch` and `torchvision`. This prevents `pip` from installing mismatched versions, which is a common source of errors.
+The `requirements.txt` file installs CPU PyTorch and all server dependencies. Chatterbox is installed separately with `--no-deps` to prevent pip from pulling in conflicting torch versions or triggering ONNX source builds.
 </details>
 
 ---
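The two-step install described in the blurb above can be pictured as a requirements file that pins everything, followed by a bare chatterbox install. A sketch of such a file — only `onnx==1.16.0` and the dependency names come from this commit; the torch pin and index URL are illustrative, not the repo's actual pins:

```
--extra-index-url https://download.pytorch.org/whl/cpu
torch==2.5.1          # illustrative CPU pin
onnx==1.16.0          # pinned so pip uses a pre-built wheel
conformer
diffusers
transformers
s3tokenizer
omegaconf
resampy
```

Chatterbox itself is then installed in a second step, e.g. `pip install --no-deps chatterbox-tts` (or the chatterbox-v2 fork), so pip cannot replace the pinned torch and onnx wheels with whatever chatterbox's own metadata requests.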
@@ -493,12 +499,13 @@ The `requirements.txt` file is specially crafted for CPU users. It tells `pip` t
 
 For users with NVIDIA GPUs. This provides the best performance for RTX 20/30/40 series.
 
-**Prerequisite:** Ensure you have the latest NVIDIA drivers installed.
+**Prerequisite:** Ensure you have the latest NVIDIA drivers installed. **Python 3.10 recommended** (3.11/3.12 also work; 3.13+ is not supported).
 **After installation, verify that PyTorch can see your GPU:**
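The README's verification block itself falls between the diff hunks shown here, but judging by the "If `CUDA available:` shows `True`" text in the next hunk it is presumably along these lines (wrapped in an import guard here so the sketch also runs where torch is absent):

```python
# Quick check that the CUDA-enabled PyTorch wheels are installed and that
# PyTorch can see the GPU. On a working setup this prints "CUDA available: True".
try:
    import torch

    print(f"CUDA available: {torch.cuda.is_available()}")
    if torch.cuda.is_available():
        print(f"Device: {torch.cuda.get_device_name(0)}")
except ImportError:
    print("PyTorch is not installed in this environment.")
```

If this prints `False` on a CUDA machine, pip most likely replaced the CUDA wheels with CPU-only ones — exactly the failure mode the `--no-deps` install is meant to prevent.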
@@ -509,7 +516,7 @@ If `CUDA available:` shows `True`, your setup is correct!
 
 <details>
 <summary><strong>💡 How This Works</strong></summary>
-The `requirements-nvidia.txt` file instructs `pip` to use PyTorch's official CUDA 12.1 package repository. It pins specific, compatible versions of `torch`, `torchvision`, and `torchaudio` that are built with CUDA support. This guarantees that the versions required by `chatterbox-tts` are met with the correct GPU-enabled libraries, preventing conflicts.
+The `requirements-nvidia.txt` file installs PyTorch with CUDA 12.1 support plus all server dependencies. Chatterbox is installed separately with `--no-deps` to prevent pip from downgrading the CUDA torch to a CPU version or triggering ONNX source builds.
 </details>
 
 ---
@@ -1199,9 +1206,10 @@ lspci | grep VGA
 ### Apple Silicon (MPS) Issues
 
 * **MPS Not Available:** Ensure you have macOS 12.3+ and an Apple Silicon Mac. Verify with `python -c "import torch; print(torch.backends.mps.is_available())"`
+* **Turbo Model Float64 Error:** If you see "Cannot convert a MPS Tensor to float64 dtype", update to the latest version. This is now fixed in the chatterbox-v2 fork (s3tokenizer and voice_encoder force float32). The `start.py` launcher also applies this patch automatically.
 * **Installation Conflicts:** If you encounter version conflicts, follow the exact Apple Silicon installation sequence in Option 4, installing PyTorch first before other dependencies.
-* **ONNX Build Errors:** Use the specific ONNX version `pip install onnx==1.16.0` as shown in the installation steps.
-* **Model Loading Errors:** Ensure `config.yaml` has `device: mps` in the `tts_engine` section.
+* **ONNX Build Errors:** Now resolved — `onnx==1.16.0` is pinned in all requirements files to use pre-built wheels. If you still hit issues, ensure you're using Python 3.10.
+* **Model Loading Errors:** Ensure `config.yaml` has `device: auto` (or `device: mps`) in the `tts_engine` section.
 
 
 ### NVIDIA GPU Issues
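The `device: auto` setting mentioned above can be sketched as a small resolver. This is illustrative only — the real code would query `torch.cuda.is_available()` and `torch.backends.mps.is_available()` rather than take booleans, and the function name is invented:

```python
def resolve_device(configured: str, cuda_ok: bool, mps_ok: bool) -> str:
    """Map a config.yaml device value to a concrete backend.

    "auto" picks the best available backend in the order CUDA > MPS > CPU;
    any explicit value ("cuda", "mps", "cpu") passes through unchanged.
    """
    if configured != "auto":
        return configured
    if cuda_ok:
        return "cuda"
    if mps_ok:
        return "mps"
    return "cpu"
```

With `auto` as the default, the same `config.yaml` works unchanged on NVIDIA boxes, Apple Silicon, and CPU-only machines.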
@@ -1222,6 +1230,8 @@ lspci | grep VGA
 
 ### General Issues
 
+* **ONNX / wheel build failures:** This is usually caused by Python 3.13+ or a missing pre-built wheel. Use Python 3.10 and ensure `onnx==1.16.0` is pinned. The updated requirements files handle this automatically.
+* **"No matching distribution found for torch==2.5.1+cu121":** You're likely on Python 3.13+, which doesn't have torch 2.5.1 wheels. Downgrade to Python 3.10 or use the Windows launcher's Portable Mode, which handles this automatically.
 * **Import Errors (e.g., `chatterbox-tts`, `librosa`):** Ensure the virtual environment is active and dependencies installed successfully. Try reinstalling: `python start.py --reinstall`
 * **`libsndfile` Error (Linux):** Run `sudo apt install libsndfile1`.
 * **Model Download Fails:** Check internet connection. `ChatterboxTTS.from_pretrained()` will attempt to download from Hugging Face Hub. Ensure `model.repo_id` in `config.yaml` is correct.