Commit 3990199

Merge pull request #27 from SharpAI/feature/speculative-decoding-ci
test: add speculative decoding E2E test to CI pipeline
2 parents c6e6212 + 92565a9

6 files changed

Lines changed: 680 additions & 5 deletions


.github/workflows/ci.yml

Lines changed: 136 additions & 0 deletions
@@ -106,3 +106,139 @@ jobs:
           name: ci-test-logs
           path: /tmp/SwiftLM-test-*.log
           retention-days: 7
+
+  # ── Speculative Decoding E2E (dual-model: 0.8B draft + 4B main) ──
+  # Uses the standard macos-15 runner (7 GB RAM).
+  # We test the 4B main model which safely fits within memory.
+  speculative-decoding:
+    runs-on: macos-15
+    timeout-minutes: 45
+    needs: ci # Only run after core CI passes
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          submodules: recursive
+
+      - name: Install Metal Toolchain
+        run: xcodebuild -downloadComponent MetalToolchain || true
+
+      - name: Cache Swift packages
+        uses: actions/cache@v4
+        with:
+          path: .build
+          key: ${{ runner.os }}-spm-SwiftLM-v2-${{ hashFiles('Package.resolved') }}
+          restore-keys: |
+            ${{ runner.os }}-spm-SwiftLM-v2-
+
+      - name: Clear stale module cache
+        run: find .build -type d -name ModuleCache -exec rm -rf {} + 2>/dev/null || true
+
+      - name: Resolve dependencies
+        run: swift package resolve
+
+      - name: Build (Release)
+        run: swift build -c release
+
+      - name: Install MLX Metal library
+        run: |
+          python3 -m venv /tmp/mlx_venv
+          /tmp/mlx_venv/bin/pip install --quiet mlx
+          cp /tmp/mlx_venv/lib/python*/site-packages/mlx/lib/mlx.metallib .build/release/
+
+      - name: Cache MLX models (draft + main)
+        uses: actions/cache@v4
+        with:
+          path: ~/.cache/huggingface
+          key: mlx-speculative-qwen35-0.8b-9b
+
+      - name: Run speculative decoding E2E
+        env:
+          HF_HUB_DOWNLOAD_TIMEOUT: "900"
+        run: |
+          chmod +x tests/test-speculative.sh
+          for attempt in 1 2 3; do
+            echo "Attempt $attempt of 3..."
+            if tests/test-speculative.sh .build/release/SwiftLM 15414; then
+              exit 0
+            fi
+            if [ "$attempt" -lt 3 ]; then
+              echo "Test failed, retrying in 10s..."
+              sleep 10
+            fi
+          done
+          echo "All attempts failed"
+          exit 1
+
+      - name: Upload speculative test logs on failure
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: speculative-test-logs
+          path: /tmp/SwiftLM-test-speculative.log
+          retention-days: 7
+
+  # ── Speculative Decoding Memory Evaluation ──
+  # Runs the 9B model with NUM_DRAFT_TOKENS=2 to check peak
+  # memory compression/efficiency. Allowed to OOM/fail.
+  speculative-decoding-eval:
+    runs-on: macos-15
+    timeout-minutes: 45
+    needs: ci
+    continue-on-error: true
+    steps:
+      - uses: actions/checkout@v4
+        with:
+          submodules: recursive
+
+      - name: Install Metal Toolchain
+        run: xcodebuild -downloadComponent MetalToolchain || true
+
+      - name: Cache Swift packages
+        uses: actions/cache@v4
+        with:
+          path: .build
+          key: ${{ runner.os }}-spm-SwiftLM-v2-${{ hashFiles('Package.resolved') }}
+          restore-keys: |
+            ${{ runner.os }}-spm-SwiftLM-v2-
+
+      - name: Clear stale module cache
+        run: find .build -type d -name ModuleCache -exec rm -rf {} + 2>/dev/null || true
+
+      - name: Resolve dependencies
+        run: swift package resolve
+
+      - name: Build (Release)
+        run: swift build -c release
+
+      - name: Install MLX Metal library
+        run: |
+          python3 -m venv /tmp/mlx_venv
+          /tmp/mlx_venv/bin/pip install --quiet mlx
+          cp /tmp/mlx_venv/lib/python*/site-packages/mlx/lib/mlx.metallib .build/release/
+
+      - name: Run speculative evaluation E2E
+        env:
+          HF_HUB_DOWNLOAD_TIMEOUT: "900"
+        run: |
+          chmod +x tests/test-speculative-eval.sh
+          for attempt in 1 2 3; do
+            echo "Attempt $attempt of 3..."
+            if tests/test-speculative-eval.sh .build/release/SwiftLM 15414; then
+              exit 0
+            fi
+            if [ "$attempt" -lt 3 ]; then
+              echo "Test failed, retrying in 10s..."
+              sleep 10
+            fi
+          done
+          echo "All attempts failed"
+          exit 1
+
+      - name: Upload speculative eval logs on failure
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: speculative-eval-logs
+          path: /tmp/SwiftLM-test-speculative-eval.log
+          retention-days: 7
+

Package.resolved

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

Package.swift

Lines changed: 1 addition & 1 deletion
@@ -13,7 +13,7 @@ let package = Package(
         // Local Apple MLX Swift fork for C++ extensions
         .package(url: "https://github.com/SharpAI/mlx-swift.git", branch: "main"),
         // Apple's LLM library built on MLX Swift (SharpAI fork — with GPU/CPU layer partitioning)
-        .package(url: "https://github.com/ericjlake/mlx-swift-lm.git", branch: "feat/ssd-streaming-10x"),
+        .package(url: "https://github.com/SharpAI/mlx-swift-lm.git", branch: "main"),
         // HuggingFace tokenizers + model download
         .package(url: "https://github.com/huggingface/swift-transformers", .upToNextMinor(from: "1.2.0")),
         // Lightweight HTTP server (Apple-backed Swift server project)

README.md

Lines changed: 70 additions & 3 deletions
@@ -81,7 +81,8 @@ Benchmark results for `gemma-4-26b-a4b-it-4bit` (26B MoE, 4-bit) on M5 Pro 64 GB
 - 🔌 **OpenAI-compatible**: Drop-in replacement for OpenAI SDKs (`/v1/chat/completions`, streaming, etc).
 - 🧠 **Smart Model Routing**: Loads HuggingFace format models directly, with native Safetensors parsing.
 - ⚡️ **TurboQuantization Integrated**: Custom low-level MLX Metal primitives that apply extremely fast quantization for KV caching out-of-the-box.
-- 💾 **SSD Expert Streaming**: *Experimental* zero-copy streaming that swaps Mixture of Experts (MoE) layers directly from the NVMe SSD to the GPU command buffer without trashing macOS Unified Memory (prevents Watchdog OS kernel panics on 122B+ models).
+- 💾 **SSD Expert Streaming (10x)**: High-performance NVMe streaming that loads Mixture of Experts (MoE) layers directly from SSD to GPU — engineered by [@ericjlake](https://github.com/ericjlake), achieving **10x speedup** (0.58 → 5.91 tok/s) on 122B+ models with only ~10 GB resident memory. Uses cross-projection batching, concurrent pread (QD=24), asyncEval pipeline, and runtime top-k expert selection.
+- 🔮 **Speculative Decoding**: Load a small draft model (e.g. 9B) alongside a large main model to generate candidate tokens and verify in bulk — accelerating in-RAM inference.
 - 🎛️ **Granular Memory Control**: Integrated Layer Partitioning (`--gpu-layers`) and Wisdom Auto-Calibration for squeezing massive models into RAM.
 
 ---
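Editor's note: the Speculative Decoding bullet added above compresses the whole draft-and-verify loop into one sentence. As a rough sketch of that control flow (not SwiftLM's actual API; the `LanguageModel` protocol and greedy `nextToken(after:)` helper below are hypothetical stand-ins), the pattern looks like this in Swift:

```swift
// Hypothetical sketch of greedy draft-and-verify speculative decoding.
// `LanguageModel` and `nextToken(after:)` are illustrative stand-ins,
// not SwiftLM's real types.
protocol LanguageModel {
    /// Greedily pick the next token given the full context so far.
    func nextToken(after context: [Int]) -> Int
}

func speculativeDecode(
    main: LanguageModel,
    draft: LanguageModel,
    prompt: [Int],
    numDraftTokens: Int = 4,
    maxTokens: Int = 128
) -> [Int] {
    var tokens = prompt
    while tokens.count - prompt.count < maxTokens {
        // 1. The cheap draft model proposes a short run of candidate tokens.
        var candidates: [Int] = []
        for _ in 0..<numDraftTokens {
            candidates.append(draft.nextToken(after: tokens + candidates))
        }
        // 2. The main model verifies the candidates; in a real engine this is
        //    a single batched forward pass over all candidate positions.
        var accepted = 0
        for candidate in candidates {
            let verified = main.nextToken(after: tokens)
            if verified == candidate {
                tokens.append(candidate)
                accepted += 1
            } else {
                // First mismatch: keep the main model's token and stop the round.
                tokens.append(verified)
                break
            }
        }
        // 3. If every candidate was accepted, the main model still contributes
        //    one extra token for the round.
        if accepted == numDraftTokens {
            tokens.append(main.nextToken(after: tokens))
        }
    }
    return tokens
}
```

When the draft model's guesses match, each verification round yields several accepted tokens for roughly the cost of one main-model step, which is where the speedup for in-RAM models comes from.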
@@ -146,6 +147,64 @@ Reference implementations: [`turboquant-mlx`](https://github.com/sharpner/turboq
 
 ---
 
+## 💾 SSD Expert Streaming: 10x MoE Speedup
+
+SwiftLM implements a **rewritten SSD expert streaming pipeline** (engineered by [Eric Lake](https://github.com/ericjlake)) that achieves 10x generation speedup for massive Mixture of Experts (MoE) models running on memory-constrained Apple Silicon. This enables running models like **Qwen3.5-122B** (69.6 GB) and **Qwen3.5-397B** (209 GB) on a **64 GB Mac** by streaming expert weights from NVMe SSD.
+
+### Benchmark Results (M1 Ultra 64GB, Qwen3.5-122B-A10B-4bit)
+
+| Configuration | tok/s | vs. Original | Notes |
+|---|---|---|---|
+| Original `--stream-experts` | 0.58 | baseline | Sequential pread, 1 NVMe queue |
+| **This PR (top-k=8, full quality)** | **4.95** | **8.5×** | All 8 experts evaluated |
+| **This PR (top-k=6, default)** | **5.20** | **9.0×** | Recommended default |
+| **This PR (top-k=4, speed mode)** | **5.91** | **10.2×** | Best quality/speed tradeoff |
+| **This PR (top-k=2, turbo mode)** | **6.52** | **11.2×** | Still coherent output |
+
+> Memory stable at **~10.6 GB resident**, no swap activity. Tested over 200-token generation runs.
+
+### The Approach: Small Model Helps Large Model
+
+A novel aspect of this architecture is the **dual-model speculative decoding** pattern: a small draft model (e.g. Qwen3.5-9B at 73 tok/s) runs **entirely in RAM** while the large MoE model (e.g. 122B) streams experts from SSD. The draft model generates candidate tokens at high speed, and the main model verifies them in bulk — dramatically reducing the number of SSD-bound generation rounds needed.
+
+> **Important finding:** Speculative decoding is **counterproductive for SSD-streaming MoE** specifically. The verify pass sends N+1 tokens, each routing to *different* experts — SSD I/O scales with the *union* of all positions' expert selections. Speculative decoding is therefore routed exclusively to **in-RAM models**.
+
+### Optimization Techniques
+
+1. **Cross-Projection Batching**: Collapses ~1,400 per-expert `eval()` calls down to ~48 per token by orchestrating gate/up/down projections together in `SwitchGLU`.
+2. **Concurrent NVMe pread (QD=24)**: Replaces sequential pread with `DispatchQueue.concurrentPerform`, saturating the NVMe controller's queue depth (8 experts × 3 projections = 24 parallel reads).
+3. **AsyncEval Pipeline with Speculative Pread**: Overlaps GPU compute with SSD I/O — uses previous-token routing to speculatively pre-load experts for the next token during the GPU async window (~70% hit rate). Only missed experts (~30%) require on-demand pread after routing sync.
+4. **Persistent Metal Buffers**: Expert weight buffers are allocated once per `SwitchGLU` layer and reused across tokens, eliminating per-token allocation overhead.
+5. **Runtime Top-K Expert Selection**: The `SWIFTLM_TOP_K` environment variable reduces the number of active experts per token at runtime without model recompilation — trading marginal quality for significant speed gains.
+
+### Key Engineering Findings
+
+| Finding | Detail |
+|---|---|
+| **GPU compute is the bottleneck** | At steady state, GPU compute is ~190ms of ~200ms per-token time. The OS page cache serves ~90% of expert reads from RAM. |
+| **Don't cache experts in application memory** | An LRU expert cache *stole* from the OS page cache and regressed performance (4.84 → 4.01 tok/s). Let the kernel manage it. |
+| **MambaCache requires checkpoint rollback** | Unlike attention KV caches (trim = decrement offset), Mamba's recurrent state integrates all history and cannot be partially undone. We implemented `checkpoint()`/`restore()` for speculative decoding on hybrid Attention+Mamba architectures (Qwen3.5). |
+
+### Usage
+
+```bash
+# Standard SSD streaming (recommended, top-k=6):
+SWIFTLM_TOP_K=6 SwiftLM --port 8002 \
+  --model <path>/Qwen3.5-122B-A10B-4bit --stream-experts
+
+# Speed mode (top-k=4):
+SWIFTLM_TOP_K=4 SwiftLM --port 8002 \
+  --model <path>/Qwen3.5-122B-A10B-4bit --stream-experts
+
+# With speculative decoding (in-RAM models only):
+SwiftLM --port 8002 \
+  --model <path>/Qwen3.5-27B-4bit \
+  --draft-model <path>/Qwen3.5-9B-4bit \
+  --num-draft-tokens 4
+```
+
+---
+
 ## 💻 Benchmarks & Testing
 
 Run our automated benchmark suites via the interactive script:
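Editor's note: optimization 2 in the hunk above (concurrent NVMe pread at QD=24) is easy to picture with a small sketch. The snippet below is an illustrative stand-in under stated assumptions, not SwiftLM's actual streaming code: the `ExpertSlice` layout, the plain `Data` buffers, and the lock-protected result array are assumptions, and the real pipeline writes into persistent Metal buffers instead.

```swift
import Foundation

/// Hypothetical descriptor for one expert projection stored in a weights file.
struct ExpertSlice {
    let offset: off_t   // byte offset of the projection inside the file
    let length: Int     // byte length of the quantized weight slice
}

/// Read all required expert slices for one token in parallel instead of one
/// pread at a time. With 8 routed experts and 3 projections each, this issues
/// 24 overlapping reads, which keeps the NVMe queue busy (the QD=24 idea).
func readExpertsConcurrently(fd: Int32, slices: [ExpertSlice]) -> [Data] {
    var results = [Data](repeating: Data(), count: slices.count)
    let lock = NSLock()

    DispatchQueue.concurrentPerform(iterations: slices.count) { i in
        let slice = slices[i]
        var buffer = Data(count: slice.length)
        let bytesRead = buffer.withUnsafeMutableBytes { raw -> Int in
            // pread is positional, so concurrent readers never race on a
            // shared file offset the way read(2) would.
            pread(fd, raw.baseAddress, slice.length, slice.offset)
        }
        guard bytesRead == slice.length else { return }
        lock.lock()
        results[i] = buffer
        lock.unlock()
    }
    return results
}
```

Because `pread` takes an explicit offset, the workers never contend on a shared file cursor, and issuing all 24 reads at once lets the NVMe controller reorder and overlap them.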
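Similarly, optimization 5 (runtime top-k selection) is just an environment-variable knob read at startup. A minimal sketch of reading and clamping it; the default of 6 and the bounds are assumptions taken from the text above, not the actual SwiftLM parsing code:

```swift
import Foundation

// Illustrative only: read the SWIFTLM_TOP_K knob at runtime so the number of
// active experts per token can change without recompiling the model.
let requestedTopK = ProcessInfo.processInfo.environment["SWIFTLM_TOP_K"]
    .flatMap(Int.init) ?? 6                 // assumed default
let activeExperts = min(max(requestedTopK, 1), 8)   // assumed valid range
print("Routing to top-\(activeExperts) experts per token")
```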
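The last row of the findings table above is the subtle one: an attention KV cache can discard rejected draft positions by decrementing its offset, but a recurrent Mamba state folds every token into a single tensor, so rejection requires restoring a snapshot. A minimal sketch of that checkpoint/restore idea, with hypothetical types rather than the actual mlx-swift-lm cache classes:

```swift
import Foundation

/// Illustrative recurrent-state cache for a Mamba-style layer. The real
/// implementation holds MLX arrays; plain arrays keep the sketch self-contained.
final class RecurrentStateCache {
    private var state: [Float]
    private var savedState: [Float]?

    init(stateSize: Int) {
        self.state = [Float](repeating: 0, count: stateSize)
    }

    /// Snapshot the recurrent state before the main model verifies draft tokens.
    func checkpoint() {
        savedState = state
    }

    /// Roll back to the snapshot when draft tokens are rejected. A KV cache can
    /// simply trim by decrementing its offset; a recurrent state cannot be
    /// partially undone, so the whole snapshot is restored instead.
    func restore() {
        if let saved = savedState {
            state = saved
        }
    }

    /// Fold one verified token's contribution into the recurrent state
    /// (stand-in for the real SSM update).
    func update(with contribution: [Float]) {
        for i in 0..<min(state.count, contribution.count) {
            state[i] += contribution[i]
        }
    }
}
```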
@@ -226,8 +285,10 @@ curl http://localhost:5413/v1/chat/completions \
 | `--max-tokens` | `2048` | Max tokens limit per generation |
 | `--prefill-size`| `512` | Prompt prefill chunk size (micro-batching for long contexts) |
 | `--gpu-layers` | `model_default`| Restrict the amount of layers allocated to GPU hardware |
-| `--stream-experts` | `false` | Enable experimental SSD streaming for MoE model expert matrices |
+| `--stream-experts` | `false` | Enable SSD expert streaming for MoE models (10x speedup) |
 | `--turbo-kv` | `false` | Enable TurboQuant 3-bit KV cache compression |
+| `--draft-model` | (none) | Draft model path/ID for speculative decoding (in-RAM models only) |
+| `--num-draft-tokens` | `4` | Number of draft tokens per speculation round |
 
 ## 📦 Requirements
 
@@ -247,7 +308,13 @@ The model instantly woke up from "whispering" whitespace and successfully respon
 
 ## 🙏 Acknowledgments & Credits
 
-`SwiftLM` leverages the powerful foundation of the Apple MLX community and relies heavily on the open-source ecosystem. While the custom C++ implementations, Metal optimizations, and high-performance pipeline architecture were engineered natively for this engine, we owe massive thanks to the following projects for their indispensable reference materials and underlying protocols:
+`SwiftLM` leverages the powerful foundation of the Apple MLX community and relies heavily on the open-source ecosystem. While the custom C++ implementations, Metal optimizations, and high-performance pipeline architecture were engineered natively for this engine, we owe massive thanks to the following projects and contributors for their indispensable reference materials and underlying protocols:
+
+### Contributors
+
+- **[Eric Lake](https://github.com/ericjlake)** — Engineered the **SSD Expert Streaming 10x rewrite** ([PR #26](https://github.com/SharpAI/SwiftLM/pull/26)), achieving 10× generation speedup on 122B+ MoE models via cross-projection batching, concurrent NVMe pread (QD=24), asyncEval pipeline with speculative pread, and runtime top-k expert selection. Also implemented the **speculative decoding infrastructure** with `DraftModelRef`, dual-model loading, and **MambaCache checkpoint/restore** for hybrid Attention+Mamba architectures.
+
+### Projects & References
 
 - **[mlx-swift](https://github.com/ml-explore/mlx-swift)** — The core Apple MLX wrapper bringing Metal-accelerated operations into the Swift ecosystem.
 - **[mlx-lm](https://github.com/ml-explore/mlx/tree/main/mlx_lm)** — The official Python language models implementation, serving as the core inspiration for our chunked-prefill architecture and attention manipulation logic.
