feat(ruvllm): TurboQuant KV cache & vector compression #297
Merged
Conversation
Implement data-oblivious KV cache and embedding compression based on TurboQuant (ICLR 2026). Two-stage pipeline: PolarQuant (Hadamard rotation + scalar quantization) followed by QJL residual correction (1-bit), achieving ~3.5 bits per value with geometry-preserving compression.

New modules:
- turbo_quant.rs: core TurboQuantCompressor with compress/decompress, TurboQuantCacheTier for the KV cache, TurboQuantEmbeddingStore for RuVector integration, and an asymmetric inner product for attention
- TurboQuantKvCache: three-tier cache (FP16 hot + TurboQuant cold) integrated into kv_cache.rs with auto-migration

Key features:
- 2.5/3.0/3.5/4.0-bit configurations with a QJL residual toggle
- ~6x memory reduction on the cold tier while preserving inner-product geometry
- Bitstream packing handles non-byte-aligned bit widths
- Embedding store with batch build, search, and nearest-neighbor lookup
- 13 passing tests covering roundtrip, compression, inner products, batch ops, the KV cache tier, eviction, and embedding search

https://claude.ai/code/session_011ogX2uc7Zf8d8aQ3UAbNcd
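The two-stage idea (rotate, then scalar-quantize) can be sketched as below. This is an illustrative standalone sketch, not the actual `turbo_quant.rs` API: the function names `fwht`, `quantize`, and `dequantize` are hypothetical, and the real compressor adds the 1-bit QJL residual stage and bitstream packing on top.

```rust
/// In-place fast Walsh-Hadamard transform; `v.len()` must be a power of two.
/// Normalized so the transform is orthonormal (and thus its own inverse).
fn fwht(v: &mut [f32]) {
    let n = v.len();
    let mut h = 1;
    while h < n {
        for i in (0..n).step_by(h * 2) {
            for j in i..i + h {
                let (a, b) = (v[j], v[j + h]);
                v[j] = a + b;
                v[j + h] = a - b;
            }
        }
        h *= 2;
    }
    let scale = 1.0 / (n as f32).sqrt();
    for x in v.iter_mut() {
        *x *= scale;
    }
}

/// Stage 2: uniform scalar quantization of the rotated vector to `bits` per value.
/// Returns the codes plus the (lo, step) needed to dequantize.
fn quantize(v: &[f32], bits: u32) -> (Vec<u32>, f32, f32) {
    let lo = v.iter().cloned().fold(f32::INFINITY, f32::min);
    let hi = v.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let levels = (1u32 << bits) - 1;
    let step = (hi - lo) / levels as f32;
    let codes = v
        .iter()
        .map(|&x| (((x - lo) / step).round() as u32).min(levels))
        .collect();
    (codes, lo, step)
}

fn dequantize(codes: &[u32], lo: f32, step: f32) -> Vec<f32> {
    codes.iter().map(|&c| lo + c as f32 * step).collect()
}

fn main() {
    let mut v = vec![0.9_f32, -0.3, 0.5, 0.1, -0.7, 0.2, 0.4, -0.1];
    let orig = v.clone();
    fwht(&mut v); // stage 1: rotate (smooths out outliers before quantizing)
    let (codes, lo, step) = quantize(&v, 4); // stage 2: 4-bit scalar quant
    let mut recon = dequantize(&codes, lo, step);
    fwht(&mut recon); // orthonormal FWHT is its own inverse
    let err = orig
        .iter()
        .zip(&recon)
        .map(|(a, b)| (a - b).abs())
        .fold(0.0_f32, f32::max);
    assert!(err < 0.2);
    println!("max roundtrip error: {err}");
}
```

The rotation matters because it is data-oblivious (no calibration needed) and spreads per-vector outliers across all coordinates, so a simple uniform quantizer loses far less information than it would on the raw values.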
Add a comprehensive research document covering TurboQuant (ICLR 2026) and its mapping to ruvLLM: algorithm details, performance results, integration architecture, PiQ3 comparison, risks/mitigations, and an implementation summary. https://claude.ai/code/session_011ogX2uc7Zf8d8aQ3UAbNcd
Resolve Code Quality CI failure by applying cargo fmt. Co-Authored-By: claude-flow <ruv@ruv.net>
…benchmarks

- Add rotated-domain inner product (skip the inverse Hadamard via orthogonal invariance: <Hq,Hk> = <q,k>), ~2x faster for attention computation
- Add a batch-optimized variant that rotates the query once across all keys
- Add a Criterion benchmark suite: compression, decompression, inner product, KV cache ops, embedding store, dimension scaling, memory efficiency
- 5 new tests verifying the optimized methods match the original results
- All 18 TurboQuant tests passing

Co-Authored-By: claude-flow <ruv@ruv.net>
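The orthogonal-invariance trick above can be verified in a few lines. A minimal sketch (illustrative only, not the crate's API): because the normalized Hadamard transform is orthonormal, <Hq,Hk> = <q,k>, so attention scores can be computed directly on rotated (stored) keys, skipping the inverse transform per key.

```rust
/// In-place orthonormal fast Walsh-Hadamard transform (len must be a power of two).
fn fwht(v: &mut [f32]) {
    let n = v.len();
    let mut h = 1;
    while h < n {
        for i in (0..n).step_by(h * 2) {
            for j in i..i + h {
                let (a, b) = (v[j], v[j + h]);
                v[j] = a + b;
                v[j + h] = a - b;
            }
        }
        h *= 2;
    }
    let scale = 1.0 / (n as f32).sqrt();
    for x in v.iter_mut() {
        *x *= scale;
    }
}

fn dot(a: &[f32], b: &[f32]) -> f32 {
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let q = vec![0.2_f32, -0.5, 0.8, 0.1];
    let k = vec![0.7_f32, 0.3, -0.4, 0.6];
    let (mut hq, mut hk) = (q.clone(), k.clone());
    fwht(&mut hq); // rotate the query once...
    fwht(&mut hk); // ...keys are already stored rotated in the cold tier
    let direct = dot(&q, &k);
    let rotated = dot(&hq, &hk);
    // Orthonormal rotation preserves inner products, so no inverse is needed.
    assert!((direct - rotated).abs() < 1e-5);
    println!("<q,k> = {direct:.4}, <Hq,Hk> = {rotated:.4}");
}
```

This is also why the batch-optimized variant pays the query rotation once: every subsequent key score is a plain dot product against already-rotated stored codes.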
Summary
- TurboQuantKvCache: three-tier cache (FP16 hot + TurboQuant ~3.5-bit cold) with auto-migration
- TurboQuantEmbeddingStore: RuVector-compatible compressed vector search

Key metrics
Files changed
- crates/ruvllm/src/quantize/turbo_quant.rs
- crates/ruvllm/src/quantize/mod.rs
- crates/ruvllm/src/kv_cache.rs (CacheTier::TurboQuant, TurboQuantKvCache integration)
- docs/research/quantization-edge/08-turboquant-kv-cache-compression.md

Test plan
- cargo build -p ruvllm --features quantize succeeds
- cargo test -p ruvllm --features quantize -- turbo_quant: 13/13 tests pass

https://claude.ai/code/session_011ogX2uc7Zf8d8aQ3UAbNcd
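The ~3.5-bit configurations mean codes do not fall on byte boundaries, which is what the bitstream packing handles. A minimal LSB-first packing sketch, with hypothetical `pack`/`unpack` names (not the crate's bitstream code):

```rust
/// Pack `bits`-wide codes (bits < 32) into a contiguous LSB-first byte stream.
fn pack(codes: &[u32], bits: u32) -> Vec<u8> {
    let mut out = Vec::new();
    let mut acc: u64 = 0; // bit accumulator
    let mut n: u32 = 0;   // bits currently held in `acc`
    for &c in codes {
        acc |= (c as u64) << n;
        n += bits;
        while n >= 8 {
            out.push((acc & 0xFF) as u8);
            acc >>= 8;
            n -= 8;
        }
    }
    if n > 0 {
        out.push((acc & 0xFF) as u8); // flush the final partial byte
    }
    out
}

/// Recover `count` codes of `bits` width from the packed stream.
fn unpack(bytes: &[u8], bits: u32, count: usize) -> Vec<u32> {
    let mask = (1u64 << bits) - 1;
    let mut out = Vec::with_capacity(count);
    let mut acc: u64 = 0;
    let mut n: u32 = 0;
    let mut iter = bytes.iter();
    for _ in 0..count {
        while n < bits {
            acc |= (*iter.next().expect("truncated stream") as u64) << n;
            n += 8;
        }
        out.push((acc & mask) as u32);
        acc >>= bits;
        n -= bits;
    }
    out
}

fn main() {
    let codes = vec![5u32, 0, 7, 3, 6, 1, 2, 4]; // eight 3-bit codes
    let packed = pack(&codes, 3);
    assert_eq!(packed.len(), 3); // 24 bits -> 3 bytes, vs 8 bytes unpacked
    assert_eq!(unpack(&packed, 3, codes.len()), codes);
    println!("packed {} codes into {} bytes", codes.len(), packed.len());
}
```

Fractional averages like 3.5 bits per value typically come from mixing widths (e.g. a coarse code plus a 1-bit residual per value); the same accumulator scheme handles any per-stage width.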