perf: build-time precompression + startup metadata cache for static serving #641

Open
NathanDrake2406 wants to merge 10 commits into cloudflare:main from NathanDrake2406:perf/precompressed-static-serving

Conversation

Contributor

@NathanDrake2406 NathanDrake2406 commented Mar 22, 2026

This PR implements static file serving that outperforms Next.js 16.2.

Bugs fixed

  1. Event loop blocking — existsSync + statSync ran on every static file request, blocking SSR responses behind synchronous filesystem calls. Now zero FS calls per request (metadata cached at startup).
  2. No 304 Not Modified — No ETag, no If-None-Match support. Every repeat visit re-downloaded every asset in full. Now returns 304 (200 bytes) when the browser already has the asset.
  3. HEAD returned full body — HEAD requests streamed the entire file body. Now sends headers only, per HTTP spec.

Optimizations

  1. Build-time precompression — .br (brotli q11), .gz (gzip l9), and .zst (zstd l19) generated at build time. Zero compression CPU per request.
  2. Startup metadata cache — Pre-computed response headers per variant. Zero object allocation in the hot path.
  3. In-memory buffer serving — Small precompressed files (< 64KB) served via res.end(buffer) instead of createReadStream().pipe(), eliminating file descriptor overhead.
  4. Zstandard support — First Node.js framework to serve .zst assets. 3-5x faster client-side decompression than brotli (Chrome 123+, Firefox 126+).
  5. Async filesystem fallback — Non-cached files use fsp.stat() instead of blocking statSync.
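
The fallback chain described above can be sketched as a pure negotiation step (a minimal illustration with hypothetical names; the real entry type lives in src/server/static-file-cache.ts):

```ts
// Illustrative sketch, not the actual vinext API: pick the best
// precompressed variant the client accepts, preferring zstd > br > gzip.
type VariantKey = "zst" | "br" | "gz" | "original";

interface CacheEntry {
  zst?: boolean; // a .zst sibling exists on disk
  br?: boolean;
  gz?: boolean;
}

function pickVariant(acceptEncoding: string | undefined, entry: CacheEntry): VariantKey {
  if (!acceptEncoding) return "original";
  const accept = acceptEncoding.toLowerCase(); // encoding tokens are case-insensitive per HTTP
  if (entry.zst && accept.includes("zstd")) return "zst";
  if (entry.br && accept.includes("br")) return "br";
  if (entry.gz && accept.includes("gzip")) return "gz";
  return "original";
}
```

Each key would map to a variant whose headers were pre-computed at startup, so the hot path does no string building.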

Real-world impact

  • SSR doesn't stall anymore. Before: 10 concurrent static file requests block the event loop with sync stat calls while SSR waits. Now: static serving is non-blocking with zero syscalls.
  • Repeat visits transfer almost nothing. Before: full asset re-download on every revisit. Now: 304 response (200 bytes) instead of the full bundle.
  • First visits transfer 70-80% less data. Before: raw uncompressed JS/CSS. Now: build-time brotli q11 or zstd. A 200KB bundle becomes ~50KB. On mobile, that's seconds saved.
  • Server CPU drops to near zero for static assets. Before: brotli compression on every request. Now: pre-compressed at build time, served from memory.

Benchmark

Averaged over 10 runs. 5000 requests per run, 10 concurrent connections, 5 × 50KB JS bundles. vinext NEW won every single run.

Throughput (Accept-Encoding: zstd, br, gzip)

| | avg req/s | median req/s | min | max | vs vinext before |
| --- | --- | --- | --- | --- | --- |
| vinext (this PR) | 24,716 | 25,997 | 12,564 | 29,276 | 2.9x faster |
| SvelteKit / sirv 3.0.2 | 15,088 | 15,838 | 7,309 | 18,284 | 1.8x faster |
| Next.js 16.2 / send 1.2.1 | 10,174 | 11,261 | 6,327 | 12,614 | 1.2x faster |
| vinext (before) | 8,520 | 8,765 | 6,501 | 9,816 | baseline |

304 Not Modified (conditional request with matching ETag)

| | req/s |
| --- | --- |
| vinext (this PR) | ~30,000 |
| SvelteKit / sirv 3.0.2 | ~17,000 |

Transfer size (5000 requests)

| | Total transferred | Per-file compressed size | Compression |
| --- | --- | --- | --- |
| vinext (this PR) | 508 KB | ~101 B (zstd l19) | build-time, max quality |
| SvelteKit / sirv | 454 KB | ~93 B (brotli) | build-time |
| vinext (before) | 459 KB | ~94 B (brotli q4) | per-request, fast quality |
| Next.js / send | 250,000 KB | 51,200 B (none) | none |

vinext serves zstd when accepted (trades ~12% larger output for 3-5x faster client-side decompression). When serving brotli, vinext produces the smallest output of all — brotli q11 (93 B) vs old vinext's q4 (94 B).

Feature comparison

| | vinext (this PR) | SvelteKit / sirv | Next.js / send | vinext (before) |
| --- | --- | --- | --- | --- |
| Per-request FS calls | 0 | 0 | 1 (stat) | 2 (existsSync + statSync) |
| Event loop blocking | No | No | No | Yes (sync stat) |
| Precompression | zstd + brotli + gzip | brotli + gzip | none | none |
| Compression quality | max (build-time) | max (build-time) | N/A | fast (per-request) |
| Small file buffering | Yes (< 64KB in memory) | No (always streams) | No (always streams) | No |
| 304 Not Modified | Yes | Yes | Yes | No |
| HEAD optimization | Yes | Yes | Yes | No |
| Zstandard | Yes | No | No | No |
| Content-Length | Yes (all responses) | Yes | Yes | No (compressed) |
| ETag | Yes | Yes | Yes | No |

Why vinext beats sirv

sirv always uses createReadStream().pipe() — even for a 100-byte precompressed file, this opens a file descriptor, creates a ReadStream, sets up pipe plumbing, reads 100 bytes, and closes the fd. vinext buffers small files (< 64KB) in memory at startup and serves them with res.end(buffer) — a single write to the socket. For large files, it still streams. Best of both worlds.
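
A minimal sketch of that decision (the 64KB threshold comes from this PR's description; the helper names are hypothetical, not vinext code):

```ts
// Illustrative only — not the actual vinext implementation.
const BUFFER_LIMIT = 64 * 1024; // small-file threshold from the PR description

interface Variant {
  size: number;
  path: string;
  buffer?: Uint8Array; // filled at startup only when size < BUFFER_LIMIT
}

// Startup: decide whether to hold the bytes in memory.
function shouldBuffer(size: number): boolean {
  return size < BUFFER_LIMIT;
}

// Request time: one socket write for buffered files, a stream otherwise.
function serveStrategy(v: Variant): "end(buffer)" | "stream" {
  return v.buffer ? "end(buffer)" : "stream";
}
```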

Architecture

```
Build time:    precompressAssets()         → .br + .gz + .zst files on disk
Server boot:   StaticFileCache.create()    → scan dirs, cache metadata + buffers
Per request:   Map.get() → ETag check → res.writeHead(precomputed) → res.end(buf)
```

New modules:

  • src/build/precompress.ts — build-time compression (brotli q11, gzip l9, zstd l19)
  • src/server/static-file-cache.ts — startup cache with pre-computed headers + in-memory buffers
  • prod-server.ts refactored tryServeStatic — async, cache-aware, precompressed variant serving

Test plan

  • precompressAssets: generates .br/.gz/.zst, skips small files, skips non-compressible types, handles missing dirs, idempotent, correct decompression (13 tests)
  • StaticFileCache: scan, lookup, HTML fallbacks, .vite/ blocking, etag, variant detection, nested dirs (20 tests)
  • tryServeStatic: precompressed serving (zstd/br/gz fallback chain), 304 Not Modified, HEAD, Content-Length, Vary, extra headers, traversal protection (19 tests)
  • Existing prod-server tests: 254 pass, zero regressions
  • CI: format, lint, typecheck, vitest, playwright

@NathanDrake2406 NathanDrake2406 force-pushed the perf/precompressed-static-serving branch from eca889d to 1798289 on March 22, 2026 04:03

pkg-pr-new bot commented Mar 22, 2026


```
npm i https://pkg.pr.new/vinext@641
```

commit: c1c6826

@NathanDrake2406 NathanDrake2406 force-pushed the perf/precompressed-static-serving branch from 1798289 to 2fd407c on March 22, 2026 04:05
Generate .br (brotli q11), .gz (gzip l9), and .zst (zstandard l19)
files alongside compressible assets in dist/client/assets/ during
vinext build. These are served directly by the production server,
eliminating per-request compression overhead for immutable build output.

Only targets assets/ (hashed, immutable). Public directory files still
use on-the-fly compression since they may change between deploys.

StaticFileCache walks dist/client/ once at server boot and caches:
- File metadata (path, size, content-type, cache-control, etag)
- Pre-computed response headers per variant (original, br, gz, zst)
- In-memory buffers for small files (< 64KB) for res.end(buffer)
- Precompressed variant paths and sizes

Per-request serving is Map.get() + res.end(buffer) with zero filesystem
calls, zero object allocation, and zero header construction. Modeled
after sirv's production mode but with in-memory buffering for small
files which eliminates createReadStream fd overhead.

Refactor tryServeStatic to use StaticFileCache for the hot path:
- Pre-computed response headers (zero object allocation per request)
- In-memory buffer serving for small precompressed files
- 304 Not Modified via ETag + If-None-Match
- HEAD request optimization (headers only, no body)
- Zstandard serving (zstd > br > gzip > original fallback chain)
- Async filesystem fallback for non-cached files (replaces blocking
  existsSync + statSync)
- Skip decodeURIComponent for clean URLs (no % in path)

Wire StaticFileCache.create() into both startAppRouterServer and
startPagesRouterServer at startup. Integrate precompressAssets() into
the vinext build pipeline.

CONTENT_TYPES is now a single source of truth exported from
static-file-cache.ts (was duplicated in prod-server.ts).
@NathanDrake2406 NathanDrake2406 force-pushed the perf/precompressed-static-serving branch from 75a0c8f to df77821 on March 23, 2026 14:58
@james-elicx
Collaborator

Nice idea! Thanks for the PR!

I wonder if we should gate this behind an option instead of being on-by-default, considering that Next.js doesn't do this... I wonder if compression level is something we should let users control as well.

It would be good to get a progress bar like we have in pre-rendering too if there isn't one already.

@james-elicx
Collaborator

/bigbonk review

Contributor

@ask-bonk ask-bonk bot left a comment

Review: Build-time precompression + startup metadata cache

Strong PR overall. The architecture (build-time precompression → startup cache → zero-allocation hot path) is sound, the test coverage is thorough (52 new tests), and the performance wins are real. The code quality is high — clear separation of concerns between precompress.ts, static-file-cache.ts, and the refactored tryServeStatic.

I have a few issues to flag, ranging from a correctness bug to design questions.

Key issues

  1. Missing Vary: Accept-Encoding on original variant when compressed variants exist — When a file has .br/.gz/.zst variants but the client doesn't accept any compression, the original variant is served without a Vary header. This means caches between the server and client could cache the uncompressed response and serve it to clients that do accept compression. The compressed variant headers correctly include Vary: Accept-Encoding, but the original variant needs it too when sibling compressed variants exist.

  2. Mtime-based ETag is fragile for containerized/CI deployments — W/"${size}-${mtimeMs}" uses the file modification time. In Docker builds, CI artifact uploads, or cp -r without --preserve, mtime changes between deploys even when file content is identical. This causes unnecessary cache invalidation for returning users. Hashed assets (which are the primary target) have content hashes in their filenames so this is less impactful in practice, but for non-hashed public directory files it matters. A content-hash-based ETag (or at least using the filename hash when available) would be more robust.

  3. Maintainer feedback on opt-in — @james-elicx raised making this opt-in/configurable. The precompression step adds meaningful build time (brotli q11 is slow), and since Next.js doesn't do this, it could surprise users. Worth addressing before merge.

See inline comments for specifics.

```ts
const original: FileVariant = {
  path: fileInfo.fullPath,
  headers: { ...baseHeaders, "Content-Length": String(fileInfo.size) },
};
```
Contributor

Bug: Missing Vary: Accept-Encoding on the original variant when compressed siblings exist.

When a file has .br/.gz/.zst precompressed variants, the original variant's headers omit Vary: Accept-Encoding. If a CDN or shared cache sits between the server and client, it could cache this uncompressed response and serve it to clients that accept compression (or vice versa).

The fix: when compressed variants exist for a file, add Vary: Accept-Encoding to the original variant's headers too.

Suggested change

```ts
};
const original: FileVariant = {
  path: fileInfo.fullPath,
  headers: { ...baseHeaders, "Content-Length": String(fileInfo.size) },
};
```

After the compressed variant detection block (after line ~180), add:

```ts
if (entry.br || entry.gz || entry.zst) {
  original.headers["Vary"] = "Accept-Encoding";
}
```

```ts
const cacheControl = isHashed
  ? "public, max-age=31536000, immutable"
  : "public, max-age=3600";
const etag = `W/"${fileInfo.size}-${fileInfo.mtimeMs}"`;
```
Contributor

Mtime-based ETag is fragile in container/CI deployments.

W/"${size}-${mtimeMs}" changes whenever the file is copied without preserving timestamps (common in Docker multi-stage builds, CI artifact caching, rsync without -t, etc.). For hashed assets this is mitigated by the immutable cache-control + filename hash, but for non-hashed public directory files (favicon.ico, robots.txt, etc.) this means unnecessary 200s instead of 304s across deploys.

Consider:

  1. For files under assets/ (already content-hashed): extract the hash from the filename for the ETag, or just use the size (since the filename changes on content change anyway).
  2. For other files: a fast hash of the file content (e.g., the first 8 bytes of a SHA-256, hex-encoded) would be more stable.

Not a blocker since sirv uses the same strategy, but worth noting as a follow-up improvement.
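
A content-hash validator along those lines could look like this (a hedged sketch; the function name and the digest truncation are illustrative choices, not vinext code):

```ts
import { createHash } from "node:crypto";

// Sketch of suggestion 2 above: hash the file bytes once at startup so the
// validator survives mtime changes across Docker/CI copies. Truncating the
// hex digest keeps the header compact while staying collision-resistant
// enough for a cache validator.
function contentEtag(content: Uint8Array): string {
  const hash = createHash("sha256").update(content).digest("hex").slice(0, 16);
  return `W/"${hash}"`;
}
```

The same bytes always yield the same ETag regardless of when or how the file was copied into the deploy image.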

```ts
if (entry.isDirectory()) {
  yield* walkFilesWithStats(fullPath, base);
} else if (entry.isFile()) {
  const stat = await fsp.stat(fullPath);
```
Contributor

Unnecessary sequential stat() — could batch for faster startup.

readdir({ withFileTypes: true }) doesn't provide size/mtimeMs, so the stat() call is indeed necessary. However, since each stat() is awaited sequentially inside the for loop, a directory with 500 files would make 500 sequential syscalls during startup.

Consider collecting the file paths from readdir and batching the stat() calls with Promise.all (perhaps in chunks of 50-100 to avoid fd exhaustion). This would meaningfully reduce startup latency for large dist/client/ directories.

Non-blocking — the current approach is correct, just slower than it needs to be.
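
The batching idea could be sketched like this (illustrative; statFn is injected so the sketch stays filesystem-free — in the real cache it would be fsp.stat, and the chunk size bounds concurrent file descriptors):

```ts
// Sketch of chunked stat batching: up to `chunkSize` calls in flight at a
// time instead of one fully sequential await per file.
async function statInChunks<T>(
  paths: string[],
  statFn: (p: string) => Promise<T>,
  chunkSize = 64,
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < paths.length; i += chunkSize) {
    const chunk = paths.slice(i, i + chunkSize);
    results.push(...(await Promise.all(chunk.map(statFn))));
  }
  return results;
}
```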


```ts
// Pick the best precompressed variant: zstd → br → gzip → original.
// Each variant has pre-computed headers — zero string building.
// Accept-Encoding is already lowercase in Node.js IncomingMessage.
```
Contributor

Nit: Comment is incorrect. Accept-Encoding is NOT guaranteed to be lowercase in Node.js IncomingMessage. Node.js lowercases header names (keys), but header values preserve their original casing. So req.headers["accept-encoding"] gives you the value as-sent by the client.

In practice this is fine because .includes("zstd") etc. will match the standard lowercase tokens that browsers send. But the comment is misleading.

Suggested change

```diff
- // Accept-Encoding is already lowercase in Node.js IncomingMessage.
+ // Accept-Encoding header values from browsers use lowercase tokens.
```

```ts
if (variant.buffer) {
  res.end(variant.buffer);
} else {
  fs.createReadStream(variant.path).pipe(res);
```
Contributor

No error handling on createReadStream().pipe().

If the file is deleted between startup (when it was cached) and request time, createReadStream will emit an error that crashes the connection without a proper HTTP response. The slow path at line 490 uses pipeline() which handles errors; this fast path should too.

Suggested change

```diff
- fs.createReadStream(variant.path).pipe(res);
+ pipeline(fs.createReadStream(variant.path), res, () => {});
```

(Or alternatively, wrap in a try/catch and send a 500 if the stream errors.)


```ts
const brotliCompress = promisify(zlib.brotliCompress);
const gzip = promisify(zlib.gzip);
const zstdCompress = promisify(zlib.zstdCompress);
```
Contributor

Node.js version concern: zlib.zstdCompress is experimental before Node 22.15.

The repo requires engines.node >= 22, but zlib.zstdCompress was experimental until Node 22.15.0 (May 2025). Users on Node 22.0–22.14 will get ExperimentalWarning runtime warnings during builds.

Options:

  1. Bump engines.node to >=22.15 (aligns with stable zstd)
  2. Wrap in try/catch and gracefully skip .zst generation if the API isn't available or is experimental
  3. Accept the warnings (since the API does work)

Worth documenting the decision either way.
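
Option 2 could be sketched as a one-time feature check (illustrative; the constant and helper names are not from the PR):

```ts
import * as zlib from "node:zlib";

// Sketch of graceful degradation: detect zstd support once and skip .zst
// generation when the API is missing (Node < 22.15 shipped zstd late and
// as experimental).
const HAS_ZSTD = typeof (zlib as { zstdCompress?: unknown }).zstdCompress === "function";

function plannedVariants(): string[] {
  const variants = [".br", ".gz"];
  if (HAS_ZSTD) variants.push(".zst");
  return variants;
}
```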

```ts
  totalCompressedBytes: 0,
};

const compressionWork: Promise<void>[] = [];
```
Contributor

Unbounded concurrency for large asset directories.

All compression work is pushed to compressionWork and then awaited with Promise.all. For a large app with hundreds of assets, this means hundreds of concurrent brotli q11 compressions (CPU-intensive) running simultaneously, which could spike memory usage and starve the event loop.

Consider limiting concurrency (e.g., process 8-16 files at a time). A simple approach:

```ts
const CONCURRENCY = Math.min(os.cpus().length, 16);
```

with a semaphore or chunked processing.
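
One way to sketch such a limiter (illustrative names; a pooling utility like p-limit would do the same job):

```ts
// Minimal concurrency limiter: `limit(max)` returns a wrapper that lets at
// most `max` tasks run at once; later tasks wait in a FIFO queue.
function limit(max: number) {
  let active = 0;
  const queue: Array<() => void> = [];
  const release = () => {
    active--;
    queue.shift()?.(); // wake the next queued task, if any
  };
  return async function run<T>(task: () => Promise<T>): Promise<T> {
    if (active >= max) await new Promise<void>((wake) => queue.push(wake));
    active++;
    try {
      return await task();
    } finally {
      release();
    }
  };
}
```

Usage would look like `const run = limit(8); await Promise.all(files.map((f) => run(() => compressOne(f))));` where compressOne is whatever compresses a single asset.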

```ts
// .br/.gz/.zst files are served directly by the production server —
// zero per-request compression overhead for immutable build output.
const clientDir = path.resolve("dist", "client");
const precompressResult = await precompressAssets(clientDir);
```
Contributor

No opt-out mechanism. As @james-elicx noted, this runs unconditionally on every build. For users who deploy behind a CDN that handles compression (e.g., Cloudflare), the build-time compression adds overhead with no benefit. Brotli q11 is particularly slow — a large app with many JS bundles could add 10+ seconds to build time.

Suggestions:

  1. Add a CLI flag: vinext build --no-precompress
  2. Add a plugin option: vinext({ precompress: false }) in vite.config.ts
  3. Default to enabled but respect a config option to disable

The Cloudflare Workers deployment target is particularly relevant here — Cloudflare's CDN handles compression at the edge, so precompressing for the Node.js prod server doesn't help Workers users at all.

```ts
if (!entry) return false;

// 304 Not Modified: string compare against pre-computed ETag
if (req.headers["if-none-match"] === entry.etag) {
```
Contributor

304 response includes headers that should arguably be excluded.

The notModifiedHeaders include Content-Type, which RFC 9110 says SHOULD NOT be sent on a 304. In practice browsers handle this fine, but it's worth being aware of. The test at line 362 of serve-static.test.ts correctly verifies that Content-Length is absent, which is good.

Non-blocking, but for strict RFC compliance, notModifiedHeaders should only include ETag, Cache-Control, Vary, and Date.
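
A strict 304 header set per that reading of RFC 9110 could be derived by filtering (a sketch; the canonical-case header names follow the snippets in this review, not necessarily the actual field names):

```ts
// Sketch only: keep the headers RFC 9110 permits on a 304, drop the rest
// (Content-Type, Content-Length, etc.).
const ALLOWED_ON_304 = new Set(["ETag", "Cache-Control", "Vary", "Date"]);

function strictNotModifiedHeaders(full: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(full)) {
    if (ALLOWED_ON_304.has(name)) out[name] = value;
  }
  return out;
}
```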

```ts
/** Zstandard precompressed variant, if .zst file exists. */
zst?: FileVariant;

// Legacy accessors for backwards compatibility with tests
```
Contributor

"Legacy accessors" smell — consider removing if this is new code.

These fields (resolvedPath, size, contentType, cacheControl, brPath, brSize, gzPath, gzSize, zstPath, zstSize) duplicate information already available through the original/br/gz/zst variant objects. The comment says they're for "backwards compatibility with tests," but since the tests in this PR are all new, there is no backwards compatibility requirement.

Removing them would simplify the interface and eliminate the dual source of truth. The tests in static-file-cache.test.ts could be updated to access entry.original.headers["Content-Length"] instead of entry.size, etc.

@ask-bonk
Contributor

ask-bonk bot commented Mar 23, 2026

Review posted on PR #641. Here's a summary of what I flagged:

Correctness issues:

  1. Missing Vary: Accept-Encoding on the original variant when compressed siblings exist — can cause incorrect shared cache behavior
  2. createReadStream().pipe() without error handling in the fast path (line 447) — if a file is deleted between startup and request time, the connection crashes without a proper HTTP response

Design/robustness concerns:
3. Mtime-based ETag fragility in containerized/CI deployments where file mtimes change across deploys
4. Unbounded compression concurrency in precompressAssets — hundreds of concurrent brotli q11 operations could spike memory
5. No opt-out mechanism for precompression — runs unconditionally, but Cloudflare Workers users don't benefit (CDN handles compression at the edge) and brotli q11 is slow
6. Node.js 22.0–22.14 experimental warnings for zlib.zstdCompress

Code quality:
7. Misleading comment about Accept-Encoding being lowercase in IncomingMessage (header names are lowercased, values are not)
8. Legacy accessor fields on StaticFileEntry that duplicate information from the variant objects — unnecessary since both the interface and tests are new
9. 304 response includes Content-Type which RFC 9110 says SHOULD NOT be sent (non-blocking, browsers handle it fine)
10. Sequential stat() calls in walkFilesWithStats could be batched for faster server startup


- Add Vary: Accept-Encoding to original variant when compressed siblings exist
- Use pipeline() instead of .pipe() for stream error safety
- Remove legacy accessor fields from StaticFileEntry (all new code)
- Exclude Content-Type from 304 responses per RFC 9110
- Fix misleading Accept-Encoding comment in prod-server
- Batch stat() calls in walkFilesWithStats (chunks of 64)
- Bound precompression concurrency to availableParallelism (max 16)
- Gracefully skip zstd when zlib.zstdCompress unavailable (Node <22.15)
- Add --no-precompress CLI flag to skip build-time compression
- Drop brotli to q5 (fast, nearly same ratio as q11 at build time)
- Max out zstd at level 22 (build time, no reason to hold back)
@NathanDrake2406
Contributor Author

> Nice idea! Thanks for the PR!
>
> I wonder if we should gate this behind an option instead of being on-by-default, considering that Next.js doesn't do this... I wonder if compression level is something we should let users control as well.
>
> It would be good to get a progress bar like we have in pre-rendering too if there isn't one already.

I added a CLI option for it. Tbh I was just in the mood to optimise and halfway through I realised it doesn't matter for Workers anyway lmao. It's okay tho, someday a dev who isn't on Workers will benefit from it :copium:

@james-elicx
Collaborator

/bigbonk review

Contributor

@ask-bonk ask-bonk bot left a comment

Review: Build-time precompression + startup metadata cache

Strong contribution. The architecture is well-designed — build-time compression, startup cache, zero-allocation hot path — and the previous review's main correctness issue (missing Vary on original variant, batched stat() calls) has been addressed. Test coverage is thorough (52 new tests). I have a few remaining issues, one of which is a correctness bug.

Issues

1. Doc comment / constant mismatch on compression levels (misleading, multiple locations)
The file header comment in precompress.ts says "brotli q5" and "zstd l22", and the CLI comment says "brotli q5 + gzip l9 + zstd l22", but the PR title and description claim "brotli q11" and "zstd l19". The actual code uses brotli q5 and zstd l22. The PR description benchmarks cite "brotli q11" results. Either the code or the description is wrong — this needs to be reconciled so users know what quality level they're getting.

2. createZstdCompress called unconditionally in createCompressor — crashes on Node < 22.15
The build-time code in precompress.ts correctly feature-detects zlib.zstdCompress before using it. But the runtime createCompressor function in prod-server.ts calls zlib.createZstdCompress() unconditionally in the "zstd" case. If negotiateEncoding returns "zstd" on a Node version where zstd isn't available (or is experimental), this will throw at runtime. The negotiation should check zstd availability before advertising support.

3. pipeline error callback silently swallows errors on the fast path
At prod-server.ts:447, pipeline(fs.createReadStream(variant.path), res, () => {}) silently ignores all errors. If the file is deleted between cache creation and request time, createReadStream emits an error that never gets handled — the client gets a broken connection with no HTTP error response. The slow path and the compressed-response path both use the same pattern, but those at least have compressor in the pipeline which triggers a res close. The fast path (buffer miss, large file) should handle errors — at minimum log, ideally send 500 if headers haven't been sent.

4. Slow path doesn't serve HEAD correctly
The fast path (with cache) correctly returns headers-only for HEAD requests. The slow path (no cache, lines 452-500) has no HEAD check — it always streams the full body. This means if StaticFileCache.create() ever returns an empty cache (race, error), HEAD requests regress to streaming the full file.

5. resolvedClient traversal check is skipped on the fast path
The fast path trusts the cache completely — if a pathname is in the cache, it's served. But the cache is keyed by "/{relativePath}" where relativePath comes from path.relative(base, fullPath). If walkFilesWithStats somehow yields a path outside clientDir (e.g., symlink), the traversal guard is bypassed. Low risk since readdir + isFile() won't follow cross-device symlinks by default, but worth noting — the slow path has an explicit startsWith(resolvedClient + path.sep) check.

```ts
/**
 * Build-time precompression for hashed static assets.
 *
 * Generates .br (brotli q5), .gz (gzip l9), and .zst (zstd l22) files
```
Contributor

Doc/code mismatch on compression levels. The header says "brotli q5" and "zstd l22", but the PR title and description benchmarks claim "brotli q11" and "zstd l19". The actual code uses BROTLI_PARAM_QUALITY: 5 (line 117) and ZSTD_c_compressionLevel: 22 (line 124).

The PR description's benchmark results cite "brotli q11, max quality" — if those numbers were measured with q11, they don't reflect what this code actually produces. Either:

  1. Update the code to use q11/l19 to match the PR description, or
  2. Update the PR description to accurately reflect q5/l22

q5 is a reasonable default (much faster than q11), but the discrepancy between docs and code will confuse anyone reading the PR description alongside the source.

```ts
const accept = req.headers["accept-encoding"];
if (!accept || typeof accept !== "string") return null;
const lower = accept.toLowerCase();
if (lower.includes("zstd")) return "zstd";
```
Contributor

Bug: negotiateEncoding advertises zstd without checking runtime availability.

This returns "zstd" if the client sends it in Accept-Encoding, but createCompressor then calls zlib.createZstdCompress() unconditionally. On Node versions where zstd isn't available (or is experimental and disabled), this will throw.

The build-time code in precompress.ts correctly feature-detects zstdCompress before using it. The same pattern should apply here:

Suggested change

```diff
- if (lower.includes("zstd")) return "zstd";
+ if (lower.includes("zstd") && typeof zlib.createZstdCompress === "function") return "zstd";
```

Alternatively, hoist a const HAS_ZSTD = typeof zlib.createZstdCompress === "function" constant and use it in both negotiateEncoding and createCompressor.

```ts
if (variant.buffer) {
  res.end(variant.buffer);
} else {
  pipeline(fs.createReadStream(variant.path), res, () => {});
```
Contributor

Error swallowed silently — broken connections on file-not-found.

If the cached file is deleted between startup and request time (e.g., a deploy that replaces dist/ while the server is running), createReadStream emits an error. The empty callback () => {} silently discards it, and the client gets a broken/hung connection with no HTTP response.

Consider:

Suggested change

```diff
- pipeline(fs.createReadStream(variant.path), res, () => {});
+ pipeline(fs.createReadStream(variant.path), res, (err) => {
+   if (err && !res.headersSent) {
+     res.writeHead(500);
+     res.end("Internal Server Error");
+   }
+ });
```

Note: res.headersSent will be true here since writeHead was called above. So the realistic fix is to just not crash silently — at minimum a console.error so operators can diagnose the issue. The same pattern exists on line 490 and 499 in the slow path.


```ts
res.writeHead(200, baseHeaders);
fs.createReadStream(resolvedStaticFile).pipe(res);
res.writeHead(200, {
```
Contributor

Slow path missing HEAD request handling.

The fast path (cache hit) correctly returns headers-only for HEAD requests at line 437-440. The slow path here always streams the full body regardless of req.method. If the cache is empty or unavailable, HEAD requests will stream the entire file.

Suggested change

```diff
- res.writeHead(200, {
+ res.writeHead(200, {
+   ...baseHeaders,
+   "Content-Length": String(resolved.size),
+ });
+ if (req.method === "HEAD") {
+   res.end();
+   return true;
+ }
+ pipeline(fs.createReadStream(resolved.path), res, () => {});
+ return true;
```


```ts
// Pick the best precompressed variant: zstd → br → gzip → original.
// Each variant has pre-computed headers — zero string building.
// Accept-Encoding header values from browsers use lowercase tokens.
```
Contributor

Nit: the comment says "Accept-Encoding header values from browsers use lowercase tokens" which is accurate in practice but slightly misleading — the HTTP spec says encoding tokens are case-insensitive, and browsers happen to send lowercase. The code handles this correctly by using .includes("zstd") etc. on the raw value (which browsers always send lowercase). Just noting for accuracy since the earlier review flagged the same comment.

```ts
const encoding = negotiateEncoding(req);
if (encoding) {
  const fileStream = fs.createReadStream(resolvedStaticFile);
  const compressor = createCompressor(encoding);
```
Contributor

Slow path also missing HEAD handling for compressed responses.

When the slow path serves a compressed response, it unconditionally pipes the compressed stream. HEAD requests should get headers only.

Suggested change

```diff
- const compressor = createCompressor(encoding);
+ const compressor = createCompressor(encoding);
+ res.writeHead(200, {
+   ...baseHeaders,
+   "Content-Encoding": encoding,
+   Vary: "Accept-Encoding",
+ });
+ if (req.method === "HEAD") {
+   res.end();
+   return true;
+ }
+ pipeline(fs.createReadStream(resolved.path), compressor, res, () => {});
```

```ts
prerenderResult = await runPrerender({ root: process.cwd() });
}

// Precompress hashed assets (brotli q5 + gzip l9 + zstd l22). These
```
Contributor

Same doc/code mismatch: comment says "brotli q5 + gzip l9 + zstd l22" but the PR description benchmarks claim "brotli q11". Should match whatever the actual values are in precompress.ts.

```ts
  continue;

const fullPath = path.join(assetsDir, relativePath);
const content = await fsp.readFile(fullPath);
```
Contributor

Minor: all file contents are eagerly read into memory before compression begins. For a project with many large compressible assets, this could spike memory. Consider reading and compressing files within the chunked loop instead of pre-reading all of them. Currently:

  1. Read ALL files into files[] array (potentially hundreds of MB)
  2. Then compress in chunks

If you move the readFile into the chunk processing loop, peak memory drops from O(total_assets) to O(CONCURRENCY * max_file_size).
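A sketch of that bounded-chunk pattern (`inChunks` is a hypothetical helper; the PR uses an inline loop): only paths are collected up front, and each file's Buffer lives only while its chunk is being processed.

```typescript
// Run `work` over `items` with at most `concurrency` in flight at once.
// Moving the readFile into `work` bounds peak memory at
// O(concurrency * max_file_size) instead of O(total_asset_size).
async function inChunks<T>(
  items: T[],
  concurrency: number,
  work: (item: T) => Promise<void>,
): Promise<void> {
  for (let i = 0; i < items.length; i += concurrency) {
    // Each slice completes fully before the next begins.
    await Promise.all(items.slice(i, i + concurrency).map(work));
  }
}

// Usage sketch against the precompress loop (fsp/compressFile assumed):
// await inChunks(paths, CONCURRENCY, async (p) => {
//   const content = await fsp.readFile(p); // read inside the chunk
//   await compressFile(p, content);        // Buffer collectable afterwards
// });
```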

```ts
const cacheControl = isHashed
  ? "public, max-age=31536000, immutable"
  : "public, max-age=3600";
const etag = `W/"${fileInfo.size}-${fileInfo.mtimeMs}"`;
```
Contributor

Previous review flagged mtime-based ETag fragility in containerized deployments. This is still using W/"${size}-${mtimeMs}". Since this PR already addressed other review feedback (Vary on original, batched stats), noting this as a known follow-up — it's not a blocker since sirv uses the same approach, but worth tracking.
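If the mtime fragility does become a real problem, a content-hash ETag is one option for the follow-up (a sketch, not the PR's implementation): since the cache already reads small files into memory at startup, the hash would come essentially for free there.

```typescript
import { createHash } from "node:crypto";

// Strong ETag derived from file contents: survives container rebuilds
// where mtimes change but bytes do not. First 16 hex chars of SHA-256
// keep the header short while making collisions practically impossible
// for cache-validation purposes.
function contentETag(content: Buffer): string {
  return `"${createHash("sha256").update(content).digest("hex").slice(0, 16)}"`;
}
```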

```ts
  await captured.ended;
  expect(captured.headers["Content-Encoding"]).toBe("br");
});
});
```
Contributor

Good coverage overall. Two test gaps to consider adding:

  1. Slow path test (no cache): All tests pass a StaticFileCache to tryServeStatic. There are no tests for the cache-less fallback path (lines 452-500 of prod-server.ts), which has the HEAD and error handling issues I flagged. At minimum, one test calling tryServeStatic(req, res, clientDir, pathname, compress) without a cache argument would exercise that path.

  2. Encoded pathname test: The fast path has special handling for %-encoded pathnames (lines 391-403), including a decodeURIComponent that can throw. A test with a malformed percent-encoded pathname (e.g., "/%E0%A4%A") would verify the catch returns false gracefully.
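The decode guard that second test would exercise is essentially this (a sketch; the PR's surrounding code differs):

```typescript
// decodeURIComponent throws URIError on truncated percent sequences
// such as "/%E0%A4%A", so the fast path must wrap it and bail out.
function safeDecode(pathname: string): string | null {
  try {
    return decodeURIComponent(pathname);
  } catch {
    return null; // caller returns false → request falls through to SSR/404
  }
}
```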

@ask-bonk
Contributor

ask-bonk bot commented Mar 24, 2026

Review posted. Here's a summary of what I flagged:

Key issues

  1. Compression level doc/code mismatch — The PR description claims brotli q11 and zstd l19, but the actual code uses brotli q5 and zstd l22. The benchmarks may not reflect what this code actually produces.

  2. Runtime zstd crash on older Node (bug) — negotiateEncoding returns "zstd" and createCompressor calls zlib.createZstdCompress() unconditionally, but there's no feature detection like the build-time code has. Will throw on Node versions without stable zstd.

  3. Silent error swallowing in pipeline — The fast-path pipeline(createReadStream, res, () => {}) silently discards errors if a cached file is deleted between startup and request time.

  4. Slow path missing HEAD request handling — The fast path (with cache) correctly skips the body for HEAD requests, but the slow path (no cache) always streams the full file body regardless of method.

  5. Slow path also missing HEAD for compressed responses — Same issue in the compressed branch of the slow path.
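The runtime gate called for in item 2 can be as small as the following sketch (names assumed; the `as any` cast sidesteps type definitions on Node versions without the zstd APIs):

```typescript
import zlib from "node:zlib";

// Detect zstd support once at module load; only advertise/accept zstd
// when the running Node actually exposes the zstd compressor.
const HAS_ZSTD = typeof (zlib as any).createZstdCompress === "function";

function supportedEncodings(): string[] {
  // gzip and brotli are available on every supported Node version.
  return HAS_ZSTD ? ["zstd", "br", "gzip"] : ["br", "gzip"];
}
```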

Positive notes

  • Previous review's Vary: Accept-Encoding on original variant issue was fixed
  • Batched stat() calls were implemented as suggested
  • 52 new tests with good coverage of the fast path
  • Clean separation between precompress.ts, static-file-cache.ts, and tryServeStatic
  • The --no-precompress CLI flag addresses the maintainer feedback about opt-out

github run

@NathanDrake2406
Contributor Author

@james-elicx should I use 11 or 5?

@james-elicx
Collaborator

A higher compression level would be good at build-time, but 11 can also be a bit slow, so 5 is probably a good starting point.

@james-elicx
Collaborator

Can the CLI flag be opt-in instead of opt-out as well?

- Gate negotiateEncoding zstd on runtime availability (HAS_ZSTD)
- Add ZSTD_e_flush for streaming mode (progressive SSR decodability)
- Serve HEAD without body on slow path (compressed + uncompressed)
- Handle pipeline errors instead of silently swallowing them
- Lowercase Accept-Encoding in fast path per RFC 9110
- Fix stale comments: add zstd to module header and variant detection
- Add tests for slow path, HEAD, and malformed percent-encoded pathnames

Move build-time precompression from CLI post-step into a writeBundle
plugin hook (vinext:precompress). The plugin already knows the
deployment target — auto-skips for Cloudflare Workers and Nitro where
the CDN handles compression at the edge.

- Add precompress option to VinextOptions: 'auto' | boolean
- CLI --no-precompress forwards via VINEXT_NO_PRECOMPRESS env var
- Works for any build invocation (CLI, API, CI), not just vinext build

- precompress: remove false idempotency claim (overwrites, not skips)
- static-file-cache: "frozen objects" → "reusable objects" (no freeze)
- static-file-cache: fix unrealistic compression ratio example
@NathanDrake2406
Contributor Author

> Can the CLI flag be opt-in instead of opt-out as well?

I added auto-detection of the edge targets (Workers and Nitro) instead. It seems more in line with the "it just works" philosophy than opting in, which is kinda nerdy.

@james-elicx
Collaborator

> I added an auto detect edge for workers and nitro instead. It seems more "it just works" philosophy than opting in which is kinda nerdy

The reason I asked is that one of the goals is fast build-time, and not everyone may need this out-of-the-box - there's always the option to re-evaluate turning on by default in the future as well.

I ran this against the fumadocs example, which has 12 assets to pre-compress, on an M4 Max.

On the main branch I get the following:

```
➜ vinext (main) ✗ hyperfine --runs 5 "vp exec --filter fumadocs-docs-template vinext build"
Benchmark 1: vp exec --filter fumadocs-docs-template vinext build
  Time (mean ± σ):      2.790 s ±  0.117 s    [User: 3.834 s, System: 1.794 s]
  Range (min … max):    2.710 s …  2.997 s    5 runs
```

And on this branch I get the following:

```
➜ vinext (perf/precompressed-static-serving) ✗ hyperfine --runs 5 "vp exec --filter fumadocs-docs-template vinext build"
Benchmark 1: vp exec --filter fumadocs-docs-template vinext build
  Time (mean ± σ):      3.468 s ±  0.073 s    [User: 4.215 s, System: 2.506 s]
  Range (min … max):    3.368 s …  3.574 s    5 runs
```

That was almost a 25% increase in build time for me. For apps that have lots of static assets, that could be a meaningful amount of time and impact to DX, likewise with less performant machines.

@NathanDrake2406
Contributor Author

NathanDrake2406 commented Mar 25, 2026

> The reason I asked is that one of the goals is fast build-time, and not everyone may need this out-of-the-box - there's always the option to re-evaluate turning on by default in the future as well.
>
> [...]
>
> That was almost a 25% increase in build time for me. For apps that have lots of static assets, that could be a meaningful amount of time and impact to DX, likewise with less performant machines.

I did some more testing and found that the compression level itself wasn't optimised: the old zstd:22 was the cause of the 25% regression, not precompression itself. With the tuned levels it's only 34ms extra at build time compared to main, not 25%. Here are the numbers with the new levels (br:5, gz:8, zstd:8) on 100 assets:

Build-time cost: old vs new

| Config | Time | Brotli | Gzip | Zstd |
| --- | --- | --- | --- | --- |
| Old (br:5, gz:9, zstd:22) | 12,002ms | 262 KB (90.8%) | 297 KB (89.6%) | 225 KB (92.1%) |
| New (br:5, gz:8, zstd:8) | 34ms | 262 KB (90.8%) | 297 KB (89.6%) | 242 KB (91.5%) |

353x faster. Identical brotli/gzip output, zstd is +7% larger but still 91.5% compression.

Serving throughput (5000 reqs, 10 concurrent)

| Server | req/s | p50 | p99 | Transferred | vs send |
| --- | --- | --- | --- | --- | --- |
| vinext (this PR) | 30,064 | 0.3ms | 0.7ms | 11.8 MB | 2.0x |
| sirv (SvelteKit) | 17,759 | 0.5ms | 1.9ms | 12.8 MB | 1.2x |
| send (Next.js) | 14,930 | 0.6ms | 2.1ms | 139.0 MB | 1.0x |
| vinext (main) | 9,650 | 0.8ms | 3.6ms | 13.2 MB | 0.6x |

304 Not Modified

| Server | req/s |
| --- | --- |
| vinext (this PR) | 40,409 |
| sirv (SvelteKit) | 42,685 |

34ms build cost buys 3.1x faster serving vs main, 2x vs Next.js, 1.7x vs SvelteKit for 100 static assets. Could you change the levels to 5-8-8 and run the test again?
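For reference, the conditional-request check behind the 304 rows above boils down to something like this sketch (helper name assumed; not the PR's exact code). RFC 9110 allows a comma-separated list in If-None-Match and mandates weak comparison for 304s, so any `W/` prefix is stripped before comparing:

```typescript
// Return true when the client's cached ETag matches and a 304 can be sent.
function isNotModified(ifNoneMatch: string | undefined, etag: string): boolean {
  if (!ifNoneMatch) return false;
  if (ifNoneMatch.trim() === "*") return true;
  // Weak comparison: ignore the W/ prefix on both sides.
  const strip = (t: string) => t.trim().replace(/^W\//, "");
  return ifNoneMatch.split(",").some((t) => strip(t) === strip(etag));
}
```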

@NathanDrake2406
Contributor Author

I also did the chunking optimisation that Bonk suggested

@james-elicx
Collaborator

baseline

```
➜ vinext (main) ✗ hyperfine --warmup 3 --runs 5 "vp exec --filter fumadocs-docs-template vinext build"
Benchmark 1: vp exec --filter fumadocs-docs-template vinext build
  Time (mean ± σ):      2.401 s ±  0.014 s    [User: 3.880 s, System: 1.579 s]
  Range (min … max):    2.386 s …  2.423 s    5 runs
```

branch

```
➜ witty-engine (perf/precompressed-static-serving) ✔ hyperfine --warmup 3 --runs 5 "vp exec --filter fumadocs-docs-template vinext build"
Benchmark 1: vp exec --filter fumadocs-docs-template vinext build
  Time (mean ± σ):      2.727 s ±  0.018 s    [User: 4.245 s, System: 2.274 s]
  Range (min … max):    2.703 s …  2.750 s    5 runs
```

so it's about a 13.5% increase in build-time with the latest changes on my M4 Pro with a warm file-system cache (which is faster for fs ops than my M4 Max due to different security software).

I still would like this to be behind an opt-in precompress assets flag initially, and we can later re-visit turning it on by default after it's stabilised.

@james-elicx
Collaborator

/bigbonk review

Comment on lines +119 to +124
```ts
  gzip(content, { level: zlib.constants.Z_BEST_COMPRESSION }),
];
if (zstdCompress) {
  compressions.push(
    zstdCompress(content, {
      params: { [zlib.constants.ZSTD_c_compressionLevel]: 22 },
```
Collaborator

22 is different to the number you referenced in your reply earlier - was that change reverted?

Contributor

@ask-bonk ask-bonk bot left a comment

Review: Build-time precompression + static file cache

Good work overall — the architecture is sound (build-time compression → startup cache → zero-allocation hot path), test coverage is thorough (52+ tests), and the separation between precompress.ts, static-file-cache.ts, and the refactored tryServeStatic is clean. Previous review feedback has been addressed well (Vary on original, batched stats, pipeline error handling, HEAD on slow path, zstd feature detection).

Two items need resolution before merge:

1. Maintainer asked for opt-in, code is still opt-out (blocker)

@james-elicx explicitly asked (twice) for this to be opt-in rather than the current 'auto' default that enables precompression for all non-edge builds. The latest benchmark shows a 13.5% build-time regression on fumadocs even with the tuned compression levels. The current --no-precompress flag is opt-out. This needs to be flipped so precompression is off by default and users enable it with --precompress or precompress: true.

2. Pipeline error handler is unreachable on fast path (bug)

The pipeline error callback at line 450-454 checks !res.headersSent, but res.writeHead(200, ...) was already called at line 435/437. So res.headersSent is always true when the callback fires, making the 500 response dead code. If createReadStream fails (file deleted between startup and request), the error is silently swallowed and the client gets a broken/hung connection. At minimum, destroy the response socket or log the error.

See inline comments for specifics and two smaller items.

```ts
if (opt === false) return;
if (process.env.VINEXT_NO_PRECOMPRESS === "1") return;

if (opt === "auto" && (hasCloudflarePlugin || hasNitroPlugin)) return;
```
Contributor

Blocker: precompression is still on-by-default for Node.js targets.

@james-elicx asked twice for this to be opt-in, most recently on Mar 28 after benchmarking a 13.5% build-time regression on fumadocs. The current logic enables precompression by default when there's no Cloudflare/Nitro plugin detected.

To make it opt-in, flip the 'auto' default to disabled:

Suggested change

```diff
-if (opt === "auto" && (hasCloudflarePlugin || hasNitroPlugin)) return;
+if (opt === "auto") return;
```

Then users who want it pass precompress: true in their vite config or --precompress on the CLI. You'd also need to add --precompress as the positive flag in cli.ts (the current --no-precompress only makes sense for opt-out semantics).

Comment on lines +434 to +455
```ts
if (extraHeaders) {
  res.writeHead(200, { ...variant.headers, ...extraHeaders });
} else {
  res.writeHead(200, variant.headers);
}

if (req.method === "HEAD") {
  res.end();
  return true;
}

// Small files: serve from in-memory buffer (no fd open/close overhead).
// Large files: stream from disk to avoid holding them in the heap.
if (variant.buffer) {
  res.end(variant.buffer);
} else {
  pipeline(fs.createReadStream(variant.path), res, (err) => {
    if (err && !res.headersSent) {
      res.writeHead(500);
      res.end();
    }
  });
```
Contributor

Bug: error handler is unreachable. res.writeHead(200, ...) at line 435/437 sets res.headersSent = true. So when the pipeline callback fires with an error, the !res.headersSent guard is always false — the 500 response is dead code. The client gets a broken/hung connection.

The same pattern exists in the slow path at lines 502-506 and 520-524 — those also call writeHead before pipeline.

Since headers are already sent, you can't send a 500. But you should at least destroy the socket so the client gets a clean connection reset instead of hanging:

Suggested change

```diff
 if (variant.buffer) {
   res.end(variant.buffer);
 } else {
   pipeline(fs.createReadStream(variant.path), res, (err) => {
-    if (err && !res.headersSent) {
-      res.writeHead(500);
-      res.end();
+    if (err) {
+      res.destroy();
     }
   });
 }
```

Apply the same fix to the slow-path pipeline calls.

Comment on lines +84 to +107
```ts
// Collect compressible files first, then process in bounded chunks
const files: { fullPath: string; content: Buffer }[] = [];

for await (const relativePath of walkFiles(assetsDir)) {
  const ext = path.extname(relativePath).toLowerCase();

  // Skip non-compressible types and already-compressed variants
  if (!COMPRESSIBLE_EXTENSIONS.has(ext)) continue;
  if (
    relativePath.endsWith(".br") ||
    relativePath.endsWith(".gz") ||
    relativePath.endsWith(".zst")
  )
    continue;

  const fullPath = path.join(assetsDir, relativePath);
  const content = await fsp.readFile(fullPath);

  if (content.length < MIN_SIZE) continue;

  files.push({ fullPath, content });
  result.filesCompressed++;
  result.totalOriginalBytes += content.length;
}
```
Contributor

Memory: all file contents are eagerly read into memory before compression begins.

The collect-then-compress pattern means peak memory is O(total_compressible_asset_size) — for a large app with 50MB of JS/CSS in assets/, all 50MB is held in files[] simultaneously, even though files are compressed in chunks of CONCURRENCY.

If you move the readFile into the chunk processing loop, peak memory drops to O(CONCURRENCY * max_file_size). The walkFiles generator already yields lazily, so you'd just need to collect paths (not contents) in the first pass, then read + compress within each chunk.

Not blocking, but worth addressing before this hits projects with large asset directories.

```ts
/**
 * Build-time precompression for hashed static assets.
 *
 * Generates .br (brotli q5), .gz (gzip l9), and .zst (zstd l22) files
```
Contributor

Nit: PR description says "brotli q11" and "zstd l19" but code uses q5 and l22. The header comment here is accurate (q5, l22), but the PR title and description benchmarks still claim q11/l19. The PR description should be updated to match the actual compression levels so reviewers and future readers aren't confused about what was measured.

(The maintainer agreed q5 is a good default, so the code is fine — just the PR description is stale.)

@ask-bonk
Contributor

ask-bonk bot commented Mar 28, 2026

Review submitted with REQUEST_CHANGES. Here's a summary of what I flagged:

Blockers (2):

  1. Precompression is still on-by-default@james-elicx asked twice for this to be opt-in. The current 'auto' default enables it for all non-edge Node.js builds, but the latest benchmark still shows a 13.5% build-time regression on fumadocs. Needs to be flipped so users explicitly enable it with --precompress or precompress: true.

  2. Pipeline error handler is unreachable — In prod-server.ts:450-454, the !res.headersSent guard is dead code because writeHead(200) was already called at line 435/437. If createReadStream fails after headers are sent, the error is silently swallowed and the client hangs. Same pattern in the slow path. Fix: call res.destroy() on error instead of trying to send a 500.

Non-blocking (2):

  1. Eager readFile in precompress — All file contents are loaded into memory before compression starts. Peak memory is O(total_compressible_size) instead of O(CONCURRENCY * max_file_size). Moving readFile into the chunk loop would fix this.

  2. Stale PR description — PR title/description still claim "brotli q11" and "zstd l19" but the actual code uses q5 and l22. The code is fine per maintainer agreement — just the description needs updating.

github run

Comment on lines +91 to +97
```ts
if (!COMPRESSIBLE_EXTENSIONS.has(ext)) continue;
if (
  relativePath.endsWith(".br") ||
  relativePath.endsWith(".gz") ||
  relativePath.endsWith(".zst")
)
  continue;
```
Collaborator

aren't those 3 checks already caught by the compressible_extensions if statement?

```ts
}
await Promise.all(writes);

result.totalCompressedBytes += brContent.length;
```
Collaborator

shouldn't this track all three instead of just brotli?

Comment on lines +539 to +548
```ts
const stat = await statIfFile(staticFile);
if (stat) return { path: staticFile, size: stat.size };

const htmlFallback = staticFile + ".html";
const htmlStat = await statIfFile(htmlFallback);
if (htmlStat) return { path: htmlFallback, size: htmlStat.size };

const indexFallback = path.join(staticFile, "index.html");
const indexStat = await statIfFile(indexFallback);
if (indexStat) return { path: indexFallback, size: indexStat.size };
```
Collaborator

in the worst case scenario this could be somewhat slow - potential to cache knowledge of existence of the files, or have the server be aware of the static assets that exist that it can lookup against instead of doing fs ops?
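A sketch of that lookup-table approach (names hypothetical): the server builds a Set of every relative asset path at startup, so the fallback chain becomes three Set lookups instead of up to three stat() calls.

```typescript
// Resolve a request pathname against a startup-built Set of known assets:
// exact file → file.html → dir/index.html, with zero fs ops per request.
function resolveStatic(known: Set<string>, pathname: string): string | null {
  if (known.has(pathname)) return pathname;
  if (known.has(pathname + ".html")) return pathname + ".html";
  const index = pathname.replace(/\/$/, "") + "/index.html";
  if (known.has(index)) return index;
  return null;
}
```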

```ts
}> {
  let entries;
  try {
    entries = await fsp.readdir(dir, { withFileTypes: true });
```
Collaborator

would the recursive option help here potentially?
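A sketch of what that could look like (assumes Node 20.12+ for `Dirent.parentPath`; `recursive: true` on `fsp.readdir` landed in Node 20.1):

```typescript
import fsp from "node:fs/promises";
import path from "node:path";

// Replace the hand-rolled walker with a single recursive readdir call,
// returning paths relative to `dir`.
async function listFilesRecursive(dir: string): Promise<string[]> {
  const entries = await fsp.readdir(dir, {
    recursive: true,
    withFileTypes: true,
  });
  return entries
    .filter((e) => e.isFile())
    .map((e) => path.relative(dir, path.join(e.parentPath, e.name)));
}
```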

```ts
const cacheControl = isHashed
  ? "public, max-age=31536000, immutable"
  : "public, max-age=3600";
const etag = `W/"${fileInfo.size}-${fileInfo.mtimeMs}"`;
```
Collaborator

would it be worth basing this on asset hash or file name as well?

```ts
 * Can also be disabled via `--no-precompress` CLI flag or
 * `VINEXT_NO_PRECOMPRESS=1` environment variable.
 */
precompress?: "auto" | boolean;
```
Collaborator

vite config option could support custom compression levels as well as opting in
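One hypothetical shape for that (not the PR's actual API): an object form makes the option opt-in while exposing per-codec levels, with defaults matching the levels discussed above.

```typescript
// Hypothetical extension of the `precompress` option: off unless the
// user explicitly enables it, optionally with tuned per-codec levels.
interface PrecompressOptions {
  brotliQuality?: number; // default 5 per maintainer discussion
  gzipLevel?: number;     // default 8
  zstdLevel?: number;     // default 8
}

type PrecompressSetting = boolean | PrecompressOptions;

function normalizePrecompress(
  opt: PrecompressSetting | undefined,
): Required<PrecompressOptions> | null {
  if (!opt) return null; // opt-in semantics: disabled unless set
  const o = opt === true ? {} : opt;
  return {
    brotliQuality: o.brotliQuality ?? 5,
    gzipLevel: o.gzipLevel ?? 8,
    zstdLevel: o.zstdLevel ?? 8,
  };
}
```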

```ts
}

// Process in chunks to bound concurrent CPU-heavy compressions
for (let i = 0; i < files.length; i += CONCURRENCY) {
```
Collaborator

can we show a progress bar like we do for prerendering please?
