
Using unix sockets #1

Open
dragosv wants to merge 4 commits into main from unixsocket

Conversation


@dragosv dragosv commented Feb 23, 2026

Description

Use Unix domain sockets directly instead of the dusty HTTP library.

Type of Change

  • [x] Refactoring

Testing

Describe the tests you ran to verify your changes:

  • [x] Integration tests

Summary by CodeRabbit

  • Documentation

    • Updated architecture/implementation guides, README and examples to describe a built-in HTTP/1.1 client over Unix domain sockets and simplified quick-start snippets.
  • Refactor

    • Removed external HTTP and async runtime dependencies; core networking and I/O now use the standard library.
  • Tests

    • Simplified integration tests by removing external runtime initialization; tests now rely on Docker availability checks.
  • Chores

    • Added macOS CI step to install and start Docker tooling and wait for Docker readiness; updated build dependency configuration.


Copilot AI left a comment


Pull request overview

This pull request replaces the external dusty HTTP library with a custom HTTP/1.1 client implementation that communicates directly with the Docker Engine over Unix domain sockets using std.net.connectUnixSocket. This eliminates both the dusty and zio dependencies.

Changes:

  • Implemented a custom HTTP/1.1 client in docker_client.zig with buffered stream reading, header parsing, and chunked transfer encoding support
  • Removed all zio.Runtime initialization code from tests, examples, and documentation
  • Updated wait.zig to use std.http.Client for the HTTP wait strategy instead of dusty
  • Removed external dependencies from build.zig.zon and build.zig

Reviewed changes

Copilot reviewed 14 out of 14 changed files in this pull request and generated 9 comments.

File Description
src/docker_client.zig Added custom HTTP/1.1 implementation with HttpReader, request builder, response parser, and chunked encoding decoder; removed dusty client dependency
src/wait.zig Replaced dusty client with std.http.Client for HTTP wait strategy
src/integration_test.zig Removed zio runtime initialization from all tests
src/root.zig Updated documentation to reflect removal of zio dependency requirement
examples/basic.zig Removed zio runtime initialization; updated to use std.http.Client
build.zig.zon Removed dusty dependency
build.zig Removed dusty and zio imports from all build targets
README.md Removed zio-related documentation
QUICKSTART.md Removed zio runtime initialization instructions
PROJECT_SUMMARY.md Updated dependency information
IMPLEMENTATION_GUIDE.md Updated HTTP client documentation
ARCHITECTURE.md Updated architecture diagrams and dependency information
AGENTS.md Updated architecture rules


}) catch {
    std.Thread.sleep(poll);
    continue;
};

Copilot AI Feb 23, 2026


Memory leak: The FetchResult returned by client.fetch() must have deinit() called to free allocated resources. Add defer result.deinit(alloc); after line 225.

Suggested change

 };
+defer result.deinit(alloc);

std.log.info("HTTP status: {d}", .{@intFromEnum(resp.status())});
const fetch_result = try http_client.fetch(.{
    .location = .{ .url = url },
});

Copilot AI Feb 23, 2026


Memory leak: The FetchResult returned by client.fetch() must have deinit() called to free allocated resources. Add defer fetch_result.deinit(allocator); after line 42.

Suggested change

 });
+defer fetch_result.deinit(allocator);

Comment on lines +126 to +143
var hdr_buf: [4096]u8 = undefined;
var fbs = std.io.fixedBufferStream(&hdr_buf);
const w = fbs.writer();

try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
try w.print("Host: localhost\r\n", .{});
if (content_type) |ct| {
    try w.print("Content-Type: {s}\r\n", .{ct});
}
if (body) |b| {
    try w.print("Content-Length: {d}\r\n", .{b.len});
}
try w.print("Connection: close\r\n\r\n", .{});

try stream.writeAll(fbs.getWritten());
if (body) |b| {
    try stream.writeAll(b);
}

Copilot AI Feb 23, 2026


Potential buffer overflow: If the formatted HTTP request header exceeds 4096 bytes, the fixed buffer will overflow. Consider checking fbs.pos against hdr_buf.len before each write operation, or use an ArrayList to allow dynamic growth. Large Content-Type values or very long paths could trigger this.

const size_line = try reader.readLine(&line_buf) orelse break;
const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;

Copilot AI Feb 23, 2026


Error handling issue: When parsing chunk size fails (line 213), the code silently breaks from the loop, potentially returning a truncated body without error. This could hide malformed chunked responses. Consider returning an error instead of breaking, or at least documenting this behavior.

Suggested change

-const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;
+const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch return error.InvalidResponse;

Comment on lines +185 to +187
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");

Copilot AI Feb 23, 2026


Unnecessary allocation: When content_length is 0, calling allocator.dupe(u8, "") creates an allocation for an empty string. This allocates memory that must be freed by the caller. Consider using a compile-time constant empty string slice instead, or document that callers must always free the returned slice even when empty.

Suggested change

+/// For a zero-length body, returns a shared empty slice that must not be freed.
 fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
     if (meta.content_length) |cl| {
-        if (cl == 0) return allocator.dupe(u8, "");
+        if (cl == 0) return &[_]u8{};
Comment on lines +185 to +200
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}

Copilot AI Feb 23, 2026


HTTP spec violation: When both Content-Length and Transfer-Encoding: chunked headers are present, the code prioritizes Content-Length (line 186). According to RFC 7230 Section 3.3.3, Transfer-Encoding takes precedence and Content-Length must be ignored. This could cause incorrect parsing if a malicious or buggy server sends both headers. The condition should check for chunked encoding first.
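For reference, the precedence rule from RFC 7230 §3.3.3 that this comment describes can be expressed as a small decision function. This is a language-agnostic Python sketch with a hypothetical name, not project code:

```python
def body_framing(headers: dict[str, str]) -> str:
    # Per RFC 7230 §3.3.3: if Transfer-Encoding includes "chunked",
    # it wins and any Content-Length header must be ignored.
    if "chunked" in headers.get("transfer-encoding", "").lower():
        return "chunked"
    if "content-length" in headers:
        return "content-length"
    return "until-close"
```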

Comment on lines 345 to +358
-    // imagePull returns a streaming JSON progress response.
-    // We use a streaming reader to drain it without buffering the whole body,
-    // which avoids hitting max_response_size for large image pulls.
-    var resp = try self.client.fetch(url, .{
-        .method = .post,
-        .unix_socket_path = self.socket_path,
-        .decompress = false,
-    });
-    defer resp.deinit();
-
-    const sc: u16 = @as(u16, @intCast(@intFromEnum(resp.status())));
-    if (sc != 200) return DockerClientError.ApiError;
+    // We open a dedicated connection and drain the stream to wait for
+    // completion without buffering the entire (potentially huge) body.
+    const stream = try std.net.connectUnixSocket(self.socket_path);
+    defer stream.close();
+
+    try sendHttpRequest(stream, .post, api_path, null, null);
+
+    var reader: HttpReader = .{ .stream = stream };
+    const meta = try parseResponseHead(&reader);
+    if (meta.status_code != 200) return DockerClientError.ApiError;

     // Drain the progress stream to wait for completion.
-    const r = resp.reader();
-    var buf: [4096]u8 = undefined;
-    while (true) {
-        const n = r.readSliceShort(&buf) catch break;
-        if (n == 0) break;
-    }
+    reader.drain();

Copilot AI Feb 23, 2026


Error handling issue: imagePull drains the response stream without parsing it for errors. The Docker API returns 200 OK even for image pull failures, embedding error information in the JSON stream. Silently draining means pull failures (e.g., image not found, authentication required) will not be detected. Consider parsing at least the last JSON object from the stream or buffering the last few bytes to check for error messages.
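The suggested fix can be sketched as follows (a Python stand-in with a hypothetical helper name; the "error"/"errorDetail" fields are what the Docker Engine API emits in its pull progress stream):

```python
import json

def check_pull_stream(stream_body: str) -> None:
    # The Engine answers 200 OK even for failed pulls; each line of the
    # progress stream is a JSON object, and failures show up as objects
    # carrying an "error" key (with details under "errorDetail").
    for line in stream_body.splitlines():
        line = line.strip()
        if not line:
            continue
        obj = json.loads(line)
        if "error" in obj:
            raise RuntimeError(obj["error"])
```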

var body: std.ArrayList(u8) = .empty;
errdefer body.deinit(allocator);

var line_buf: [128]u8 = undefined;

Copilot AI Feb 23, 2026


Potential buffer limitation: The line_buf is only 128 bytes, which limits chunk size lines to 128 characters. While chunk size lines are typically short (e.g., "1a3f"), if chunk extensions are present (e.g., "1a3f;name=value"), they could exceed this limit and cause an HttpHeaderTooLong error. Consider increasing the buffer size or handling long chunk-extension lines more gracefully.

Suggested change

-var line_buf: [128]u8 = undefined;
+// Use a reasonably large buffer to accommodate chunk extensions and trailer lines.
+var line_buf: [4096]u8 = undefined;

Comment on lines +48 to +255
/// Buffered reader over a raw `std.net.Stream` for incremental HTTP
/// response parsing. Avoids one-byte syscalls by reading into an
/// internal 8 KiB buffer.
const HttpReader = struct {
    stream: std.net.Stream,
    buf: [8192]u8 = undefined,
    pos: usize = 0,
    len: usize = 0,

    /// Read a single byte from the buffered stream.
    fn readByte(self: *HttpReader) !u8 {
        if (self.pos >= self.len) {
            self.len = try self.stream.read(&self.buf);
            self.pos = 0;
            if (self.len == 0) return error.EndOfStream;
        }
        const b = self.buf[self.pos];
        self.pos += 1;
        return b;
    }

    /// Read exactly `dest.len` bytes from the buffered stream.
    fn readExact(self: *HttpReader, dest: []u8) !void {
        var written: usize = 0;
        while (written < dest.len) {
            if (self.pos >= self.len) {
                self.len = try self.stream.read(&self.buf);
                self.pos = 0;
                if (self.len == 0) return error.EndOfStream;
            }
            const available = self.len - self.pos;
            const needed = dest.len - written;
            const to_copy = @min(available, needed);
            @memcpy(dest[written .. written + to_copy], self.buf[self.pos .. self.pos + to_copy]);
            written += to_copy;
            self.pos += to_copy;
        }
    }

    /// Read a line terminated by `\n`. Returns the line contents without
    /// the trailing `\r\n`. Returns `null` when EOF is reached with no data.
    fn readLine(self: *HttpReader, out: []u8) !?[]const u8 {
        var i: usize = 0;
        while (i < out.len) {
            const b = self.readByte() catch |err| {
                if (err == error.EndOfStream) {
                    return if (i == 0) null else out[0..i];
                }
                return err;
            };
            if (b == '\n') {
                const end = if (i > 0 and out[i - 1] == '\r') i - 1 else i;
                return out[0..end];
            }
            out[i] = b;
            i += 1;
        }
        return error.HttpHeaderTooLong;
    }

    /// Discard all remaining data until EOF.
    fn drain(self: *HttpReader) void {
        self.pos = self.len;
        while (true) {
            self.len = self.stream.read(&self.buf) catch return;
            if (self.len == 0) return;
        }
    }
};

/// Send an HTTP/1.1 request over a raw stream.
fn sendHttpRequest(
    stream: std.net.Stream,
    method: Method,
    path: []const u8,
    content_type: ?[]const u8,
    body: ?[]const u8,
) !void {
    var hdr_buf: [4096]u8 = undefined;
    var fbs = std.io.fixedBufferStream(&hdr_buf);
    const w = fbs.writer();

    try w.print("{s} {s} HTTP/1.1\r\n", .{ method.name(), path });
    try w.print("Host: localhost\r\n", .{});
    if (content_type) |ct| {
        try w.print("Content-Type: {s}\r\n", .{ct});
    }
    if (body) |b| {
        try w.print("Content-Length: {d}\r\n", .{b.len});
    }
    try w.print("Connection: close\r\n\r\n", .{});

    try stream.writeAll(fbs.getWritten());
    if (body) |b| {
        try stream.writeAll(b);
    }
}

/// Parse the HTTP response status line and headers.
fn parseResponseHead(reader: *HttpReader) !ResponseMeta {
    var line_buf: [8192]u8 = undefined;

    // Status line: "HTTP/1.x NNN ..."
    const status_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
    const first_space = std.mem.indexOfScalar(u8, status_line, ' ') orelse return error.InvalidResponse;
    const after_space = status_line[first_space + 1 ..];
    if (after_space.len < 3) return error.InvalidResponse;
    const status_code = std.fmt.parseInt(u16, after_space[0..3], 10) catch return error.InvalidResponse;

    var content_length: ?usize = null;
    var chunked = false;

    while (true) {
        const header_line = try reader.readLine(&line_buf) orelse break;
        if (header_line.len == 0) break;

        const colon = std.mem.indexOfScalar(u8, header_line, ':') orelse continue;
        const hdr_name = std.mem.trim(u8, header_line[0..colon], " ");
        const hdr_value = std.mem.trim(u8, header_line[colon + 1 ..], " ");

        if (std.ascii.eqlIgnoreCase(hdr_name, "content-length")) {
            content_length = std.fmt.parseInt(usize, hdr_value, 10) catch null;
        } else if (std.ascii.eqlIgnoreCase(hdr_name, "transfer-encoding")) {
            if (std.ascii.eqlIgnoreCase(hdr_value, "chunked")) {
                chunked = true;
            }
        }
    }

    return .{
        .status_code = status_code,
        .content_length = content_length,
        .chunked = chunked,
    };
}

/// Read the full response body according to the parsed metadata.
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}

/// Decode an HTTP chunked transfer-encoded body.
fn readChunkedBody(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    var line_buf: [128]u8 = undefined;

    while (true) {
        const size_line = try reader.readLine(&line_buf) orelse break;
        const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
        const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;

        if (chunk_size == 0) {
            // Drain optional trailers.
            while (true) {
                const trailer = try reader.readLine(&line_buf) orelse break;
                if (trailer.len == 0) break;
            }
            break;
        }

        const old_len = body.items.len;
        try body.ensureTotalCapacity(allocator, old_len + chunk_size);
        body.items.len = old_len + chunk_size;
        try reader.readExact(body.items[old_len..]);

        // Consume trailing \r\n after chunk data.
        _ = try reader.readLine(&line_buf);
    }

    return body.toOwnedSlice(allocator);
}

/// Read until the peer closes the connection.
fn readUntilClose(reader: *HttpReader, allocator: std.mem.Allocator) ![]const u8 {
    var body: std.ArrayList(u8) = .empty;
    errdefer body.deinit(allocator);

    // Flush anything the HttpReader already buffered.
    if (reader.pos < reader.len) {
        try body.appendSlice(allocator, reader.buf[reader.pos..reader.len]);
        reader.pos = reader.len;
    }

    var tmp: [8192]u8 = undefined;
    while (true) {
        const n = reader.stream.read(&tmp) catch break;
        if (n == 0) break;
        try body.appendSlice(allocator, tmp[0..n]);
    }

    return body.toOwnedSlice(allocator);
}

Copilot AI Feb 23, 2026


Missing test coverage: The custom HTTP/1.1 implementation (HttpReader, sendHttpRequest, parseResponseHead, readChunkedBody, readUntilClose) lacks unit tests. These functions handle complex parsing logic with multiple edge cases (chunked encoding, connection close, header parsing) that should be tested independently. Consider adding unit tests that mock socket streams to verify correct handling of various HTTP responses, including malformed inputs, edge cases like empty bodies, large chunks, invalid chunk sizes, and header line limits.
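The kind of mock-stream unit test this comment asks for can be sketched by driving a decoder with a canned response from an in-memory stream. A Python stand-in (names are illustrative, not project code) for what a Zig test over HttpReader would exercise:

```python
import io

def decode_chunked(stream: io.BufferedIOBase) -> bytes:
    # Decode a chunked transfer-encoded body from an in-memory stream;
    # a malformed size line raises instead of silently truncating.
    body = b""
    while True:
        size_line = stream.readline().strip()
        size = int(size_line.split(b";")[0], 16)  # ValueError on garbage
        if size == 0:
            while stream.readline().strip():  # drain optional trailers
                pass
            return body
        body += stream.read(size)
        stream.readline()  # consume the CRLF that follows chunk data
```

Feeding `io.BytesIO(b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n")` through such a decoder should yield `b"Wikipedia"`, and an invalid size line should raise; the same canned inputs translate directly into Zig tests over a mocked stream.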

Owner Author


@copilot open a new pull request to apply changes based on this feedback


Copilot AI commented Feb 24, 2026

@dragosv I've opened a new pull request, #2, to work on those changes. Once the pull request is ready, I'll request review from you.


coderabbitai bot commented Feb 24, 2026

📝 Walkthrough

Walkthrough

Replaces external dusty HTTP library and zio runtime with a built-in HTTP/1.1 client using Unix domain sockets in Zig; implements low-level request/response parsing, updates tests, examples, docs, and CI/build config to reflect zero external dependencies.

Changes

Cohort / File(s) Summary
CI / Build
.github/workflows/ci.yml, build.zig, build.zig.zon
Added macOS Colima/Docker setup step in CI; removed dusty/zio dependency declarations and imports from build configuration.
Documentation & Guides
AGENTS.md, ARCHITECTURE.md, IMPLEMENTATION_GUIDE.md, PROJECT_SUMMARY.md, QUICKSTART.md, README.md, src/root.zig
Rewrote architecture and implementation narrative to describe a built-in stdlib HTTP/1.1 client over Unix sockets and removal of dusty/zio references; updated examples and quickstart to omit zio runtime lifecycle.
Core HTTP Implementation
src/docker_client.zig
Introduced low-level HTTP-on-socket implementation: Method enum, ResponseMeta, HttpReader, sendHttpRequest, response parsing (head/body), chunked and until-close readers, streaming request handling, updated DockerClient API, and extensive in-file tests; removed dusty-based client usage.
Tests, Wait Logic & Examples
src/integration_test.zig, src/wait.zig, examples/basic.zig
Removed zio runtime setup in tests and examples; replaced dusty client calls with std.http client or socket-based flows; adjusted wait/poll logic to use stdlib HTTP calls and simplified control flow.
Other source updates
src/...
Minor doc/comment updates across source to reflect new transport model and runtime assumptions.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I nibble sockets, stitch each byte,
Dusty gone, the net gleams bright,
No runtime bells, just steady hops,
Chunked replies and tiny clops,
A rabbit cheers — the build delights! 🥕

🚥 Pre-merge checks | ✅ 3
✅ Passed checks (3 passed)
Check name Status Explanation
Title check ✅ Passed The title 'Using unix sockets' directly describes the primary architectural change made throughout the pull request, replacing the dusty HTTP library with direct Unix socket communication.
Description check ✅ Passed The description identifies the main change (replacing dusty with Unix sockets) and marks it as refactoring with integration tests run, but lacks detailed context about scope and impact of removing external dependencies.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



Comment @coderabbitai help to get the list of available commands and usage tips.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (2)
src/wait.zig (1)

215-217: Hoist std.http.Client out of the polling loop to avoid per-iteration allocation

var client is declared inside the while body, so the entire client — including its connection pool — is allocated and torn down on every poll iteration. No connections are ever reused, and the allocator is hit once per retry.

Move the client to the scope of waitHttp so the pool is created once and connections are reused across retries:

♻️ Proposed refactor
+    var client: std.http.Client = .{ .allocator = alloc };
+    defer client.deinit();
+
     while (std.time.nanoTimestamp() < deadline) {
-        var client: std.http.Client = .{ .allocator = alloc };
-        defer client.deinit();
-
         const result = client.fetch(.{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/wait.zig` around lines 215 - 217, The std.http.Client instance is being
allocated inside the polling loop in waitHttp (var client = .{ .allocator =
alloc }; with defer client.deinit()), causing the connection pool to be
recreated each iteration; move the client declaration and initialization out of
the while loop into the outer scope of waitHttp so the pool is created once,
reuse that client for all retries, and call client.deinit() (or otherwise clean
it up) after the loop completes rather than deferring inside the loop; update
references to the client variable in the loop body accordingly.
.github/workflows/ci.yml (1)

25-26: colima start has no timeout; consider caching Homebrew packages

Two operational concerns with the macOS Docker bootstrap step:

  1. colima start --memory 4 --cpu 2 has no explicit timeout. If the VM provisioning hangs (e.g. first-run download of the VM image), the step will block indefinitely and the job will only be cancelled when GitHub Actions hits its job-level timeout (6 h by default). Pass --timeout <minutes> to bound the failure window.

  2. brew install docker docker-compose colima re-installs from scratch on every CI run. Each run on a fresh macOS runner pays the full download+install cost. Consider caching via actions/cache on the Homebrew cache directory (~/Library/Caches/Homebrew) to speed up subsequent runs.

🛠️ Suggested hardening for the colima start line
-          colima start --memory 4 --cpu 2
+          colima start --memory 4 --cpu 2 --timeout 10
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 25 - 26, The workflow currently runs
"colima start --memory 4 --cpu 2" with no timeout and always runs "brew install
docker docker-compose colima"; update the CI step to add an explicit timeout
flag to the colima invocation (e.g. "--timeout <minutes>") so provisioning
cannot hang indefinitely, and add an actions/cache entry for Homebrew cache
(~/Library/Caches/Homebrew) so the "brew install docker docker-compose colima"
step uses cached bottles between runs to avoid re-downloading and speed CI.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/docker_client.zig`:
- Around line 184-199: The function readResponseBody currently prefers
meta.content_length over meta.chunked which violates HTTP/1.1; update
readResponseBody so that if meta.chunked is true it calls readChunkedBody
immediately (ignoring content_length), otherwise fall back to content_length
handling or readUntilClose; adjust logic around meta.content_length,
meta.chunked, and calls to readChunkedBody/readUntilClose in the
readResponseBody function to ensure chunked decoding wins.
- Around line 1189-1216: The tests for sendHttpRequest read the pipe only once
which can return a partial read; update both tests ("sendHttpRequest: writes a
valid GET request" and "sendHttpRequest: writes Content-Type and Content-Length
for POST with body") to read from the read-end until EOF instead of a single
std.posix.read call: add a small helper (e.g., readAllFromFd or readUntilEof)
that repeatedly reads into a buffer/appends into a slice until read returns 0,
then use that helper to produce the complete req slice checked by the
assertions; reference the existing tests/sendHttpRequest usage to locate where
to replace the single-read logic.
- Around line 209-214: readChunkedBody currently treats parse failures as a
normal loop break and returns partial data; instead detect invalid chunk-size
lines and return an explicit error so callers can detect protocol corruption. In
the loop that uses reader.readLine(&line_buf) to produce size_line and computes
semi, trimmed and chunk_size (the std.fmt.parseInt call), replace the "catch
break" behavior with returning a descriptive error (e.g., error.InvalidChunkSize
or a new error variant) that includes context (the offending size_line or
trimmed slice); likewise if std.mem.indexOfScalar or trimming yields unexpected
results, return an error rather than silently breaking so the caller of
readChunkedBody can fail-fast on malformed chunked bodies.

In `@src/wait.zig`:
- Line 221: The current assignment .method = if (std.mem.eql(u8, s.method,
"POST")) .POST else .GET silently treats any non-"POST" HttpStrategy.method
string as GET; update the handling in src/wait.zig by either (A) validating
s.method against known verbs (e.g., "GET","POST","PUT","DELETE","PATCH","HEAD",
etc.) and returning an error for unrecognized values instead of defaulting to
.GET, or (B) change HttpStrategy.method to type std.http.Method and adjust
callers so you can assign the enum directly (removing the string comparisons);
locate references to HttpStrategy.method and the expression using
std.mem.eql(u8, s.method, "POST") to implement the chosen fix and ensure callers
are updated to propagate errors or use std.http.Method.

---

Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 25-26: The workflow currently runs "colima start --memory 4 --cpu
2" with no timeout and always runs "brew install docker docker-compose colima";
update the CI step to add an explicit timeout flag to the colima invocation
(e.g. "--timeout <minutes>") so provisioning cannot hang indefinitely, and add
an actions/cache entry for Homebrew cache (~/Library/Caches/Homebrew) so the
"brew install docker docker-compose colima" step uses cached bottles between
runs to avoid re-downloading and speed CI.

In `@src/wait.zig`:
- Around line 215-217: The std.http.Client instance is being allocated inside
the polling loop in waitHttp (var client = .{ .allocator = alloc }; with defer
client.deinit()), causing the connection pool to be recreated each iteration;
move the client declaration and initialization out of the while loop into the
outer scope of waitHttp so the pool is created once, reuse that client for all
retries, and call client.deinit() (or otherwise clean it up) after the loop
completes rather than deferring inside the loop; update references to the client
variable in the loop body accordingly.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a2c27d6 and 89adad0.

📒 Files selected for processing (14)
  • .github/workflows/ci.yml
  • AGENTS.md
  • ARCHITECTURE.md
  • IMPLEMENTATION_GUIDE.md
  • PROJECT_SUMMARY.md
  • QUICKSTART.md
  • README.md
  • build.zig
  • build.zig.zon
  • examples/basic.zig
  • src/docker_client.zig
  • src/integration_test.zig
  • src/root.zig
  • src/wait.zig
💤 Files with no reviewable changes (4)
  • build.zig
  • QUICKSTART.md
  • README.md
  • src/integration_test.zig

Comment on lines +184 to +199
/// Read the full response body according to the parsed metadata.
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);


⚠️ Potential issue | 🟡 Minor

Chunked responses should override Content-Length.

If Transfer-Encoding: chunked is present alongside Content-Length, HTTP/1.1 requires chunked decoding to win. The current order reads content_length first, which can truncate a chunked response.

🔧 Suggested fix
 fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
-    if (meta.content_length) |cl| {
+    if (meta.chunked) {
+        return readChunkedBody(reader, allocator);
+    }
+    if (meta.content_length) |cl| {
         if (cl == 0) return allocator.dupe(u8, "");
         const body_buf = try allocator.alloc(u8, cl);
         errdefer allocator.free(body_buf);
         try reader.readExact(body_buf);
         return body_buf;
     }
-
-    if (meta.chunked) {
-        return readChunkedBody(reader, allocator);
-    }
 
     // No Content-Length and not chunked — read until connection close.
     return readUntilClose(reader, allocator);
 }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

/// Read the full response body according to the parsed metadata.
fn readResponseBody(reader: *HttpReader, meta: ResponseMeta, allocator: std.mem.Allocator) ![]const u8 {
    if (meta.chunked) {
        return readChunkedBody(reader, allocator);
    }
    if (meta.content_length) |cl| {
        if (cl == 0) return allocator.dupe(u8, "");
        const body_buf = try allocator.alloc(u8, cl);
        errdefer allocator.free(body_buf);
        try reader.readExact(body_buf);
        return body_buf;
    }

    // No Content-Length and not chunked — read until connection close.
    return readUntilClose(reader, allocator);
}
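For context on why chunked framing must win: RFC 7230 §3.3.3 specifies that when both Transfer-Encoding and Content-Length are present, Transfer-Encoding overrides Content-Length. The framing itself is language-agnostic, so here is a minimal Python sketch of chunked-body decoding (illustrative only, not the Zig implementation under review):

```python
def read_chunked(data: bytes) -> bytes:
    """Decode an HTTP/1.1 chunked body from an in-memory buffer."""
    body = b""
    pos = 0
    while True:
        eol = data.index(b"\r\n", pos)
        # Chunk extensions after ';' are ignored per the spec.
        size_line = data[pos:eol].split(b";")[0].strip()
        chunk_size = int(size_line, 16)  # raises ValueError on garbage
        pos = eol + 2
        if chunk_size == 0:
            break  # last-chunk; trailers (if any) would follow
        body += data[pos:pos + chunk_size]
        pos += chunk_size + 2  # skip chunk data plus its trailing CRLF
    return body

wire = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(read_chunked(wire))  # b'Wikipedia'
```

A server that sends this framing may also send a (wrong or stale) Content-Length; decoding by chunk sizes, as above, is the only reading that terminates at the real end of the body.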

Comment on lines +209 to +214
while (true) {
const size_line = try reader.readLine(&line_buf) orelse break;
const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;


⚠️ Potential issue | 🟡 Minor

Don’t silently accept invalid chunk sizes.

readChunkedBody currently breaks on parse errors, returning partial data without signaling an invalid response. That can mask protocol corruption.

🔧 Suggested fix
-        const size_line = try reader.readLine(&line_buf) orelse break;
+        const size_line = try reader.readLine(&line_buf) orelse return error.InvalidResponse;
         const semi = std.mem.indexOfScalar(u8, size_line, ';') orelse size_line.len;
         const trimmed = std.mem.trim(u8, size_line[0..semi], " ");
-        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch break;
+        const chunk_size = std.fmt.parseInt(usize, trimmed, 16) catch return error.InvalidResponse;
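The fail-fast behavior the review asks for can be sketched outside Zig as well; this hypothetical Python helper raises on a malformed size line instead of treating it as end-of-body, mirroring the suggested `return error.InvalidResponse`:

```python
def parse_chunk_size(size_line: bytes) -> int:
    """Parse a chunk-size line such as b'1a' or b'1a; name=val'.

    Raises ValueError on garbage rather than silently ending the body,
    so protocol corruption surfaces to the caller.
    """
    field = size_line.split(b";")[0].strip()
    try:
        return int(field, 16)
    except ValueError:
        raise ValueError(f"invalid chunk size line: {size_line!r}") from None

print(parse_chunk_size(b"1a; name=val"))  # 26
```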

Comment on lines +1189 to +1216
test "sendHttpRequest: writes a valid GET request" {
const fds = try std.posix.pipe();
defer std.posix.close(fds[0]);
const write_stream = std.net.Stream{ .handle = fds[1] };
try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
std.posix.close(fds[1]);
var buf: [4096]u8 = undefined;
const n = try std.posix.read(fds[0], &buf);
const req = buf[0..n];
try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
}

test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
const fds = try std.posix.pipe();
defer std.posix.close(fds[0]);
const write_stream = std.net.Stream{ .handle = fds[1] };
const body = "{\"Image\":\"alpine\"}";
try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
std.posix.close(fds[1]);
var buf: [4096]u8 = undefined;
const n = try std.posix.read(fds[0], &buf);
const req = buf[0..n];
try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
try std.testing.expect(std.mem.endsWith(u8, req, body));

⚠️ Potential issue | 🟠 Major

CI failure: pipe reads can be partial—read until EOF in sendHttpRequest tests.

The failing test reads the pipe once; a short read can miss bytes and cause assertions to inspect incomplete data. This aligns with the CI failure reported for docker_client.test.sendHttpRequest. Reading until EOF will make the test deterministic.

🧪 Suggested fix (add helper + use it in both sendHttpRequest tests)
+fn readAllFromFd(fd: std.posix.fd_t, allocator: std.mem.Allocator) ![]u8 {
+    var out: std.ArrayList(u8) = .empty;
+    errdefer out.deinit(allocator);
+    var buf: [4096]u8 = undefined;
+    while (true) {
+        const n = try std.posix.read(fd, &buf);
+        if (n == 0) break;
+        try out.appendSlice(allocator, buf[0..n]);
+    }
+    return out.toOwnedSlice(allocator);
+}
+
 test "sendHttpRequest: writes a valid GET request" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     try sendHttpRequest(write_stream, .get, "/v1.46/_ping", null, null);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "GET /v1.46/_ping HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Host: localhost\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Connection: close\r\n") != null);
 }
 
 test "sendHttpRequest: writes Content-Type and Content-Length for POST with body" {
     const fds = try std.posix.pipe();
     defer std.posix.close(fds[0]);
     const write_stream = std.net.Stream{ .handle = fds[1] };
     const body = "{\"Image\":\"alpine\"}";
     try sendHttpRequest(write_stream, .post, "/v1.46/containers/create", "application/json", body);
     std.posix.close(fds[1]);
-    var buf: [4096]u8 = undefined;
-    const n = try std.posix.read(fds[0], &buf);
-    const req = buf[0..n];
+    const req = try readAllFromFd(fds[0], std.testing.allocator);
+    defer std.testing.allocator.free(req);
     try std.testing.expect(std.mem.startsWith(u8, req, "POST /v1.46/containers/create HTTP/1.1\r\n"));
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Type: application/json\r\n") != null);
     try std.testing.expect(std.mem.indexOf(u8, req, "Content-Length: 18\r\n") != null);
     try std.testing.expect(std.mem.endsWith(u8, req, body));
 }
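The underlying contract is POSIX: `read()` may return fewer bytes than requested, and only a 0-byte return signals EOF. A Python analogue of the suggested `readAllFromFd` helper (names here are illustrative, not the Zig code):

```python
import os

def read_all(fd: int) -> bytes:
    """Drain fd until EOF; a single os.read may legally return fewer bytes."""
    chunks = []
    while True:
        chunk = os.read(fd, 4096)
        if not chunk:  # 0 bytes => writer closed, EOF reached
            break
        chunks.append(chunk)
    return b"".join(chunks)

r, w = os.pipe()
os.write(w, b"GET /v1.46/_ping HTTP/1.1\r\n\r\n")
os.close(w)  # close the write end so the reader sees EOF
request = read_all(r)
os.close(r)
print(request.decode())
```

A tiny payload like this will rarely be split in practice, but a test that assumes one read returns everything is relying on luck, not on the API's guarantee.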

.method = if (std.mem.eql(u8, s.method, "POST")) .post else .get,
const result = client.fetch(.{
.location = .{ .url = url },
.method = if (std.mem.eql(u8, s.method, "POST")) .POST else .GET,

⚠️ Potential issue | 🟡 Minor

Silent fallback to .GET for any HTTP method other than "POST".

The expression if (std.mem.eql(u8, s.method, "POST")) .POST else .GET means "HEAD", "PUT", "DELETE", "PATCH", and any other valid value of HttpStrategy.method are silently treated as GET. Since HttpStrategy.method is a free-form []const u8 (not an enum), a caller setting .method = "HEAD" gets no compile-time or runtime error — just a wrong request verb.

Enumerate the common methods explicitly and return an error for unrecognised values, or switch to std.http.Method directly in HttpStrategy:

🐛 Proposed fix
-            .method = if (std.mem.eql(u8, s.method, "POST")) .POST else .GET,
+            .method = if (std.mem.eql(u8, s.method, "POST")) .POST
+                      else if (std.mem.eql(u8, s.method, "HEAD")) .HEAD
+                      else if (std.mem.eql(u8, s.method, "PUT")) .PUT
+                      else if (std.mem.eql(u8, s.method, "DELETE")) .DELETE
+                      else .GET,

Or, cleaner: change HttpStrategy.method from []const u8 to std.http.Method and remove the string comparison entirely.

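The "cleaner" option — converting the string to an enum and failing loudly on unknown verbs, as Zig's `std.meta.stringToEnum` would — looks like this in Python (a sketch of the idea, not the project's code):

```python
from enum import Enum

class HttpMethod(Enum):
    GET = "GET"
    HEAD = "HEAD"
    POST = "POST"
    PUT = "PUT"
    DELETE = "DELETE"
    PATCH = "PATCH"
    OPTIONS = "OPTIONS"

def parse_method(raw: str) -> HttpMethod:
    """Map a method string to the enum; unknown verbs raise instead of
    silently degrading to GET."""
    try:
        return HttpMethod(raw)
    except ValueError:
        raise ValueError(f"unsupported HTTP method: {raw!r}") from None

print(parse_method("HEAD"))  # HttpMethod.HEAD
```

Typed methods push the validation to a single choke point, so a caller setting `"HEAD"` either gets a HEAD request or an explicit error, never a silent GET.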

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
src/wait.zig (1)

221-221: ⚠️ Potential issue | 🟠 Major

Non-POST methods are still silently coerced to GET.

Line 221 keeps the same behavior: any method other than "POST" is sent as GET, which breaks valid values like "HEAD"/"PUT"/"DELETE".

🐛 Proposed fix
 fn waitHttp(s: HttpStrategy, target: StrategyTarget, alloc: std.mem.Allocator) !void {
@@
     const scheme = if (s.use_tls) "https" else "http";
     const url = try std.fmt.allocPrint(alloc, "{s}://{s}:{d}{s}", .{ scheme, host, mapped_port, s.path });
     defer alloc.free(url);
+    const method = std.meta.stringToEnum(std.http.Method, s.method) orelse
+        return error.InvalidHttpMethod;
@@
         const result = client.fetch(.{
             .location = .{ .url = url },
-            .method = if (std.mem.eql(u8, s.method, "POST")) .POST else .GET,
+            .method = method,
         }) catch {

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 89adad0 and ee5865c.

📒 Files selected for processing (14)
  • .github/workflows/ci.yml
  • AGENTS.md
  • ARCHITECTURE.md
  • IMPLEMENTATION_GUIDE.md
  • PROJECT_SUMMARY.md
  • QUICKSTART.md
  • README.md
  • build.zig
  • build.zig.zon
  • examples/basic.zig
  • src/docker_client.zig
  • src/integration_test.zig
  • src/root.zig
  • src/wait.zig
💤 Files with no reviewable changes (4)
  • src/integration_test.zig
  • QUICKSTART.md
  • README.md
  • build.zig
