diff --git a/TEST.md b/TEST.md
index 8cd9e5d..7211a06 100644
--- a/TEST.md
+++ b/TEST.md
@@ -1,242 +1,30 @@
-# Test Plan
+# Testing Guide
 
-This document describes the testing strategy for the certified-assets project and provides a PR-level checklist for implementation.
+Tests are organized around three components. Each runs independently.
 
----
-
-## Guiding Principles
-
-- **No BATS.** All tests use native Rust test infrastructure (`#[test]`, `cargo test`).
-- **Three layers**: canister unit tests, plugin unit tests, and full E2E tests via the `icp` CLI.
-- **Scope**: Proposal/governance workflows are out of scope. The `icp` CLI with the sync plugin replaces both `icx-asset` and `dfx deploy` from the old SDK project.
-
----
-
-## Layer 1 — Canister Unit Tests (`ic-certified-assets`)
+## Canister (`ic-certified-assets`)
 
 **Location**: `ic-certified-assets/src/tests.rs`
-**Run**: `cargo test -p ic-certified-assets`
-**Status**: Comprehensive (5,277 lines). No new tests planned for this layer.
+**Run**: `cargo test -p ic-certified-assets`
 
-This test suite covers all canister behaviors using a mock system context (no live replica needed):
+`ic-certified-assets` is the library crate behind `canister/`. Its unit tests cover all canister behaviors using a mock system context — no live replica needed: asset CRUD, encoding selection, HTTP semantics, certification, permissions, stable state, and streaming.
-| Area | Coverage |
-|---|---|
-| Batch API | `create_batch`, `commit_batch`, `drop_batch`; atomicity; batch timeout; batch ID persistence across upgrades |
-| Asset serving | Encoding selection (identity, gzip, brotli); `Accept-Encoding` negotiation; correct response body |
-| SPA fallback | `index.html` served for missing paths when aliasing is enabled |
-| Stable state | Upgrade/downgrade roundtrips; state survives canister upgrade |
-| Streaming | Chunked delivery for large assets |
-| HTTP semantics | Custom headers; `Cache-Control` / `max-age`; ETags; `Content-Integrity` |
-| Aliases | Enable/disable aliasing behavior |
-| IC environment | Root key encoding; public environment cookie |
-| Certification | `ic_certification` tree insertions/deletions; V2 certification correctness |
-| Permissions | `grant_permission`, `revoke_permission`, `list_permitted` |
+**Add tests here when** you change anything inside `ic-certified-assets`: new canister endpoints, modified serving logic, certification changes, permission rules, or upgrade/downgrade behavior.
 
----
+## Plugin (`assets-sync`)
 
-## Layer 2 — Library Unit Tests (`assets-sync/`)
-
-**Location**: Inline `#[cfg(test)]` modules in each source file.
+**Location**: Inline `#[cfg(test)]` modules in each `assets-sync/src/*.rs` file
 **Run**: `cargo test -p assets-sync`
 
-All sync business logic lives in the `assets-sync` library crate, which has no WIT or WASI dependencies and compiles natively. The plugin crate itself contains only the `WasiCall` transport wrapper and has no testable logic of its own.
-
-### 2a. `scan.rs` — Directory Scanning
-
-Tests for `scan()` and the private `walk()` function using `tempfile` fixtures:
-
-| Test | Asserts |
-|---|---|
-| Single file | Key is `/<filename>` with leading slash |
-| Nested directory | Recursive walk; key is `/<dir>/<file>` |
-| Dotfile skipped | `.hidden` and `.gitignore` do not appear in results |
-| Empty directory | Returns empty `Vec` |
-| Duplicate key across two source dirs | Returns `Err` with the offending key named |
-| Multiple source dirs | Files from both dirs merged into one result |
-| Symlink skipped | All symlinks (to files or directories) are excluded from results |
-
-### 2b. `content.rs` — MIME Detection and Encoding
-
-Tests for `encoders_for()`, `Content::load()`, `Content::encode()`, `Content::sha256()`:
-
-| Test | Asserts |
-|---|---|
-| `text/html` → encoder list | Returns `[Identity, Gzip]` |
-| `text/css` → encoder list | Returns `[Identity, Gzip]` |
-| `application/javascript` → encoder list | Returns `[Identity, Gzip]` |
-| `text/javascript` → encoder list | Returns `[Identity, Gzip]` |
-| `image/png` → encoder list | Returns `[Identity]` only |
-| `application/wasm` → encoder list | Returns `[Identity]` only |
-| Unknown extension → encoder list | Falls back to `APPLICATION_OCTET_STREAM`; returns `[Identity]` |
-| `encode(Identity)` | Output data equals input data |
-| `encode(Gzip)` | Output is valid gzip; decompressed equals input |
-| `encode(Brotli)` | Output is valid brotli; decompressed equals input |
-| `sha256()` | Same content produces same digest; different content produces different digest |
-| `Content::load()` — HTML | Reads file bytes; infers `text/html` from `.html` extension |
-| `Content::load()` — PNG | Infers `image/png` from `.png` extension |
-| `Content::load()` — unknown | Falls back to `application/octet-stream` for unrecognised extensions |
-
-### 2c. `sync.rs` — Operation Diffing (`build_operations`)
-
-`build_operations` is a pure function: it takes `project_assets` and `canister_assets` maps and returns a `Vec<BatchOperationKind>`. It is tested inline via `#[cfg(test)]` without any canister calls. Because `assets-sync` has no WIT dependency, these tests compile and run natively with no extra stubs needed.
+`assets-sync` is the library crate behind `plugin/`. It has no WASI dependency and compiles natively. Its unit tests cover all sync business logic: directory scanning, MIME detection and encoding, operation diffing, batch sequencing, canister API calls and pagination, and authorization.
 
-| Test | Asserts |
-|---|---|
-| New asset (not on canister) | Emits `CreateAsset` + `SetAssetContent` ops |
-| Unchanged asset (SHA256 matches) | `already_in_place = true`; emits no ops for that encoding |
-| Updated asset (SHA256 differs) | Emits `SetAssetContent`; no `CreateAsset` |
-| Deleted asset (on canister, not in project) | Emits `DeleteAsset` |
-| Content-type mismatch (same key, MIME changed) | Emits `DeleteAsset` + `CreateAsset` + `SetAssetContent` |
-| Stale encoding on canister (e.g. `gzip` present but project only has `identity`) | Emits `UnsetAssetContent` for the stale encoding |
-| New encoding added (e.g. file now compressible) | Emits `SetAssetContent` for the new encoding |
-| Empty project, non-empty canister | All canister assets deleted |
-| Everything in sync | Returns empty `Vec`; `commit_batch` not called |
-| Gzip skipped when compressed ≥ original size | No `SetAssetContent` op for `gzip` encoding |
+
+**Add tests here when** you change any sync logic: how files are discovered, how encodings are chosen, how diffs are computed, how batch operations are sequenced, or how permissions are managed. Prefer this over E2E for new logic — tests are fast and require no infrastructure.
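The diffing idea at the heart of this layer can be sketched in isolation. The block below is an illustrative model only — `Op`, `diff`, and the digest maps are hypothetical stand-ins for `BatchOperationKind` and `build_operations`, which additionally diff per encoding and per content type:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for BatchOperationKind; the real enum lives in
// assets-sync/src/canister.rs and carries full operation payloads.
#[derive(Debug, PartialEq)]
enum Op {
    Create(String),
    Update(String),
    Delete(String),
}

// Compare local and remote content digests (SHA-256 in the real code) and
// emit the minimal create/update/delete set, mirroring the cases in the
// build_operations test table.
fn diff(local: &HashMap<String, [u8; 32]>, remote: &HashMap<String, [u8; 32]>) -> Vec<Op> {
    let mut ops = Vec::new();
    for (key, digest) in local {
        match remote.get(key) {
            None => ops.push(Op::Create(key.clone())), // new asset
            Some(d) if d != digest => ops.push(Op::Update(key.clone())), // content changed
            Some(_) => {} // digests match: already in place, no op
        }
    }
    for key in remote.keys() {
        if !local.contains_key(key) {
            ops.push(Op::Delete(key.clone())); // gone locally
        }
    }
    ops
}

fn main() {
    let local = HashMap::from([("/index.html".to_string(), [1u8; 32])]);
    let remote: HashMap<String, [u8; 32]> = HashMap::new();
    // A new local asset against an empty canister yields a single create op.
    assert_eq!(diff(&local, &remote), vec![Op::Create("/index.html".to_string())]);
    println!("ok");
}
```

Because the function is pure, every row of the table above reduces to an input/output assertion like the ones in `main` — no canister, no network.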
----
-
+## End-to-End (`e2e`)
 
-## Layer 3 — E2E Integration Tests (`e2e/`)
-
-**Location**: New workspace member crate at `e2e/`
+**Location**: `e2e/`
 **Run**: `cargo test -p e2e`
 
-These tests verify the complete pipeline: `plugin.wasm` built → loaded by `icp` → assets synced to a live canister.
-
-### Infrastructure
-
-The `e2e/` crate uses:
-
-| Crate | Role |
-|---|---|
-| `assert_cmd` | Invokes `icp` as a subprocess and asserts exit code / stdout |
-| `tempfile` | Provides throwaway asset directories and `icp.yaml` configs |
-| `candid` | Decodes binary Candid responses into typed structs |
-| `hex` | Decodes hex output from `icp canister call -o hex` |
-| `serde_json` | Parses JSON output from `icp network status --json` |
-
-**Build script** (`e2e/build.rs`): Before any tests run, `build.rs` compiles `canister.wasm` (`wasm32-unknown-unknown`) and `plugin.wasm` (`wasm32-wasip2`) via nested `cargo build` invocations and exposes their paths as `CANISTER_WASM` / `PLUGIN_WASM` env vars baked into the test binary at compile time.
-
-**Setup per test**:
-1. Copy a committed fixture directory into a `TempDir` and place the pre-built WASMs under `wasms/`.
-2. Start a local network with `icp network start -d`; shut it down with `icp network stop` in test cleanup.
-3. Run `icp deploy` to install the canister WASM and execute the plugin sync step.
-4. Verify the resulting canister state with `icp canister call` using `-o hex` to obtain binary Candid, decoded into typed structs.
-
-#### Network lifecycle and teardown pattern
-
-`LocalNetwork::start(project_dir)` in `e2e/src/lib.rs` encapsulates the start/stop lifecycle:
-
-```rust
-let _network = LocalNetwork::start(&project); // runs `icp network start -d`
-// … test body …
-// _network is dropped here → runs `icp network stop`
-```
-
-Key points:
-- **Daemon mode** (`-d`): `icp network start -d` blocks until the replica is ready, then returns.
-  The replica process continues running in the background.
-- **State directory**: the replica writes its state to `.icp/` inside the project directory.
-  Each test that uses a `tempfile::TempDir` as its project root therefore gets an isolated network state.
-- **Teardown on panic**: `LocalNetwork` implements `Drop`, so `icp network stop` is called even when the test panics or an assertion fails.
-- **Silent cleanup**: `Drop` ignores errors from `icp network stop` because the replica may have already exited.
-- **Project root**: `icp` locates `icp.yaml` via a `--project-root-override=<dir>` flag rather than
-  relying on `$PWD` or `getcwd(2)`. The `icp_cmd(dir)` helper in `e2e/src/lib.rs` sets this flag
-  automatically; always use it instead of `Command::new("icp")` directly.
-
-#### Parsing `icp canister call` output
-
-Pass `-o hex` to `icp canister call` to receive the raw binary Candid response as a hex string instead of pretty-printed text. Decode it with `hex::decode` and then `candid::decode_args` into the typed structs defined in `e2e/src/lib.rs` (`AssetDetails`, `AssetEncodingDetails`). This avoids `candid_parser` and dynamic `IDLValue` traversal.
-
-### Test Scenarios
-
-#### Basic Sync Workflow
-
-| Test | Scenario | Asserts |
-|---|---|---|
-| Basic deploy | Empty canister, one HTML file | `/index.html` present in canister asset list |
-| Basic deploy with proxy | Deploy via proxy canister | `/index.html` present after proxy-mode deploy |
-| No-op sync | Run sync a second time without changes | Plugin logs "already up to date"; canister state unchanged |
-| Content update | Modify HTML file content; re-sync | SHA256 on canister updated; other assets unchanged |
-| Asset deletion | Remove a file from the local directory; re-sync | Key deleted from canister; remaining assets intact |
-| Multi-directory | Two source dirs with non-overlapping files | All files from both dirs uploaded; keys namespaced correctly |
-
-#### Encoding Policy
-
-| Test | File type | Asserts |
-|---|---|---|
-| Text file gets gzip | `.html` / `.css` / `.js` | Canister holds both `identity` and `gzip` encodings |
-| Binary file identity-only | `.png` / `.wasm` | Canister holds `identity` encoding only; no `gzip` |
-| Gzip skipped when not smaller | Tiny text file where gzip output ≥ original | Only `identity` encoding stored |
-
-#### Large File / Chunking
-
-| Test | Scenario | Asserts |
-|---|---|---|
-| Multi-chunk upload | File > 1.9 MB | Plugin splits into multiple chunks; canister reconstructs correctly; SHA256 verified end-to-end |
-
-#### Asset Listing and Pagination
-
-| Test | Scenario | Asserts |
-|---|---|---|
-| Pagination | Sync > 100 assets | `list_assets` pagination loop retrieves all assets; count matches local files |
-
-#### Authorization
-
-| Test | Scenario | Asserts |
-|---|---|---|
-| Unauthorized identity | Sync with an identity that has no `Commit` permission | `icp` exits with non-zero; error message mentions permission |
-| Proxy mode: permission grant | Sync in proxy mode where identity lacks `Commit` | Plugin grants permission via proxy; sync succeeds |
-| Proxy mode: already permitted | Identity already has `Commit` | Grant step skipped (log message confirms); sync succeeds |
-
----
-
-## Mapping to Old SDK Test Coverage
-
-| Old SDK test | Replaced by |
-|---|---|
-| `ic-certified-assets/src/tests.rs` | `ic-certified-assets/src/tests.rs` (already ported and expanded) |
-| `ic-asset/src/sync.rs` unit tests | `assets-sync` unit tests — `scan.rs` (Layer 2a) |
-| `ic-asset/src/batch_upload/operations.rs` unit tests | `assets-sync` unit tests — `sync.rs::build_operations` (Layer 2c) |
-| `ic-asset/src/asset/config.rs` unit tests | Not yet in scope (plugin has no `.ic-assets.json5` support yet) |
-| `icx-asset.bash` (BATS) | E2E Layer 3 — basic sync, encoding, chunking, pagination |
-| `assetscanister.bash` (BATS) — canister API behaviors | Covered by existing Layer 1 unit tests |
-| `assetscanister.bash` — permission checks | E2E Layer 3 — authorization tests |
-| `frontend.bash` (BATS) | Out of scope (`dfx deploy` UI/UX not applicable) |
-| Playwright browser tests | Out of scope |
-| Proposal / governance tests | Out of scope |
-
----
-
-## Checklist
-
-### Layer 2: Plugin Unit Tests
-
-- [x] **`scan.rs` unit tests**
-  Inline `#[cfg(test)]` module in `assets-sync/src/scan.rs`. Uses `tempfile` for fixtures.
-  Covers: single file, nested dirs, dotfile skip, empty dir, duplicate key error, multiple source dirs.
-
-- [x] **`content.rs` unit tests**
-  Add inline `#[cfg(test)]` module to `assets-sync/src/content.rs`.
-  Covers: `encoders_for` by MIME type, gzip/brotli round-trips, SHA256 determinism, identity passthrough.
-
-- [x] **`sync.rs::build_operations` unit tests**
-  Inline `#[cfg(test)]` module in `assets-sync/src/sync.rs`. No WIT constraint applies since `assets-sync` has no WASI dependency.
-  Covers: create, no-op, update, delete, type-mismatch recreate, stale encoding unset, new encoding set, gzip-not-smaller skip, empty-project delete-all, everything-in-sync.
-
-### Layer 3: E2E Tests
-
-- [x] **E2E infrastructure**
-  `e2e/` crate wired up with `build.rs`, fixture directory, `LocalNetwork` helper, and two smoke tests (`basic_deploy`, `basic_deploy_with_proxy`). New CI job added.
-
-- [x] **Basic sync E2E tests**
-  Covers: basic deploy, basic deploy with proxy, no-op sync, content update, asset deletion, multi-directory sync.
-
-- [ ] **Encoding policy E2E tests**
-  Covers: text gets gzip, binary identity-only, gzip skipped when not smaller.
-
-- [ ] **Chunking and pagination E2E tests**
-  Covers: multi-chunk upload for files > 1.9 MB, and list pagination with > 100 assets.
+E2E tests verify that the canister and plugin work correctly together through the `icp` CLI against a live local replica. Covers the basic sync workflow: deploy, no-op re-sync, content update, deletion, and multi-directory sync.
-
-- [ ] **Authorization E2E tests**
-  Covers: unauthorized sync rejects, proxy mode grants permission, proxy mode skips redundant grant.
+
+**Add tests here when** you introduce a new top-level workflow or change how the plugin integrates with the CLI or canister in a way that unit tests cannot exercise — for example, a new deploy mode or wire-protocol changes. Keep this suite small; unit tests are preferred for logic coverage.
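E2E tests against a live replica must tear the network down even when an assertion panics; the `LocalNetwork` helper does this with a `Drop` impl. A self-contained sketch of that guarantee — `NetworkGuard` and the `STOPPED` flag are illustrative stand-ins for the real helper and the `icp network stop` side effect:

```rust
use std::panic;
use std::sync::atomic::{AtomicBool, Ordering};

// Stand-in for the observable effect of `icp network stop`.
static STOPPED: AtomicBool = AtomicBool::new(false);

struct NetworkGuard;

impl Drop for NetworkGuard {
    fn drop(&mut self) {
        // The real helper shells out here and silently ignores errors,
        // because the replica may have already exited.
        STOPPED.store(true, Ordering::SeqCst);
    }
}

fn test_body_that_panics() {
    let _network = NetworkGuard; // "network started"
    panic!("assertion failed somewhere in the test body");
}

fn main() {
    let result = panic::catch_unwind(test_body_that_panics);
    assert!(result.is_err());
    // Drop ran during unwinding, so cleanup still happened.
    assert!(STOPPED.load(Ordering::SeqCst));
    println!("teardown ran despite the panic");
}
```

This is why the guard must be bound to a named variable (`let _network = …`) rather than `let _ = …`, which would drop — and stop the network — immediately.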
diff --git a/assets-sync/src/canister.rs b/assets-sync/src/canister.rs
index a3976c9..2fdc247 100644
--- a/assets-sync/src/canister.rs
+++ b/assets-sync/src/canister.rs
@@ -227,3 +227,97 @@ pub fn grant_permission_via_proxy(
         false,
     )
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use candid::CandidType;
+    use serde::de::DeserializeOwned;
+    use std::cell::RefCell;
+    use std::collections::VecDeque;
+
+    struct PagedMock {
+        pages: RefCell<VecDeque<Vec<AssetDetails>>>,
+    }
+
+    impl PagedMock {
+        fn new(pages: Vec<Vec<AssetDetails>>) -> Self {
+            Self {
+                pages: RefCell::new(VecDeque::from(pages)),
+            }
+        }
+    }
+
+    impl CanisterCall for PagedMock {
+        fn call<A, R>(&self, method: &str, _arg: A, _: CallType, _: bool) -> Result<R, String>
+        where
+            A: CandidType,
+            R: CandidType + DeserializeOwned,
+        {
+            assert_eq!(method, "list");
+            let page = self.pages.borrow_mut().pop_front().unwrap_or_default();
+            let bytes = candid::encode_one(page).map_err(|e| e.to_string())?;
+            candid::decode_one(&bytes).map_err(|e| e.to_string())
+        }
+    }
+
+    fn mk_assets(n: usize) -> Vec<AssetDetails> {
+        (0..n)
+            .map(|i| AssetDetails {
+                key: format!("/asset-{i}"),
+                encodings: vec![],
+                content_type: "text/plain".to_string(),
+            })
+            .collect()
+    }
+
+    #[test]
+    fn list_assets_empty_canister_returns_empty() {
+        let result = list_assets(&PagedMock::new(vec![])).unwrap();
+        assert!(result.is_empty());
+    }
+
+    #[test]
+    fn list_assets_single_partial_page_returns_all() {
+        // 5 assets — a single partial page, so the loop stops after the first request.
+        let result = list_assets(&PagedMock::new(vec![mk_assets(5)])).unwrap();
+        assert_eq!(result.len(), 5);
+    }
+
+    #[test]
+    fn list_assets_partial_last_page_terminates_early() {
+        // 100 + 50: the 50-item page is smaller than the 100-item page, so the loop
+        // breaks without making a third request.
+        let mock = PagedMock::new(vec![mk_assets(100), mk_assets(50)]);
+        let result = list_assets(&mock).unwrap();
+        assert_eq!(result.len(), 150);
+        assert!(
+            mock.pages.borrow().is_empty(),
+            "no third request should be made"
+        );
+    }
+
+    #[test]
+    fn list_assets_full_pages_then_empty_returns_all() {
+        // 100 + 100 + 0: two full pages followed by an empty page.
+        let result = list_assets(&PagedMock::new(vec![
+            mk_assets(100),
+            mk_assets(100),
+            vec![],
+        ]))
+        .unwrap();
+        assert_eq!(result.len(), 200);
+    }
+
+    #[test]
+    fn list_assets_multiple_full_pages_then_partial() {
+        // 100 + 100 + 73: terminates on the smaller page.
+        let result = list_assets(&PagedMock::new(vec![
+            mk_assets(100),
+            mk_assets(100),
+            mk_assets(73),
+        ]))
+        .unwrap();
+        assert_eq!(result.len(), 273);
+    }
+}
diff --git a/assets-sync/src/sync.rs b/assets-sync/src/sync.rs
index 9072692..c3965f4 100644
--- a/assets-sync/src/sync.rs
+++ b/assets-sync/src/sync.rs
@@ -333,11 +333,119 @@ fn build_operations(
 
 #[cfg(test)]
 mod tests {
     use super::*;
-    use crate::canister::{AssetDetails, AssetEncodingDetails, BatchOperationKind};
-    use candid::Nat;
-    use std::collections::HashMap;
+    use crate::canister::{
+        AssetDetails, AssetEncodingDetails, BatchOperationKind, CallType, CanisterCall,
+    };
+    use candid::{CandidType, Nat, Principal};
+    use serde::de::DeserializeOwned;
+    use std::cell::{Cell, RefCell};
+    use std::collections::{HashMap, VecDeque};
     use std::path::PathBuf;
 
+    // Mirrors the private CreateChunkResponse — same field name produces the same Candid encoding.
+    #[derive(CandidType)]
+    struct MockChunkResponse {
+        chunk_id: Nat,
+    }
+
+    struct ChunkCounter(Cell<u32>);
+
+    impl CanisterCall for ChunkCounter {
+        fn call<A, R>(&self, method: &str, _arg: A, _: CallType, _: bool) -> Result<R, String>
+        where
+            A: CandidType,
+            R: CandidType + DeserializeOwned,
+        {
+            assert_eq!(method, "create_chunk");
+            let id = self.0.get();
+            self.0.set(id + 1);
+            let bytes = candid::encode_one(MockChunkResponse {
+                chunk_id: Nat::from(id),
+            })
+            .map_err(|e| e.to_string())?;
+            candid::decode_one(&bytes).map_err(|e| e.to_string())
+        }
+    }
+
+    #[test]
+    fn upload_chunks_empty_data_creates_one_chunk() {
+        let mock = ChunkCounter(Cell::new(0));
+        let ids = upload_chunks(&mock, &Nat::from(1u32), "/f", "identity", &[]).unwrap();
+        assert_eq!(ids.len(), 1);
+    }
+
+    #[test]
+    fn upload_chunks_small_data_creates_one_chunk() {
+        let mock = ChunkCounter(Cell::new(0));
+        let ids = upload_chunks(&mock, &Nat::from(1u32), "/f", "identity", &[0u8; 100]).unwrap();
+        assert_eq!(ids.len(), 1);
+    }
+
+    #[test]
+    fn upload_chunks_at_boundary_creates_one_chunk() {
+        let mock = ChunkCounter(Cell::new(0));
+        let ids = upload_chunks(
+            &mock,
+            &Nat::from(1u32),
+            "/f",
+            "identity",
+            &[0u8; MAX_CHUNK_SIZE],
+        )
+        .unwrap();
+        assert_eq!(ids.len(), 1);
+    }
+
+    #[test]
+    fn upload_chunks_one_over_boundary_creates_two_chunks() {
+        let mock = ChunkCounter(Cell::new(0));
+        let ids = upload_chunks(
+            &mock,
+            &Nat::from(1u32),
+            "/f",
+            "identity",
+            &[0u8; MAX_CHUNK_SIZE + 1],
+        )
+        .unwrap();
+        assert_eq!(ids.len(), 2);
+    }
+
+    #[test]
+    fn upload_chunks_double_boundary_creates_two_chunks() {
+        let mock = ChunkCounter(Cell::new(0));
+        let ids = upload_chunks(
+            &mock,
+            &Nat::from(1u32),
+            "/f",
+            "identity",
+            &[0u8; MAX_CHUNK_SIZE * 2],
+        )
+        .unwrap();
+        assert_eq!(ids.len(), 2);
+    }
+
+    #[test]
+    fn upload_chunks_returns_sequential_ids() {
+        let mock = ChunkCounter(Cell::new(7));
+        // MAX_CHUNK_SIZE * 3 + 1 → div_ceil = 4 chunks.
+        let ids = upload_chunks(
+            &mock,
+            &Nat::from(1u32),
+            "/f",
+            "identity",
+            &[0u8; MAX_CHUNK_SIZE * 3 + 1],
+        )
+        .unwrap();
+        assert_eq!(
+            ids,
+            vec![
+                Nat::from(7u32),
+                Nat::from(8u32),
+                Nat::from(9u32),
+                Nat::from(10u32),
+            ]
+        );
+    }
+
     fn mk_project_asset(
         key: &str,
         media_type: &str,
@@ -551,6 +659,29 @@ mod tests {
         assert_eq!(ops.len(), 2);
     }
 
+    // prepare_asset itself skips gzip when the compressed output is not smaller
+    // than the identity bytes. A 256-byte run of all distinct byte values is
+    // maximally incompressible: gzip's ~18-byte header alone exceeds the savings.
+    #[test]
+    fn prepare_asset_skips_gzip_when_not_smaller() {
+        use std::io::Write;
+        let mut f = tempfile::Builder::new().suffix(".txt").tempfile().unwrap();
+        f.write_all(&(0u8..=255u8).collect::<Vec<u8>>()).unwrap();
+        let source = AssetSource {
+            path: f.path().to_path_buf(),
+            key: "/test.txt".to_string(),
+        };
+        let asset = prepare_asset(source, &HashMap::new()).unwrap();
+        assert!(
+            asset.encodings.contains_key("identity"),
+            "identity must be present"
+        );
+        assert!(
+            !asset.encodings.contains_key("gzip"),
+            "gzip must be absent when not smaller"
+        );
+    }
+
     // When gzip output is not smaller than identity, prepare_asset skips it, so
     // build_operations sees only the identity encoding and emits no gzip op.
     #[test]
@@ -568,4 +699,140 @@ mod tests {
         assert!(!ops.iter().any(|op| matches!(
             op,
             BatchOperationKind::SetAssetContent(a) if a.content_encoding == "gzip"
        )));
     }
+
+    // ---- Authorization tests ----
+
+    // Mock for ensure_commit_permission: handles list_permitted and grant_permission only.
+    struct PermissionMock {
+        permitted: Vec<Principal>,
+        // Tracks the `direct` flag for each grant_permission call.
+        grant_calls: RefCell<Vec<bool>>,
+    }
+
+    impl PermissionMock {
+        fn new(permitted: Vec<Principal>) -> Self {
+            Self {
+                permitted,
+                grant_calls: RefCell::new(vec![]),
+            }
+        }
+    }
+
+    impl CanisterCall for PermissionMock {
+        fn call<A, R>(&self, method: &str, _arg: A, _: CallType, direct: bool) -> Result<R, String>
+        where
+            A: CandidType,
+            R: CandidType + DeserializeOwned,
+        {
+            match method {
+                "list_permitted" => {
+                    let bytes =
+                        candid::encode_one(self.permitted.clone()).map_err(|e| e.to_string())?;
+                    candid::decode_one(&bytes).map_err(|e| e.to_string())
+                }
+                "grant_permission" => {
+                    self.grant_calls.borrow_mut().push(direct);
+                    let bytes = candid::encode_one(()).map_err(|e| e.to_string())?;
+                    candid::decode_one(&bytes).map_err(|e| e.to_string())
+                }
+                _ => panic!("unexpected method: {method}"),
+            }
+        }
+    }
+
+    // General-purpose scripted mock: pre-programs per-method response queues.
+    type MockQueue = RefCell<HashMap<String, VecDeque<Result<Vec<u8>, String>>>>;
+
+    struct SyncMock {
+        queue: MockQueue,
+    }
+
+    impl SyncMock {
+        fn new() -> Self {
+            Self {
+                queue: RefCell::new(HashMap::new()),
+            }
+        }
+
+        fn push_ok<R: CandidType>(&self, method: &str, value: R) {
+            self.queue
+                .borrow_mut()
+                .entry(method.to_string())
+                .or_default()
+                .push_back(Ok(candid::encode_one(value).unwrap()));
+        }
+
+        fn push_err(&self, method: &str, err: &str) {
+            self.queue
+                .borrow_mut()
+                .entry(method.to_string())
+                .or_default()
+                .push_back(Err(err.to_string()));
+        }
+    }
+
+    impl CanisterCall for SyncMock {
+        fn call<A, R>(&self, method: &str, _arg: A, _: CallType, _: bool) -> Result<R, String>
+        where
+            A: CandidType,
+            R: CandidType + DeserializeOwned,
+        {
+            let response = self
+                .queue
+                .borrow_mut()
+                .entry(method.to_string())
+                .or_default()
+                .pop_front()
+                .unwrap_or_else(|| panic!("no programmed response for '{method}'"));
+            match response {
+                Ok(bytes) => candid::decode_one(&bytes).map_err(|e| e.to_string()),
+                Err(e) => Err(e),
+            }
+        }
+    }
+
+    // Proxy mode: identity absent from Commit list → grant_permission called via proxy.
+    #[test]
+    fn ensure_commit_permission_grants_via_proxy_when_absent() {
+        let identity = Principal::anonymous();
+        let mock = PermissionMock::new(vec![]);
+        ensure_commit_permission(&mock, &identity.to_text()).unwrap();
+        // grant_permission must be called exactly once with direct=false (routed via proxy).
+        assert_eq!(*mock.grant_calls.borrow(), vec![false]);
+    }
+
+    // Proxy mode: identity already in Commit list → grant_permission not called.
+    #[test]
+    fn ensure_commit_permission_skips_grant_when_already_permitted() {
+        let identity = Principal::anonymous();
+        let mock = PermissionMock::new(vec![identity]);
+        ensure_commit_permission(&mock, &identity.to_text()).unwrap();
+        assert!(mock.grant_calls.borrow().is_empty());
+    }
+
+    // Direct mode: canister rejects create_batch with a permission error → sync propagates it.
+    #[test]
+    fn sync_propagates_permission_error_from_create_batch() {
+        let dir = tempfile::tempdir().unwrap();
+        std::fs::write(dir.path().join("index.html"), b"").unwrap();
+
+        let mock = SyncMock::new();
+        mock.push_ok("api_version", 2u16);
+        // Empty canister → build_operations will produce work → create_batch is called.
+        mock.push_ok("list", Vec::<AssetDetails>::new());
+        mock.push_err("create_batch", "Caller does not have Commit permission");
+
+        let result = sync(
+            &mock,
+            &[dir.path().to_str().unwrap().to_string()],
+            &Principal::anonymous().to_text(),
+            None,
+        );
+
+        let err = result.unwrap_err();
+        assert!(
+            err.contains("Commit permission"),
+            "expected permission error, got: {err}"
+        );
+    }
+}
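The chunk-count expectations asserted in the `upload_chunks` tests above are plain ceiling division, with an empty file special-cased to one (empty) chunk. A standalone sketch of the arithmetic — the `MAX_CHUNK_SIZE` value here is an assumption taken from the old test plan's "> 1.9 MB" figure, not the crate's actual constant:

```rust
// Assumed value for illustration; the real constant is defined in assets-sync.
const MAX_CHUNK_SIZE: usize = 1_900_000;

// Chunks needed for `len` bytes: ceiling division, with the empty input
// special-cased to a single chunk, matching the tests above.
fn chunk_count(len: usize) -> usize {
    if len == 0 {
        1
    } else {
        (len + MAX_CHUNK_SIZE - 1) / MAX_CHUNK_SIZE
    }
}

fn main() {
    assert_eq!(chunk_count(0), 1); // empty data still creates one chunk
    assert_eq!(chunk_count(MAX_CHUNK_SIZE), 1); // exactly at the boundary
    assert_eq!(chunk_count(MAX_CHUNK_SIZE + 1), 2); // one byte over
    assert_eq!(chunk_count(MAX_CHUNK_SIZE * 3 + 1), 4); // the sequential-ids case
    println!("chunk counts match the test expectations");
}
```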