refactor(web-client): read borrows #29
Open
WiktorStarczewski wants to merge 6 commits into
Conversation
Migrated from 0xMiden/miden-client#2080 (author: @igamigo) as part of the web-sdk split. The migrated patch contains 3-way merge conflicts; resolution is needed before merge.
Collaborator
SantiagoPittella
left a comment
I see that there is a small difference from the original PR by @igamigo: this one does not include the changes to docs/typedoc/web-client/classes/MidenClient.md and CHANGELOG.md.
Resolves the committed conflict markers in:
- crates/web-client/src/account.rs (get_accounts):
The function header above the conflict was already migrated to the
new pattern (`let client = self.get_inner()?;` instead of
`if let Some(client) = self.get_mut_inner()`). The 'ours' branch of
the conflict still had the old body, including an orphan `else`
clause without a matching `if let`. Take 'theirs' (flat
`Ok(result.into_iter()...)`), which matches the surrounding code.
- crates/web-client/src/export.rs (export_account_file):
Same as above — the function had already been migrated; only the
indentation differed between sides. Take 'theirs' (8-space indent
matches the rest of the function body).
- crates/web-client/src/tags.rs (list_tags):
'ours' is the pre-refactor pattern (`&mut self` + get_mut_inner +
if-let), 'theirs' is the new pattern (`&self` + get_inner?). Take
'theirs'; this is the whole point of the migration (see the sketch
after this list).
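For readers outside the codebase, here is a minimal sketch of the before/after borrow patterns the bullets above refer to. `Inner`, `Tag`, the method bodies, and the `String` error type are stand-ins, not the actual web-client code (the real wrappers return `JsValue` errors); only the shape of the two patterns is the point.

```rust
// Stand-ins for the wrapped miden-client and its types.
struct Tag(u32);
struct Inner;

impl Inner {
    fn list_tags(&self) -> Vec<Tag> {
        Vec::new()
    }
}

struct WebClient {
    inner: Option<Inner>,
}

impl WebClient {
    // 'ours' / pre-refactor: exclusive borrow plus if-let-else,
    // repeated in every method.
    fn list_tags_old(&mut self) -> Result<Vec<Tag>, String> {
        if let Some(client) = self.inner.as_mut() {
            Ok(client.list_tags())
        } else {
            Err("Client not initialized".to_string())
        }
    }

    // 'theirs' / post-refactor: shared borrow with a single helper
    // that turns the uninitialized case into an early-return error.
    fn get_inner(&self) -> Result<&Inner, String> {
        self.inner
            .as_ref()
            .ok_or_else(|| "Client not initialized".to_string())
    }

    fn list_tags(&self) -> Result<Vec<Tag>, String> {
        let client = self.get_inner()?;
        Ok(client.list_tags())
    }
}

fn main() {
    let mut client = WebClient { inner: Some(Inner) };
    assert!(client.list_tags_old().is_ok()); // old exclusive-borrow path
    assert!(client.list_tags().is_ok()); // new shared-borrow path
}
```

Beyond relaxing `&mut self` to `&self`, the new pattern centralizes the "not initialized" error in one helper instead of repeating the if-let-else in every method.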
cargo check --workspace --target wasm32-unknown-unknown is clean against
current main (miden-client 0.14.4 from crates.io). No dependency retarget
is needed: upstream miden-client#2080 was closed without merging, and the
web-sdk infrastructure already supports the read-borrow signatures.
Apply nightly cargo fmt to the 3 conflict-resolved files. This also picks up pre-existing format drift in unrelated parts of those files, drift that the upstream PR's cargo fmt commit ('style: cargo fmt' on the original miden-client#2080) would have caught.
Reviewer feedback (@SantiagoPittella) noted that the migration of miden-client#2080 didn't carry the original PR's CHANGELOG entry over to web-sdk. This commit ports it. The original PR also touched docs/typedoc/web-client/classes/MidenClient.md on the miden-client side, but those edits were for lastAuthError() (from miden-client#2058) and waitForIdle() (from miden-client#2057), both unrelated to the read-borrows refactor that #2080 and this PR ship; they were just riding along on the upstream branch. web-sdk also doesn't check in typedoc output, so there is no equivalent file to update here.
WiktorStarczewski added a commit that referenced this pull request on Apr 30, 2026
Observed flake: the probe returns HTTP 200 once on the first attempt that clears the connection-refused phase and exits; the tests start, and then ALL of them fail with 'TypeError: Failed to fetch' against the gRPC backend. The single-probe gate isn't strict enough: a one-shot 200 (e.g. tonic-health responding before the rest of the dispatcher is fully wired) currently passes. Upgrade the readiness signal to N consecutive HTTP successes spaced PROBE_INTERVAL apart (defaults: 3 successes, 0.5s apart), so the probe only declares the server ready after roughly 1s of demonstrably stable responses. Any non-success in the streak resets it to zero and the slow-poll loop resumes, so a momentary blip during init doesn't get counted twice on either side. Tracked occurrences across recent PR runs: web-sdk PR #23 ci-shard-4, PR #29 ci-shard-1 + ci-shard-4, PR #27 multiple shards.
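A minimal sketch of the streak logic described above, not the actual CI probe: `probe_once`, the URL, and the attempt cap are hypothetical stand-ins, and the fast-poll/slow-poll distinction is collapsed into a single interval for brevity.

```rust
use std::{thread, time::Duration};

// Defaults from the commit message: 3 consecutive successes, 0.5s apart.
const REQUIRED_SUCCESSES: u32 = 3;
const PROBE_INTERVAL: Duration = Duration::from_millis(500);

// Placeholder for one HTTP check; the real probe would request the
// gRPC-web endpoint and report whether it got a success status.
fn probe_once(url: &str) -> bool {
    let _ = url;
    true
}

/// Declares the server ready only after REQUIRED_SUCCESSES consecutive
/// successful probes spaced PROBE_INTERVAL apart. Any failure resets the
/// streak to zero, so a one-shot 200 during startup no longer passes.
fn wait_until_ready(url: &str, max_attempts: u32) -> bool {
    let mut streak = 0;
    for _ in 0..max_attempts {
        if probe_once(url) {
            streak += 1;
            if streak >= REQUIRED_SUCCESSES {
                return true;
            }
        } else {
            streak = 0; // momentary blip: start the streak over
        }
        thread::sleep(PROBE_INTERVAL);
    }
    false
}

fn main() {
    // Hypothetical endpoint and attempt budget, for illustration only.
    let ready = wait_until_ready("http://localhost:8080/health", 120);
    println!("server ready: {ready}");
}
```

With these defaults, readiness requires three successes across roughly one second, which is where the "~1s of demonstrably stable response" figure in the commit message comes from.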
Migrated from miden-client#2080 (author: @igamigo) as part of the web-sdk split (#1992 / #2135).
The miden-client/Rust-side changes from the original PR remain on miden-client#2080. (Note: the upstream PR was closed without merging; this web-sdk-side migration compiles cleanly against current main's miden-client without needing the upstream change, so it's self-contained.)