
fix: tunnel toggle spinner, stale proxy state, and Linux AppImage crash #144

Merged
zachsmith1 merged 7 commits into main from fix/tunnel-toggle-spinner on Apr 17, 2026
Conversation

@zachsmith1 (Contributor) commented Apr 14, 2026

Summary

Fix 1: Spurious spinner on tunnel toggle

After toggling a tunnel on/off, the code was optimistically calling `upsert_tunnel` with the API response. The response captures `accepted/programmed` at the moment of the call, which may be transiently false while Envoy Gateway reconciles the EPP update. This stomped the cache with stale status, causing a spinner to appear on the tunnel card even when the tunnel was healthy.

Removed the `upsert_tunnel` call; `bump_tunnel_refresh` now triggers an immediate re-poll that returns the correct status without overwriting the cache.
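The before/after can be sketched with simplified stand-in types (`Status`, `Tunnels`, and the method names here are illustrative, not the real `proxies_list.rs` API):

```rust
use std::collections::HashMap;

// Simplified stand-in for the cached tunnel status.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status {
    Ready,
    Reconciling,
}

struct Tunnels {
    cache: HashMap<String, Status>,
    refresh_requested: bool,
}

impl Tunnels {
    // After toggling, do NOT write the API response into the cache: its
    // accepted/programmed flags may be transiently false while the gateway
    // reconciles, which would show a spinner on a healthy tunnel.
    fn on_toggled(&mut self, _id: &str, _response_status: Status) {
        // Old code: self.cache.insert(_id.to_string(), _response_status);
        self.refresh_requested = true; // i.e. bump_tunnel_refresh
    }

    // The immediate re-poll writes the settled status instead.
    fn on_poll(&mut self, id: &str, status: Status) {
        self.cache.insert(id.to_string(), status);
        self.refresh_requested = false;
    }
}
```

The key point is that the toggle handler only schedules a refresh; the cache is written exclusively from poll results, so a transient `Reconciling` in the toggle response can never overwrite a healthy cached status.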

Fix 2: Stale proxy state allows traffic through disabled tunnels

When a tunnel is deleted and recreated with the same `host:port` endpoint but a new resource ID, the old proxy state entry lingers in the `ListenNode`. `tcp_proxy_exists` then sees the stale enabled entry and allows traffic through even when the new tunnel is disabled.

After syncing current tunnel states, scan for proxy entries whose `host:port` matches a current tunnel but whose ID is not in the current set, and remove them. Scoped to same-endpoint matches to avoid touching entries belonging to other projects.
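The cleanup can be sketched as a single pass over the proxy list with precomputed lookup sets (`ProxyEntry` and the ID-to-endpoint map are assumptions about shape for this sketch, not the real `ListenNode` state):

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical simplified proxy-state entry; the real state differs,
// but the matching logic has the same shape.
struct ProxyEntry {
    id: String,
    endpoint: String, // "host:port"
}

/// IDs of proxy entries whose endpoint matches a current tunnel but whose
/// ID is not in the current set. Scoping to same-endpoint matches avoids
/// touching entries that belong to other projects.
fn stale_proxy_ids(
    proxies: &[ProxyEntry],
    current: &HashMap<String, String>, // tunnel ID -> "host:port"
) -> Vec<String> {
    let current_ids: HashSet<&str> = current.keys().map(String::as_str).collect();
    let current_endpoints: HashSet<&str> = current.values().map(String::as_str).collect();
    proxies
        .iter()
        .filter(|p| {
            current_endpoints.contains(p.endpoint.as_str())
                && !current_ids.contains(p.id.as_str())
        })
        .map(|p| p.id.clone())
        .collect()
}
```

An entry with a different endpoint is left alone even if its ID is unknown, which is what keeps other projects' proxy state out of scope.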

Fix 3: Remove n0des diagnostics (superseded by iroh-services)

`N0DES_API_SECRET` / `BUILD_N0DES_API_SECRET` is superseded by `IROH_SERVICES_API_KEY` / `BUILD_IROH_SERVICES_API_KEY` wired up in `diagnostics.rs`. Removes `n0des_api_secret_from_env`, `build_n0des_client_opt`, `build_n0des_client`, and the `_n0des` fields from `ListenNode`/`ConnectNode`. A stale baked-in `BUILD_N0DES_API_SECRET` (old `n0des` prefix, now invalid with iroh-services 0.8+) was crashing the app on startup.

Fix 4: Bundle missing `libayatana-indicator3` in Linux AppImage

PR #141 bundled `libayatana-appindicator3` to fix the Linux startup panic, but missed `libayatana-indicator3` — a transitive runtime dependency of `libayatana-appindicator3`. The AppImage `dlopen` of `appindicator3` was still failing on distros without it installed (e.g. Fedora Silverblue).

Test plan

  • Toggle a tunnel off and on — confirm no spurious spinner appears
  • Delete a tunnel and recreate it with the same endpoint — confirm traffic is blocked when the new tunnel is disabled
  • Confirm other project tunnels with different endpoints are unaffected
  • Linux AppImage runs on Fedora Silverblue without the appindicator panic

@zachsmith1 zachsmith1 marked this pull request as ready for review April 17, 2026 16:07
zachsmith1 and others added 4 commits April 17, 2026 09:10
@zachsmith1 zachsmith1 changed the title from "fix: tunnel toggle spinner and stale proxy state" to "fix: tunnel toggle spinner, stale proxy state, and Linux AppImage crash" on Apr 17, 2026
@zachsmith1 zachsmith1 requested a review from Copilot April 17, 2026 21:49
@zachsmith1 zachsmith1 merged commit f4edc90 into main Apr 17, 2026
11 checks passed

Copilot AI left a comment


Pull request overview

Fixes several UX, correctness, and packaging issues around tunnel toggling/state synchronization, diagnostics configuration, and Linux AppImage runtime dependencies.

Changes:

  • Stop overwriting cached tunnel status with transient toggle responses; rely on immediate re-poll instead.
  • Prune stale local proxy-state entries when tunnels are recreated with the same endpoint but a new resource ID.
  • Remove legacy n0des diagnostics plumbing and bundle libayatana-indicator3 into the AppImage.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

| File | Description |
| --- | --- |
| ui/src/views/proxies_list.rs | Removes optimistic cache upsert on toggle; triggers refresh to avoid transient spinner. |
| lib/src/tunnels.rs | Adds stale proxy-state cleanup for same-endpoint/different-ID entries during tunnel list sync. |
| lib/src/node.rs | Removes n0des-related client/secret wiring; simplifies node construction. |
| .github/workflows/bundle.yml | Bundles libayatana-indicator3 alongside existing AppImage libs to prevent runtime dlopen failure. |


Comment thread lib/src/tunnels.rs
Comment on lines +281 to +285
let stale_ids: Vec<String> = self
    .listen
    .proxies()
    .into_iter()
    .filter(|p| {

Copilot AI Apr 17, 2026


self.listen.proxies() clones the full proxy list (via to_vec()) and is called once per tunnel, making this stale-scan O(tunnels × proxies) with repeated allocations. Consider pulling let proxies = self.listen.proxies(); out of the loop and computing all stale IDs in a single pass (e.g., precompute a set of current endpoints and collect matching stale IDs into a HashSet before removing).

Comment thread lib/src/tunnels.rs
    .collect();
for id in stale_ids {
    if let Err(err) = self.listen.remove_proxy_state(&id).await {
        warn!(tunnel_id = %id, "Failed to remove stale proxy state: {err:#}");

Copilot AI Apr 17, 2026


The structured log field tunnel_id is misleading here because id refers to a stale proxy/resource ID being removed, not a tunnel from the server list. Use a field name like resource_id/proxy_id (and keep tunnel_id for actual tunnel IDs) to avoid confusing diagnostics.

Suggested change:
- warn!(tunnel_id = %id, "Failed to remove stale proxy state: {err:#}");
+ warn!(proxy_id = %id, "Failed to remove stale proxy state: {err:#}");

