
Fix/socket lifecycle cleanup #24

Closed
DylanLacey wants to merge 2 commits into ProfSynapse:main from DylanLacey:fix/socket-lifecycle-cleanup

Conversation

@DylanLacey
Contributor

I was having some connection problems in general, but also when connecting from multiple clients; I addressed them by wiring in some more socket lifecycle logic and adding a multiplexer.

  • Socket lifecycle fix: handleSocketConnection never wired the raw socket's close/end events to transport.close(). When a connector process died, Protocol._transport stayed set permanently — all subsequent connections were silently rejected with "Already connected", and leaked FDs accumulated from the unhandled rejections. The fix listens for close/end on the raw socket and forwards to transport.close() so the SDK's Protocol resets, and destroys sockets on failed connect() to prevent FD leaks.

  • Optional multiplexer (nexus-mux.js): The plugin's IPC socket supports one transport at a time. For users running multiple MCP clients simultaneously (Claude Code, Claude Desktop, Claudian), the mux holds one persistent session with Obsidian and fans out to any number of clients via a proxy socket. It rewrites request IDs for correct response routing, intercepts session-lifecycle messages per-client, auto-starts a background daemon on first connection, and self-terminates after 5 minutes idle. Not required for single-client use.

…nections

When a connector process dies, the raw IPC socket closes but StdioServerTransport (designed for stdin/stdout) has no lifecycle listener. Protocol._transport stays set, rejecting all future connections with "Already connected". Leaked FDs accumulate from the unhandled rejections.

This change listens for close/end on the raw socket and forwards to transport.close() so Protocol resets. Destroys sockets on failed connect() to prevent FD leaks.
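
For reference, a minimal sketch of the wiring (names are illustrative, not the exact code in IPCTransportManager.ts):

    import * as net from 'node:net';

    // Forward raw-socket teardown to the SDK transport so the Protocol's
    // transport slot is cleared when a connector process dies.
    function wireSocketLifecycle(socket: net.Socket, transport: { close(): Promise<void> }): void {
      let closed = false; // 'close' and 'end' can both fire; only close the transport once

      const teardown = (): void => {
        if (closed) return;
        closed = true;
        // Forwarding to transport.close() is what lets the next connection
        // succeed instead of being rejected with "Already connected".
        transport.close().catch(() => { /* nothing left to clean up */ });
      };

      socket.on('close', teardown);
      socket.on('end', teardown);
      socket.on('error', () => socket.destroy()); // also release the FD if the socket errors out
    }
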
I found that having multiple connections to the MCP server (say, Claude Desktop and Claude Code) was causing issues, so I added a multiplexer that sits between MCP clients and the server.

The mux automatically daemonizes and then self-terminates after 5 minutes of inactivity.
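
A rough sketch of the two mechanisms the mux relies on (assumed names, not the shipped nexus-mux.js): per-client request ID rewriting so responses route back to the right client, and an idle timer for the self-termination.

    type JsonRpcId = string | number;

    // Map each rewritten (mux-global) ID back to the client that sent it
    // and the ID that client expects to see in the response.
    const pending = new Map<number, { clientId: number; originalId: JsonRpcId }>();
    let nextMuxId = 1;

    // Client -> Obsidian: give every request a globally unique ID before forwarding.
    function rewriteRequest(clientId: number, msg: { id?: JsonRpcId; [key: string]: unknown }): object {
      if (msg.id === undefined) return msg; // notifications pass through untouched
      const muxId = nextMuxId++;
      pending.set(muxId, { clientId, originalId: msg.id });
      return { ...msg, id: muxId };
    }

    // Obsidian -> clients: restore the original ID and report which client it belongs to.
    function routeResponse(msg: { id?: JsonRpcId; [key: string]: unknown }): { clientId: number; msg: object } | undefined {
      if (typeof msg.id !== 'number') return undefined; // not a response to a forwarded request
      const entry = pending.get(msg.id);
      if (!entry) return undefined;
      pending.delete(msg.id);
      return { clientId: entry.clientId, msg: { ...msg, id: entry.originalId } };
    }

    // Idle shutdown: exit once no clients have been connected for five minutes.
    const IDLE_MS = 5 * 60 * 1000;
    let idleTimer: NodeJS.Timeout | undefined;
    function onClientCountChanged(count: number): void {
      if (idleTimer) clearTimeout(idleTimer);
      idleTimer = count === 0 ? setTimeout(() => process.exit(0), IDLE_MS) : undefined;
    }
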
@ProfSynapse
Owner

Thanks for this — you've identified two real problems and addressed both cleanly.

Transport fix (IPCTransportManager.ts) ✅

This is correct and fixes a genuine bug. Without wiring close/end to transport.close(), a crashed connector permanently wedges Protocol._transport and all subsequent connections are silently rejected. The closed guard to prevent double-close and the FD cleanup on failed connect() are both the right calls. Happy to merge this independently.

Multiplexer (nexus-mux.js) — needs one fix before merge

The architecture is solid (daemon + stdio bridge, ID rewriting, lifecycle interception, idle shutdown) and the tests are thorough. One blocking issue:

The Obsidian socket path is hardcoded wrong.

const OBSIDIAN_SOCKET = '/tmp/nexus_mcp_core.sock';

The plugin generates vault-specific socket paths:

/tmp/nexus_mcp_${sanitizedVaultName}.sock

So the mux will only connect to Obsidian if the vault is literally named "core". For any other vault name it silently fails to connect.

The fix is to accept the Obsidian socket path as a CLI argument (or env var), then document it in the README setup instructions. Since the mux lives inside the vault's plugin dir, __dirname could also be used to derive it — up to you on the approach.
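
For example, something along these lines (the env var name is just illustrative):

    // Resolve the Obsidian socket path instead of hardcoding the "core" vault:
    // CLI argument first, then an environment variable, then the old default.
    const OBSIDIAN_SOCKET: string =
      process.argv[2] ??                    // e.g. node nexus-mux.js /tmp/nexus_mcp_MyVault.sock
      process.env.NEXUS_OBSIDIAN_SOCKET ??  // illustrative env var name
      '/tmp/nexus_mcp_core.sock';           // current hardcoded fallback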

Suggestion: If you want the transport fix to land sooner, you could either split into two PRs or I can cherry-pick that commit — let me know which you prefer. Otherwise, fix the socket path and this is ready to go.

@DylanLacey
Contributor Author

DylanLacey commented Feb 21, 2026 via email

@DylanLacey
Contributor Author

Just went to fix this and it looks like you already got to it, so I'll close this out. Thank you for the work; I was super excited to not have to build my own semantic search agent :P

@DylanLacey closed this Feb 22, 2026