Conversation
…nections
When a connector process dies, the raw IPC socket closes but StdioServerTransport (designed for stdin/stdout) has no lifecycle listener. Protocol._transport stays set, rejecting all future connections with "Already connected". Leaked FDs accumulate from the unhandled rejections. This change listens for close/end on the raw socket and forwards to transport.close() so Protocol resets. Destroys sockets on failed connect() to prevent FD leaks.
I found that having multiple connections to the MCP server (say, Claude Desktop and Claude Code at the same time) was causing issues. I added a multiplexer that sits between MCP clients and the server; the mux automatically daemonizes and then self-terminates after 5 minutes of idle time.
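Roughly, the daemonize-then-idle-out behaviour could look like the sketch below. The constants, flag name, and function names are illustrative, not the actual nexus-mux.js internals:

```typescript
import { spawn } from 'child_process';

const IDLE_TIMEOUT_MS = 5 * 60 * 1000;   // self-terminate after five idle minutes (assumed constant)

// Foreground invocation: relaunch this script detached so the daemon outlives the
// MCP client that started it, then the foreground copy just bridges stdio to the
// daemon's proxy socket.
function ensureDaemon(): void {
  const child = spawn(process.execPath, [__filename, '--daemon'], {
    detached: true,
    stdio: 'ignore',
  });
  child.unref();   // let the foreground process exit without waiting on the daemon
}

// Daemon side: re-arm the idle timer whenever the client count changes, and exit
// once no clients have been connected for the full timeout.
let idleTimer: NodeJS.Timeout | undefined;
function onClientCountChanged(clientCount: number): void {
  if (idleTimer) clearTimeout(idleTimer);
  idleTimer = undefined;
  if (clientCount === 0) {
    idleTimer = setTimeout(() => process.exit(0), IDLE_TIMEOUT_MS);
  }
}
```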
ProfSynapse left a comment
Thanks for this — you've identified two real problems and addressed both cleanly.
Transport fix (IPCTransportManager.ts) ✅
This is correct and fixes a genuine bug. Without wiring close/end to transport.close(), a crashed connector permanently wedges Protocol._transport and all subsequent connections are silently rejected. The closed guard to prevent double-close and the FD cleanup on failed connect() are both the right calls. Happy to merge this independently.
Multiplexer (nexus-mux.js) — needs one fix before merge
The architecture is solid (daemon + stdio bridge, ID rewriting, lifecycle interception, idle shutdown) and the tests are thorough. One blocking issue:
The Obsidian socket path is hardcoded wrong.
const OBSIDIAN_SOCKET = '/tmp/nexus_mcp_core.sock';

The plugin generates vault-specific socket paths:
/tmp/nexus_mcp_${sanitizedVaultName}.sock
So the mux will only connect to Obsidian if the vault is literally named "core". For any other vault name it silently fails to connect.
The fix is to accept the Obsidian socket path as a CLI argument (or env var), then document it in the README setup instructions. Since the mux lives inside the vault's plugin dir, __dirname could also be used to derive it — up to you on the approach.
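For illustration, a sketch of the CLI-flag/env-var approach with a __dirname fallback. The flag name, env var name, directory depth, and sanitization rule are all assumptions here, not what the plugin actually does:

```typescript
import * as path from 'path';

// Resolution order: explicit CLI flag, then an env var, then derive the vault name
// from the plugin directory and rebuild the /tmp/nexus_mcp_<vault>.sock path.
function resolveObsidianSocket(argv: string[], env: NodeJS.ProcessEnv): string {
  const flag = argv.find((a) => a.startsWith('--obsidian-socket='));
  if (flag) return flag.slice('--obsidian-socket='.length);

  if (env.NEXUS_MCP_SOCKET) return env.NEXUS_MCP_SOCKET;   // env var name is an assumption

  // nexus-mux.js lives in <vault>/.obsidian/plugins/<plugin>/, so walk up three levels
  // to the vault folder; the sanitizer below is assumed to mirror the plugin's.
  const vaultDir = path.resolve(__dirname, '..', '..', '..');
  const sanitized = path.basename(vaultDir).replace(/[^A-Za-z0-9_-]/g, '_');
  return `/tmp/nexus_mcp_${sanitized}.sock`;
}
```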
Suggestion: If you want the transport fix to land sooner, you could either split into two PRs or I can cherry-pick that commit — let me know which you prefer. Otherwise, fix the socket path and this is ready to go.
Oh snap, good point! I'm AFK but will send a fix for the directory name tomorrow.
Just went to fix this and looks like you already got to it, so I'll close this out. Thank you for the work; I was super excited to not have to build my own semantic search agent :P
I was having some problems with connections in general, but especially from multiple clients; I addressed them by wiring in more socket lifecycle handling and adding a multiplexer.
Socket lifecycle fix:
handleSocketConnection never wired the raw socket's close/end events to transport.close(). When a connector process died, Protocol._transport stayed set permanently, so all subsequent connections were silently rejected with "Already connected", and leaked FDs accumulated from the unhandled rejections. The fix listens for close/end on the raw socket and forwards to transport.close() so the SDK's Protocol resets, and destroys sockets on failed connect() to prevent FD leaks.
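For reference, a minimal sketch of that wiring, assuming a transport wrapper that exposes connect()/close() like the MCP SDK transports. The interface and handler signature are illustrative, not the actual IPCTransportManager.ts code:

```typescript
import * as net from 'net';

// Hypothetical shape of the transport wrapper; the real code lives in
// IPCTransportManager.ts and exposes connect()/close() like the SDK transports.
interface ManagedTransport {
  connect(socket: net.Socket): Promise<void>;
  close(): Promise<void>;
}

async function handleSocketConnection(socket: net.Socket, transport: ManagedTransport): Promise<void> {
  let closed = false;
  const closeOnce = () => {
    if (closed) return;          // guard: 'end' and 'close' can both fire for one socket
    closed = true;
    void transport.close();      // lets the SDK's Protocol clear _transport for the next client
  };

  // A dead connector closes the raw socket; forward that to the transport so the
  // server accepts the next connection instead of staying "Already connected".
  socket.on('end', closeOnce);
  socket.on('close', closeOnce);

  try {
    await transport.connect(socket);
  } catch (err) {
    socket.destroy();            // release the FD when connect() fails
    throw err;
  }
}
```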
Optional multiplexer (nexus-mux.js): The plugin's IPC socket supports one transport at a time. For users running multiple MCP clients simultaneously (Claude Code, Claude Desktop, Claudian), the mux holds one persistent session with Obsidian and fans out to any number of clients via a proxy socket. It rewrites request IDs for correct response routing, intercepts session-lifecycle messages per client, auto-starts a background daemon on first connection, and self-terminates after 5 minutes idle. Not required for single-client use.
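A simplified sketch of the ID rewriting used for response routing; the data structures and function names are illustrative, not the actual nexus-mux.js internals:

```typescript
// Map the mux's rewritten request id back to the client that sent it,
// so responses from Obsidian can be routed to the right connection.
type ClientId = number;
interface PendingRequest { clientId: ClientId; originalId: string | number; }

let nextMuxId = 1;
const pending = new Map<number, PendingRequest>();

// Client -> Obsidian: stamp a unique id so concurrent clients cannot collide.
function rewriteOutgoing(clientId: ClientId, msg: any): any {
  if (msg.id === undefined) return msg;            // notifications pass through untouched
  const muxId = nextMuxId++;
  pending.set(muxId, { clientId, originalId: msg.id });
  return { ...msg, id: muxId };
}

// Obsidian -> client: restore the original id and report which client owns it.
function routeIncoming(msg: any): { clientId: ClientId; msg: any } | null {
  if (msg.id === undefined) return null;           // server notifications are handled elsewhere
  const entry = pending.get(msg.id);
  if (!entry) return null;
  pending.delete(msg.id);
  return { clientId: entry.clientId, msg: { ...msg, id: entry.originalId } };
}
```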