fix: forward server timeout to MCP SDK Client requests #21

provos wants to merge 2 commits into universal-tool-calling-protocol:main
Conversation
callTool() and listTools() were called without a per-request timeout, so the SDK's 60s default always fires before serverConfig.timeout. Pass the configured timeout through to the SDK request options.
Pull request overview
This PR fixes MCP request timeout handling so UTCP’s configured serverConfig.timeout is also applied to the underlying @modelcontextprotocol/sdk client requests, preventing the SDK’s default 60s request timeout from firing first when longer server timeouts are configured.
Changes:
- Added a `_getTimeoutMs()` helper to centralize converting the MCP server timeout from seconds to milliseconds (defaulting to 30s).
- Updated `_withSession()` to use `_getTimeoutMs()` rather than an inline timeout calculation.
- Passed `{ timeout: requestTimeout }` into the `client.listTools()` and `client.callTool()` request options.
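The helper described in the changes list might look roughly like this. This is a sketch based only on the PR description; the config interface shape and the 30s default constant are assumptions, not the actual UTCP source:

```typescript
// Sketch of the _getTimeoutMs() helper described in the PR. The config
// shape and default below are assumptions from the PR description.
interface McpServerConfig {
  timeout?: number; // seconds, per UTCP server configuration
}

const DEFAULT_TIMEOUT_SECONDS = 30;

function getTimeoutMs(serverConfig: McpServerConfig): number {
  // Convert the configured timeout (seconds) to the milliseconds the
  // MCP SDK expects in its per-request options, defaulting to 30s.
  return (serverConfig.timeout ?? DEFAULT_TIMEOUT_SECONDS) * 1000;
}
```

Centralizing the conversion in one helper keeps `_withSession()` and the per-request options guaranteed to agree on the same value.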
```diff
+ const requestTimeout = this._getTimeoutMs(serverConfig);
  const mcpToolsResult = await this._withSession(serverName, serverConfig, mcpCallTemplate.auth,
-   (client) => client.listTools()
+   (client) => client.listTools(undefined, { timeout: requestTimeout })
  );
```
New behavior forwards a per-request timeout into the MCP SDK (listTools/callTool options). This is the core fix but isn’t covered by the existing MCP protocol tests. Please add a unit test that asserts the SDK client receives the expected timeout option (e.g., by monkeypatching _getOrCreateSession to return a fake client and asserting the passed options), so regressions don’t silently reintroduce the 60s default timeout issue.
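A minimal sketch of the regression test suggested above, using a hand-rolled fake client rather than a real server. The `listTools`/`callTool` method names mirror the SDK's client; the fake class, the `demo` wrapper, and the hard-coded 120s value are illustrative assumptions:

```typescript
// Sketch of the suggested test: a fake MCP client records the request
// options it receives so we can assert the timeout was forwarded.
interface RequestOptions {
  timeout?: number;
}

class FakeMcpClient {
  lastOptions: RequestOptions | undefined;

  async listTools(_params?: unknown, options?: RequestOptions) {
    this.lastOptions = options;
    return { tools: [] };
  }

  async callTool(_params: unknown, _schema?: unknown, options?: RequestOptions) {
    this.lastOptions = options;
    return { content: [] };
  }
}

async function demo() {
  const client = new FakeMcpClient();
  const requestTimeout = 120_000; // e.g. what _getTimeoutMs returns for timeout = 120

  // Stand-in for the callbacks _withSession would invoke after the fix.
  await client.listTools(undefined, { timeout: requestTimeout });
  const listToolsTimeout = client.lastOptions?.timeout;

  await client.callTool({ name: "slow_tool", arguments: {} }, undefined, {
    timeout: requestTimeout,
  });
  const callToolTimeout = client.lastOptions?.timeout;

  return { listToolsTimeout, callToolTimeout };
}
```

In the real test suite this fake would be returned from a monkeypatched `_getOrCreateSession`, as the review comment suggests, so the assertion exercises the production call sites rather than the fake directly.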
```diff
+ const requestTimeout = this._getTimeoutMs(serverConfig);
  const result = await this._withSession(serverName, serverConfig, mcpCallTemplate.auth,
-   (client) => client.callTool({ name: actualToolName, arguments: toolArgs })
+   (client) => client.callTool({ name: actualToolName, arguments: toolArgs }, undefined, { timeout: requestTimeout })
  );
```
Same as listTools, callTool is now relying on passing a per-request timeout into the MCP SDK. Please add a unit test that verifies the timeout option is forwarded for tool execution as well (ideally without a long-running real server, by using a fake McpClient and asserting the arguments).
Problem

`_withSession` wraps MCP operations in a `Promise.race` using `serverConfig.timeout`, but `callTool()` and `listTools()` call the MCP SDK Client without a per-request `timeout` option. The SDK's `DEFAULT_REQUEST_TIMEOUT_MSEC` (60s) therefore fires before the configured server timeout, making it impossible for callers to use timeouts longer than 60s. This affects any use case where MCP tool calls may take longer than 60 seconds (e.g., interactive approval flows, long-running computations).
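The failure mode is a general property of nested timeouts: when two timeouts guard the same request, the shorter one always wins, so an inner 60s SDK default makes an outer 120s `Promise.race` unreachable. A toy reproduction, using millisecond-scale stand-ins for the real values (the helper names here are illustrative, not UTCP's):

```typescript
// Toy reproduction of the bug: an "SDK" call with its own built-in default
// timeout, wrapped by an outer Promise.race timeout. The inner (shorter)
// timeout fires first no matter how large the outer one is configured.
function withTimeout<T>(p: Promise<T>, ms: number, label: string): Promise<T> {
  return Promise.race([
    p,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`${label} timeout after ${ms}ms`)), ms)
    ),
  ]);
}

// Stand-in for an SDK request with a built-in default timeout
// (60ms here playing the role of DEFAULT_REQUEST_TIMEOUT_MSEC's 60s).
function sdkRequest(work: Promise<string>, requestTimeoutMs = 60): Promise<string> {
  return withTimeout(work, requestTimeoutMs, "sdk-default");
}

async function demo(): Promise<string> {
  const slowTool = new Promise<string>((resolve) =>
    setTimeout(() => resolve("done"), 90) // tool "takes 90s"
  );
  try {
    // Outer "serverConfig" timeout of 120 -- the inner default 60 still fires first.
    return await withTimeout(sdkRequest(slowTool), 120, "server-config");
  } catch (e) {
    return (e as Error).message;
  }
}
```

Forwarding the configured value into `sdkRequest` (i.e., into the SDK's per-request options) is what removes the inner 60-unit ceiling.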
Fix

Extract a `_getTimeoutMs()` helper and pass the configured timeout to the `client.callTool()` and `client.listTools()` request options so the per-request timeout matches the server configuration.

Test
Tested with a slow MCP server (65s delay per call) and `serverConfig.timeout = 120`. Existing unit tests pass (`bun test packages/mcp/tests/`).

Summary by cubic
Forward the configured server timeout to MCP SDK client requests so listTools and callTool respect timeouts longer than 60s. Long-running tools now align with serverConfig.timeout.
Written for commit 75a3044.