Bug Report: classifyTopic fails when summarizer is configured with Anthropic endpoint
Environment
- OpenClaw version: 2026.4.29
- MemOS plugin: memos-local-openclaw-plugin v1.0.8
- Model: anthropic/kimi-for-coding
- Endpoint: https://api.kimi.com/coding/v1/messages
Problem
TaskProcessor.classifyTopic always fails with 404 when the MemOS summarizer is configured to use an Anthropic-compatible endpoint (e.g., Kimi Code's /messages endpoint).
Error Log
classifyTopic failed (anthropic/kimi-for-coding), trying next:
Error: OpenAI topic-classifier failed (404):
{"error":{"message":"The requested resource was not found","type":"resource_not_found_error"}}
Root Cause
classifyTopic in src/ingest/providers/index.ts always calls classifyTopicOpenAI(), which sends requests in OpenAI format (/chat/completions), regardless of the summarizer's configured provider.
| Component | Format Used | Endpoint |
| --- | --- | --- |
| summarizer (configured) | Anthropic Messages | /coding/v1/messages ✅ |
| classifyTopic (hardcoded) | OpenAI Chat Completions | /coding/v1 ❌ |
When the endpoint only supports Anthropic format, the OpenAI-style classifyTopic request returns 404.
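The path mismatch can be illustrated with a minimal sketch. `buildRequestUrl` and the `Provider` type are hypothetical names for illustration, not code from the plugin:

```typescript
// Illustrative only: shows how the request path diverges per provider.
type Provider = "openai" | "anthropic";

function buildRequestUrl(baseUrl: string, provider: Provider): string {
  // OpenAI-compatible APIs expose /chat/completions; Anthropic-compatible
  // APIs expose /messages. classifyTopic currently always uses the former.
  const path = provider === "openai" ? "/chat/completions" : "/messages";
  return baseUrl.replace(/\/$/, "") + path;
}

// The summarizer targets the Anthropic-style path on this base URL:
buildRequestUrl("https://api.kimi.com/coding/v1", "anthropic");
// → "https://api.kimi.com/coding/v1/messages"

// classifyTopic hardcodes the OpenAI-style path against the same base,
// which Kimi Code does not serve, hence the 404:
buildRequestUrl("https://api.kimi.com/coding/v1", "openai");
// → "https://api.kimi.com/coding/v1/chat/completions"
```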
Why This Didn't Happen Before
- DeepSeek supports both OpenAI and Anthropic formats simultaneously, so the OpenAI-style classifyTopic request succeeds even when the summarizer uses an Anthropic config.
- Kimi Code's Anthropic endpoint (/coding/v1/messages) only accepts Anthropic format, rejecting OpenAI-style requests.
Impact
- Every incoming message triggers a classifyTopic call that fails and falls back.
- Wastes tokens (small per-call, but adds up).
- Pollutes logs with repeated 404 errors.
- Increases event-loop latency (related to observed Gateway slowdowns).
Proposed Fix
Option A: Make classifyTopic respect the summarizer's configured provider and call the appropriate implementation (OpenAI vs Anthropic).
Option B: Add a separate topicClassifier configuration key so users can explicitly set a different provider/model for topic classification.
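Both options can be sketched together. `SummarizerConfig`, `classifyTopicAnthropic`, and `resolveClassifierConfig` are hypothetical names, not the plugin's actual API; the stubs stand in for the real async HTTP implementations:

```typescript
// Assumed config shape for this sketch.
interface SummarizerConfig {
  provider: "openai" | "anthropic";
  model: string;
  baseUrl: string;
}

// Stubs standing in for the real implementations (async HTTP calls in the
// plugin) so the dispatch logic below is self-contained.
function classifyTopicOpenAI(text: string, cfg: SummarizerConfig): string {
  return `openai:${cfg.model}`; // real code POSTs to .../chat/completions
}
function classifyTopicAnthropic(text: string, cfg: SummarizerConfig): string {
  return `anthropic:${cfg.model}`; // real code would POST to .../messages
}

// Option A: dispatch on the configured provider instead of hardcoding OpenAI.
function classifyTopic(text: string, cfg: SummarizerConfig): string {
  return cfg.provider === "anthropic"
    ? classifyTopicAnthropic(text, cfg)
    : classifyTopicOpenAI(text, cfg);
}

// Option B: an explicit topicClassifier key overrides the summarizer config.
function resolveClassifierConfig(settings: {
  summarizer: SummarizerConfig;
  topicClassifier?: SummarizerConfig;
}): SummarizerConfig {
  return settings.topicClassifier ?? settings.summarizer;
}
```

The two options compose: resolve the effective config first (Option B), then dispatch on its provider (Option A).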
Related Code
src/ingest/providers/index.ts: classifyTopic()
src/ingest/providers/openai.ts: classifyTopicOpenAI()
src/ingest/task-processor.ts: onChunksIngested() → classifyTopic()
Additional context: This was discovered while investigating why QQ Bot responses became slow. The repeated classifyTopic failures contribute to event-loop blocking in the OpenClaw Gateway.