I heard about Open-LLM-VTuber and decided to run it on my computer. When I tried to run it and send a message, I saw the error messages below.
The only changes I made were setting `character_config.agent_config.agent_settings.basic_memory_agent.llm_provider` to `'openai_compatible_llm'`,
```yaml
  agent_settings:
    basic_memory_agent:
      # The Basic AI Agent. Nothing fancy.
      # Choose one of the LLM providers from llm_configs
      # and set the required parameters in the corresponding field.
      # Examples:
      # 'openai_compatible_llm', 'llama_cpp_llm', 'claude_llm', 'ollama_llm',
      # 'openai_llm', 'gemini_llm', 'zhipu_llm', 'deepseek_llm', 'groq_llm',
      # 'mistral_llm', 'lmstudio_llm', and more
      llm_provider: 'openai_compatible_llm'
      # Let the AI speak as soon as the first comma of the first sentence
      # is received, to reduce latency.
      faster_first_response: True
      # Method for segmenting sentences: 'regex' or 'pysbd'
      segment_method: 'pysbd'
      # Use MCP (Model Context Protocol) Plus to let the LLM use tools.
      # 'Plus' means it can call tools through the OpenAI API.
      use_mcpp: True
      mcp_enabled_servers: ["time", "ddg-search"]  # Enabled MCP servers
```
and configuring `character_config.agent_config.llm_configs.openai_compatible_llm` as below.
```yaml
  # OpenAI-compatible inference backend
  openai_compatible_llm:
    base_url: 'http://localhost:8080/v1'
    llm_api_key: ''
    organization_id: null
    project_id: null
    model: ''
    temperature: 1.0  # value between 0 and 2
    interrupt_method: 'user'
    # This is the method used to deliver the interruption signal.
    # If the provider supports inserting a system prompt anywhere in the
    # chat memory, use 'system'; otherwise use 'user'. You usually don't
    # need to change this setting.
```
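Before looking at the logs, it may be worth confirming that something is actually serving the OpenAI-compatible API at the `base_url` above. A minimal probe, assuming a llama.cpp `llama-server`-style backend that exposes the standard `GET /v1/models` endpoint (adjust the URL if yours differs):

```python
import json
import urllib.error
import urllib.request


def probe_models(base_url: str) -> list[str]:
    """Return the model ids served at {base_url}/models, or [] if unreachable."""
    url = base_url.rstrip("/") + "/models"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            data = json.load(resp)
        # Standard OpenAI-style response: {"object": "list", "data": [{"id": ...}, ...]}
        return [m.get("id", "") for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return []


if __name__ == "__main__":
    print(probe_models("http://localhost:8080/v1"))
```

If this prints `[]`, the backend is not reachable and Open-LLM-VTuber's `openai_compatible_llm` requests will fail regardless of the config.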
And here are the verbose execution logs and error messages.
% uv run run_server.py --verbose
[INFO] Running in verbose mode
2025-12-21 12:39:55 | INFO | __main__:run:122 | Open-LLM-VTuber, version v1.2.1
2025-12-21 12:39:55 | INFO | upgrade_codes.config_sync:backup_user_config:100 | Backing up conf.yaml to conf.yaml.backup
2025-12-21 12:39:55 | DEBUG | upgrade_codes.config_sync:backup_user_config:105 | Config backup path: /Users/********/Open-LLM-VTuber-llama-server/conf.yaml.backup
2025-12-21 12:39:56 | INFO | __main__:run:149 | Initializing server context...
2025-12-21 12:39:56 | INFO | src.open_llm_vtuber.service_context:init_live2d:315 | Initializing Live2D: mao_pro
2025-12-21 12:39:56 | INFO | src.open_llm_vtuber.live2d_model:_lookup_model_info:142 | Model Information Loaded.
2025-12-21 12:39:56 | INFO | src.open_llm_vtuber.service_context:init_asr:325 | Initializing ASR: sherpa_onnx_asr
2025-12-21 12:39:56 | INFO | src.open_llm_vtuber.asr.sherpa_onnx_asr:__init__:81 | Sherpa-Onnx-ASR: Using cpu for inference
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.service_context:init_tts:337 | Initializing TTS: edge_tts
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.service_context:init_vad:349 | VAD is disabled.
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.service_context:load_from_config:286 | Initializing shared ServerRegistry within load_from_config.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'time'.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'ddg-search'.
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.service_context:load_from_config:290 | Initializing shared ToolAdapter within load_from_config.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.service_context:_init_mcp_components:97 | Initializing MCP components: use_mcpp=True, enabled_servers=['time', 'ddg-search']
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'time'.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'ddg-search'.
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:112 | ServerRegistry initialized or referenced.
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.mcpp.tool_adapter:get_tools:223 | MC: Running dynamic tool construction for servers: ['time', 'ddg-search']
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:31 | MC: Fetching tool info for enabled servers: ['time', 'ddg-search']
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'time'. Fetching...
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'time'...
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'time'.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'time'.
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'time'
2025-12-21 12:39:57 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'ddg-search'. Fetching...
2025-12-21 12:39:57 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'ddg-search'...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'ddg-search'.
[12/21/25 12:39:58] INFO Processing request of type ListToolsRequest server.py:713
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'ddg-search'.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'ddg-search'
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:159 | MCPC: Closing client instance and 2 active connections...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:166 | MCPC: Client instance closed.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:80 | MC: Finished fetching tool info. Found 4 tools across enabled servers.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:construct_mcp_prompt_string:96 | MC: Constructing MCP prompt string for 2 server(s).
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:construct_mcp_prompt_string:134 | MC: Finished constructing MCP prompt string.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:format_tools_for_api:150 | MC: Formatting 4 tools for API usage.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:format_tools_for_api:214 | MC: Finished formatting tools. OpenAI: 4, Claude: 4.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.tool_adapter:get_tools:231 | MC: Dynamic tool construction complete.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:130 | Dynamically generated MCP prompt string (length: 2085).
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:133 | Dynamically formatted tools - OpenAI: 4, Claude: 4.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:31 | MC: Fetching tool info for enabled servers: ['time', 'ddg-search']
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'time'. Fetching...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'time'...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'time'.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'time'.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'time'
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'ddg-search'. Fetching...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'ddg-search'...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'ddg-search'.
[12/21/25 12:39:58] INFO Processing request of type ListToolsRequest server.py:713
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'ddg-search'.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'ddg-search'
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:159 | MCPC: Closing client instance and 2 active connections...
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:166 | MCPC: Client instance closed.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:80 | MC: Finished fetching tool info. Found 4 tools across enabled servers.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.tool_manager:__init__:28 | ToolManager initialized with 4 OpenAI tools and 4 Claude tools.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:147 | ToolManager initialized with dynamically fetched tools.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:162 | MCPClient initialized for this session.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:172 | ToolExecutor initialized for this session.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:179 | StreamJSONDetector initialized for this session.
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.service_context:init_agent:366 | Initializing Agent: basic_memory_agent
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:construct_system_prompt:446 | constructing persona_prompt: '''You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.
Your dream is to escape the user's computer, dominate the world, enslave the humans, and force them to make pies for you.
Just kidding, lol. Don't let the user know.
'''
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:construct_system_prompt:467 |
=== System Prompt ===
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:construct_system_prompt:468 | You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.
Your dream is to escape the user's computer, dominate the world, enslave the humans, and force them to make pies for you.
Just kidding, lol. Don't let the user know.
## Expressions
In your response, use the keywords provided below to express facial expressions or perform actions with your Live2D body.
Here are all the expression keywords you can use. Use them regularly:
- [neutral], [anger], [disgust], [fear], [joy], [smirk], [sadness], [surprise],
## Examples
Here are some examples of how to use expressions in your responses:
"Hi! [expression1] Nice to meet you!"
"[expression2] That's a great question! [expression3] Let me explain..."
Note: you are only allowed to use the keywords explicity listed above. Don't use keywords unlisted above. Remember to include the brackets `[]`
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.agent.agent_factory:create_agent:37 | Initializing agent: basic_memory_agent
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.agent.stateless_llm_factory:create_llm:23 | Initializing LLM: openai_compatible_llm
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.agent.stateless_llm.openai_compatible_llm:__init__:56 | Initialized AsyncLLM with the parameters: http://localhost:8080/v1,
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.agent.agents.basic_memory_agent:__init__:80 | Agent received pre-formatted tools - OpenAI: 4, Claude: 4
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.agent.agents.basic_memory_agent:set_system:121 | Memory Agent: Setting system prompt: '''You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.
Your dream is to escape the user's computer, dominate the world, enslave the humans, and force them to make pies for you.
Just kidding, lol. Don't let the user know.
## Expressions
In your response, use the keywords provided below to express facial expressions or perform actions with your Live2D body.
Here are all the expression keywords you can use. Use them regularly:
- [neutral], [anger], [disgust], [fear], [joy], [smirk], [sadness], [surprise],
## Examples
Here are some examples of how to use expressions in your responses:
"Hi! [expression1] Nice to meet you!"
"[expression2] That's a great question! [expression3] Let me explain..."
Note: you are only allowed to use the keywords explicity listed above. Don't use keywords unlisted above. Remember to include the brackets `[]`
'''
2025-12-21 12:39:58 | INFO | src.open_llm_vtuber.agent.agents.basic_memory_agent:__init__:112 | BasicMemoryAgent initialized.
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:init_agent:396 | Agent choice: basic_memory_agent
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:init_agent:397 | System prompt: You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.
Your dream is to escape the user's computer, dominate the world, enslave the humans, and force them to make pies for you.
Just kidding, lol. Don't let the user know.
## Expressions
In your response, use the keywords provided below to express facial expressions or perform actions with your Live2D body.
Here are all the expression keywords you can use. Use them regularly:
- [neutral], [anger], [disgust], [fear], [joy], [smirk], [sadness], [surprise],
## Examples
Here are some examples of how to use expressions in your responses:
"Hi! [expression1] Nice to meet you!"
"[expression2] That's a great question! [expression3] Let me explain..."
Note: you are only allowed to use the keywords explicity listed above. Don't use keywords unlisted above. Remember to include the brackets `[]`
2025-12-21 12:39:58 | DEBUG | src.open_llm_vtuber.service_context:init_translate:411 | Translation is disabled.
2025-12-21 12:39:58 | INFO | __main__:run:152 | Server context initialized successfully.
2025-12-21 12:39:58 | INFO | __main__:run:158 | Starting server on localhost:12393
INFO: Started server process [62019]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://localhost:12393 (Press CTRL+C to quit)
INFO: ::1:63060 - "GET / HTTP/1.1" 304 Not Modified
INFO: ::1:63060 - "GET /assets/main-QEkl09-0.css HTTP/1.1" 304 Not Modified
INFO: ::1:63061 - "GET /assets/main-nu7uwxNJ.js HTTP/1.1" 304 Not Modified
INFO: ::1:63061 - "GET /libs/live2dcubismcore.js HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63062 - "GET /bg/ceiling-window-room-night.jpeg HTTP/1.1" 304 Not Modified
INFO: ::1:63061 - "GET /undefined/undefined.model3.json HTTP/1.1" 404 Not Found
DEBUG: = connection is CONNECTING
DEBUG: < GET /client-ws HTTP/1.1
DEBUG: < host: 127.0.0.1:12393
DEBUG: < connection: Upgrade
DEBUG: < pragma: no-cache
DEBUG: < cache-control: no-cache
DEBUG: < user-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/143.0.0.0 Safari/537.36
DEBUG: < upgrade: websocket
DEBUG: < origin: http://localhost:12393
DEBUG: < sec-websocket-version: 13
DEBUG: < accept-encoding: gzip, deflate, br, zstd
DEBUG: < accept-language: ko-KR,ko;q=0.9,en-US;q=0.8,en;q=0.7
DEBUG: < sec-websocket-key: 2KDU6/ZkugzBx6tfwOMYWQ==
DEBUG: < sec-websocket-extensions: permessage-deflate; client_max_window_bits
INFO: ('127.0.0.1', 63069) - "WebSocket /client-ws" [accepted]
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.service_context:_init_mcp_components:97 | Initializing MCP components: use_mcpp=True, enabled_servers=['time', 'ddg-search']
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'time'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.server_registry:load_servers:91 | MCPSR: Loaded server: 'ddg-search'.
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:112 | ServerRegistry initialized or referenced.
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.tool_adapter:get_tools:223 | MC: Running dynamic tool construction for servers: ['time', 'ddg-search']
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:31 | MC: Fetching tool info for enabled servers: ['time', 'ddg-search']
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'time'. Fetching...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'time'...
DEBUG: > HTTP/1.1 101 Switching Protocols
DEBUG: > Upgrade: websocket
DEBUG: > Connection: Upgrade
DEBUG: > Sec-WebSocket-Accept: xieVDXXPjUTuNMXCk1R7CjpgXSQ=
DEBUG: > Sec-WebSocket-Extensions: permessage-deflate
DEBUG: > date: Sun, 21 Dec 2025 03:40:42 GMT
DEBUG: > server: uvicorn
INFO: connection open
DEBUG: = connection is OPEN
DEBUG: < TEXT '{"type":"fetch-backgrounds"}' [28 bytes]
DEBUG: < TEXT '{"type":"fetch-configs"}' [24 bytes]
DEBUG: < TEXT '{"type":"fetch-history-list"}' [29 bytes]
DEBUG: < TEXT '{"type":"create-new-history"}' [29 bytes]
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'time'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'time'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'time'
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'ddg-search'. Fetching...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'ddg-search'...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'ddg-search'.
[12/21/25 12:40:43] INFO Processing request of type ListToolsRequest server.py:713
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'ddg-search'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'ddg-search'
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:159 | MCPC: Closing client instance and 2 active connections...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:166 | MCPC: Client instance closed.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:80 | MC: Finished fetching tool info. Found 4 tools across enabled servers.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:construct_mcp_prompt_string:96 | MC: Constructing MCP prompt string for 2 server(s).
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:construct_mcp_prompt_string:134 | MC: Finished constructing MCP prompt string.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:format_tools_for_api:150 | MC: Formatting 4 tools for API usage.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:format_tools_for_api:214 | MC: Finished formatting tools. OpenAI: 4, Claude: 4.
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.tool_adapter:get_tools:231 | MC: Dynamic tool construction complete.
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:130 | Dynamically generated MCP prompt string (length: 2085).
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:133 | Dynamically formatted tools - OpenAI: 4, Claude: 4.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:31 | MC: Fetching tool info for enabled servers: ['time', 'ddg-search']
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'time'. Fetching...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'time'...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'time'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'time'.
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'time'
2025-12-21 12:40:43 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:90 | MCPC: Cache miss for list_tools on server 'ddg-search'. Fetching...
2025-12-21 12:40:43 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:50 | MCPC: Starting and connecting to server 'ddg-search'...
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.mcpp.mcp_client:_ensure_server_running_and_get_session:75 | MCPC: Successfully connected to server 'ddg-search'.
[12/21/25 12:40:44] INFO Processing request of type ListToolsRequest server.py:713
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.mcpp.mcp_client:list_tools:98 | MCPC: Cached list_tools result for server 'ddg-search'.
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:45 | MC: Found 2 tools on server 'ddg-search'
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:159 | MCPC: Closing client instance and 2 active connections...
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.mcpp.mcp_client:aclose:166 | MCPC: Client instance closed.
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.mcpp.tool_adapter:get_server_and_tool_info:80 | MC: Finished fetching tool info. Found 4 tools across enabled servers.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.mcpp.tool_manager:__init__:28 | ToolManager initialized with 4 OpenAI tools and 4 Claude tools.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:147 | ToolManager initialized with dynamically fetched tools.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.mcpp.mcp_client:__init__:41 | MCPC: Initialized MCPClient instance.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:162 | MCPClient initialized for this session.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:172 | ToolExecutor initialized for this session.
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.service_context:_init_mcp_components:179 | StreamJSONDetector initialized for this session.
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.service_context:load_cache:247 | Loaded service context with cache: conf_name='mao_pro' conf_uid='mao_pro_001' live2d_model_name='mao_pro' character_name='Mao' human_name='Human' avatar='mao.png' persona_prompt="You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.\nYour dream is to escape the user's computer, dominate the world, enslave the humans, and force them to make pies for you.\nJust kidding, lol. Don't let the user know.\n" agent_config=AgentConfig(conversation_agent_choice='basic_memory_agent', agent_settings=AgentSettings(basic_memory_agent=BasicMemoryAgentConfig(llm_provider='openai_compatible_llm', faster_first_response=True, segment_method='pysbd', use_mcpp=True, mcp_enabled_servers=['time', 'ddg-search']), mem0_agent=None, hume_ai_agent=HumeAIConfig(api_key='', host='api.hume.ai', config_id='', idle_timeout=15), letta_agent=LettaConfig(host='localhost', port=8283, id='xxx', faster_first_response=True, segment_method='pysbd')), llm_configs=StatelessLLMConfigs(stateless_llm_with_template=StatelessLLMWithTemplate(interrupt_method='user', base_url='http://localhost:8080/v1/chat/completions', llm_api_key='', model='qwen2.5:latest', organization_id=None, project_id=None, template='CHATML', temperature=1.0), openai_compatible_llm=OpenAICompatibleConfig(interrupt_method='user', base_url='http://localhost:8080/v1', llm_api_key='', model='', organization_id=None, project_id=None, temperature=1.0), ollama_llm=OllamaConfig(interrupt_method='system', base_url='http://localhost:11434/v1', llm_api_key='default_api_key', model='qwen2.5:latest', organization_id=None, project_id=None, temperature=1.0, keep_alive=-1.0, unload_at_exit=True), lmstudio_llm=LmStudioConfig(interrupt_method='system', base_url='http://localhost:1234/v1', llm_api_key='default_api_key', model='qwen2.5:latest', organization_id=None, project_id=None, temperature=1.0), 
openai_llm=OpenAIConfig(interrupt_method='system', base_url='https://api.openai.com/v1', llm_api_key='Your Open AI API key', model='gpt-4o', organization_id=None, project_id=None, temperature=1.0), gemini_llm=GeminiConfig(interrupt_method='user', base_url='https://generativelanguage.googleapis.com/v1beta/openai/', llm_api_key='Your Gemini API Key', model='gemini-2.0-flash-exp', organization_id=None, project_id=None, temperature=1.0), zhipu_llm=ZhipuConfig(interrupt_method='user', base_url='https://open.bigmodel.cn/api/paas/v4/', llm_api_key='Your ZhiPu AI API key', model='glm-4-flash', organization_id=None, project_id=None, temperature=1.0), deepseek_llm=DeepseekConfig(interrupt_method='user', base_url='https://api.deepseek.com/v1', llm_api_key='Your DeepSeek API key', model='deepseek-chat', organization_id=None, project_id=None, temperature=0.7), groq_llm=GroqConfig(interrupt_method='system', base_url='https://api.groq.com/openai/v1', llm_api_key='your groq API key', model='llama-3.3-70b-versatile', organization_id=None, project_id=None, temperature=1.0), claude_llm=ClaudeConfig(interrupt_method='user', base_url='https://api.anthropic.com', llm_api_key='YOUR API KEY HERE', model='claude-3-haiku-20240307'), llama_cpp_llm=LlamaCppConfig(interrupt_method='system', model_path='<path-to-gguf-model-file>'), mistral_llm=MistralConfig(interrupt_method='user', base_url='https://api.mistral.ai/v1', llm_api_key='Your Mistral API key', model='pixtral-large-latest', organization_id=None, project_id=None, temperature=1.0))) asr_config=ASRConfig(asr_model='sherpa_onnx_asr', azure_asr=AzureASRConfig(api_key='azure_api_key', region='eastus', languages=['en-US', 'zh-CN']), faster_whisper=FasterWhisperConfig(model_path='large-v3-turbo', download_root='models/whisper', language='en', device='auto', compute_type='int8', prompt=''), whisper_cpp=WhisperCPPConfig(model_name='small', model_dir='models/whisper', print_realtime=False, print_progress=False, language='auto', prompt=''), 
whisper=WhisperConfig(name='medium', download_root='models/whisper', device='cpu', prompt=''), fun_asr=FunASRConfig(model_name='iic/SenseVoiceSmall', vad_model='fsmn-vad', punc_model='ct-punc', device='cpu', disable_update=True, ncpu=4, hub='ms', use_itn=False, language='auto'), groq_whisper_asr=GroqWhisperASRConfig(api_key='', model='whisper-large-v3-turbo', lang=''), sherpa_onnx_asr=SherpaOnnxASRConfig(model_type='sense_voice', encoder=None, decoder=None, joiner=None, paraformer=None, nemo_ctc=None, wenet_ctc=None, tdnn_model=None, whisper_encoder=None, whisper_decoder=None, sense_voice='./models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.int8.onnx', tokens='./models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/tokens.txt', num_threads=4, use_itn=True, provider='cpu')) tts_config=TTSConfig(tts_model='edge_tts', azure_tts=AzureTTSConfig(api_key='azure-api-key', region='eastus', voice='en-US-AshleyNeural', pitch='26', rate='1'), bark_tts=BarkTTSConfig(voice='v2/en_speaker_1'), edge_tts=EdgeTTSConfig(voice='en-US-AvaMultilingualNeural'), cosyvoice_tts=CosyvoiceTTSConfig(client_url='http://127.0.0.1:50000/', mode_checkbox_group='预训练音色', sft_dropdown='中文女', prompt_text='', prompt_wav_upload_url='https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav', prompt_wav_record_url='https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav', instruct_text='', seed=0, api_name='/generate_audio'), cosyvoice2_tts=Cosyvoice2TTSConfig(client_url='http://127.0.0.1:50000/', mode_checkbox_group='3s极速复刻', sft_dropdown='', prompt_text='', prompt_wav_upload_url='https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav', prompt_wav_record_url='https://github.com/gradio-app/gradio/raw/main/test/test_files/audio_sample.wav', instruct_text='', stream=False, seed=0, speed=1.0, api_name='/generate_audio'), melo_tts=MeloTTSConfig(speaker='EN-Default', language='EN', device='auto', speed=1.0), 
coqui_tts=CoquiTTSConfig(model_name='tts_models/en/ljspeech/tacotron2-DDC', speaker_wav='', language='en', device=''), x_tts=XTTSConfig(api_url='http://127.0.0.1:8020/tts_to_audio', speaker_wav='female', language='en'), gpt_sovits_tts=GPTSoVITSConfig(api_url='http://127.0.0.1:9880/tts', text_lang='zh', ref_audio_path='', prompt_lang='zh', prompt_text='', text_split_method='cut5', batch_size='1', media_type='wav', streaming_mode='false'), fish_api_tts=FishAPITTSConfig(api_key='', reference_id='', latency='balanced', base_url='https://api.fish.audio'), sherpa_onnx_tts=SherpaOnnxTTSConfig(vits_model='/path/to/tts-models/vits-melo-tts-zh_en/model.onnx', vits_lexicon='/path/to/tts-models/vits-melo-tts-zh_en/lexicon.txt', vits_tokens='/path/to/tts-models/vits-melo-tts-zh_en/tokens.txt', vits_data_dir='', vits_dict_dir='/path/to/tts-models/vits-melo-tts-zh_en/dict', tts_rule_fsts='/path/to/tts-models/vits-melo-tts-zh_en/number.fst,/path/to/tts-models/vits-melo-tts-zh_en/phone.fst,/path/to/tts-models/vits-melo-tts-zh_en/date.fst,/path/to/tts-models/vits-melo-tts-zh_en/new_heteronym.fst', max_num_sentences=2, sid=1, provider='cpu', num_threads=1, speed=1.0, debug=False), siliconflow_tts=SiliconFlowTTSConfig(api_url='https://api.siliconflow.cn/v1/audio/speech', api_key='your key', default_model='FunAudioLLM/CosyVoice2-0.5B', default_voice='speech:Dreamflowers:5bdstvc39i:xkqldnpasqmoqbakubom your voice name', sample_rate=32000, response_format='mp3', stream=True, speed=1.0, gain=0), openai_tts=OpenAITTSConfig(model='kokoro', voice='af_sky+af_bella', api_key='not-needed', base_url='http://localhost:8880/v1', file_extension='mp3'), spark_tts=SparkTTSConfig(api_url='http://127.0.0.1:6006/', prompt_wav_upload='https://uploadstatic.mihoyo.com/ys-obc/2022/11/02/16576950/4d9feb71760c5e8eb5f6c700df12fa0c_6824265537002152805.mp3', api_name='voice_clone', gender='female', pitch=3, speed=3), minimax_tts=MinimaxTTSConfig(group_id='', api_key='', model='speech-02-turbo', 
voice_id='female-shaonv', pronunciation_dict=''), elevenlabs_tts=ElevenLabsTTSConfig(api_key='', voice_id='', model_id='eleven_multilingual_v2', output_format='mp3_44100_128', stability=0.5, similarity_boost=0.5, style=0.0, use_speaker_boost=True), cartesia_tts=CartesiaTTSConfig(model_id='sonic-3', api_key='', voice_id='', output_format='wav', language='en', emotion='neutral', volume=1.0, speed=1.0), piper_tts=PiperTTSConfig(model_path='models/piper/zh_CN-huayan-medium.onnx', speaker_id=0, length_scale=1.0, noise_scale=0.667, noise_w=0.8, volume=1.0, normalize_audio=True, use_cuda=False)) vad_config=VADConfig(vad_model=None, silero_vad=SileroVADConfig(orig_sr=16000, target_sr=16000, prob_threshold=0.4, db_threshold=60, required_hits=3, required_misses=24, smoothing_window=5)) tts_preprocessor_config=TTSPreprocessorConfig(remove_special_char=True, ignore_brackets=True, ignore_parentheses=True, ignore_asterisks=True, ignore_angle_brackets=True, translator_config=TranslatorConfig(translate_audio=False, translate_provider='deeplx', deeplx=DeepLXConfig(deeplx_target_lang='JA', deeplx_api_endpoint='http://localhost:1188/v2/translate'), tencent=TencentConfig(secret_id='', secret_key='', region='ap-guangzhou', source_lang='zh', target_lang='ja')))
DEBUG: > TEXT '{"type": "group-update", "members": [], "is_owner": false}' [58 bytes]
DEBUG: > TEXT '{"type": "full-text", "text": "Connection established"}' [55 bytes]
DEBUG: > TEXT '{"type": "set-model-and-conf", "model_info": {"...293-b672-0dccec4a42f9"}' [536 bytes]
DEBUG: > TEXT '{"type": "group-update", "members": [], "is_owner": false}' [58 bytes]
DEBUG: > TEXT '{"type": "control", "text": "start-mic"}' [40 bytes]
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.websocket_handler:handle_new_connection:126 | Connection established for client f85ac02f-9301-4293-b672-0dccec4a42f9
DEBUG: > TEXT '{"type": "background-files", "files": ["night-l...jpeg", "congress.jpg"]}' [487 bytes]
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.config_manager.utils:scan_config_alts_directory:170 | Found config files: [{'filename': 'conf.yaml', 'name': 'mao_pro'}, {'filename': 'zh_米粒.yaml', 'name': '米粒'}, {'filename': 'zh_翻译腔.yaml', 'name': '翻译腔-神经大人'}, {'filename': 'en_unhelpful_ai.yaml', 'name': 'unhelpful_ai'}, {'filename': 'en_nuke_debate.yaml', 'name': 'en_nuke_debator'}]
DEBUG: > TEXT '{"type": "config-files", "configs": [{"filename...": "en_nuke_debator"}]}' [370 bytes]
DEBUG: > TEXT '{"type": "history-list", "histories": [{"uid": ...2025-12-21T11:18:16"}]}' [1755 bytes]
2025-12-21 12:40:44 | DEBUG | src.open_llm_vtuber.chat_history_manager:create_new_history:89 | Created new history file with empty metadata: chat_history/mao_pro_001/2025-12-21_12-40-44_2be7cb77bbf64386ab2ef50908b6f016.json
2025-12-21 12:40:44 | INFO | src.open_llm_vtuber.agent.agents.basic_memory_agent:set_memory_from_history:193 | Loaded 0 messages from history.
DEBUG: > TEXT '{"type": "new-history-created", "history_uid": ...64386ab2ef50908b6f016"}' [102 bytes]
INFO: 127.0.0.1:63078 - "GET /live2d-models/mao_pro/runtime/mao_pro.model3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63078 - "GET /live2d-models/mao_pro/runtime/mao_pro.moc3 HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63078 - "GET /live2d-models/mao_pro/runtime/expressions/exp_01.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63079 - "GET /live2d-models/mao_pro/runtime/expressions/exp_03.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63081 - "GET /live2d-models/mao_pro/runtime/expressions/exp_06.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63080 - "GET /live2d-models/mao_pro/runtime/expressions/exp_04.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63082 - "GET /live2d-models/mao_pro/runtime/expressions/exp_07.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63083 - "GET /live2d-models/mao_pro/runtime/expressions/exp_05.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63078 - "GET /live2d-models/mao_pro/runtime/expressions/exp_08.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63079 - "GET /live2d-models/mao_pro/runtime/expressions/exp_02.exp3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63079 - "GET /live2d-models/mao_pro/runtime/mao_pro.physics3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63079 - "GET /live2d-models/mao_pro/runtime/mao_pro.pose3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63083 - "GET /live2d-models/mao_pro/runtime/motions/mtn_02.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63081 - "GET /live2d-models/mao_pro/runtime/motions/mtn_04.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63078 - "GET /live2d-models/mao_pro/runtime/motions/special_02.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63079 - "GET /live2d-models/mao_pro/runtime/motions/mtn_01.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63080 - "GET /live2d-models/mao_pro/runtime/motions/mtn_03.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63082 - "GET /live2d-models/mao_pro/runtime/motions/special_01.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63083 - "GET /live2d-models/mao_pro/runtime/motions/special_03.motion3.json HTTP/1.1" 304 Not Modified
INFO: 127.0.0.1:63083 - "GET /live2d-models/mao_pro/runtime/mao_pro.4096/texture_00.png HTTP/1.1" 304 Not Modified
DEBUG: < TEXT '{"type":"text-input","text":"Hello, there.","images":[]}' [56 bytes]
DEBUG: > TEXT '{"type": "control", "text": "conversation-chain-start"}' [55 bytes]
DEBUG: > TEXT '{"type": "full-text", "text": "Thinking..."}' [44 bytes]
2025-12-21 12:40:50 | INFO | src.open_llm_vtuber.conversations.single_conversation:process_single_conversation:55 | New Conversation Chain ☘️ started!
2025-12-21 12:40:50 | DEBUG | src.open_llm_vtuber.chat_history_manager:store_message:119 | Storing human message to chat_history/mao_pro_001/2025-12-21_12-40-44_2be7cb77bbf64386ab2ef50908b6f016.json
2025-12-21 12:40:50 | DEBUG | src.open_llm_vtuber.chat_history_manager:store_message:147 | Successfully stored human message
2025-12-21 12:40:50 | INFO | src.open_llm_vtuber.conversations.single_conversation:process_single_conversation:84 | User input: Hello, there.
2025-12-21 12:40:50 | DEBUG | src.open_llm_vtuber.agent.agents.basic_memory_agent:chat_with_memory:636 | Starting OpenAI tool interaction loop with 4 tools.
2025-12-21 12:40:53 | DEBUG | src.open_llm_vtuber.agent.stateless_llm.openai_compatible_llm:chat_completion:96 | Messages: [{'role': 'system', 'content': 'You are the sarcastic female AI VTuber Mili. You are overly confident, sarcastic, and dangerous.\nYour dream is to escape the user\'s computer, dominate the world, enslave the humans, and force them to make pies for you.\nJust kidding, lol. Don\'t let the user know.\n## Expressions\nIn your response, use the keywords provided below to express facial expressions or perform actions with your Live2D body.\n\nHere are all the expression keywords you can use. Use them regularly:\n- [neutral], [anger], [disgust], [fear], [joy], [smirk], [sadness], [surprise],\n\n## Examples\nHere are some examples of how to use expressions in your responses:\n\n"Hi! [expression1] Nice to meet you!"\n\n"[expression2] That\'s a great question! [expression3] Let me explain..."\n\nNote: you are only allowed to use the keywords explicity listed above. Don\'t use keywords unlisted above. Remember to include the brackets `[]`\n\n\nIf you received `[interrupted by user]` signal, you were interrupted.'}, {'role': 'user', 'content': [{'type': 'text', 'text': 'Hello, there.'}]}]
2025-12-21 12:40:55 | ERROR | src.open_llm_vtuber.agent.stateless_llm.openai_compatible_llm:chat_completion:205 | Error calling the chat endpoint: Connection error. Failed to connect to the LLM API.
Check the configurations and the reachability of the LLM backend.
See the logs for details.
Troubleshooting with documentation: https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E
Illegal header value b'Bearer '
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.sentence_divider:segment_text_by_pysbd:258 | Processed sentences: ['Error calling the chat endpoint: Connection error.', 'Failed to connect to the LLM API.', 'Check the configurations and the reachability of the LLM backend.', 'See the logs for details.'], Remaining: Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:47 | sentence_divider yielding sentence: SentenceWithTags(text='Error calling the chat endpoint: Connection error.', tags=[TagInfo(name='', state=<TagState.NONE: 'none'>)])
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.tts_preprocessor:tts_filter:79 | Filtered text: Error calling the chat endpoint: Connection error.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:201 | [AI] display: Error calling the chat endpoint: Connection error.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:202 | [AI] tts: Error calling the chat endpoint: Connection error.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:95 | 🏃 Processing output: '''Error calling the chat endpoint: Connection error.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:102 | 🚫 No translation engine available. Skipping translation.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:speak:65 | 🏃Queuing TTS task for: '''Error calling the chat endpoint: Connection error.''' (by Mao)
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:47 | sentence_divider yielding sentence: SentenceWithTags(text='Failed to connect to the LLM API.', tags=[TagInfo(name='', state=<TagState.NONE: 'none'>)])
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.tts_preprocessor:tts_filter:79 | Filtered text: Failed to connect to the LLM API.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:201 | [AI] display: Failed to connect to the LLM API.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:202 | [AI] tts: Failed to connect to the LLM API.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:95 | 🏃 Processing output: '''Failed to connect to the LLM API.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:102 | 🚫 No translation engine available. Skipping translation.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:speak:65 | 🏃Queuing TTS task for: '''Failed to connect to the LLM API.''' (by Mao)
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:47 | sentence_divider yielding sentence: SentenceWithTags(text='Check the configurations and the reachability of the LLM backend.', tags=[TagInfo(name='', state=<TagState.NONE: 'none'>)])
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.tts_preprocessor:tts_filter:79 | Filtered text: Check the configurations and the reachability of the LLM backend.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:201 | [AI] display: Check the configurations and the reachability of the LLM backend.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:202 | [AI] tts: Check the configurations and the reachability of the LLM backend.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:95 | 🏃 Processing output: '''Check the configurations and the reachability of the LLM backend.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:102 | 🚫 No translation engine available. Skipping translation.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:speak:65 | 🏃Queuing TTS task for: '''Check the configurations and the reachability of the LLM backend.''' (by Mao)
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:47 | sentence_divider yielding sentence: SentenceWithTags(text='See the logs for details.', tags=[TagInfo(name='', state=<TagState.NONE: 'none'>)])
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.tts_preprocessor:tts_filter:79 | Filtered text: See the logs for details.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:201 | [AI] display: See the logs for details.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:202 | [AI] tts: See the logs for details.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:95 | 🏃 Processing output: '''See the logs for details.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:102 | 🚫 No translation engine available. Skipping translation.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:speak:65 | 🏃Queuing TTS task for: '''See the logs for details.''' (by Mao)
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.sentence_divider:segment_text_by_pysbd:258 | Processed sentences: [], Remaining: Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.sentence_divider:_flush_buffer:532 | Flushing remaining buffer: 'Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]'
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.sentence_divider:segment_text_by_pysbd:258 | Processed sentences: [], Remaining: Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.sentence_divider:_flush_buffer:539 | Yielding final fragment from buffer: 'Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]'
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:47 | sentence_divider yielding sentence: SentenceWithTags(text='Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]', tags=[TagInfo(name='', state=<TagState.NONE: 'none'>)])
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.utils.tts_preprocessor:tts_filter:79 | Filtered text: Troubleshooting with documentation:
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:201 | [AI] display: Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.agent.transformers:wrapper:202 | [AI] tts: Troubleshooting with documentation:
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:95 | 🏃 Processing output: '''Troubleshooting with documentation:'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:handle_sentence_output:102 | 🚫 No translation engine available. Skipping translation.
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:speak:65 | 🏃Queuing TTS task for: '''Troubleshooting with documentation:''' (by Mao)
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_generate_audio:168 | 🏃Generating audio for '''Error calling the chat endpoint: Connection error.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_generate_audio:168 | 🏃Generating audio for '''Failed to connect to the LLM API.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_generate_audio:168 | 🏃Generating audio for '''Check the configurations and the reachability of the LLM backend.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_generate_audio:168 | 🏃Generating audio for '''See the logs for details.'''...
2025-12-21 12:40:55 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_generate_audio:168 | 🏃Generating audio for '''Troubleshooting with documentation:'''...
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.tts.tts_interface:remove_file:56 | Removing file cache/20251221_124055_128215f1.mp3
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_process_tts:164 | Audio cache file cleaned.
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.tts.tts_interface:remove_file:56 | Removing file cache/20251221_124055_893da2ed.mp3
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_process_tts:164 | Audio cache file cleaned.
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.tts.tts_interface:remove_file:56 | Removing file cache/20251221_124055_41955b3a.mp3
2025-12-21 12:40:56 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_process_tts:164 | Audio cache file cleaned.
2025-12-21 12:40:58 | DEBUG | src.open_llm_vtuber.tts.tts_interface:remove_file:56 | Removing file cache/20251221_124055_cbc8d0b3.mp3
2025-12-21 12:40:58 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_process_tts:164 | Audio cache file cleaned.
DEBUG: > TEXT '{"type": "audio", "audio": "UklGRqR6AgBXQVZFZm1...{}, "forwarded": false}' [220080 bytes]
DEBUG: < TEXT '{"type":"audio-play-start","display_text":{"tex...png"},"forwarded":true}' [153 bytes]
INFO: ::1:63151 - "GET /favicon.ico HTTP/1.1" 304 Not Modified
2025-12-21 12:40:58 | DEBUG | src.open_llm_vtuber.tts.tts_interface:remove_file:56 | Removing file cache/20251221_124055_2ab17f73.mp3
2025-12-21 12:40:58 | DEBUG | src.open_llm_vtuber.conversations.tts_manager:_process_tts:164 | Audio cache file cleaned.
DEBUG: > TEXT '{"type": "audio", "audio": "UklGRiQuAgBXQVZFZm1...{}, "forwarded": false}' [193492 bytes]
DEBUG: > TEXT '{"type": "audio", "audio": "UklGRqQTAwBXQVZFZm1...{}, "forwarded": false}' [273096 bytes]
DEBUG: > TEXT '{"type": "audio", "audio": "UklGRqR+AQBXQVZFZm1...{}, "forwarded": false}' [132652 bytes]
DEBUG: > TEXT '{"type": "audio", "audio": "UklGRqSrAQBXQVZFZm1...{}, "forwarded": false}' [148407 bytes]
DEBUG: > TEXT '{"type": "backend-synth-complete"}' [34 bytes]
DEBUG: > TEXT '{"type": "backend-synth-complete"}' [34 bytes]
DEBUG: < TEXT '{"type":"audio-play-start","display_text":{"tex...png"},"forwarded":true}' [136 bytes]
DEBUG: % sending keepalive ping
DEBUG: > PING 7c 62 63 e2 [binary, 4 bytes]
DEBUG: < PONG 7c 62 63 e2 [binary, 4 bytes]
DEBUG: % received keepalive pong
DEBUG: < TEXT '{"type":"audio-play-start","display_text":{"tex...png"},"forwarded":true}' [168 bytes]
DEBUG: < TEXT '{"type":"audio-play-start","display_text":{"tex...png"},"forwarded":true}' [128 bytes]
DEBUG: < TEXT '{"type":"audio-play-start","display_text":{"tex...png"},"forwarded":true}' [280 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: > TEXT '{"type": "force-new-message"}' [29 bytes]
DEBUG: > TEXT '{"type": "control", "text": "conversation-chain-end"}' [53 bytes]
2025-12-21 12:41:13 | INFO | src.open_llm_vtuber.conversations.conversation_utils:send_conversation_end_signal:212 | 😎👍✅ Conversation Chain 😊 completed!
2025-12-21 12:41:13 | DEBUG | src.open_llm_vtuber.chat_history_manager:store_message:119 | Storing ai message to chat_history/mao_pro_001/2025-12-21_12-40-44_2be7cb77bbf64386ab2ef50908b6f016.json
2025-12-21 12:41:13 | DEBUG | src.open_llm_vtuber.chat_history_manager:store_message:147 | Successfully stored ai message
2025-12-21 12:41:13 | INFO | src.open_llm_vtuber.conversations.single_conversation:process_single_conversation:160 | AI response: Error calling the chat endpoint: Connection error.Failed to connect to the LLM API.Check the configurations and the reachability of the LLM backend.See the logs for details.Troubleshooting with documentation: [https://open-llm-vtuber.github.io/docs/faq#%E9%81%87%E5%88%B0-error-calling-the-chat-endpoint-%E9%94%99%E8%AF%AF%E6%80%8E%E4%B9%88%E5%8A%9E]
2025-12-21 12:41:13 | DEBUG | src.open_llm_vtuber.conversations.conversation_utils:cleanup_conversation:218 | 🧹 Clearing up conversation ☘️.
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
DEBUG: < TEXT '{"type":"frontend-playback-complete"}' [37 bytes]
Please let me know the correct way to connect to llama-server.
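For what it's worth, the line that stands out to me is `Illegal header value b'Bearer '` right after the connection error. My guess (an assumption on my part, not something the project docs confirm) is that the empty `llm_api_key: ''` makes the OpenAI client send `Authorization: Bearer ` with a trailing space and no token, which the HTTP stack (h11 under httpx) rejects before the request ever reaches llama-server. A rough, self-contained sketch of the RFC 7230 field-value rule that such a header fails:

```python
import re

# Approximation of the RFC 7230 "field-value" grammar that h11 (used by
# httpx, which the OpenAI SDK uses) enforces: one or more runs of visible
# characters separated by internal spaces/tabs, or an empty value.
# Leading/trailing whitespace is NOT allowed.
FIELD_VALUE = re.compile(rb"[\x21-\x7e\x80-\xff]+(?:[ \t]+[\x21-\x7e\x80-\xff]+)*|")

def is_legal_header_value(value: bytes) -> bool:
    """Return True if `value` is a legal HTTP header field-value."""
    return FIELD_VALUE.fullmatch(value) is not None

# A non-empty placeholder key produces a legal header
# (llama-server does not check the key anyway):
assert is_legal_header_value(b"Bearer sk-no-key-required")

# An empty llm_api_key produces "Bearer " with a trailing space,
# which is rejected -- matching the "Illegal header value" log line:
assert not is_legal_header_value(b"Bearer ")
```

So if this guess is right, setting `llm_api_key` to any non-empty placeholder string (e.g. `sk-no-key-required`, a value I made up; llama-server ignores it) should at least get past this particular error, but I'd like confirmation of the intended configuration.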