I am running a small testbed of Nvidia Sparks plugged into a Mikrotik CRS812-DDQ switch ... with GPUStack coordinating operation of the nodes and running the various models. It all works wonderfully, and I have different applications accessing the GPUStack API (OpenAI-Compatible).
I decided to test the OpenAgents Docker image, and it spun up without any issues. After opening a shell (/bin/bash) and making a slight mod to the llm_agent.py example file (pointing it at the local API), I can see the agent_runner trying to open sessions via the API when comments are posted in the guest/general channel.
However, it appears "tool_choice":"auto" is being sent from OpenAgents to the API (a packet capture of the interaction also reveals a bunch of extra/unrelated agentworld KVPs - I have no idea where those are coming from). The API then shoots back a nastigram saying the "auto" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set. The OpenAgents logs show this result as ...
{"error":{"message":"\"auto\" tool choice requires --enable-auto-tool-choice and --tool-call-parser to be set","type":"BadRequestError","param":null,"code":400}}
In the Docs I see the section "LLM Tools and Capabilities: Function Calling and Tool Use", where ToolEnabledAgent(WorkerAgent) is called. Is that the approach that should be taken to interact with a tool-enabled API ... even when my initial tests won't involve tools at all? (I'm doing a simple proof-of-concept where four different agents and a number of humans can interact in a chat channel)
THANKS for any suggestions the group may have!
If you need more details about the capture, logs, or tweaked llm_agent.py just let me know. But this is more of a general question about how to either bypass "tool_choice" being sent to the API or do the most minimal tool implementation in order to create a chat channel. BTW, I'm NOT a coder (but can usually reverse engineer how something works ... so if no one has suggestions I can drill into the ToolEnabledAgent() code to see if I can figure it out!) :P
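For comparison, here is a rough sketch of the plain, tool-free request shape I was expecting to hit the API with - i.e. a chat completion with no "tools" and no "tool_choice" keys at all, so the server has nothing to object to. The URL, model name, and API key below are placeholders for my local GPUStack setup, not anything from the OpenAgents code:

```python
# Minimal sketch: an OpenAI-compatible chat completion request that
# omits "tools"/"tool_choice" entirely. Endpoint, model, and key are
# placeholders (assumptions), not values from OpenAgents or GPUStack.
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # placeholder GPUStack endpoint
MODEL = "my-local-model"               # placeholder model name

payload = {
    "model": MODEL,
    "messages": [
        {"role": "user", "content": "Hello from the chat channel"},
    ],
    # Deliberately no "tools" and no "tool_choice" keys, so the
    # backend never sees tool_choice "auto" and never demands
    # --enable-auto-tool-choice / --tool-call-parser.
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-placeholder",  # placeholder key
    },
)
# urllib.request.urlopen(req)  # uncomment to fire against a live endpoint
```

(Shown with stdlib urllib just so it's self-contained; the point is only the payload shape.)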