Conversation
This is great, thanks for the contribution! Can you explain the process of running a command real quick? Does the LLM need to do any initialization before it can execute code? We're working on building a layer that determines which methods a tool exposes to a particular agent, but until that's ready, I'm afraid this tool is going to overwhelm the context. Also, since you have well-written documentation in the docstring of the tool implementation, would you mind moving that to the docs?
In that case, perhaps it's best to wait for your update. Do you know when it's expected to be released?
Still very much a draft, but it is something I'm actively working on. I guess the point I was making was that some of the lifecycle stuff doesn't necessarily need to get exposed to the LLM. Like, can we handle setup and teardown outside of the LLM? I imagine a high percentage of use cases will utilize a single sandbox.
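Roughly what I have in mind (just a sketch; `Daytona`, `create`, `code_run`, and `remove` are assumed from the Daytona SDK, and `execute_code` is a hypothetical wrapper, not anything in AgentStack yet):

```python
from daytona_sdk import Daytona

# Sketch only: the host app owns the sandbox lifecycle; the LLM only ever
# sees a single `execute_code` tool.
_daytona = Daytona()
_sandbox = None

def execute_code(code: str) -> str:
    """Run Python code in an isolated sandbox and return its output."""
    global _sandbox
    if _sandbox is None:
        # Setup happens here, outside any LLM decision.
        _sandbox = _daytona.create()
    response = _sandbox.process.code_run(code)
    return response.result

def shutdown() -> None:
    """Teardown is owned by the host application, never by the LLM."""
    global _sandbox
    if _sandbox is not None:
        _daytona.remove(_sandbox)
        _sandbox = None
```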
Agreed with @tcdent that the lifecycle management can be handled outside of LLM tool calls. It would also be more likely to work reliably if there were a bit more prompt engineering in the docstrings. Explaining to the LLM which tools should be called when is important so it doesn't get stuck in a loop or waste calls :)
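Concretely, a docstring along these lines (purely hypothetical, not taken from this PR) tells the model when to call the tool and when not to:

```python
def execute_code(code: str, timeout: int = 60) -> str:
    """Execute Python code in a managed sandbox and return stdout/stderr.

    Call this once per snippet; sandbox setup and teardown are handled for
    you, so never call create/destroy tools first. If the output shows an
    error, fix the code before calling again rather than retrying the same
    input.
    """
    raise NotImplementedError  # sketch: body omitted
```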
Ok, I will rethink this and consult with colleagues. Thanks. |
📥 Pull Request
📘 Description
I've introduced Daytona as a new tool. The Daytona SDK allows you to create, run, and terminate sandboxes effortlessly. It's particularly well suited to running AI-generated code, such as the output of AI agents. This tool streamlines managing sandboxes and executing AI-generated code in a controlled environment.
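As a rough sketch of the end-to-end flow (method and attribute names are taken from the Daytona Python SDK and may differ by SDK version):

```python
from daytona_sdk import Daytona

daytona = Daytona()         # reads API key / server URL from the environment
sandbox = daytona.create()  # provision an isolated sandbox

# Run AI-generated code inside the sandbox instead of on the host.
response = sandbox.process.code_run('print("Hello from the sandbox")')
print(response.result)

daytona.remove(sandbox)     # tear the sandbox down when finished
```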
🧪 Testing

Created a new agentstack project, added the Daytona tool, and tested it.