diff --git a/docs/components/json-processor.md b/docs/components/json-processor.md
index c06a2f3..2e1cd2b 100644
--- a/docs/components/json-processor.md
+++ b/docs/components/json-processor.md
@@ -100,7 +100,7 @@ Additional results are reported in [1].
 
 ## Getting Started
 
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool_reflection_toolkit/code_generation/README.md) for instructions on how to get started with the code.
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool/code_generation/README.md) for instructions on how to get started with the code.
 See an example in action [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/json_processor_getting_started.ipynb).
 
 ## References
diff --git a/docs/components/policy-guard.md b/docs/components/policy-guard.md
index badd198..e421e13 100644
--- a/docs/components/policy-guard.md
+++ b/docs/components/policy-guard.md
@@ -43,7 +43,7 @@ The above figure shows the achieved instruction following (IF) rates for the fou
 Additional results are reported in [1].
 
 ## Getting Started
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/policy_guard_toolkit/README.md) for instructions on how to get started with the code.
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_response/policy_guard/README.md) for instructions on how to get started with the code.
 
 ## References
 [1] Elder, B., et al., "Boosting Instruction Following at Scale," arXiv preprint arXiv: (2025). https://arxiv.org/abs/2510.14842
diff --git a/docs/components/rag-repair.md b/docs/components/rag-repair.md
index b464f53..d8d8c1f 100644
--- a/docs/components/rag-repair.md
+++ b/docs/components/rag-repair.md
@@ -17,7 +17,7 @@ For now, the default retrieval method is using sentence embeddings along with Ch
 
 #### Input format
 This component expects documents to be provided in a local path. Optionally, documents can be divided into manual pages/documentation in a `man` folder and other documents such as troubleshooting documents in a `doc` folder. If these folders are not provided, then all documents are considered non-documentation. Nested folders are supported for ingesting. Supported files are: `pdf`, `html`, `json`, `jsonl`.
 
-The class `post_tool_reflection_toolkit.core.toolkit.RAGRepairRunInput` expects three main inputs as follows:
+The class `post_tool.core.toolkit.RAGRepairRunInput` expects three main inputs as follows:
 
 1. `messages`: List[BaseMessages], a list of messages from the agent, this is optional but will be used to infer the task at hand if `nl_query` is not provided.
@@ -47,7 +47,7 @@ Additional results are reported in [1].
 
 ![rag_results](../assets/img_ragrepair_eval.png)
 
 ## Getting Started
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool_reflection_toolkit/rag_repair/README.md) for instructions on how to get started with the code.
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool/rag_repair/README.md) for instructions on how to get started with the code.
 See an example in action [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/rag_repair.ipynb).
 
diff --git a/docs/components/refraction.md b/docs/components/refraction.md
index 308cb76..33973db 100644
--- a/docs/components/refraction.md
+++ b/docs/components/refraction.md
@@ -2,7 +2,7 @@
 
 ![refraction_fig_1](refraction/assets/01115a12-5433-4a39-a970-9f33ffc13a67.png)
 
-[//]: # ()
+[//]: # ()
 
 Refraction is a low-cost (no LLMs!), low-latency, domain-agnostic, data-agnostic, model-agnostic approach towards
 validation and repair for a sequence of tool calls, based on classical AI planning techniques.
@@ -99,7 +99,7 @@ For scaling characteristics, see [here](refraction/05.-Scaling.md).
 5. Scaling characteristics [[link](refraction/05.-Scaling.md)]
 6. Offline API [[link](refraction/07.-Offline-Analysis.md)]
 
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool_reflection_toolkit/refraction/README.md) for instructions on how to get started with the code.
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool/refraction/README.md) for instructions on how to get started with the code.
 
 ## References
diff --git a/docs/components/refraction/02.-The-Refraction-API-|-Inputs-and-Outputs.md b/docs/components/refraction/02.-The-Refraction-API-|-Inputs-and-Outputs.md
index c6cfbc1..65f05fc 100644
--- a/docs/components/refraction/02.-The-Refraction-API-|-Inputs-and-Outputs.md
+++ b/docs/components/refraction/02.-The-Refraction-API-|-Inputs-and-Outputs.md
@@ -120,7 +120,7 @@ result = refract(
 
 You can also use the format of tool calling used in the [OpenAI](https://platform.openai.com/docs/guides/function-calling) and [LangChain](https://langchain-ai.github.io/langgraph/how-tos/tool-calling/#call-tools) tool calling APIs.
 
-These are very commonly used in LLM prompts. Check [here](https://github.ibm.com/research-dba/refraction/blob/main/tests/from_deployment/test_untyped_inputs.py#L14) for an example.
+These are very commonly used in LLM prompts. Check [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/tests/pre_tool/refraction/from_deployment/test_untyped_inputs.py#L14) for an example.
 
 > Check [here](03.-The-Refraction-API-|-Tool-Calling.md) for instructions on refraction while calling tools directly.
diff --git a/docs/components/refraction/03.-The-Refraction-API-|-Tool-Calling.md b/docs/components/refraction/03.-The-Refraction-API-|-Tool-Calling.md
index 6008cc0..5e94675 100644
--- a/docs/components/refraction/03.-The-Refraction-API-|-Tool-Calling.md
+++ b/docs/components/refraction/03.-The-Refraction-API-|-Tool-Calling.md
@@ -174,9 +174,3 @@ Executing: var1 = TripadvisorSearchLocation(query="London")
 Executing: TripadvisorSearchHotels(param="foo", geoId="123", checkIn="2024-09-05", checkOut="2024-09-15")
 ```
 
-
-# 3.2 Integration with LangGraph
-
-We can use this idea to augment tool decorators in popular
-agentic frameworks. Head over to [here](08.-Integration-with-LangGraph.md)
-for an example with the LangGraph tool decorator.
diff --git a/docs/components/refraction/09.-Integration-with-Mellea.md b/docs/components/refraction/09.-Integration-with-Mellea.md
index aabeca2..7d7baa3 100644
--- a/docs/components/refraction/09.-Integration-with-Mellea.md
+++ b/docs/components/refraction/09.-Integration-with-Mellea.md
@@ -148,9 +148,9 @@ if the following were possible:
 
 - I could not see any immediate method to access the prompt / instruction (and not just the response) inside the requirement.
 This should be possible, and then we would not require any extra inputs (i.e. tools, memory, etc.) at all
-to instantiate the requirement -- we can read off everything from the instruction. [[link](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool_reflection_toolkit/refraction/src/integration/mellea_requirement.py#12)]
+to instantiate the requirement -- we can read off everything from the instruction. [[link](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool/refraction/src/integration/mellea_requirement.py#L12)]
 
 - Currently, the validation result only allows for a `str` reason. This means that the refractor, in the event it could
 fix a call in situ, as in the example above, it must use the prompting mechanism for resampling. An alternative
 pathway could be to allow for return the fix in the validator, if it is able to fix in place, and this
-saves us an extra sampling cost + avoid the possibility of ending up with a new error in the new sample. [[link](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool_reflection_toolkit/refraction/src/integration/mellea_requirement.py#L19)]
+saves us an extra sampling cost + avoid the possibility of ending up with a new error in the new sample. [[link](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool/refraction/src/integration/mellea_requirement.py#L19)]
diff --git a/docs/components/refraction/10.-Integration-with-LangFlow.md b/docs/components/refraction/10.-Integration-with-LangFlow.md
index 09c8f31..967f5b1 100644
--- a/docs/components/refraction/10.-Integration-with-LangFlow.md
+++ b/docs/components/refraction/10.-Integration-with-LangFlow.md
@@ -9,7 +9,7 @@
 ## Refraction as a component
 
 You can add refraction as a component. Until we are open-source, you have to do it yourself. Please follow the instructions
-[here](https://docs.langflow.org/components-custom-components), and use the file [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool_reflection_toolkit/refraction/src/integration/refractor_component.py) as an example. Once you have done so,
+[here](https://docs.langflow.org/components-custom-components), and use the file [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool/refraction/src/integration/refractor_component.py) as an example. Once you have done so,
 it will appear in the list of components.
 
 ![2ca9337f-48f0-4271-bd5e-981ee258b04c.png](assets%2F2ca9337f-48f0-4271-bd5e-981ee258b04c.png)
diff --git a/docs/components/silent-review.md b/docs/components/silent-review.md
index 82a75de..17d6de4 100644
--- a/docs/components/silent-review.md
+++ b/docs/components/silent-review.md
@@ -56,5 +56,5 @@ For more details, refer to the paper: [Invocable APIs derived from NL2SQL datase
 - Average loop counts decrease slightly, showing that fewer iterations are needed to reach successful query completion.
 
 ## Getting Started
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool_reflection_toolkit/silent_review/README.md) for instructions on how to get started with the code.
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/post_tool/silent_review/README.md) for instructions on how to get started with the code.
 See an example in action [here]( https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/silent_review.ipynb).
diff --git a/docs/components/sparc.md b/docs/components/sparc.md
index 10f4faf..4c4c2b6 100644
--- a/docs/components/sparc.md
+++ b/docs/components/sparc.md
@@ -142,4 +142,4 @@ These results demonstrate the reflector's strong ability to accurately identify
 
 ## Getting Started
 
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool_reflection_toolkit/sparc/README.md) for instructions on how to get started with the code. See an example in action [here]( https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/langgraph_agent_sparc_example.py)
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_tool/sparc/README.md) for instructions on how to get started with the code. See an example in action [here]( https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/langgraph_agent_sparc_example.py)
diff --git a/docs/components/spotlight.md b/docs/components/spotlight.md
index 3e02435..d5a2e94 100644
--- a/docs/components/spotlight.md
+++ b/docs/components/spotlight.md
@@ -14,7 +14,7 @@ The bar graphs below show how SpotLight leads to improved end to end performance
 
 ![spotlight_results](../assets/img_spotlight_results.png)
 
 ## Getting Started
-Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/spotlight_toolkit/README.md) for instructions on how to get started with the code. See an example in action [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/spotlight.ipynb)
+Refer to this [README](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/altk/pre_llm/spotlight/README.md) for instructions on how to get started with the code. See an example in action [here](https://github.com/AgentToolkit/agent-lifecycle-toolkit/blob/main/examples/spotlight.ipynb)
 
 ## References
 Venkateswaran, Praveen, and Danish Contractor. "Spotlight Your Instructions: Instruction-following with Dynamic Attention Steering." arXiv preprint arXiv:2505.12025 (2025). https://arxiv.org/pdf/2505.12025