
Feat/vLLM: RAGentA Enhancement: Multi-Interface Support and LangGraph Integration #1

Open
RoccoLiberoGiossi wants to merge 2 commits into faerber-lab:main from RoccoLiberoGiossi:feat/vLLM

Conversation

@RoccoLiberoGiossi

This PR introduces architectural improvements and new features to the RAGentA framework, making it more flexible, more scalable, and easier to integrate with various LLM providers. The core logic has been refactored to support multiple model interfaces and an alternative graph-based execution mode.

Core Changes

  • Multi-Interface LLM Support

    • Refactored LLM interaction into a modular architecture using a BaseLLMAgent abstract class in llm_agents.py.
    • Added support for three primary interfaces:
      • HuggingFace: Local model execution with transformers.
      • vLLM: Support for both local vllm execution and remote OpenAI-compatible APIs.
      • OpenAI: Direct integration with OpenAI's API.
    • Implemented a factory function get_llm_agent for streamlined model initialization across all agent roles.
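The abstract-class-plus-factory pattern described above can be sketched as follows. The names BaseLLMAgent and get_llm_agent come from the PR, but the method signatures, constructor parameters, and backend stubs below are assumptions for illustration, not the actual implementation in llm_agents.py:

```python
# Hypothetical sketch of the modular interface layer; generate() bodies are
# stubbed where the real code would call transformers or the OpenAI API.
from abc import ABC, abstractmethod
from typing import Optional


class BaseLLMAgent(ABC):
    """Common interface every backend (HuggingFace, vLLM, OpenAI) implements."""

    def __init__(self, model_name: str):
        self.model_name = model_name

    @abstractmethod
    def generate(self, prompt: str) -> str:
        """Return the model's completion for a prompt."""


class HuggingFaceAgent(BaseLLMAgent):
    def generate(self, prompt: str) -> str:
        # Real code would run the model locally via transformers; stubbed here.
        return f"[huggingface:{self.model_name}] {prompt}"


class OpenAIAgent(BaseLLMAgent):
    def __init__(self, model_name: str, api_key: str = "", api_base: Optional[str] = None):
        super().__init__(model_name)
        self.api_key, self.api_base = api_key, api_base

    def generate(self, prompt: str) -> str:
        # Real code would call an OpenAI-compatible chat API; stubbed here.
        return f"[openai:{self.model_name}] {prompt}"


def get_llm_agent(interface: str, model_name: str, **kwargs) -> BaseLLMAgent:
    """Factory: map an interface name to a backend class (assumed signature)."""
    backends = {"huggingface": HuggingFaceAgent, "openai": OpenAIAgent}
    if interface not in backends:
        raise ValueError(f"Unknown interface: {interface}")
    return backends[interface](model_name, **kwargs)


agent = get_llm_agent("huggingface", "model-x")
```

Because every agent role receives a BaseLLMAgent, swapping backends is a single argument change rather than a code change in each agent.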
  • LangGraph Integration

    • Introduced a new graph-based implementation of the RAGentA workflow in ragenta_graph.py.
    • This mode decomposes the multi-agent process into discrete, state-managed nodes (Retrieval, Scoring, Filtering, etc.), allowing more complex and observable agent interactions in future development.
    • Accessible via the new --use_graph flag.
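The node decomposition above can be illustrated with a minimal plain-Python sketch. The real ragenta_graph.py uses LangGraph; here the node names only mirror the stages listed (Retrieval, Scoring, Filtering) and the state keys and threshold are assumptions:

```python
# Minimal sketch of a state-managed node pipeline: each node reads and
# updates a shared state dict, as the LangGraph workflow does with richer
# features (conditional edges, checkpointing, observability).
from typing import Callable, List

State = dict  # shared state passed between nodes


def retrieve(state: State) -> State:
    # Stand-in for the retrieval stage (real code would query an index).
    state["docs"] = [{"text": "doc-a", "score": None},
                     {"text": "doc-b", "score": None}]
    return state


def score(state: State) -> State:
    # Stand-in for the relevance-scoring agent (dummy scores here).
    for i, doc in enumerate(state["docs"]):
        doc["score"] = 1.0 - 0.4 * i
    return state


def filter_docs(state: State) -> State:
    # Keep only documents above an assumed relevance threshold.
    state["docs"] = [d for d in state["docs"] if d["score"] >= 0.7]
    return state


def run_graph(state: State, nodes: List[Callable[[State], State]]) -> State:
    # Run nodes in sequence; each one sees the state left by its predecessor.
    for node in nodes:
        state = node(state)
    return state


final = run_graph({"question": "What is RAGentA?"}, [retrieve, score, filter_docs])
```

Because every stage is a pure function of the state, individual nodes can be logged, tested, or replaced in isolation, which is the main benefit the graph mode aims for.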
  • Enhanced Framework Logic

    • Updated the main RAGentA.py to handle the new interface types and provide detailed logging for each agent's decisions.
  • CLI & Configuration Improvements

    • Expanded run_RAGentA.py with several new arguments:
      • --interface: Choose between huggingface, vllm, or openai.
      • --api_key & --api_base: For remote API configuration.
      • --use_graph: To enable the LangGraph-based workflow.
      • --remote_vllm: To use vLLM via an API instead of local execution.
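An argparse sketch of the new CLI surface: the flag names come from the PR, but the defaults and help strings are assumptions rather than the actual run_RAGentA.py code:

```python
# Hypothetical argparse setup for the new flags; parse_args() is given an
# explicit argv list here so the sketch is self-contained.
import argparse

parser = argparse.ArgumentParser(description="Run RAGentA")
parser.add_argument("--interface", choices=["huggingface", "vllm", "openai"],
                    default="huggingface", help="LLM backend to use")
parser.add_argument("--api_key", default=None,
                    help="API key for remote backends")
parser.add_argument("--api_base", default=None,
                    help="Base URL of an OpenAI-compatible API")
parser.add_argument("--use_graph", action="store_true",
                    help="Run the LangGraph-based workflow")
parser.add_argument("--remote_vllm", action="store_true",
                    help="Call vLLM via an API instead of running it locally")

args = parser.parse_args(["--interface", "vllm", "--remote_vllm",
                          "--api_base", "http://localhost:8000/v1"])
```

The boolean flags use `action="store_true"`, so omitting them preserves the previous (local, non-graph) behavior.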
