This repository showcases the integration between Agent Voice Response and OpenRouter. The application leverages OpenRouter's language models to process text input from users, providing intelligent, context-aware responses that enhance the virtual agent's capabilities.
To set up and run this project, you will need:
- Node.js and npm installed.
- An OpenRouter API key.
```
git clone https://github.com/agentvoiceresponse/avr-llm-openrouter.git
cd avr-llm-openrouter
npm install
```

Create a `.env` file in the root of the project to store your API key and configuration. You will need to add the following variables:
```
OPENROUTER_API_KEY=your_openrouter_api_key
OPENROUTER_MODEL=your_openrouter_model
PORT=6009
```

Replace `your_openrouter_api_key` with your actual OpenRouter API key.
Replace `your_openrouter_model` with your preferred model (e.g., `google/gemini-2.0-flash-lite-preview-02-05:free`).
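As a rough sketch of how these variables might be consumed at startup (the helper name and the model fallback are illustrative; the real `index.js` would typically load the `.env` file first with `require("dotenv").config()`):

```javascript
// Illustrative config loader for the variables above — not the project's
// actual code. Validates the API key and applies defaults.
function loadConfig(env) {
  if (!env.OPENROUTER_API_KEY) {
    throw new Error("OPENROUTER_API_KEY is required");
  }
  return {
    apiKey: env.OPENROUTER_API_KEY,
    // Falling back to a free model if none is configured is an assumption.
    model:
      env.OPENROUTER_MODEL ||
      "google/gemini-2.0-flash-lite-preview-02-05:free",
    port: Number(env.PORT) || 6009, // default port from this README
  };
}

// Example with placeholder values:
const config = loadConfig({
  OPENROUTER_API_KEY: "your_openrouter_api_key",
  PORT: "6009",
});
```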
Start the application by running the following command:
```
node index.js
```

The server will start on the port defined in the `PORT` environment variable (default: 6009).
The Agent Voice Response system integrates with OpenRouter to provide intelligent text-based responses to user queries. The server receives text input from users, forwards it to OpenRouter's API, and then returns the model's response to the user in real time. This allows the virtual agent to simulate conversational abilities, improving the overall user experience.
- Express.js Server: The server handles incoming requests from clients and sends them to OpenRouter’s API for processing.
- OpenRouter API Integration: The application sends text queries to OpenRouter and receives generated responses, which are relayed back to the user.
- Conversation Handling: The system can maintain the context of conversations for more interactive and dynamic exchanges.
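The forwarding step above can be sketched as a small request builder; the OpenRouter endpoint and Bearer-token header follow its OpenAI-compatible chat completions API, while the function name and Express wiring shown in the comments are illustrative, not this project's actual code:

```javascript
// Build the HTTP request that forwards a conversation to OpenRouter.
// The URL and auth header follow OpenRouter's OpenAI-compatible API;
// everything else is a sketch.
function buildOpenRouterRequest(apiKey, model, messages) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    options: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ model, messages, stream: true }),
    },
  };
}

// Inside an Express handler, the flow would look roughly like:
//   const { url, options } = buildOpenRouterRequest(apiKey, model, req.body.messages);
//   const upstream = await fetch(url, options); // Node 18+ global fetch
//   ...stream upstream body chunks back to the client...
```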
- OpenRouter API Request: The application sends user input to OpenRouter and specifies the model (e.g., `google/gemini-2.0-flash-lite-preview-02-05:free`) to be used for generating responses.
- Response Streaming: The server streams the response back to the client, allowing real-time interaction.
- Conversation Context: You can implement conversation handling by storing previous interactions and passing them as part of the prompt for more contextual responses.
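One simple way to implement that context handling is to keep a rolling message history and cap its length so the prompt stays bounded. The helper below is a sketch under that assumption, not code from this repository:

```javascript
// Append a turn to a conversation history and trim it to the most
// recent messages; maxMessages is an illustrative knob.
function appendToHistory(history, role, content, maxMessages = 20) {
  const next = [...history, { role, content }];
  // Drop the oldest turns once the cap is exceeded to bound token usage.
  return next.length > maxMessages ? next.slice(next.length - maxMessages) : next;
}

// Usage sketch: the full history is sent as `messages` on each request,
// giving the model the prior turns as context.
let history = [];
history = appendToHistory(history, "user", "What are your opening hours?");
history = appendToHistory(history, "assistant", "We are open 9am to 5pm.");
history = appendToHistory(history, "user", "And on weekends?");
```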
This endpoint accepts a JSON payload containing the user's messages and returns a response generated by OpenRouter.
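The exact payload shape and route are not spelled out here, but a request like the following would match the description; the field names and the route in the comment are assumptions, so check `index.js` for the actual contract:

```javascript
// Illustrative JSON payload for the endpoint described above.
const payload = {
  messages: [
    { role: "user", content: "Hello, what can you help me with?" },
  ],
};

// A client could send it with Node's built-in fetch (Node 18+); the
// route path here is hypothetical:
//   await fetch("http://localhost:6009/", {
//     method: "POST",
//     headers: { "Content-Type": "application/json" },
//     body: JSON.stringify(payload),
//   });
```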
In `index.js`, you can modify the parameters sent to OpenRouter, such as changing the model or adjusting the `temperature` and `max_tokens` to control the creativity and length of the responses:
```javascript
const openRouterRequest = {
  model: "gpt-4",
  prompt: query,
  temperature: 0.7,
  max_tokens: 150,
};
```

To use this application, you need to obtain an API key from OpenRouter and select a model to use. Follow these steps:
- Register at OpenRouter: OpenRouter Registration
- Retrieve your API key from: OpenRouter API Keys
- Select the model you want to use from: OpenRouter Models
Replace `your_openrouter_api_key` and `your_openrouter_model` in the `.env` file with the values you obtained from OpenRouter.
- GitHub: https://github.com/agentvoiceresponse - Report issues, contribute code.
- Discord: https://discord.gg/DFTU69Hg74 - Join the community discussion.
- Docker Hub: https://hub.docker.com/u/agentvoiceresponse - Find Docker images.
- NPM: https://www.npmjs.com/~agentvoiceresponse - Browse our packages.
- Wiki: https://wiki.agentvoiceresponse.com/en/home - Project documentation and guides.
AVR is free and open-source. Any support is entirely voluntary and intended as a personal gesture of appreciation. Donations do not provide access to features, services, or special benefits, and the project remains fully available regardless of donations.
MIT License - see the LICENSE file for details.