This project showcases how to call functions in a sample implementation of Hume's Empathic Voice Interface (EVI) using Hume's TypeScript SDK. Here, we have a simple EVI that calls a function to get the current weather for a given location.
To run this project locally, ensure your development environment has Node.js and pnpm installed.
To check the versions of pnpm and Node.js installed on a Mac via the terminal, you can use the following commands:
- For Node.js, enter the following command and press Enter:

  ```sh
  node -v
  ```

  This command will display the version of Node.js currently installed on your system, for example, `v21.6.1`.

- For pnpm, type the following command and press Enter:

  ```sh
  pnpm -v
  ```

  This command will show the version of pnpm that is installed, like `8.10.0`.
If you haven't installed these tools yet, running these commands will result in a "command not found" message. In that case, you will need to install them first. Node.js can be installed from its official website or via a package manager like Homebrew, and pnpm can be installed via npm (which comes with Node.js) by running `npm install -g pnpm` in the terminal.
Before running this project, you'll need to set up EVI with the ability to leverage tools or call functions. Follow the steps below for authentication, as well as creating a Tool and adding it to a configuration.
- Create a `.env` file in the root folder of the repo and add your API Key and Secret Key. There is an example file called `.env.example` with placeholder values, which you can simply rename to `.env`.

  Note the `VITE` prefix on the environment variables. This prefix is required for Vite to expose the environment variable to the client. For more information, see the Vite documentation on environment variables and modes.

  ```sh
  VITE_HUME_API_KEY=<YOUR API KEY>
  VITE_HUME_SECRET_KEY=<YOUR SECRET KEY>
  ```

  See our documentation on Setup for Tool Use for no-code and full-code guides on creating a tool and adding it to a configuration.
- Create a tool with the following payload:

  ```sh
  curl -X POST https://api.hume.ai/v0/evi/tools \
       -H "X-Hume-Api-Key: <YOUR_HUME_API_KEY>" \
       -H "Content-Type: application/json" \
       -d '{
    "name": "get_current_weather",
    "parameters": "{ \"type\": \"object\", \"properties\": { \"location\": { \"type\": \"string\", \"description\": \"The city and state, e.g. San Francisco, CA\" }, \"format\": { \"type\": \"string\", \"enum\": [\"celsius\", \"fahrenheit\"], \"description\": \"The temperature unit to use. Infer this from the users location.\" } }, \"required\": [\"location\", \"format\"] }",
    "version_description": "Fetches current weather and uses celsius or fahrenheit based on location of user.",
    "description": "This tool is for getting the current weather.",
    "fallback_content": "Unable to fetch current weather."
  }'
  ```

  This will yield a Tool ID, which you can assign to a new EVI configuration.
- Create a configuration equipped with that tool:

  ```sh
  curl -X POST https://api.hume.ai/v0/evi/configs \
       -H "X-Hume-Api-Key: <YOUR_HUME_API_KEY>" \
       -H "Content-Type: application/json" \
       -d '{
    "evi_version": "2",
    "name": "Weather Assistant Config",
    "voice": {
      "provider": "HUME_AI",
      "name": "ITO"
    },
    "language_model": {
      "model_provider": "ANTHROPIC",
      "model_resource": "claude-3-5-sonnet-20240620",
      "temperature": 1
    },
    "tools": [
      {
        "id": "<YOUR_TOOL_ID>"
      }
    ]
  }'
  ```

- Add the Config ID to your environment variables in your `.env` file:
  ```sh
  VITE_HUME_WEATHER_ASSISTANT_CONFIG_ID=<YOUR CONFIG ID>
  ```

- Add your Geocoding API key to your environment variables (free to use from geocode.maps.co):
  ```sh
  VITE_GEOCODING_API_KEY=<YOUR GEOCODING API KEY>
  ```

Below are the steps to run the project locally:

- Run `pnpm i` to install required dependencies.
- Run `pnpm build` to build the project.
- Run `pnpm dev` to serve the project at `localhost:5173`.
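To make the setup above concrete, here is a hedged sketch of the client-side tool handling: when EVI invokes `get_current_weather` mid-chat, the client receives a tool-call message carrying a JSON string of arguments, resolves the spoken location (the app uses geocode.maps.co; the URL format below is an assumption), fetches the weather, and sends back a response tied to the call's ID. The message shapes are simplified and the helper names are hypothetical, not the project's actual code:

```typescript
// Simplified shape of EVI's tool-call message (illustrative only, not the
// SDK's full type).
interface ToolCallMessage {
  name: string;
  toolCallId: string;
  parameters: string; // JSON-encoded arguments matching the tool's schema
}

interface WeatherArgs {
  location: string;
  format: "celsius" | "fahrenheit";
}

// Hypothetical helper: build a geocode.maps.co lookup URL for a free-text
// location, using the key stored in VITE_GEOCODING_API_KEY.
function buildGeocodingUrl(location: string, apiKey: string): string {
  const url = new URL("https://geocode.maps.co/search");
  url.searchParams.set("q", location);
  url.searchParams.set("api_key", apiKey);
  return url.toString();
}

// Dispatch a tool call and produce the payload sent back over the socket:
// toolCallId ties the answer to the original call; content carries the result.
async function handleToolCall(
  msg: ToolCallMessage,
  fetchWeather: (args: WeatherArgs) => Promise<string>
) {
  if (msg.name !== "get_current_weather") {
    return {
      type: "tool_error" as const,
      toolCallId: msg.toolCallId,
      content: "Unable to fetch current weather.", // mirrors fallback_content
    };
  }
  const args = JSON.parse(msg.parameters) as WeatherArgs;
  return {
    type: "tool_response" as const,
    toolCallId: msg.toolCallId,
    content: await fetchWeather(args),
  };
}
```

In the real app, `fetchWeather` would call a weather endpoint with the geocoded coordinates, and the returned object would be sent over the EVI WebSocket.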
This implementation of Hume's Empathic Voice Interface (EVI) is minimal, using default configurations for the interface and a basic UI to authenticate, connect to, and disconnect from the interface.
- Click the `Start` button to establish an authenticated connection and begin capturing audio.
- Upon clicking `Start`, you will be prompted for permission to use your microphone. Grant the permission to continue.
- Once permission is granted, you can begin speaking with the interface. The transcript of the conversation will be displayed on the webpage in real time.
- Click `Stop` when finished speaking with the interface to stop audio capture and disconnect the WebSocket.
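The Start/Stop flow above amounts to a small connection lifecycle. As an illustration only (this reducer is a sketch, not the app's actual state handling), the transitions can be modeled as:

```typescript
type ConnectionState = "disconnected" | "connecting" | "connected";

type UiEvent = "clickStart" | "micGranted" | "clickStop" | "socketClosed";

// Pure reducer for the Start/Stop lifecycle described above: Start begins
// connecting, microphone permission completes the connection, and Stop
// (or a dropped WebSocket) returns to disconnected.
function nextState(state: ConnectionState, event: UiEvent): ConnectionState {
  switch (state) {
    case "disconnected":
      return event === "clickStart" ? "connecting" : state;
    case "connecting":
      if (event === "micGranted") return "connected";
      return event === "clickStop" ? "disconnected" : state;
    case "connected":
      return event === "clickStop" || event === "socketClosed"
        ? "disconnected"
        : state;
  }
}
```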
