Notehound is a note-taking app that summarizes your meetings for you. Notes and transcripts are stored in RDS, alongside a Pinecone index that lets you search across your notes.
View a summary of your recent meetings (image 1). Automatically generate notes and action items, or view the transcription (image 2). Search across your notes using natural language via Pinecone (image 3).
- Lytix account. Instructions on how to get an API key can be found here
- Pinecone account
- Pinecone API key (see here)
- Access to a Postgres database; in our case we used AWS RDS
- Firebase account
- Hugging Face account and API key
- You'll also need to read & agree to the terms for this model
Navigate to the backend folder
cd Backend
Install our dependencies
pip install -r req.txt
Define your env vars:
- `LX_API_KEY`: Lytix API key
- `PINECONE_API_KEY`: Pinecone API key
- `FIREBASE_PROJECT_ID`: Firebase project ID
- `DATABASE_URL`: RDS database URL, e.g. `postgresql://{{ username }}:{{ password }}@{{ host }}:{{ port }}/{{ database }}`
- `HF_TOKEN`: Hugging Face token
- `FIREBASE_CONFIG`: Stringified Firebase service-account config (e.g. `JSON.stringify(firebaseConfig)`). Please see here for instructions.
  - The shape of your Firebase config looks like this:
```json
{
  "type": "service_account",
  "project_id": "{{ project_id }}",
  "private_key_id": "{{ private_key_id }}",
  "private_key": "{{ private_key }}",
  "client_email": "{{ client_email }}",
  "client_id": "{{ client_id }}",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/{{ client_email }}",
  "universe_domain": "googleapis.com"
}
```
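As a quick sanity check before starting the backend, a small script like the following (not part of the repo, just a sketch) can verify that all the variables above are set and that `FIREBASE_CONFIG` is valid JSON with the expected service-account fields:

```python
import json
import os

# Names of the variables the backend expects (from the list above).
REQUIRED = [
    "LX_API_KEY",
    "PINECONE_API_KEY",
    "FIREBASE_PROJECT_ID",
    "DATABASE_URL",
    "HF_TOKEN",
    "FIREBASE_CONFIG",
]


def check_env(env=os.environ):
    """Return a list of missing variables / missing FIREBASE_CONFIG fields."""
    missing = [name for name in REQUIRED if not env.get(name)]
    config_raw = env.get("FIREBASE_CONFIG")
    if config_raw:
        config = json.loads(config_raw)  # raises ValueError if not valid JSON
        # A service-account config must at least identify the project and key.
        for key in ("type", "project_id", "private_key", "client_email"):
            if key not in config:
                missing.append(f"FIREBASE_CONFIG.{key}")
    return missing


if __name__ == "__main__":
    problems = check_env()
    if problems:
        print("Missing configuration:", ", ".join(problems))
    else:
        print("Environment looks good")
```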
Navigate to the frontend folder
cd Frontend
Install our dependencies
npm install
Define your env vars:
- `NEXT_PUBLIC_BASE_URL`: http://localhost:4040
- `NEXT_PUBLIC_FIREBASE_CONFIG`: Stringified Firebase web config (e.g. `JSON.stringify(firebaseConfig)`)
  - The shape of your Firebase config looks like this:
```json
{
  "apiKey": "123-345",
  "authDomain": "{{ project }}.firebaseapp.com",
  "projectId": "{{ project }}",
  "storageBucket": "{{ project }}.appspot.com",
  "messagingSenderId": "{{ messagingSenderId }}",
  "appId": "{{ appId }}"
}
```
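Put together, a `.env.local` file for the frontend might look like this (placeholder values; Next.js only exposes variables prefixed with `NEXT_PUBLIC_` to the browser):

```bash
NEXT_PUBLIC_BASE_URL=http://localhost:4040
NEXT_PUBLIC_FIREBASE_CONFIG='{"apiKey":"123-345","authDomain":"myproject.firebaseapp.com","projectId":"myproject","storageBucket":"myproject.appspot.com","messagingSenderId":"1234567890","appId":"1:1234:web:abcd"}'
```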
Start the NextJS app
npm run dev
You can now log in and start using the app! Upload your audio and watch it process in the background 🚀
All requests are routed first through server.py.
The processing jobs happen on the server itself (to keep things simple), so there is no external Celery worker or message queue.
To accomplish this, we use FastAPI background tasks. We can't run a job directly inside the FastAPI server process, since it would hog all the resources and requests would stop being served. Instead, the FastAPI server manages a list of pending audio jobs and starts a new subprocess for each one via our BackgroundTaskQueue.py class.
This is what manages and runs the actual analysis of the meeting. Check out that code here.
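In simplified form, the pattern looks roughly like this (a hypothetical sketch, not the repo's actual BackgroundTaskQueue.py):

```python
import subprocess
import sys


class BackgroundTaskQueue:
    """Sketch of the pattern described above: each queued job is handed to a
    fresh subprocess, so the API server's own process stays responsive."""

    def __init__(self):
        self.pending = []  # argv lists for jobs waiting to run

    def enqueue(self, argv):
        self.pending.append(argv)

    def run_next(self):
        """Pop the oldest pending job and launch it in a child process."""
        if not self.pending:
            return None
        argv = self.pending.pop(0)
        # A real implementation would track the Popen handle, cap concurrency,
        # and clean up; here we just launch and return it.
        return subprocess.Popen(argv)


queue = BackgroundTaskQueue()
# Hypothetical job: a trivial Python one-liner standing in for the analysis.
queue.enqueue([sys.executable, "-c", "print('processing audio')"])
proc = queue.run_next()
proc.wait()
```

In the real app, the FastAPI endpoint enqueues a job when audio is uploaded and returns immediately, so the heavy meeting analysis never blocks the request/response cycle.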




