Smart-glasses-augmented language learning.
Features:
- See something, snap a picture of it, and hear the translation of the detected object spoken in your ear. You can then start a conversation with an LLM, asking follow-up questions
- Translations you learn throughout the day are saved to intuitive flash cards with pronunciation
- You can take quizzes for words you learned throughout the day
- You can chat with an AI about how you did on the quiz and what you can do to improve
- You can speak to a voice agent to practice the words you learned in a day
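The saved translations above map naturally onto a per-word record. A minimal sketch of what such a flash-card record might look like (field names are assumptions, not the app's actual schema):

```python
# Hypothetical flash-card record for one detected object.
# Field names are illustrative assumptions, not the real ViewLingo schema.
from dataclasses import dataclass

@dataclass
class FlashCard:
    word: str               # object's name in the target language
    translation: str        # translation spoken/shown to the learner
    pronunciation_url: str  # link to a generated pronunciation clip
    learned_at: str         # ISO-8601 timestamp, used to group daily quizzes

card = FlashCard("manzana", "apple",
                 "https://example.com/manzana.mp3",
                 "2024-05-01T09:30:00Z")
print(card.word, card.translation)
```

Records like this would back both the daily quiz and the voice-agent practice sessions.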
Frontend PWA/webapp repo - github.com/mslee300/viewlingo-frontend
To run:
1. Set up a .env file with these fields:
PORT=3000
PACKAGE_NAME=<App package name on the Mentra dashboard, e.g. com.example.app>
MENTRAOS_API_KEY=<also from the Mentra dashboard>
GEMINI_API_KEY=
ELEVENLABS_API_KEY=
ELEVENLABS_VOICE_ID=
ELEVENLABS_MODEL_ID=eleven_multilingual_v2
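A missing key in .env tends to surface only as a runtime failure inside Docker, so a quick pre-flight check can save a rebuild. A minimal sketch (the key list comes from the fields above; the helper itself is hypothetical, not part of the repo):

```python
# Hypothetical pre-flight check: confirm the .env text defines every key
# the stack needs before running `docker compose up`.
REQUIRED = [
    "PORT", "PACKAGE_NAME", "MENTRAOS_API_KEY",
    "GEMINI_API_KEY", "ELEVENLABS_API_KEY",
    "ELEVENLABS_VOICE_ID", "ELEVENLABS_MODEL_ID",
]

def missing_keys(env_text: str) -> list[str]:
    # Collect the key of every KEY=VALUE line, ignoring comments.
    defined = {
        line.split("=", 1)[0].strip()
        for line in env_text.splitlines()
        if "=" in line and not line.lstrip().startswith("#")
    }
    return [k for k in REQUIRED if k not in defined]

env_text = "PORT=3000\nGEMINI_API_KEY=abc"
print(missing_keys(env_text))
```

Run it against the contents of your .env; an empty list means all required keys are present.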
2. Run the TypeScript backend + Python API/DB with Docker:
docker compose up --build
3. Get a free dedicated URL from ngrok, set it up locally (on the same machine as Docker), and run:
ngrok http --url=<your ngrok url> 7777
Anyone who hits the ngrok URL is forwarded to your local port 7777, where nginx routes the request to either the TypeScript backend or the FastAPI/DB service, depending on the URL path.
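The routing step above could look roughly like this in nginx terms. This is a hypothetical sketch, not the repo's actual config; the service names, ports, and location paths are all assumptions:

```nginx
# Hypothetical sketch of the path-based routing described above.
server {
    listen 7777;

    # Python FastAPI + DB service (path is an assumption)
    location /api/ {
        proxy_pass http://fastapi:8000;
    }

    # TypeScript backend (MentraOS app server) handles everything else
    location / {
        proxy_pass http://backend:3000;
    }
}
```

With this shape, ngrok only needs to expose port 7777 and nginx fans requests out to the right container.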