- Create separate embedding service
- Explore and benchmark other sentence-transformer models for a later update
- Add separate search service
- Implement a persistent Pinecone index with high-speed queries and automatic recovery
- Implement frontend search with URL sync and cached results
- Create user handler
- Use linting to ensure standard practices
- Benchmark current performance
- Add caching with FastAPI
- Add batch embedding
- Add analytics dashboard
- Add a database for search
- Update the embedding model, balancing quality and speed
- Convert user handler to Go
- Convert caching handler to Go
- Dockerize user handler
- Dockerize embedding service
- Dockerize caching + search
- Ensure images stay ephemeral (no state persisted inside containers)
- Deploy to K8s or MicroK3s (V4.5)
- Redesign update client (V5)
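The batch-embedding item above could start as a simple chunking helper. This is a sketch only: `EMBED_BATCH_SIZE`, `batched`, and `embed_all` are hypothetical names, and the `model` is assumed to expose a sentence-transformers-style `encode(list_of_texts)` method.

```python
from typing import Iterator, List

EMBED_BATCH_SIZE = 64  # assumed default; tune against available CPU/GPU memory


def batched(texts: List[str], size: int = EMBED_BATCH_SIZE) -> Iterator[List[str]]:
    """Yield successive fixed-size chunks of the input texts."""
    for start in range(0, len(texts), size):
        yield texts[start:start + size]


def embed_all(texts: List[str], model) -> List[List[float]]:
    """Embed texts batch by batch instead of one call per text.

    `model` is assumed to have an `encode(batch) -> list of vectors`
    method, as sentence-transformers models do.
    """
    vectors: List[List[float]] = []
    for batch in batched(texts):
        vectors.extend(model.encode(batch))
    return vectors
```

Batching amortizes per-call overhead and lets the model vectorize work internally; the right batch size depends on hardware and would come out of the benchmarking item above.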
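For the FastAPI caching item, one minimal starting point is an in-process TTL cache wrapped around the search function behind an endpoint. The decorator below is a sketch under that assumption (`ttl_cache` is a hypothetical name, not an existing utility), and it would need to be replaced by a shared cache such as Redis once the service runs with multiple replicas.

```python
import time
from functools import wraps
from typing import Any, Callable, Dict, Tuple


def ttl_cache(ttl_seconds: float = 60.0) -> Callable:
    """Memoize a function's results for `ttl_seconds`, keyed on its arguments."""
    def decorator(fn: Callable) -> Callable:
        store: Dict[Tuple, Tuple[float, Any]] = {}

        @wraps(fn)
        def wrapper(*args, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            now = time.monotonic()
            hit = store.get(key)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]  # still fresh: return the cached result
            result = fn(*args, **kwargs)
            store[key] = (now, result)
            return result

        return wrapper
    return decorator
```

Applied to a route handler's underlying search call, repeated identical queries within the TTL window skip the embedding and index round-trip entirely.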