Subject Matter Expert AI
RAG-powered Subject Matter Expert AI built with Next.js & Gemini
SMEAI transforms your documents into an AI-powered subject matter expert. Upload your documents, ask questions, and get expert answers grounded in your data.
Key Features:
- Document-Based RAG: Upload PDFs, TXT, JSON, and MD files
- Gemini-Powered Responses: Streaming AI answers using Google's Gemini 2.5 Flash
- Custom API Keys: Use your own Gemini API key for personal quota
- Semantic Search: Vector embeddings for intelligent context retrieval
- Secure Authentication: Google OAuth via Supabase
- Real-Time Streaming: Modern chat experience with SSE
- Beautiful UI: Built with Next.js 16, Tailwind CSS & shadcn/ui
- Export Conversations: Download chat history as JSON
How It Works:

```
Documents → Text Extraction → Chunking → Vector Embeddings
  → Semantic Search → Context Injection → Gemini → SMEAI Response
```
Tech Stack:
- Frontend: Next.js 16 (App Router), React 19, TypeScript
- UI: Tailwind CSS 4, shadcn/ui, Lucide Icons, react-markdown
- AI/ML: Google Gemini 2.5 Flash, LangChain, Custom Vector Store
- Auth: Supabase (Google OAuth) with SSR support
- Notifications: Sonner (toast notifications)
- Deployment: Vercel-ready
Prerequisites:

- Node.js 18+ and npm/yarn
- Google Gemini API key
- Supabase project
Installation:

1. Clone the repository:

   ```bash
   git clone https://github.com/yourusername/smeai.git
   cd smeai
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Set up environment variables by creating a `.env.local` file:

   ```env
   # Google Gemini API
   GEMINI_API_KEY=your_gemini_api_key_here

   # Supabase Authentication
   NEXT_PUBLIC_SUPABASE_URL=your_supabase_project_url
   NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
   ```
4. Configure Supabase Authentication:
   - Go to Supabase Dashboard → Authentication → Providers
   - Enable Google OAuth
   - Add your site URL and redirect URL in Site URL settings
   - Add authorized redirect URLs:
     - Development: `http://localhost:3000/auth/callback`
     - Production: `https://yourdomain.com/auth/callback`
5. Run the development server:

   ```bash
   npm run dev
   ```

Usage:

Sign in:
- Click "Sign in with Google" on the landing page
- Authenticate via Supabase
Upload documents:
- Click the upload button
- Select a document (TXT, PDF, JSON, MD)
- Maximum 2 documents, 2MB each
- Documents are chunked and vectorized automatically
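The upload limits above can be checked before a file ever reaches the API. A minimal sketch in TypeScript; the function name and error messages are illustrative, not the project's actual code, but the limits are the ones documented here:

```typescript
// Illustrative validator for the documented upload limits.
// Function name and messages are hypothetical; limits come from this README.
const MAX_DOCS = 2;
const MAX_BYTES = 2 * 1024 * 1024; // 2MB per document
const ALLOWED_EXTENSIONS = new Set(["txt", "pdf", "json", "md"]);

function validateUpload(
  fileName: string,
  sizeBytes: number,
  currentDocCount: number
): string | null {
  if (currentDocCount >= MAX_DOCS) return "Maximum 2 documents allowed";
  if (sizeBytes > MAX_BYTES) return "File exceeds the 2MB limit";
  const ext = fileName.split(".").pop()?.toLowerCase() ?? "";
  if (!ALLOWED_EXTENSIONS.has(ext)) {
    return "Unsupported format (use TXT, PDF, JSON, or MD)";
  }
  return null; // null means the file is acceptable
}
```

Returning an error string rather than throwing maps naturally onto the toast notifications the app shows for failed uploads.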
Ask questions:
- Type your question in the chat input
- SMEAI retrieves relevant context from your documents
- Receive streaming AI responses grounded in your data
- Press Enter to send, Shift+Enter for a new line
Manage documents:
- View uploaded documents with chunk counts
- Delete documents with the delete icon
- Maximum 2 documents to keep the system lightweight
Use a custom API key:
- Click the "API key" button in the top navigation
- Enter your own Gemini API key to use your personal quota
- The key is stored locally and encoded for security
- Remove it anytime to fall back to the default server API key
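The "encoded" storage mentioned above appears to be base64 (see Troubleshooting), which obfuscates the key but is not encryption. A minimal sketch of the round trip; the localStorage key name in the comments is hypothetical:

```typescript
// base64 round trip for the locally stored API key.
// NOTE: base64 is obfuscation, not encryption; anyone with devtools access
// can decode the stored value.
const encodeKey = (key: string): string => btoa(key);
const decodeKey = (stored: string): string => atob(stored);

// In the browser the encoded key would live in localStorage, e.g.:
//   localStorage.setItem("smeai_api_key", encodeKey(userKey)); // hypothetical key name
//   const key = decodeKey(localStorage.getItem("smeai_api_key") ?? "");
```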
Export conversations:
- Click "Export as JSON" to download your chat history
- The export includes messages, document count, and metadata
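The export payload described above might look like the following; the field names are assumptions based on this README's description (messages, document count, metadata), not the app's actual schema:

```typescript
// Build a JSON export of the conversation. Field names are illustrative.
interface ChatMessage {
  role: "user" | "assistant";
  content: string;
}

function buildExport(messages: ChatMessage[], documentCount: number): string {
  return JSON.stringify(
    {
      exportedAt: new Date().toISOString(), // metadata
      documentCount,
      messageCount: messages.length,
      messages,
    },
    null,
    2 // pretty-print for readability
  );
}
```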
Project Structure:

```
smeai/
├── app/
│   ├── auth/
│   │   └── callback/        # OAuth callback handler
│   ├── chat/                # Chat interface (protected)
│   ├── api/
│   │   ├── chat/            # Streaming chat endpoint
│   │   ├── documents/       # Document management (GET/DELETE)
│   │   └── upload/          # Document upload & ingestion
│   ├── layout.tsx           # Root layout
│   └── page.tsx             # Landing page
├── lib/
│   ├── ai/
│   │   └── gemini.ts        # Gemini AI configuration
│   ├── rag/
│   │   └── vector.ts        # Vector store & embeddings
│   └── supabase/
│       ├── client.ts        # Supabase client (browser)
│       ├── server.ts        # Supabase server client
│       └── proxy.ts         # Auth middleware/proxy
├── components/
│   ├── common/
│   │   └── TopNav.tsx       # Navigation component
│   ├── Prism.tsx            # Animated background component
│   └── ui/                  # shadcn/ui components
├── proxy.ts                 # Next.js middleware entry
├── vector_store.json        # Persistent vector storage (gitignored)
└── README.md
```
Authentication Flow:

1. User lands on `/` (landing page)
2. Clicks "Sign in with Google"
3. Redirects to Supabase OAuth
4. Returns to `/auth/callback`
5. Redirects to `/chat` (protected route)
6. Authenticated users can access chat and upload documents
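The gatekeeping implied by the flow above (only /chat is documented as protected) reduces to a small routing decision. The helper below is a sketch of that decision, not the project's actual proxy.ts middleware:

```typescript
// Decide where a request should land based on authentication state.
// Only /chat is documented as protected; everything else passes through.
function resolveRoute(path: string, isAuthenticated: boolean): string {
  const isProtected = path === "/chat" || path.startsWith("/chat/");
  if (isProtected && !isAuthenticated) return "/"; // bounce to the landing page
  return path;
}
```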
Document Ingestion:

1. Upload document via `/api/upload`
2. Extract text content
3. Split into chunks (1000 chars, 200 overlap)
4. Generate embeddings using Gemini
5. Store in the persistent vector store
Query Flow:

1. User asks a question
2. Question is embedded using Gemini embeddings
3. Semantic similarity search finds the top 3 relevant chunks
4. Context is injected into the Gemini 2.5 Flash prompt
5. Gemini generates an answer based on the provided context
6. Response streams to the client in real time via SSE
7. Custom API keys are supported (stored in localStorage, encoded)
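On the client, each network chunk of the SSE stream arrives as a block of `data:` lines. A minimal parser for the standard wire format; the actual payload shape used by /api/chat is not specified here, and the `[DONE]` sentinel is an assumption borrowed from common streaming APIs:

```typescript
// Extract payloads from a raw Server-Sent Events chunk.
// Assumes the standard "data: <payload>" line format; "[DONE]" as an
// end-of-stream sentinel is a convention, not confirmed for this app.
function parseSseChunk(raw: string): string[] {
  return raw
    .split("\n")
    .filter((line) => line.startsWith("data: "))
    .map((line) => line.slice("data: ".length))
    .filter((payload) => payload !== "[DONE]");
}
```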
- Clean, modern chat interface with animated Prism background
- SMEAI branding with Brain icon and glass-morphism navigation
- Avatar-based message distinction (User/Bot)
- Rich markdown rendering for AI responses (code blocks, lists, tables)
- Auto-scrolling chat area with user scroll detection
- Fixed input area at the bottom
- Document upload with progress indicators
- Document management (view chunk counts, delete)
- Toast notifications for all actions (Sonner)
- Error handling and loading states
- Empty state with an example question prompt
- Export conversation as JSON
- Custom API key management dialog
- Clear chat functionality
- Mobile-responsive design
Deployment (Vercel):

1. Connect your repository to Vercel
2. Add environment variables in the Vercel Dashboard:
   - `GEMINI_API_KEY`
   - `NEXT_PUBLIC_SUPABASE_URL`
   - `NEXT_PUBLIC_SUPABASE_ANON_KEY`
3. Update Supabase redirect URLs: add your Vercel domain to the authorized redirect URLs
4. Deploy:

   ```bash
   npm run build
   ```

The `vercel.json` is configured to handle all routing correctly.
Vector Store:

- Storage: file-based (`vector_store.json`)
- Persistence: survives server restarts
- Embeddings: Gemini Embedding API (`gemini-embedding-001`)
- Similarity: cosine similarity
- Scalability: for production, migrate to Pinecone, Supabase Vector, or Weaviate
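A sketch of what the cosine-similarity search in lib/rag/vector.ts likely boils down to; the types and function names here are illustrative, not the project's actual code:

```typescript
// In-memory vector search: cosine similarity plus top-k selection.
interface StoredChunk {
  text: string;
  embedding: number[];
}

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  // Guard against zero vectors to avoid dividing by zero.
  return dot / (Math.sqrt(normA) * Math.sqrt(normB) || 1);
}

function topK(query: number[], store: StoredChunk[], k = 3): StoredChunk[] {
  return [...store]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

Sorting the whole store is O(n log n), which is fine for two documents' worth of chunks; dedicated vector databases become worthwhile at larger scale, which is why the scalability note above suggests Pinecone, Supabase Vector, or Weaviate for production.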
Document Limits:

- Max documents: 2
- Max file size: 2MB per document
- Supported formats: TXT, PDF, JSON, MD

Chunking:

- Chunk size: 1000 characters
- Overlap: 200 characters
- Splitter: LangChain RecursiveCharacterTextSplitter
- Metadata: each chunk is tagged with document ID and filename
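The project uses LangChain's RecursiveCharacterTextSplitter, which prefers paragraph and sentence boundaries when splitting; the fixed-window sketch below only illustrates how the 1000-character size and 200-character overlap settings interact:

```typescript
// Simplified fixed-size chunking with overlap (boundary-aware splitting,
// as LangChain does it, is omitted for clarity).
function chunkText(text: string, size = 1000, overlap = 200): string[] {
  const chunks: string[] = [];
  const step = size - overlap; // advance by size - overlap (800 with defaults)
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // final chunk reached the end
  }
  return chunks;
}
```

Each chunk repeats the last 200 characters of the previous one, so sentences that straddle a chunk boundary still appear intact in at least one chunk.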
Scripts:

```bash
npm run dev       # Start development server
npm run build     # Build for production
npm run start     # Start production server
npm run lint      # Lint code
npm run lint:fix  # Auto-fix linting issues
```

To add a shadcn/ui component:

```bash
npx shadcn@latest add [component-name]
```

Troubleshooting:

Responses lack document context:
- Cause: vector store not persisting, or documents not uploaded
- Fix: check that `vector_store.json` exists and contains data
Rate limit or quota errors:
- Cause: Gemini API quota exceeded
- Fix: wait for the quota to reset or upgrade your API plan
Authentication failures:
- Cause: missing or incorrect Supabase credentials
- Fix: verify `.env.local` and your Supabase dashboard settings
Routing errors on Vercel:
- Cause: missing routing configuration
- Fix: ensure `vercel.json` is present and configured correctly
Custom API key not working:
- Cause: API key invalid or not properly encoded
- Fix: check that the API key is valid and try saving it again; the key is base64-encoded in localStorage
License: MIT. Feel free to use this project for learning and production.
Acknowledgments:

- Google Gemini for powerful AI capabilities (Gemini 2.5 Flash & Embeddings)
- LangChain for RAG tooling and text splitting
- Supabase for seamless authentication with SSR support
- shadcn/ui for beautiful, accessible components
- Sonner for elegant toast notifications
- Vercel for effortless deployment
Built with ❤️ for creating domain-specific AI experts.
For questions or contributions, open an issue or pull request!
SMEAI β Your AI-powered Subject Matter Expert.