A clean, minimal AI chat interface powered by Ollama Cloud Models
DataVyn Labs · Ollama · Built with Streamlit
| | Feature | Details |
|---|---|---|
| 🤖 | Cloud Models | 19 verified Ollama cloud models from OpenAI, DeepSeek, Qwen, Gemini, Mistral, Kimi, GLM, MiniMax and more |
| ⚡ | Streaming | Responses stream token by token in real time |
| 🎨 | Model Avatars | Each AI shows its company logo in chat (ChatGPT, DeepSeek, Gemini, Mistral...) |
| 📎 | File Upload | Attach .txt .pdf .json .py .csv — content sent to the model |
| 🎙 | Audio Input | Record voice via mic, auto-transcribed to text |
| 🔐 | Secure Login | API key stored in session only — never saved to disk |
| 🌑 | Dark UI | Clean Claude.ai-style dark theme with Inter font |
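Streaming responses arrive as newline-delimited JSON from the chat endpoint, one object per fragment, with a final object marked `"done": true`. A minimal sketch of a consumer is below; the helper name and the canned byte lines are illustrative stand-ins for iterating `requests.post(..., stream=True).iter_lines()` against the live API:

```python
import json
from typing import Iterable, Iterator

def stream_tokens(lines: Iterable[bytes]) -> Iterator[str]:
    """Yield text fragments from an NDJSON chat stream.

    Each non-empty line is one JSON object; the text sits at
    message.content, and the final object sets done=true.
    """
    for raw in lines:
        if not raw:
            continue
        chunk = json.loads(raw)
        yield chunk.get("message", {}).get("content", "")
        if chunk.get("done"):
            break

# Offline demonstration with canned chunks (stand-in for a live response):
fake = [
    b'{"message": {"content": "Hel"}, "done": false}',
    b'{"message": {"content": "lo"}, "done": true}',
]
print("".join(stream_tokens(fake)))
```

In the app, each yielded fragment would be appended to the chat bubble as it arrives, which is what produces the token-by-token effect.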
```bash
git clone https://github.com/DataVyn-labs/ollama-agent
cd ollama-agent
pip install -r requirements.txt
streamlit run app.py
```

Open http://localhost:8501 in your browser.
- Go to ollama.com and create a free account
- Navigate to Settings → API Keys
- Click Create new key
- Paste it into the app login screen
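If you want to sanity-check a key outside the app, authenticated requests to the cloud endpoint carry it as a bearer token. The sketch below only assembles the request pieces and never touches the network; the helper name is illustrative, not code from app.py:

```python
OLLAMA_CHAT_URL = "https://ollama.com/api/chat"  # endpoint the app talks to

def auth_headers(api_key: str) -> dict:
    """Build headers for an authenticated Ollama cloud request."""
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

print(auth_headers("ok-demo")["Authorization"])
```

Because the key only ever lives in these in-memory headers (and in Streamlit session state), nothing is written to disk.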
| Model | ID | Company |
|---|---|---|
| GPT-OSS 120B | gpt-oss:120b | OpenAI (open weights) |
| GPT-OSS 20B | gpt-oss:20b | OpenAI (open weights) |
| DeepSeek V3.2 | deepseek-v3.2 | DeepSeek |
| DeepSeek V3.1 671B | deepseek-v3.1:671b | DeepSeek |
| Qwen3-Coder 480B | qwen3-coder:480b | Alibaba |
| Qwen3-Coder-Next | qwen3-coder-next | Alibaba |
| Qwen3-Next 80B | qwen3-next:80b | Alibaba |
| Kimi K2.5 | kimi-k2.5 | Moonshot AI |
| Kimi K2 Thinking | kimi-k2-thinking | Moonshot AI |
| Gemini 3 Flash Preview | gemini-3-flash-preview | Google |
| MiniMax M2.5 | minimax-m2.5 | MiniMax |
| MiniMax M2.1 | minimax-m2.1 | MiniMax |
| MiniMax M2 | minimax-m2 | MiniMax |
| GLM-5 | glm-5 | Zhipu AI |
| GLM-4.7 | glm-4.7 | Zhipu AI |
| Devstral 2 123B | devstral-2:123b | Mistral |
| Devstral Small 2 24B | devstral-small-2:24b | Mistral |
| Cogito 2.1 671B | cogito-2.1:671b | Essential AI |
| Nemotron 3 Nano 30B | nemotron-3-nano:30b | NVIDIA |
Full list → ollama.com/search?c=cloud
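All of these IDs drop into the same request body. The request shape follows Ollama's chat API (`model`, `messages`, `stream`, plus an `options` object where `num_predict` caps the response length); the builder function itself is an illustrative sketch, not the exact code in app.py:

```python
def build_chat_payload(model: str, messages: list,
                       temperature: float = 0.6,
                       max_tokens: int = 1200) -> dict:
    """Assemble an /api/chat request body for a cloud model."""
    return {
        "model": model,            # e.g. "deepseek-v3.2" from the table above
        "messages": messages,      # [{"role": "user", "content": "..."}]
        "stream": True,            # enables token-by-token streaming
        "options": {
            "temperature": temperature,
            "num_predict": max_tokens,   # Ollama's name for the token cap
        },
    }

payload = build_chat_payload("qwen3-coder:480b",
                             [{"role": "user", "content": "Hi"}])
print(payload["model"])
```

Swapping models is just a matter of changing the `model` string, which is why the app can offer all 19 from a single dropdown.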
Edit these constants at the top of app.py (~line 337):

```python
TEMPERATURE = 0.6   # 0.0 = focused | 1.0 = creative
MAX_TOKENS = 1200   # max tokens per response (hard cap: 2000)
```

```
datavynlabs_agent/
├── app.py             # Main Streamlit application
├── logo.png           # DataVyn Labs logo
├── requirements.txt   # Python dependencies
└── README.md          # This file
```
```
streamlit>=1.43.0
requests>=2.31.0
```
- API key is stored in session memory only — cleared on sign out or tab close
- Audio input (`accept_audio=True`) requires Streamlit 1.43+ — run `pip install --upgrade streamlit` if needed
- Uploaded file content is truncated to 4,000 characters before being sent to the model
- All model requests go through `https://ollama.com/api/chat`
- The system prompt instructs the model to give concise, complete answers within the token limit
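The 4,000-character cap on uploaded files can be sketched as follows; the helper name and the truncation marker are illustrative, not the exact code in app.py:

```python
MAX_FILE_CHARS = 4_000  # cap on uploaded-file text sent to the model

def clip_file_content(text: str, limit: int = MAX_FILE_CHARS) -> str:
    """Truncate uploaded file content before attaching it to the prompt."""
    if len(text) <= limit:
        return text
    return text[:limit] + "\n[... truncated ...]"

print(clip_file_content("short file"))
```

Capping the attachment keeps the prompt well under the model's context window, leaving room for the conversation history and the response itself.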
DataVyn Labs builds intelligent data automation and AI agent tools for modern teams.
© 2026 DataVyn Labs
