Winner – NVIDIA GTC Hackathon 2026 (Shortest Hackathon)
A unified creative operating system for designers, photographers, and digital artists. Real-time collaboration, AI-driven augmentation, and local model support – all in one dashboard. Built on NVIDIA DGX Spark (GB10) at GTC San Jose.
| Layer | Technology |
|---|---|
| Frontend | React 19, port 3001 |
| Backend | FastAPI (Python), port 3002 |
| Database | SQLite |
| Vector DB | ChromaDB |
| Embeddings | all-MiniLM-L6-v2 (local) |
| AI (Cloud) | Claude via Anthropic API |
| AI (Local) | Ollama – qwen2.5:72b, llama3.2-vision:11b |
| Image Gen | ComfyUI + Stable Diffusion (port 8188) |
1. Backend

```bash
cd backend
pip install fastapi uvicorn anthropic pydantic-settings sqlalchemy chromadb sentence-transformers httpx python-multipart
mkdir -p ../data/chroma ../data/uploads ../data/images
uvicorn main:app --host 0.0.0.0 --port 3002 --reload
```

2. Frontend

```bash
cd frontend
npm install
npm start
```

Open http://localhost:3001 – log in with admin / admin.
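Once both servers are running, a quick smoke test confirms everything is listening. This helper is not part of the repo; it's a minimal sketch assuming the default ports from the stack table:

```python
import socket

def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP service accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Default ports from the stack table; adjust if you changed them.
SERVICES = {"frontend": 3001, "backend": 3002, "comfyui": 8188}

def check_all(host: str = "127.0.0.1") -> dict:
    """Map each service name to whether its port is reachable."""
    return {name: is_up(host, port) for name, port in SERVICES.items()}
```

Run `check_all()` after startup; any `False` entry means that service has not come up yet.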
On the DGX Spark, install and run:

```bash
# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh
ollama pull qwen2.5:72b
ollama pull llama3.2-vision:11b

# Install ComfyUI
git clone https://github.com/comfyanonymous/ComfyUI.git ~/ComfyUI
cd ~/ComfyUI && pip3 install -r requirements.txt --break-system-packages

# Start everything
cd ~/Dashunbored/backend
mkdir -p ../data/chroma ../data/uploads ../data/images
pip3 install fastapi uvicorn pydantic-settings sqlalchemy chromadb sentence-transformers httpx python-multipart --break-system-packages
uvicorn main:app --host 0.0.0.0 --port 3002 &
cd ~/Dashunbored/frontend
npm install && npm start &
python3 ~/ComfyUI/main.py --listen 0.0.0.0 --port 8188 --cpu &
```

Access from any device on your network: http://SPARK_IP:3001
The main AI workspace. Supports streaming responses, file uploads, knowledge base selection, and conversation history.
- Model selector – switch between Claude models (cloud) or Ollama models (local, no API cost)
  - `qwen2.5:72b` – best for creative writing, briefs, style direction
  - `llama3.2-vision:11b` – upload photos for AI analysis
- Knowledge bases – select one or all sources to give the AI context from your connected tools
- File upload – drag and drop files into chat (images, PDFs, docs)
- Edit & regenerate – click any message to edit or regenerate
- Export – download any conversation as markdown
- Web search – built-in DuckDuckGo search panel
Homepage widgets (visible when no chat is open):
- Calendar – next 4 days of events
- Images – synced photos from connected services
- Documents – recent docs with source info
- APPS/API INPUTS – grouped by category (Creative Tools, Storage, Communication, Business)
Visual asset hub. See all synced content from connected creative tools in one place – Lightroom photos, Figma files, Frame.io clips, Behance boards.
Local AI image generation powered by ComfyUI running on the GB10.
- Text-to-image with prompt and negative prompt
- Style presets: Photorealistic, Cinematic, Studio Portrait, Editorial, Golden Hour, B&W, Concept Art, Watercolor
- Size options: Square, Portrait, Landscape, Wide
- Adjustable steps and CFG scale
- Generated images saved to gallery
- No API cost – runs entirely on local hardware
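ComfyUI also exposes an HTTP API, so generation can be scripted against the same instance on port 8188. Below is a hedged sketch of queuing a text-to-image job via the `/prompt` endpoint; the node IDs, sampler settings, and checkpoint filename (`sd_xl_base_1.0.safetensors`) are placeholders for a typical Stable Diffusion graph, not the workflow the dashboard actually submits:

```python
import json
import urllib.request

def build_txt2img_workflow(prompt, negative="", steps=20, cfg=7.0,
                           width=1024, height=1024, seed=0,
                           ckpt="sd_xl_base_1.0.safetensors"):
    """Minimal ComfyUI graph for text-to-image (node IDs and checkpoint are placeholders)."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode",                 # positive prompt
              "inputs": {"text": prompt, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",                 # negative prompt
              "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": width, "height": height, "batch_size": 1}},
        "5": {"class_type": "KSampler",
              "inputs": {"model": ["1", 0], "positive": ["2", 0],
                         "negative": ["3", 0], "latent_image": ["4", 0],
                         "seed": seed, "steps": steps, "cfg": cfg,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "dash"}},
    }

def queue_prompt(workflow, host="127.0.0.1", port=8188):
    """POST the graph to ComfyUI's /prompt endpoint and return its JSON reply."""
    req = urllib.request.Request(
        f"http://{host}:{port}/prompt",
        data=json.dumps({"prompt": workflow}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The steps and CFG scale sliders in the UI map directly onto the `KSampler` inputs above.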
Creative AI agent that can autonomously complete multi-step tasks using your connected knowledge bases.
Example tasks (no API key needed with Ollama):
- Generate a mood board brief from your CC Library assets
- Write a photography brief from a Lightroom album
- Analyze a Figma file and summarize design decisions
- Draft creative direction from a client brief document
Shared team calendar with Zoom scheduling.
- Create organizations and invite team members with a join code
- Add/edit/delete events with color coding
- Connect personal Zoom – AI can schedule meetings from chat
- Public booking link: `/book/your-username`
Email client for Gmail and Outlook.
- Read, compose, and manage email
- AI-assisted drafting from the chat interface
- Email signature builder
Connect external data sources to make their content searchable by the AI.
Creative Tools:
| Connector | What it syncs |
|---|---|
| Adobe Creative Cloud | Lightroom albums, photo metadata, CC Library colors/styles/graphics |
| Figma | Design files, components, styles, review comments |
| Frame.io | Video projects, clip metadata, timecoded review comments |
| Behance | Portfolio projects and inspiration |
| Unsplash | Stock photography collections |
| Notion | Pages, databases, project briefs |
Storage & Files:
| Connector | What it syncs |
|---|---|
| Dropbox | Files and folders |
| Google Drive | Drive files |
| SharePoint | Microsoft 365 documents |
| AWS S3 | S3 bucket contents |
| Local Files | Always available – upload directly |
Communication:
| Connector | What it syncs |
|---|---|
| Email (Gmail/Outlook) | Inbox messages |
| Slack | Channels and DMs |
Business:
| Connector | What it syncs |
|---|---|
| HubSpot | CRM contacts and deals |
| Web Scraper | Any public URL |
To connect: Admin → Connectors → click a connector → enter credentials → Save → Sync.
Go to Admin → API Keys:
- Anthropic (Claude) – required for Claude models. Get from console.anthropic.com. Not needed if using Ollama local models.
- Zoom – Client ID + Secret from Zoom Marketplace. Enables AI-scheduled meetings.
Running without API keys: select `qwen2.5:72b` or `llama3.2-vision:11b` from the model dropdown. All chat, agent, and image-scoring features work fully locally at zero cost.
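For reference, talking to a local Ollama model is a single HTTP call to its `/api/chat` endpoint (Ollama's default port is 11434). This is a generic sketch, not the dashboard's own client code:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama port

def build_chat_request(model: str, prompt: str) -> dict:
    """Payload for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def ask_local(model: str, prompt: str) -> str:
    """Send a chat request to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_chat_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["message"]["content"]
```

Because this never leaves the machine, there is no per-token cost and no external API key.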
Upload any photo in the Agent page to get an AI score on how well it will perform when posted:
- Overall score (0β100)
- Breakdown: composition, lighting, color, sharpness, subject clarity
- Specific improvement suggestions
- Platform-specific notes
Select `llama3.2-vision:11b` as your model for best results.
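Under the hood, a vision score like this can be requested from Ollama's `/api/generate` endpoint by attaching the photo as base64. A minimal sketch; the scoring prompt below is illustrative, not the app's actual prompt:

```python
import base64

def build_score_request(image_bytes: bytes,
                        model: str = "llama3.2-vision:11b") -> dict:
    """Payload for Ollama's /api/generate with an attached image.

    Ollama's multimodal endpoint expects images as base64 strings
    in the "images" list alongside the text prompt.
    """
    return {
        "model": model,
        "prompt": ("Score this photo 0-100 for social media performance. "
                   "Break down composition, lighting, color, sharpness, "
                   "and subject clarity, and suggest improvements."),
        "images": [base64.b64encode(image_bytes).decode("ascii")],
        "stream": False,
    }
```

POSTing this payload to `http://localhost:11434/api/generate` returns the model's critique as text; the dashboard parses it into the score breakdown above.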
Share http://YOUR_IP:3001/book/your-username publicly. Visitors see your availability and can book meetings – no login required.
| Field | Value |
|---|---|
| Username | admin |
| Password | admin |
Change in Admin → Settings after first login.
```bash
# On the Spark
ollama pull qwen2.5:72b          # 47GB - best for creative tasks
ollama pull llama3.2-vision:11b  # 7.8GB - photo/image analysis
```

Configure Ollama to accept remote connections:

```bash
sudo mkdir -p /etc/systemd/system/ollama.service.d
echo -e "[Service]\nEnvironment=OLLAMA_HOST=0.0.0.0" | sudo tee /etc/systemd/system/ollama.service.d/override.conf
sudo systemctl daemon-reload && sudo systemctl restart ollama
```