
Architecture

Overview

Mingly is an Electron desktop application with an optional headless Node.js server variant. It connects to multiple LLM providers through a unified service layer.

┌──────────────────────────────────────────────┐
│                   Renderer                   │
│          React + Zustand + Tailwind          │
└──────────────────────┬───────────────────────┘
                       │ IPC (contextBridge)
┌──────────────────────▼───────────────────────┐
│                   Preload                    │
│       Secure IPC bridge (80+ channels)       │
└──────────────────────┬───────────────────────┘
                       │
┌──────────────────────▼───────────────────────┐
│                 Main Process                 │
│  ┌─────────────────────────────────────┐     │
│  │            ServiceLayer             │     │
│  │ (transport-agnostic business logic) │     │
│  └──┬────┬────┬────┬────┬────┬─────────┘     │
│     │    │    │    │    │    │               │
│  Client Router System RAG  MCP  Tracking     │
│  Mgr         Prompt        Client Engine     │
│     │        Mgr                             │
│  ┌──▼───────────────────────────────────┐    │
│  │             LLM Clients              │    │
│  │ Anthropic │ OpenAI │ Google │ Ollama │    │
│  └──────────────────────────────────────┘    │
│                                              │
│  ┌───────────┐  ┌─────────────┐              │
│  │ Database  │  │ Security    │              │
│  │ (sql.js)  │  │ Sanitizer   │              │
│  └───────────┘  │ RateLimiter │              │
│                 │ RBAC        │              │
│                 │ Validator   │              │
│                 └─────────────┘              │
└──────────────────────────────────────────────┘
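
The Preload layer exposes a narrow API to the renderer through Electron's contextBridge rather than handing it raw Node.js access. A minimal sketch of that pattern follows; the channel and method names (chat:send, chat:token, sendChat, onChatToken) are illustrative assumptions, not Mingly's actual channels.

```typescript
// Preload sketch: expose a small, typed API over IPC instead of raw ipcRenderer.
// Channel names here are assumptions for illustration only.
import { contextBridge, ipcRenderer } from 'electron';

contextBridge.exposeInMainWorld('mingly', {
  // Renderer calls window.mingly.sendChat(...); the main process handles 'chat:send'.
  sendChat: (conversationId: string, message: string) =>
    ipcRenderer.invoke('chat:send', { conversationId, message }),

  // Subscribe to streamed tokens pushed from the main process.
  onChatToken: (listener: (token: string) => void) => {
    ipcRenderer.on('chat:token', (_event, token: string) => listener(token));
  },
});
```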

Core Components

ServiceLayer

The central orchestrator (src/main/services/service-layer.ts). All business logic flows through here, making it transport-agnostic — the same code serves both IPC (Electron) and HTTP/WebSocket (server mode).

Chat pipeline:

  1. Command detection and extraction
  2. System prompt building with custom modes
  3. RAG context injection (non-blocking)
  4. Message streaming from LLM client
  5. Database persistence
  6. Token/cost tracking
  7. Event streaming via callback

Design principle: Non-blocking failure modes — RAG, database, or tracking failures never block the chat response.
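
A condensed sketch of that ordering, including the non-blocking handling around RAG, persistence, and tracking. All names below (sendChat, PipelineDeps, and the helper methods) are illustrative assumptions rather than Mingly's actual API.

```typescript
// Hypothetical sketch of the chat pipeline described above.
type ChatEvent = { type: 'token'; token: string } | { type: 'done' };

interface PipelineDeps {
  detectCommand(msg: string): { command?: string; text: string };
  buildSystemPrompt(mode: string, command?: string): string;
  ragContext(text: string): Promise<string>;                   // RAG lookup
  stream(prompt: string, text: string): AsyncIterable<string>; // LLM client
  persist(conversationId: string, user: string, assistant: string): Promise<void>;
  track(answer: string): Promise<void>;                         // token/cost tracking
}

async function sendChat(
  deps: PipelineDeps,
  conversationId: string,
  mode: string,
  message: string,
  onEvent: (e: ChatEvent) => void,
): Promise<void> {
  // 1. Command detection and extraction
  const { command, text } = deps.detectCommand(message);

  // 2. System prompt building with custom modes
  let prompt = deps.buildSystemPrompt(mode, command);

  // 3. RAG context injection is non-blocking: a failure never stops the chat
  try {
    const ctx = await deps.ragContext(text);
    if (ctx) prompt += `\n\nContext:\n${ctx}`;
  } catch { /* continue without RAG context */ }

  // 4. Stream tokens from the LLM client, forwarding each one as an event (step 7)
  let answer = '';
  for await (const token of deps.stream(prompt, text)) {
    answer += token;
    onEvent({ type: 'token', token });
  }

  // 5 & 6. Persistence and tracking are also non-blocking
  try {
    await deps.persist(conversationId, text, answer);
    await deps.track(answer);
  } catch { /* never block the response on storage or tracking errors */ }

  onEvent({ type: 'done' });
}
```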

LLM Clients

Provider-specific adapters in src/main/llm-clients/:

| Client          | Provider  | Features                        |
|-----------------|-----------|---------------------------------|
| AnthropicClient | Anthropic | Claude models, streaming        |
| OpenAIClient    | OpenAI    | GPT models, streaming           |
| GoogleClient    | Google    | Gemini models, streaming        |
| OllamaClient    | Ollama    | Local models, no API key needed |

All clients implement a common interface with sendMessage() and streaming support.
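
The interface itself isn't reproduced on this page; a plausible minimal shape, for illustration only, could look like this:

```typescript
// Illustrative sketch of a common client interface; the real Mingly types may differ.
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

interface LLMClient {
  // One-shot completion.
  sendMessage(model: string, messages: ChatMessage[]): Promise<string>;
  // Token-by-token streaming, consumed with `for await`.
  streamMessage(model: string, messages: ChatMessage[]): AsyncIterable<string>;
}

// Each provider adapter (AnthropicClient, OpenAIClient, GoogleClient, OllamaClient)
// implements this interface, so ServiceLayer can swap providers without branching.
```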

Database

WASM-based SQLite via sql.js (src/main/database/index.ts). No native modules — works cross-platform without compilation.

  • File-based persistence with debounced saves (100ms); see the sketch below this list
  • Migration system with version tracking
  • See Database for schema details
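
A minimal sketch of the debounced-save pattern described above, using the public sql.js API. The file path and helper names are assumptions.

```typescript
// Sketch: in-memory sql.js database flushed to disk at most once per 100 ms of writes.
import initSqlJs from 'sql.js';
import { promises as fs } from 'node:fs';

const DB_PATH = '/path/to/mingly.sqlite'; // hypothetical location

// Minimal structural type for the parts of the sql.js Database used here.
type SqlJsDatabase = { run(sql: string): void; export(): Uint8Array };

let db: SqlJsDatabase;
let saveTimer: NodeJS.Timeout | undefined;

export async function openDatabase(): Promise<void> {
  const SQL = await initSqlJs(); // loads the WASM build, no native modules
  const existing = await fs.readFile(DB_PATH).catch(() => undefined);
  db = new SQL.Database(existing);
}

// Debounce: every write schedules a flush; rapid writes collapse into one file save.
function scheduleSave(): void {
  if (saveTimer) clearTimeout(saveTimer);
  saveTimer = setTimeout(async () => {
    const bytes = db.export(); // Uint8Array snapshot of the whole database
    await fs.writeFile(DB_PATH, Buffer.from(bytes));
  }, 100);
}

export function run(sql: string): void {
  db.run(sql);
  scheduleSave();
}
```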

Security

Four-layer security system in src/main/security/:

  • InputSanitizer — Prompt injection detection (8 attack types)
  • RateLimiter — Per-handler and global request throttling
  • RBACManager — Role-based access control with org/team hierarchy
  • SensitiveDataDetector — PII/credential scanning before cloud transmission

See Security for full details.
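
A rough sketch of how a request might pass through those four layers before reaching ServiceLayer. The class roles come from the list above, but the method names and wiring are assumptions.

```typescript
// Illustrative composition of the four security layers around one handler call.
// All method names here are assumptions; only the layer roles come from the docs.
interface SecurityDeps {
  rateLimiter: { allow(userId: string, channel: string): boolean };
  rbacManager: { can(userId: string, action: string): boolean };
  inputSanitizer: { sanitize(text: string): string };      // prompt-injection checks
  sensitiveDataDetector: { scan(text: string): string[] }; // PII/credential findings
  serviceLayer: { sendChat(userId: string, text: string): Promise<string> };
}

async function guardedChat(
  deps: SecurityDeps,
  userId: string,
  channel: string,
  message: string,
): Promise<string> {
  // RateLimiter: per-handler and global throttling
  if (!deps.rateLimiter.allow(userId, channel)) throw new Error('Rate limit exceeded');

  // RBACManager: role-based access control
  if (!deps.rbacManager.can(userId, 'chat:send')) throw new Error('Not authorized');

  // InputSanitizer: prompt-injection detection on the raw input
  const sanitized = deps.inputSanitizer.sanitize(message);

  // SensitiveDataDetector: block PII/credentials before cloud transmission
  if (deps.sensitiveDataDetector.scan(sanitized).length > 0) {
    throw new Error('Sensitive data detected in outgoing message');
  }

  return deps.serviceLayer.sendChat(userId, sanitized);
}
```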

MCP Integration

Model Context Protocol support in src/main/mcp/ enables connecting external tools and data sources to LLM conversations.
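
For a sense of what connecting a server involves, here is a small client sketch using the public @modelcontextprotocol/sdk. The exact SDK calls and the example server command are assumptions and may differ from Mingly's internal wrapper.

```typescript
// Sketch: connect to an MCP server over stdio and list the tools it exposes.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'npx',
  // Placeholder server: a filesystem MCP server restricted to one directory.
  args: ['-y', '@modelcontextprotocol/server-filesystem', '/some/allowed/dir'],
});

const client = new Client({ name: 'mingly', version: '1.0.0' }, { capabilities: {} });
await client.connect(transport);

// Tools discovered here can be offered to the LLM as callable functions.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```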

RAG System

Retrieval-Augmented Generation enriches LLM responses with user documents:

  • Local embedding via RAG server (Python FastAPI)
  • Vector storage in Qdrant
  • Context injection into prompts with source attribution
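
A simplified sketch of that flow from the Node.js side. The RAG server URL, the /query endpoint, and the response shape are assumptions for illustration.

```typescript
// Sketch: ask the local RAG server for relevant chunks and prepend them to the prompt.
// The endpoint '/query' and the response shape are assumptions, not a documented API.
interface RagChunk {
  text: string;
  source: string; // document the chunk came from, used for source attribution
}

async function withRagContext(question: string, systemPrompt: string): Promise<string> {
  const res = await fetch('http://127.0.0.1:8001/query', { // hypothetical local RAG server
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ query: question, top_k: 5 }),
  });
  if (!res.ok) return systemPrompt; // non-blocking: fall back to the plain prompt

  const chunks = (await res.json()) as RagChunk[];
  const context = chunks
    .map((c) => `[Source: ${c.source}]\n${c.text}`)
    .join('\n\n');

  return `${systemPrompt}\n\nUse the following context when relevant:\n${context}`;
}
```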

Deployment Modes

┌───────────────────┐     ┌───────────────────┐     ┌───────────────────┐
│    Standalone     │     │      Server       │     │  Client (Hybrid)  │
│                   │     │                   │     │                   │
│  Electron app     │     │  Node.js server   │     │  Electron app     │
│  Local database   │     │  REST + WS API    │     │  Connects to      │
│  Direct LLM calls │     │  Multi-user       │     │  remote server    │
│                   │     │  Central API keys │     │  Uses server keys │
└───────────────────┘     └───────────────────┘     └───────────────────┘

Configured via Settings > Network & AI Server. See Deployment for server setup.

Data Flow

Standalone Mode

User → Renderer → IPC → ServiceLayer → LLM Client → Provider API
                                    ↕
                              Database (local)

Server Mode

Client → HTTP/WS → MinglyAPIServer → ServiceLayer → LLM Client → Provider API
                                              ↕
                                        Database (server)

Technology Stack

| Layer         | Technology                   |
|---------------|------------------------------|
| Desktop shell | Electron                     |
| Frontend      | React, Zustand, Tailwind CSS |
| Build tool    | Vite                         |
| Backend       | Node.js, TypeScript          |
| Database      | sql.js (SQLite WASM)         |
| Vector DB     | Qdrant                       |
| Embeddings    | Python FastAPI (RAG server)  |
| Testing       | Vitest                       |
| CI/CD         | GitHub Actions               |
| Packaging     | electron-builder             |

Back to: Home | Related: API-Reference | Database | Security
