Releases: smitkunpara/ChatKnot
v0.4.2
ChatKnot v0.4.2 Release Notes
⌨️ High-Precision Android Keyboard Support — Replaced the standard KeyboardAvoidingView with a manual keyboard height tracker for a more stable and predictable typing experience on Android. No more "sticky" inputs or inconsistent lifting.
🛠️ Hardened Data Management — Significant improvements to local data safety and migration:
- Resilient Data Deletion — Fixed a regression in the "Delete All Local Data" action, ensuring all Realm, MMKV, and hardware-backed secrets are cleared even when legacy fallbacks are absent.
- Deep Import Validation — Settings imports now undergo deep JSON structure traversal with path-level validation. Invalid entries are skipped with precise reporting, while valid data is preserved.
- Post-Import Reconciliation — Consolidated post-import reporting using the app's native warning UI, ensuring AI models and MCP tools are correctly reconciled after a configuration load.
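The deep import validation described above can be sketched as a recursive walk over the imported JSON against an expected shape. This is an illustrative sketch only; `validateImport` and the shape map are hypothetical names, not ChatKnot's actual API.

```typescript
// Hypothetical sketch of path-level settings validation: walk the imported
// JSON against an expected shape, skip invalid entries, and report their
// dotted paths so the user gets precise feedback.
type Shape = { [key: string]: "string" | "number" | "boolean" | Shape };

interface ValidationResult {
  valid: Record<string, unknown>;
  skipped: string[]; // dotted paths of entries that failed validation
}

function validateImport(data: unknown, shape: Shape, path = ""): ValidationResult {
  const result: ValidationResult = { valid: {}, skipped: [] };
  if (typeof data !== "object" || data === null) {
    result.skipped.push(path || "(root)");
    return result;
  }
  for (const [key, expected] of Object.entries(shape)) {
    const childPath = path ? `${path}.${key}` : key;
    const value = (data as Record<string, unknown>)[key];
    if (value === undefined) continue; // missing keys are absent, not errors
    if (typeof expected === "string") {
      // Leaf: compare the runtime type against the expected primitive type.
      if (typeof value === expected) result.valid[key] = value;
      else result.skipped.push(childPath);
    } else {
      // Nested object: recurse and bubble up any skipped paths.
      const nested = validateImport(value, expected, childPath);
      result.valid[key] = nested.valid;
      result.skipped.push(...nested.skipped);
    }
  }
  return result;
}
```

For example, importing `{ theme: "dark", fontSize: "big" }` against a shape expecting a numeric `fontSize` would keep `theme` and report `fontSize` as skipped.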
🎨 UI/UX & Refinements:
- Themed Status Dialogs — Replaced all remaining native Android system alerts with the app's themed popup system for a consistent, premium design language throughout the Settings workflows.
- MCP Panel Constraints — Added max-height constraints and internal scrolling to the MCP/tool call details panel, preventing large responses from disrupting the chat layout.
Full changelog: CHANGELOG.md
v0.4.1
ChatKnot v0.4.1 Release Notes
⚡ Drastic Performance & Size Optimization — In our most aggressive cleanup yet, we've stripped away more than seven native dependencies, resulting in a significantly leaner, faster, and more stable Android experience:
- Reduced Bundle Size — Removed `AsyncStorage`, `Reanimated`, `MaskedView`, and several polyfills; now built purely for modern Android.
- Improved Security Fallback — Replaced disk-based fallbacks with a high-performance in-memory Volatile Storage layer.
- Zero-Warning Production Build — Cleared multiple internal Gradle/Kotlin deprecations to ensure a rock-solid, error-free binary.
🎯 Context Usage & Model Awareness — A significant overhaul of how token usage is tracked, displayed, and enforced:
- Dynamic Context Limits — Automatically extracts model context windows directly from AI provider `/models` endpoints (including OpenRouter/Custom OpenAI).
- Hard Switch Enforcement — Prevents switching to a model whose context limit is smaller than your current conversation's used tokens, showing a "Context Limit Exceeded" warning with the option to start a new chat instead.
- Transactional Token Tracking — Finalizes token counts only at the end of AI turns for a "flicker-free" and precise usage display.
- Smart Rewind — Editing or retrying a message now automatically "rewinds" your context ring to that historical turn, with intelligent token estimation for legacy chats.
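The hard switch enforcement above amounts to a simple guard before a model change. This is a hedged sketch; the names (`ModelInfo`, `canSwitchModel`) are assumptions for illustration, not the app's actual API.

```typescript
// Illustrative guard: block a model switch when the conversation's used
// tokens already meet or exceed the target model's context window.
interface ModelInfo {
  id: string;
  contextWindow: number; // max tokens, e.g. parsed from a /models endpoint
}

type SwitchDecision =
  | { allowed: true }
  | { allowed: false; reason: string };

function canSwitchModel(target: ModelInfo, usedTokens: number): SwitchDecision {
  if (usedTokens >= target.contextWindow) {
    return {
      allowed: false,
      reason:
        `Context Limit Exceeded: conversation uses ${usedTokens} tokens, ` +
        `but ${target.id} supports only ${target.contextWindow}.`,
    };
  }
  return { allowed: true };
}
```

A caller would show the warning (and offer a fresh chat) whenever `allowed` comes back `false`.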
📦 UI/UX & Native Refinements:
- Seamless Assistant Turns — Continuous "seam-free" response flow for sequential tool-calling turns; no more interstitial gaps.
- Metadata-Rich Exports — Markdown, PDF, and JSON exports now permanently record the specific AI model name and active mode for every message.
Full changelog: CHANGELOG.md
v0.4.0
ChatKnot v0.4.0 Release Notes
Android-Only Packaging — Removed iOS project files/config and iOS build checks so releases now target Android only.
Encrypted Realm Chat Storage — Chat data now uses encrypted Realm database with normalized entities (conversations, messages, tool calls, attachments) for secure local storage with clean schema separation.
Delete All Local Data — New Settings action to completely wipe local app data including chats, drafts, context usage, settings, and secrets — useful for fresh starts or privacy.
Long-Chat Paging — Added "Load Older Messages" control for handling very long conversations more efficiently.
App Size Optimization — Reduced APK binary size from 142 MB to ~45 MB (a 68% reduction) by enabling R8 minification and resource shrinking for modern devices.
Fresh Reinstall Policy — Android allowBackup="false" now ensures app data is fully removed on uninstall/reinstall.
UI Polish — Fixed share icon background on first open, replaced line-loader with round spinner, and ensured error replies also save model metadata.
Full changelog: CHANGELOG.md
v0.3.1
ChatKnot v0.3.1
Context Usage Indicator — Visual progress ring in chat composer shows how much context window is used, with color changes at 70% and 90% thresholds.
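The threshold behaviour of the context ring can be sketched as a pure function; the colour names here are placeholders, and the thresholds are the 70%/90% values stated above.

```typescript
// Sketch of the context-ring threshold logic: green by default, amber at
// 70% of the context window used, red at 90%.
function contextRingColor(usedTokens: number, contextWindow: number): string {
  const ratio = usedTokens / contextWindow;
  if (ratio >= 0.9) return "red";
  if (ratio >= 0.7) return "amber";
  return "green";
}
```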
Chat UI Improvements — Better Markdown rendering, improved scrolling stability, refined thinking animations with millisecond-accurate timing, and permanent request metadata storage.
Stability & Tests — Fixed various edge cases, added comprehensive regression tests, and scoped TypeScript checks to avoid scanning SDK files.
Full changelog: CHANGELOG.md
v0.3.0
🚀 ChatKnot v0.3.0
This major release introduces the Mode System, a powerful way to organize your AI workflows. It also brings significant UI refinements, smarter tool-calling interactions, improved performance, and more precise startup health-check behaviour.
✨ Key Highlights
🤖 Intelligent Mode System
- Specialized Modes: Create and manage multiple Modes (e.g., "Coding", "Research", "Creative"). Each mode has its own:
- System Prompt: Fine-tune the AI's personality and instructions.
- Model Selection: Pin specific models to specific tasks.
- MCP & OpenAPI Overrides: Enable or disable specific tools per mode.
- Mode Persistence: Your conversations now remember which mode they were started in.
- Migration Path: Existing settings are automatically migrated into the new Mode structure without data loss.
⚡ Refined Settings UI
- Quick Toggles: Enable or disable AI Providers and MCP Servers directly from the list view—no more digging into sub-menus for simple changes.
- Full-Page Editors: Providers and MCP servers now use clean, focused full-page editors with explicit Save/Discard/Delete controls.
- Targeted Refreshes: Models and tools are refreshed only when needed — opening the Model Picker fetches models and capabilities; opening an MCP server editor silently refreshes that server's tools, avoiding unnecessary background refreshes.
- Model Picker Polish: Provider settings now show compact model selection counts, and the Manage Models picker has been refined to better match the chat model selector layout.
🎯 Smarter Startup Warnings
- Warnings now only appear for changes that matter to you: a hidden AI model being removed or a disabled tool being removed will no longer trigger a notification.
- Warning messages now name exactly what changed — e.g., `Model "gpt-4o" removed from "OpenAI"` or `Tool "search" removed from "My MCP"` — so you always know what to act on.
🔐 Safe Defaults for New Items
- New AI models discovered on a provider are now hidden by default until you explicitly enable them in the Model Picker.
- New MCP tools discovered on a known server are now disabled by default until you explicitly allow them.
📤 Cleaner Export / Import
- Export now produces a minimal snapshot: only your visible AI models and enabled MCP tools are included, keeping the payload small and intentional.
- Import treats anything not in the exported file (models/tools discovered afterwards) as hidden/disabled by default, matching the safe-defaults behaviour above.
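The safe-defaults import behaviour reduces to a set-membership merge. A minimal sketch, assuming string ids for models/tools (`reconcileVisibility` is a hypothetical name):

```typescript
// Hedged sketch of the safe-defaults merge: any locally discovered model or
// tool that is absent from the imported snapshot stays hidden/disabled.
function reconcileVisibility(
  discovered: string[],      // all model/tool ids known locally
  importedVisible: string[], // ids explicitly present in the export file
): Map<string, boolean> {
  const visibleSet = new Set(importedVisible);
  const visibility = new Map<string, boolean>();
  for (const id of discovered) {
    visibility.set(id, visibleSet.has(id)); // not in snapshot => hidden
  }
  return visibility;
}
```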
🔒 Security & Performance
- App Startup Optimization: Memoized core state (modes and MCP overrides) in the root `App` component to eliminate redundant service reconnections and re-initializations during navigation.
- Enhanced Caching:
- Added base64 hydration and provider instance caching to reduce UI jitter and latency.
- Implemented a smart cache eviction policy in the `ProviderFactory` to manage memory usage during long-running sessions.
- Efficient Message Rendering: Optimized internal message list memoization, ensuring the chat UI remains responsive even in conversations with hundreds of messages.
- Resource Cleanup: Removed ineffective background caches and fixed animation leaks in the streaming UI to ensure long-term stability and battery efficiency.
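A least-recently-used policy is one common way to bound a provider-instance cache during long sessions. The sketch below is illustrative only; the real `ProviderFactory` internals are not shown in these notes.

```typescript
// Minimal LRU sketch of a bounded cache, assuming eviction of the least
// recently used entry once a maximum size is exceeded.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries: number) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark this entry as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Map iterates in insertion order, so the first key is the LRU entry.
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}
```

Keyed by provider id, this keeps frequently used provider instances warm while letting stale ones be reclaimed.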
💬 Streaming & MCP UX Refinements
- Realtime Visible Streaming Restored: When the user is on the active chat screen, chunks now render immediately as they arrive for true progressive typing feedback.
- No Partial Assistant Persistence: Assistant chunk state stays in memory during generation and is saved only when the response completes or the user presses Stop.
- Background/Hidden Screen Behavior: If the user leaves chat (e.g., opens Settings), streaming work continues in memory without repainting hidden chat UI; returning to chat shows the latest accumulated chunk instantly.
- Immediate MCP Auto-Scroll: As soon as MCP tool-call UI cards are created, chat now jumps to the bottom immediately instead of waiting for MCP response completion.
- Per-Chat Stop/Loading State: Stop button and loading indicators are now conversation-scoped, so switching to another chat no longer shows a false active generation state.
- Per-Chat Draft Persistence: Composer drafts are saved per conversation and restored after chat switches, app backgrounding, and full app restart.
- Broader Provider Stream Compatibility: OpenAI-compatible streaming now handles more SSE variants correctly, including CRLF-delimited frames and providers that send final streamed content via `message` payloads.
- Legacy Tool-Calling Fallback Restored: Non-OpenAI-compatible endpoints once again receive `functions`/`function_call` fallback fields in addition to modern `tools`, improving compatibility with older OpenAI-style providers.
- Composer Mode Chip Sizing: The mode selector in the composer now uses only the width needed for the mode name instead of spanning the full input row.
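Tolerating CRLF-delimited SSE frames comes down to normalizing line endings before splitting events on the blank-line separator. A minimal sketch (not the app's actual parser):

```typescript
// Sketch of tolerant SSE frame parsing: accept both LF- and CRLF-delimited
// frames and extract the `data:` payload from each event.
function parseSseFrames(buffer: string): string[] {
  // Normalize CRLF to LF first, then split on the blank line between events.
  const events = buffer.replace(/\r\n/g, "\n").split("\n\n");
  const payloads: string[] = [];
  for (const event of events) {
    for (const line of event.split("\n")) {
      if (line.startsWith("data:")) {
        payloads.push(line.slice(5).trim());
      }
    }
  }
  return payloads.filter((p) => p.length > 0);
}
```

A production parser would additionally buffer partial frames across network chunks; this sketch assumes complete events.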
🧪 Debugging Improvements
- Structured Dev Logs: Added centralized dev-only debug logging with file and function labels across app startup, chat state, provider requests, and MCP runtime to make runtime tracing much easier during development.
📄 Full Changelog
For a detailed list of every commit and technical fix, please see CHANGELOG.md.
v0.2.3
This release introduces the Thinking UI for models with reasoning capabilities, along with critical fixes for chat scrolling, export functionality, and tool calling stability.
✨ Key Highlights
🧠 Thinking UI & Export
- Export with Thinking: Added an option to include model internal reasoning in chat exports (PDF, Markdown, and JSON). Thinking blocks are exported as collapsible `<details>` blocks in Markdown for a clean reading experience.
- Thinking Block Component: New dedicated UI for models that output internal thought processes (`<think>` tags).
- Live Thinking Timer: Real-time counter showing elapsed thinking time ("Thinking for 5s").
- Shimmer Animation: Subtle animated pulse effect during reasoning for visual feedback.
- Markdown in Thoughts: Full Markdown support in thinking content - code blocks, lists, and rich formatting.
- OpenAI-Compatible Reasoning: Supports `reasoning_content` and `reasoning` deltas in the streaming API.
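Separating `<think>`-tagged reasoning from the visible answer can be sketched as a small string split; `splitThinking` is a hypothetical helper name, not the component's real API.

```typescript
// Illustrative split of a reply into thinking and answer parts, assuming
// <think>…</think> delimiters as described in the notes above.
function splitThinking(text: string): { thinking: string; answer: string } {
  const match = text.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) return { thinking: "", answer: text };
  return {
    thinking: match[1].trim(),            // content shown in the thinking block
    answer: text.replace(match[0], "").trim(), // everything else is the answer
  };
}
```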
🛠️ Tool Calling & Stability Fixes
- Tool Calling Compatibility: Fixed an issue where AI models failed to trigger tool calls due to non-standard tool names (dots/braces) or invalid JSON schemas.
- Sanitized Tool Names: All tool names now strictly adhere to the OpenAI regex `^[a-zA-Z0-9_]{1,64}$` for maximum provider compatibility.
- Improved Schema Extraction: Ensured `inputSchema` always includes `type: 'object'` at the top level, preventing rejection by stricter OpenAI-compatible providers.
- Stop AI Stability: Resolved multiple crashes when stopping AI mid-stream, including "AbortError" unhandled rejections and state updates on unmounted components.
- Intelligent Retry Placement: The retry button now correctly appears on the last meaningful assistant message, automatically hiding empty interrupted turn messages.
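Sanitizing against the stated pattern `^[a-zA-Z0-9_]{1,64}$` means replacing disallowed characters and capping the length. A minimal sketch (`sanitizeToolName` is an assumed name, not the app's exported function):

```typescript
// Sketch of tool-name sanitization against the OpenAI pattern
// ^[a-zA-Z0-9_]{1,64}$ : replace disallowed characters with underscores
// and truncate to 64 characters.
const TOOL_NAME_RE = /^[a-zA-Z0-9_]{1,64}$/;

function sanitizeToolName(name: string): string {
  const cleaned = name.replace(/[^a-zA-Z0-9_]/g, "_").slice(0, 64);
  return cleaned.length > 0 ? cleaned : "_"; // the pattern forbids empty names
}
```

Names like `search.web` (dots) or `tool{v2}` (braces) then pass provider-side validation after mapping, while the original name can still be kept for display.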
💬 Chat Experience Polishing
- Stability Regression Tests: Added 150+ comprehensive unit tests for AbortController stream cancellation, tool name sanitization, and OpenAPI schema extraction.
- Improved Auto-Scroll: Fixed messages hiding behind input during streaming. Increased buffer to handle expandable thinking blocks.
- Android Keyboard Precision: Fixed input box not returning to initial position when keyboard is dismissed on Android.
- FlatList Rendering Consistency: Fixed "Rendered fewer hooks than expected" error in `MessageBubble` when stopping AI mid-response.
🎨 Visual Refinements
- Stop Button Styling: Updated to danger/red background with white icon for better visibility and consistency.
- Horizontal Rule Fix: Fixed `---` being invisible in dark mode.
📄 Full Changelog
For a complete list of technical commits and internal bug fixes, please see CHANGELOG.md.
v0.2.2
This release focuses on polishing the chat experience with critical UI/UX fixes for scrolling, keyboard handling, and visual feedback.
✨ Key Highlights
💬 Chat Experience Polishing
- Auto-Scroll Buffer Fix: Resolved streaming responses scrolling messages behind the floating input.
- Android Keyboard Precision: Fixed input box not returning to initial position when keyboard is dismissed on Android.
- Dynamic Input Padding: Input box now uses conditional bottom padding.
🎨 Visual Refinements
- Stop Button Styling: Updated stop button to use danger/red background color with white icon for better visibility and consistency with warning/error styling.
📄 Full Changelog
For a complete list of technical commits and internal bug fixes, please see CHANGELOG.md.
v0.2.1
This release marks the first stable iteration of the 0.2.x series. We have completely overhauled the chat interface for better performance and keyboard reliability, while hardening the internal MCP engine for professional workflows.
✨ Key Highlights
🎨 Major UI & UX Refinement
- Modern Floating Input: Completely restructured the message composer to use a permanent "Stacked" layout. This ensures the input area is never obscured by the keyboard and provides a stable foundation for future feature expansion.
- Integrated Fade Gradients: Replaced heavy blur effects with high-performance
LinearGradientoverlays. Messages now gracefully fade into the background at the top and bottom of the screen. - Flicker-Free Typing: Implemented height-based hysteresis and debouncing to prevent UI "flickering" and associated app crashes when text wraps near line-edges.
- Keyboard Precision: Optimized Android `KeyboardAvoidingView` to ensure the input box snaps perfectly back to its initial position when the keyboard is dismissed.
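The height-based hysteresis mentioned above can be sketched as a tracker that ignores sub-threshold height changes, so wrapping near a line edge cannot make the layout oscillate. This is an assumed simplification, not the component's real implementation.

```typescript
// Hedged sketch of height-based hysteresis: only commit a new composer
// height when the measured change exceeds a threshold, otherwise keep the
// previously committed height to avoid flicker.
function makeHeightTracker(thresholdPx: number) {
  let committed = 0;
  return (measuredPx: number): number => {
    if (Math.abs(measuredPx - committed) >= thresholdPx) {
      committed = measuredPx; // large change: accept the new height
    }
    return committed; // small jitter: keep the previous height
  };
}
```

In practice this would be combined with debouncing so rapid successive measurements collapse into one layout update.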
🖼️ Multimodal & File Handling
- Rich Attachments: Seamlessly attach Images, PDFs, and Text files to your messages.
- Capability-Aware UI: Model capability tags (`vision`, `tools`, `file`) are now clearly displayed in the selector, with automatic filtering to prevent sending unsupported attachments to models.
🛡️ Hardened Reliability
- Startup Health Checks: Updated the health check system to proactively validate AI providers and MCP servers. Reachable endpoints are verified, and unreachable ones are auto-disabled with clear feedback.
- SSE Memory Safety: Fixed a critical memory leak in SSE (Server-Sent Events) streams where timers weren't being cleared during token generation.
- Dual-System Prompts: Requests now use a two-part system instruction (User Prompt + App Default), improving reliability and following AI provider best practices.
🔐 Security & Compatibility
- Privacy First: Removed the unnecessary `RECORD_AUDIO` permission from the manifest to maintain a clean privacy footprint.
- Storage Optimization: Large image data is no longer persisted to disk state, significantly reducing storage bloat and improving UI responsiveness.
📄 Full Changelog
For a complete list of technical commits and internal bug fixes, please see CHANGELOG.md.
v0.2.0-beta
This beta release expands ChatKnot with major usability, reliability, and multimodal improvements for real-world MCP + AI workflows.
- 📤 Rich Chat Export: Export any conversation as PDF, Markdown, or JSON with configurable tool input/output inclusion.
- 🖼️ Multimodal Chat Input: Attach images and files directly in chat, with model-capability-aware filtering to prevent unsupported payloads.
- 🧠 Smarter Model & Prompt Runtime: Requests now use two system instructions (user prompt + app default instruction), improving clarity and maintainability of agent behavior.
- 🧩 Capability-Aware Tooling: Model capability tags (`vision`, `tools`, `file`) are shown in selectors, and MCP/tool payloads are automatically gated by model support.
- 🛡️ Startup Safety Hardening: Health checks now proactively validate AI providers and MCP servers, auto-disable unreachable endpoints on network issues, and surface clear warning reasons.
- 🔐 Broader Provider Compatibility: Improved OpenAI-compatible endpoint handling with safer model discovery fallbacks and auth-header compatibility for protected proxies.
Full technical details are available in CHANGELOG.md.
⚠️ Beta Version Note
This is a v0.2.0-beta release. Please continue to back up your settings export JSON, as schemas and runtime behaviors may still evolve during the beta cycle.
v0.1.0-beta
This is the first beta release of ChatKnot, a privacy-focused mobile assistant designed for the Model Context Protocol (MCP).
- 🛡️ Secure Privacy Architecture: Conversations and API keys are protected with hardware-backed encryption (MMKV + Android Keystore), ensuring your data never leaves the device unencrypted.
- 🔌 Advanced MCP Support: Fully integrated Model Context Protocol engine allowing you to connect OpenAI-compatible providers to local or remote tool servers.
- 👁️ Model Visibility Control: Fetch model lists from any provider and use the built-in "eye-toggle" to curate exactly which AI models appear in your chat list.
- ⚡ Proactive Health Checks: Automated startup verification system that validates your AI endpoints and MCP servers to prevent runtime failures.
- 💾 Safe Settings Management: A new draft-based configuration system with strict schema validation for importing/exporting your setup via JSON.
Full technical details available in CHANGELOG.md.
⚠️ Beta Version Note
This is a v0.1.0 Beta release. Please ensure you back up your settings import/export JSON periodically, as internal schemas may evolve during this beta phase.