Visual Assist

Empowering independence through intelligent visual assistance


⚠️ BETA VERSION — This app is currently in active development and testing. Features may change and bugs may exist. Please report any issues you find on the project's issue tracker.




Features · Requirements · Getting Started · Architecture · Privacy





🎯 Overview

Visual Assist is a native iOS application designed to help visually impaired users navigate their environment safely and independently. Built with Apple's latest frameworks, it leverages the power of:

| Framework | Role |
|-----------|------|
| ARKit | Depth sensing (LiDAR) |
| Vision | Text recognition |
| Core ML | Object detection |
| SwiftUI | Modern interface |


🧪 Beta Status

Visual Assist is currently in beta testing via TestFlight.

This means:

  • 🔨 Active Development — New features being added regularly
  • 🐛 Bug Fixes — Known issues are being addressed
  • 📝 Feedback Welcome — Your input helps improve the app
  • ⚠️ Not Production Ready — Use with awareness of potential issues

Known Limitations

| Area | Status | Notes |
|------|--------|-------|
| Navigation Mode | ✅ Working | Core functionality complete |
| Text Reading | ✅ Working | OCR may vary with lighting |
| Object Detection | 🔄 Testing | Accuracy improvements ongoing |
| Voice Commands | ✅ Working | English only for now |
| Apple Watch | ⏳ Planned | Coming in a future release |


✨ Features

🧭 Navigation Mode

Real-time obstacle detection powered by LiDAR sensor technology.

  • 3-Zone Scanning
  • Distance Alerts
  • Haptic Feedback
  • Floor Detection
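
A minimal sketch of how the 3-zone scan could be built on ARKit's scene depth, sampling the depth map at mid-height and keeping the nearest reading per horizontal third. `ZoneDistances` and the zone split are illustrative assumptions, not the app's actual `LiDARService`:

```swift
import ARKit

// Illustrative container; the real service may model zones differently.
struct ZoneDistances {
    var left: Float = .infinity
    var center: Float = .infinity
    var right: Float = .infinity
}

final class DepthScanner: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        let config = ARWorldTrackingConfiguration()
        // LiDAR devices expose a per-frame depth map via scene depth semantics.
        if ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) {
            config.frameSemantics.insert(.sceneDepth)
        }
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }

        // Sample the middle row; depth values are Float32 meters.
        let row = base.advanced(by: (height / 2) * rowBytes)
            .assumingMemoryBound(to: Float32.self)
        var zones = ZoneDistances()
        for x in 0..<width {
            let meters = row[x]
            guard meters.isFinite, meters > 0 else { continue }  // skip invalid pixels
            if x < width / 3 {
                zones.left = min(zones.left, meters)
            } else if x < 2 * width / 3 {
                zones.center = min(zones.center, meters)
            } else {
                zones.right = min(zones.right, meters)
            }
        }
        // A real implementation would debounce these values and drive
        // the distance alerts and haptic feedback from them.
        print(zones)
    }
}
```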

📖 Text Reading

Point-and-read OCR with natural speech synthesis.

  • Live OCR
  • Freeze Frame
  • Natural Speech
  • Tap to Focus
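
A minimal sketch of the live OCR step with Vision, assuming frames arrive as `CVPixelBuffer`s from the camera; the function shape is illustrative:

```swift
import Vision

// Sketch: recognize text in one camera frame and hand back the lines.
func recognizeText(in pixelBuffer: CVPixelBuffer,
                   completion: @escaping ([String]) -> Void) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else {
            return completion([])
        }
        // Keep the top candidate string for each detected text region.
        completion(observations.compactMap { $0.topCandidates(1).first?.string })
    }
    request.recognitionLevel = .accurate   // favor accuracy over latency
    request.usesLanguageCorrection = true

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```

The recognized lines would then feed a long-lived `AVSpeechSynthesizer` for the natural speech step.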

👁️ Object Awareness

AI-powered scene understanding and description.

  • Object Detection
  • Scene Description
  • People Counting
  • On-Device ML
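
A minimal sketch of running a bundled detection model through Vision; `ObjectDetector` is a placeholder for whatever .mlmodel the app actually ships:

```swift
import Vision
import CoreML

// Sketch: detect objects in a frame with a bundled Core ML model.
// `ObjectDetector` stands in for the Xcode-generated model class.
func detectObjects(in pixelBuffer: CVPixelBuffer,
                   completion: @escaping ([String]) -> Void) {
    guard let mlModel = try? ObjectDetector(configuration: MLModelConfiguration()).model,
          let model = try? VNCoreMLModel(for: mlModel) else {
        return completion([])
    }
    let request = VNCoreMLRequest(model: model) { request, _ in
        let observations = request.results as? [VNRecognizedObjectObservation] ?? []
        // Report only confident detections, using each one's top label.
        completion(observations
            .filter { $0.confidence > 0.5 }
            .compactMap { $0.labels.first?.identifier })
    }
    request.imageCropAndScaleOption = .scaleFill

    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try? handler.perform([request])
}
```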

🎤 Voice Commands

🗣️ "Navigate"           → Start obstacle detection
🗣️ "Read text"          → Begin text reading
🗣️ "What's around me"   → Describe surroundings
🗣️ "Stop"               → Stop current action
🗣️ "Faster" / "Slower"  → Adjust speech rate
🗣️ "Help"               → List all commands


📋 Requirements

📱 Hardware

| Device | LiDAR |
|--------|-------|
| iPhone 12 Pro / Pro Max | ✅ |
| iPhone 13 Pro / Pro Max | ✅ |
| iPhone 14 Pro / Pro Max | ✅ |
| iPhone 15 Pro / Pro Max | ✅ |
| iPhone 16 Pro / Pro Max | ✅ |

💻 Software

| Requirement | Version |
|-------------|---------|
| iOS | 17.0+ |
| Xcode | 15.0+ |
| Swift | 5.9+ |

🔑 Permissions

  • 📷 Camera (scene capture for navigation, reading, and detection)
  • 🎤 Microphone (voice command input)
  • 🗣️ Speech Recognition (transcribing voice commands)
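
All three prompts map to standard authorization APIs, alongside the matching Info.plist usage-description keys (NSCameraUsageDescription, NSMicrophoneUsageDescription, NSSpeechRecognitionUsageDescription). A sketch of requesting them up front; the real app may prompt lazily per mode:

```swift
import AVFoundation
import Speech

// Sketch: request all three permissions at first launch.
func requestPermissions() {
    AVCaptureDevice.requestAccess(for: .video) { granted in
        print("Camera:", granted)
    }
    AVCaptureDevice.requestAccess(for: .audio) { granted in
        print("Microphone:", granted)
    }
    SFSpeechRecognizer.requestAuthorization { status in
        print("Speech recognition:", status == .authorized)
    }
}
```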


🚀 Getting Started

Installation

# Clone the repository
git clone https://github.com/yadava5/VisualAssist.git

# Navigate to project
cd VisualAssist

# Open in Xcode
open VisualAssist.xcodeproj

Build & Run

| Step | Action |
|------|--------|
| 1️⃣ | Select your Development Team in Signing & Capabilities |
| 2️⃣ | Connect your iPhone Pro via USB |
| 3️⃣ | Press ⌘ + R to build and run |

First Launch

Grant permissions → App announces "Visual Assist ready" → Start using!



🏗️ Architecture

VisualAssist/
├── 📁 App/                    # Entry point & state
│   ├── VisualAssistApp.swift
│   └── AppState.swift
├── 📁 Views/                  # SwiftUI interface
│   ├── HomeView.swift
│   ├── NavigationModeView.swift
│   ├── TextReadingModeView.swift
│   ├── ObjectAwarenessModeView.swift
│   └── Components/
├── 📁 Services/               # Business logic
│   ├── LiDARService.swift
│   ├── CameraService.swift
│   ├── SpeechService.swift
│   └── HapticService.swift
├── 📁 Models/                 # Data structures
└── 📁 Utilities/              # Helpers

Technology Stack

| Framework | Purpose |
|-----------|---------|
| ARKit | LiDAR depth sensing |
| Vision | Text recognition (OCR) |
| Core ML | Object detection |
| AVFoundation | Camera capture |
| Speech | Voice commands |
| Core Haptics | Haptic feedback |

Design Patterns

| Pattern | Usage |
|---------|-------|
| MVVM | Clean view/logic separation |
| Combine | Reactive @Published properties |
| Swift Concurrency | Modern async/await |
| iOS 26 Design | Liquid glass UI effects |
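
A minimal sketch of how these patterns compose, in the spirit of the view model a file like NavigationModeView.swift might hold; the type names, properties, and bodies here are illustrative, not the app's actual code:

```swift
import SwiftUI
import Combine

// Illustrative view model; the real NavigationModeView may differ.
@MainActor
final class NavigationViewModel: ObservableObject {
    @Published var nearestObstacle: Float = .infinity  // Combine-published state
    @Published var isScanning = false

    func startScanning() async {
        isScanning = true
        // A real implementation would await depth updates from LiDARService
        // and surface them through the @Published properties above.
    }
}

struct NavigationModeSketchView: View {
    @StateObject private var viewModel = NavigationViewModel()

    var body: some View {
        Text(viewModel.isScanning
             ? String(format: "Nearest obstacle: %.1f m", viewModel.nearestObstacle)
             : "Starting scan…")
            .task { await viewModel.startScanning() }  // Swift Concurrency entry point
    }
}
```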


🔒 Privacy

| Feature | Description |
|---------|-------------|
| 🔐 On-Device Processing | All ML runs locally on your iPhone |
| 📡 No Network Required | Works completely offline |
| 🚫 No Data Collection | Nothing leaves your device |
| 📊 No Analytics | Zero tracking or telemetry |
| 👤 No Account | Use immediately, no sign-up |


♿ Accessibility

Visual Assist is built with accessibility as a core principle:

VoiceOver & UI

  • ✅ Full VoiceOver support
  • ✅ Dynamic Type compatible
  • ✅ High Contrast mode
  • ✅ Reduce Motion respected
  • ✅ Large touch targets (44pt min)

Haptic Patterns

| Pattern | Meaning |
|---------|---------|
| · | Action confirmed |
| ·· | Mode changed |
| ~~~ | Critical obstacle |
| ··· | Warning |
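
A minimal sketch of playing these patterns with Core Haptics; the timing and intensity values are illustrative assumptions, not the app's actual `HapticService`:

```swift
import CoreHaptics

// Sketch of a haptic player for the pattern table above.
final class HapticSketch {
    private let engine: CHHapticEngine?

    init() {
        engine = try? CHHapticEngine()
        try? engine?.start()
    }

    /// Plays `count` short transient taps, e.g. 1 for "action confirmed",
    /// 2 for "mode changed", 3 for "warning".
    func taps(_ count: Int, gap: TimeInterval = 0.15) {
        let events = (0..<count).map { i in
            CHHapticEvent(
                eventType: .hapticTransient,
                parameters: [
                    CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0),
                    CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.6),
                ],
                relativeTime: Double(i) * gap)
        }
        play(events)
    }

    /// Continuous buzz for the "critical obstacle" pattern.
    func buzz(duration: TimeInterval = 0.8) {
        play([CHHapticEvent(
            eventType: .hapticContinuous,
            parameters: [CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0)],
            relativeTime: 0,
            duration: duration)])
    }

    private func play(_ events: [CHHapticEvent]) {
        guard let pattern = try? CHHapticPattern(events: events, parameters: []),
              let player = try? engine?.makePlayer(with: pattern) else { return }
        try? player.start(atTime: CHHapticTimeImmediate)
    }
}
```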


🗺️ Roadmap

  • ⌚ Apple Watch companion app
  • 🗺️ Indoor mapping & saved locations
  • 💵 Currency recognition
  • 🌍 Multi-language support
  • 🔗 Siri Shortcuts integration
  • 🚗 CarPlay navigation support


📜 License

This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.

  • ✓ Share — Copy and redistribute
  • ✓ Adapt — Remix and build upon
  • ✗ Commercial use without permission

For commercial licensing, contact the author.

CC BY-NC 4.0



📚 Documentation

This project uses DocC for API documentation.

# Build documentation in Xcode
# Product → Build Documentation (⌃⇧⌘D)

# Or via command line
xcodebuild docbuild -scheme VisualAssist -derivedDataPath ./docs



Built with ❤️ for accessibility

© 2026 Ayush. All rights reserved.



Visual Assist is not affiliated with Apple Inc.
iPhone, LiDAR, ARKit, and other Apple trademarks are property of Apple Inc.


Made with Swift · Built for iOS