⚠️ BETA VERSION — This app is currently in active development and testing. Features may change and bugs may exist. Report issues →
Features · Requirements · Getting Started · Architecture · Privacy
Visual Assist is a native iOS application designed to help visually impaired users navigate their environment safely and independently. Built with Apple's latest frameworks, it leverages the power of:
- LiDAR + ARKit Depth Sensing
- Vision Text Recognition
- Core ML Object Detection
- SwiftUI Modern Interface
Visual Assist is currently in beta testing. This means:
| Area | Status | Notes |
|---|---|---|
| Navigation Mode | ✅ Working | Core functionality complete |
| Text Reading | ✅ Working | OCR may vary with lighting |
| Object Detection | 🔄 Testing | Accuracy improvements ongoing |
| Voice Commands | ✅ Working | English only for now |
| Apple Watch | 🔜 Planned | Coming in a future release |
**Navigation Mode:** Real-time obstacle detection powered by LiDAR sensor technology.
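Under the hood this builds on ARKit's scene-depth API. Below is a minimal sketch, not the app's actual `LiDARService`: it reads the LiDAR depth map from each `ARFrame` and checks the distance straight ahead (the `DepthProbe` name and the 1-meter threshold are illustrative).

```swift
import ARKit

// Minimal sketch: sample the LiDAR depth map at the center of the frame.
// Illustrative only; the app's LiDARService may be structured differently.
final class DepthProbe: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Scene depth requires a LiDAR-equipped device (iPhone Pro models).
        guard ARWorldTrackingConfiguration.supportsFrameSemantics(.sceneDepth) else { return }
        let config = ARWorldTrackingConfiguration()
        config.frameSemantics = .sceneDepth
        session.delegate = self
        session.run(config)
    }

    func session(_ session: ARSession, didUpdate frame: ARFrame) {
        guard let depthMap = frame.sceneDepth?.depthMap else { return }

        // The depth map is a CVPixelBuffer of Float32 distances in meters.
        CVPixelBufferLockBaseAddress(depthMap, .readOnly)
        defer { CVPixelBufferUnlockBaseAddress(depthMap, .readOnly) }

        guard let base = CVPixelBufferGetBaseAddress(depthMap) else { return }
        let width = CVPixelBufferGetWidth(depthMap)
        let height = CVPixelBufferGetHeight(depthMap)
        let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)

        let centerRow = base.advanced(by: (height / 2) * rowBytes)
            .assumingMemoryBound(to: Float32.self)
        let distance = centerRow[width / 2]

        // A real implementation would drive speech/haptics; this just logs.
        if distance < 1.0 {
            print("Obstacle roughly \(distance) meters ahead")
        }
    }
}
```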
**Text Reading:** Point-and-read OCR with natural speech synthesis.
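A minimal sketch of that pipeline using Vision and AVFoundation; the function name and voice settings below are illustrative, not the app's actual `SpeechService` API.

```swift
import Vision
import AVFoundation

// Kept at file scope so the utterance isn't cut off by deallocation.
let speechSynthesizer = AVSpeechSynthesizer()

// Minimal sketch: recognize text in an image with Vision, then speak it.
func readTextAloud(from image: CGImage) {
    let request = VNRecognizeTextRequest { request, error in
        guard error == nil,
              let observations = request.results as? [VNRecognizedTextObservation] else { return }

        // Join the top candidate from each detected text region.
        let text = observations
            .compactMap { $0.topCandidates(1).first?.string }
            .joined(separator: " ")
        guard !text.isEmpty else { return }

        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        speechSynthesizer.speak(utterance)
    }
    request.recognitionLevel = .accurate

    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    try? handler.perform([request])
}
```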
**Object Awareness:** AI-powered scene understanding and description.
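A minimal sketch of Vision-driven object detection with a Core ML model; `ObjectDetector` is a placeholder for whatever model the app actually bundles, and the function itself is illustrative.

```swift
import Vision
import CoreML

// Minimal sketch: run a bundled Core ML detector through Vision and print
// the most confident label for each detected object.
// "ObjectDetector" is a placeholder, not the app's real model class.
func describeObjects(in image: CGImage) throws {
    let mlModel = try ObjectDetector(configuration: MLModelConfiguration()).model
    let visionModel = try VNCoreMLModel(for: mlModel)

    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        guard let results = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in results.prefix(3) {
            // Labels are sorted by confidence; take the best one.
            if let label = observation.labels.first {
                print("\(label.identifier): \(Int(label.confidence * 100))%")
            }
        }
    }

    try VNImageRequestHandler(cgImage: image, options: [:]).perform([request])
}
```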
To get started, clone the repository and open it in Xcode:

```bash
# Clone the repository
git clone https://github.com/yadava5/VisualAssist.git

# Navigate to the project
cd VisualAssist

# Open in Xcode
open VisualAssist.xcodeproj
```

| Step | Action |
|---|---|
| 1️⃣ | Select your Development Team in Signing & Capabilities |
| 2️⃣ | Connect your iPhone Pro via USB |
| 3️⃣ | Press ⌘ + R to build and run |
Grant permissions → App announces "Visual Assist ready" → Start using!
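A minimal sketch of that first-launch flow, assuming camera access is requested through AVFoundation and the announcement is spoken with `AVSpeechSynthesizer`; the exact flow and wording in the app may differ.

```swift
import AVFoundation

// Kept at file scope so the announcement isn't cut off by deallocation.
let readySynthesizer = AVSpeechSynthesizer()

// Minimal sketch: request camera permission, then announce readiness.
func requestCameraAccessAndAnnounce() {
    AVCaptureDevice.requestAccess(for: .video) { granted in
        guard granted else { return }
        let utterance = AVSpeechUtterance(string: "Visual Assist ready")
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        readySynthesizer.speak(utterance)
    }
}
```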
The project is organized as follows:

```text
VisualAssist/
├── 📁 App/                    # Entry point & state
│   ├── VisualAssistApp.swift
│   └── AppState.swift
├── 📁 Views/                  # SwiftUI interface
│   ├── HomeView.swift
│   ├── NavigationModeView.swift
│   ├── TextReadingModeView.swift
│   ├── ObjectAwarenessModeView.swift
│   └── Components/
├── 📁 Services/               # Business logic
│   ├── LiDARService.swift
│   ├── CameraService.swift
│   ├── SpeechService.swift
│   └── HapticService.swift
├── 📁 Models/                 # Data structures
└── 📁 Utilities/              # Helpers
```
| Framework | Purpose |
|---|---|
| ARKit | LiDAR depth sensing |
| Vision | Text recognition (OCR) |
| Core ML | Object detection |
| AVFoundation | Camera capture |
| Speech | Voice commands |
| Core Haptics | Haptic feedback |
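As an example of the last row, here is a minimal Core Haptics sketch that plays a single transient tap of the kind a proximity alert might use; the function name and parameter values are illustrative, not taken from `HapticService`.

```swift
import CoreHaptics

// Minimal sketch: one sharp haptic tap, e.g. when an obstacle gets close.
// Illustrative only; the app's HapticService likely keeps a long-lived engine.
func playProximityTap() throws {
    guard CHHapticEngine.capabilitiesForHardware().supportsHaptics else { return }

    let engine = try CHHapticEngine()
    try engine.start()

    let tap = CHHapticEvent(
        eventType: .hapticTransient,
        parameters: [
            CHHapticEventParameter(parameterID: .hapticIntensity, value: 1.0),
            CHHapticEventParameter(parameterID: .hapticSharpness, value: 0.6)
        ],
        relativeTime: 0
    )

    let pattern = try CHHapticPattern(events: [tap], parameters: [])
    try engine.makePlayer(with: pattern).start(atTime: CHHapticTimeImmediate)
}
```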
| Pattern | Usage |
|---|---|
| MVVM | Clean view/logic separation |
| Combine | Reactive @Published properties |
| Swift Concurrency | Modern async/await |
| iOS 26 Design | Liquid glass UI effects |
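A minimal sketch of how the first three patterns fit together; the type and property names are illustrative and not taken from the app's source.

```swift
import SwiftUI

// Minimal sketch: an ObservableObject view model (MVVM) whose @Published
// state (Combine) is updated from async work (Swift Concurrency).
// All names here are illustrative.
@MainActor
final class ObstacleViewModel: ObservableObject {
    @Published var distanceToNearestObstacle: Double?

    func refresh() async {
        // Placeholder for a real depth query, e.g. from a LiDAR service.
        try? await Task.sleep(nanoseconds: 100_000_000)
        distanceToNearestObstacle = 1.2
    }
}

struct DistanceView: View {
    @StateObject private var viewModel = ObstacleViewModel()

    var body: some View {
        Text(viewModel.distanceToNearestObstacle.map { "\($0) m ahead" } ?? "Scanning…")
            .task { await viewModel.refresh() }
    }
}
```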
| | Feature | Description |
|---|---|---|
| 🔐 | On-Device Processing | All ML runs locally on your iPhone |
| 📡 | No Network Required | Works completely offline |
| 🚫 | No Data Collection | Nothing leaves your device |
| 📊 | No Analytics | Zero tracking or telemetry |
| 👤 | No Account | Use immediately, no sign-up |
Visual Assist is built with accessibility as a core principle.
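For example, SwiftUI views can describe themselves to VoiceOver through accessibility modifiers; the names below are illustrative rather than copied from the app.

```swift
import SwiftUI

// Minimal sketch: an icon-only button that VoiceOver can still describe.
// Illustrative names; not the app's actual HomeView components.
struct ModeButton: View {
    let title: String
    let action: () -> Void

    var body: some View {
        Button(action: action) {
            Image(systemName: "figure.walk")
                .font(.largeTitle)
        }
        .accessibilityLabel(title)
        .accessibilityHint("Double tap to start \(title.lowercased())")
    }
}
```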
Planned for future releases:
- ⌚ Apple Watch companion app
- 🗺️ Indoor mapping & saved locations
- 💵 Currency recognition
- 🌍 Multi-language support
- 🔗 Siri Shortcuts integration
- 🚗 CarPlay navigation support
This project is licensed under the Creative Commons Attribution-NonCommercial 4.0 International License.
For commercial licensing, contact the author.
This project uses DocC for API documentation.
```bash
# Build documentation in Xcode
# Product → Build Documentation (⌃⇧⌘D)

# Or via command line
xcodebuild docbuild -scheme VisualAssist -derivedDataPath ./docs
```