Bridging the gap between High-Performance Deep Learning and Clinical Trust.
Currently pursuing my Master’s in Artificial Intelligence, specializing in making medical AI transparent and interpretable. Former Flutter Developer with a "production-first" mindset.
I develop interpretable frameworks for high-stakes medical diagnostics:
- Autism Spectrum Disorder (ASD): Applying explainable AI (XAI) to identify early-stage biomarkers.
- Chronic Disease (Diabetes): Building predictive trust through feature attribution methods.
- Mission: Building AI that doctors can understand and trust.
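The feature-attribution idea above can be sketched in a few lines. This is a minimal, illustrative example (not from any specific project) using scikit-learn's permutation importance on a stand-in tabular medical dataset: shuffle each feature, measure how much the model's accuracy drops, and rank features by that drop.

```python
# Minimal feature-attribution sketch (illustrative stand-ins, not a real
# diagnostic pipeline): rank features by permutation importance.
from sklearn.datasets import load_breast_cancer  # stand-in tabular medical dataset
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure the accuracy drop: a larger drop means
# the model leans on that feature more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=5, random_state=0)
ranking = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranking[:5]:
    print(f"{name}: {score:.3f}")
```

The same ranking idea underlies SHAP and LIME, which additionally explain individual predictions rather than the model as a whole.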
| Field | Tools & Frameworks |
|---|---|
| Explainable AI | SHAP • LIME • Integrated Gradients • Grad-CAM |
| Deep Learning | PyTorch • TensorFlow • Keras • Scikit-Learn |
| Data Science | Python • Pandas • NumPy • Matplotlib • Seaborn |
| NLP & Sequence Models | BERT • Hugging Face Transformers • LSTM • Text Classification |
| Quantum Machine Learning | PennyLane • Hybrid Quantum-Classical Models |
| Mobile Engineering | Flutter • Dart • Firebase • Provider/Bloc • REST APIs • Supabase |
- 🔭 Currently: Refining XAI models for clinical reliability.
- 💬 Ask me about: Why your model is a "Black Box" and how we can fix it.
- 📫 Connect: LinkedIn | Email
"An AI model that cannot explain its reasoning should never make medical decisions."
⭐ If you find my work interesting, feel free to explore the repositories or connect for collaboration.