ML engineer working on production systems in regulated enterprise environments. Background spans modern agent engineering and classical ML at scale. Most of the work has been on the parts that don't show up in demos: feedback loops, evaluation discipline, drift, and keeping models useful after the launch announcement.
Currently building an internal LLM investigation agent at Walmart. Earlier work in fraud ML on high-volume transactional data and graph neural network research in healthcare.
Eval discipline beats model choice. Most production wins come from better ground truth and better routing, not from upgrading the LLM.
Feedback loops compound. A system that learns from analyst overrides this week is worth more than a benchmark score next quarter.
Compliance is a design constraint, not paperwork. Data residency, audit trails, and grounded outputs shape architecture from day one.
predictive-ml/ XGBoost, drift detection, champion-challenger, near real-time scoring
llm-systems/ RAG, agent orchestration, retrieval reranking, prompt eval
applied-research/ GNNs, NLP pipelines, computer vision (ESRGAN + attention)
production/ Vector DB ops, feature pipelines, observability for non-deterministic systems
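As a taste of the drift-detection work listed above, here is a stdlib-only sketch of the Population Stability Index, a standard drift metric for champion-challenger score monitoring. The thresholds in the docstring are common rules of thumb, not values from any specific system.

```python
import math

def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between baseline and current score distributions.
    Rule of thumb: < 0.1 stable, 0.1-0.25 watch, > 0.25 significant drift."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]

    def proportions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            counts[sum(x > e for e in edges)] += 1  # bin index by edge crossings
        eps = 1e-6  # floor empty bins to avoid log(0)
        return [max(c / len(xs), eps) for c in counts]

    expected, actual = proportions(baseline), proportions(current)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

baseline = [i / 100 for i in range(100)]          # uniform scores in [0, 1)
shifted = [min(x + 0.3, 1.0) for x in baseline]   # score mass pushed upward
print(psi(baseline, baseline[:]))  # 0.0 — identical distributions
print(psi(baseline, shifted) > 0.25)  # True — drift alarm territory
```

In a champion-challenger setup the same check runs on both models' score streams; a challenger that drifts less under the same traffic is a strong promotion signal.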
Attention-based Super Resolution GAN for Drone Image Detection: ESRGAN with a dual attention mechanism (spatial + channel) for small-object detection. 88.4% mAP50 on COWC, outperforming PP-YOLOE-Plus by 2.7%.