Hi, I'm Vridhi
I like to build data and ML systems designed to be used — not just submitted.
Most of my work sits at the boundary between analysis and deployment. The question I keep coming back to isn't "does the model work?" — it's "does the system hold up when it meets reality?"
My projects span forecasting, MLOps, computer vision, retrieval systems, and knowledge graphs — each built around a real problem rather than a benchmark.
A few things worth looking at:
- Stock Forecasting App — live XGBoost forecasting for any NSE ticker with R² monitoring and drift detection. Built to tell you when to stop trusting the forecast, not just what the forecast is.
- Bank Marketing MLOps — end-to-end pipeline from raw data to a containerised Flask API on AWS EC2 with SHAP interpretability.
- RAG Politics QA — retrieval-augmented generation over a Wikipedia corpus on Indian Prime Ministers. Accuracy improved from a 0.19 baseline to 0.37 through pipeline design alone.
- Stock Market Anomaly Detection — five detection methods compared and validated against real financial news. A flagged anomaly only counts if it corresponded to something real.
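The "know when to stop trusting the forecast" idea from the forecasting app can be sketched in a few lines: track R² over a recent window of predictions and raise a flag when it drops below a threshold. This is a minimal illustration, not the app's actual code — the function names (`rolling_r2`, `drift_alert`), the window size, and the threshold are all assumptions for the sketch.

```python
import numpy as np

def rolling_r2(y_true, y_pred, window):
    """R² computed over only the most recent `window` observations."""
    yt = np.asarray(y_true[-window:], dtype=float)
    yp = np.asarray(y_pred[-window:], dtype=float)
    ss_res = np.sum((yt - yp) ** 2)          # residual sum of squares
    ss_tot = np.sum((yt - yt.mean()) ** 2)   # total sum of squares
    return 1.0 - ss_res / ss_tot

def drift_alert(y_true, y_pred, window=30, threshold=0.5):
    """Flag drift when recent-window R² falls below the threshold —
    i.e. the model has stopped explaining recent variance."""
    return rolling_r2(y_true, y_pred, window) < threshold
```

The point of monitoring a *rolling* window rather than all-time R² is that a model can look fine on aggregate long after it has quietly stopped working on recent data.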
Currently completing the Advanced Management Program in Business Analytics at ISB Hyderabad, building systems that bridge ML and real business decisions.
Pushing toward: better deployment practices, stronger evaluation frameworks, and projects where the output is a decision, not just a prediction.
Python · SQL · scikit-learn · XGBoost · PyTorch · Docker · AWS EC2 · Flask · Streamlit · LlamaIndex · Tableau
I notice patterns everywhere — in datasets, in films, in how systems fail quietly before they fail loudly. Wady's Kitchen was a lockdown experiment in building something from zero. Same instinct, different medium.