I am a Data Scientist based in Sydney, Australia, with professional experience building and deploying production-grade ML systems across fraud intelligence, customer lifecycle analytics, financial AI, and healthcare. I currently work as an Associate Consultant — ERP Data Developer at PM-Partners, where I design and maintain automated Azure Data Factory ingestion pipelines and SQL/PL-SQL validation workflows that directly power executive reporting for HR and Payroll units.
My approach to ML is end-to-end — I own the full lifecycle from raw data ingestion, feature engineering, and model training through to real-time API deployment with FastAPI and Docker, and long-term drift monitoring in production. Every system I ship includes SHAP explainability for stakeholder trust, Evidently AI monitoring for reliability, and reproducible DVC pipelines for engineering rigour. I am fluent with PyTorch, XGBoost, scikit-learn, MLflow, and Azure, and comfortable working across both research-oriented and production-focused environments.
I hold a Bachelor of Software Engineering (Data Science & Artificial Intelligence) from Torrens University Australia, complemented by the Deep Learning Specialisation (DeepLearning.AI) and Machine Learning Specialisation (Stanford University / DeepLearning.AI). I have full working rights in Australia and am actively open to Data Scientist and ML Engineer opportunities where models ship to production and decisions are data-driven.
Most fraud detection systems react after a transaction is already flagged. FraudSentinel detects risk 72 hours earlier — before the financial loss materialises. Built on 590,540 real IEEE-CIS payment transactions, the system engineers a forward-looking early-warning label that asks: will this entity commit fraud within the next three days? Every transaction is then ranked by risk score, allowing operations teams to intervene proactively at a fraction of the cost of traditional reactive review.
Headline metrics: operational lift over random review · fraud captured when reviewing only the top 8% of traffic · high-band lift in the top 3% of traffic (7×) · precision in the high-risk band (19.5%) · AUC-PR above the no-skill baseline · real-time API inference latency.
Model Architecture — A PyTorch MLP with MD5-hashed entity embeddings (32-dimensional learned vectors) captures behavioural identity across the transaction graph. Entity velocity features and fraud campaign propagation signals encode temporal risk dynamics. Predicted probabilities are calibrated via temperature scaling (LBFGS optimisation, T = 1.033), improving Brier score from 0.163 to 0.154 and ensuring the model's confidence is reliable for downstream risk banding.
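The calibration step can be sketched without the full PyTorch stack. The project fits T with LBFGS on validation logits; the sketch below does the equivalent with NumPy and a bounded scalar minimiser on synthetic, deliberately overconfident scores (all data here is illustrative, not the project's):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(p, y, eps=1e-12):
    """Negative log-likelihood (log loss) of probabilities p against labels y."""
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

def temperature_scale(p_raw, y):
    """Fit a single temperature T on validation data: logits are divided by T,
    then mapped back through the sigmoid. T > 1 softens overconfident scores."""
    p_raw = np.clip(p_raw, 1e-12, 1 - 1e-12)
    logits = np.log(p_raw / (1 - p_raw))  # inverse sigmoid
    loss = lambda T: nll(1 / (1 + np.exp(-logits / T)), y)
    T = minimize_scalar(loss, bounds=(0.05, 10.0), method="bounded").x
    return T, 1 / (1 + np.exp(-logits / T))

# Synthetic validation set: extreme scores in the right direction,
# but 15% of labels flipped, so the raw model is overconfident.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 2000)
p_raw = np.where(y_true == 1,
                 rng.uniform(0.85, 0.999, 2000),
                 rng.uniform(0.001, 0.15, 2000))
y = np.where(rng.random(2000) < 0.15, 1 - y_true, y_true)

T, p_cal = temperature_scale(p_raw, y)
```

Because T = 1 lies inside the search interval, the fitted temperature can only match or improve log loss on the validation set, which is what makes the Brier-score gain above credible.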
Risk Bands — Scored transactions are bucketed into three operational tiers (High / Medium / Low) based on review capacity, not arbitrary thresholds. The High band (top 3% of traffic) delivers 19.5% precision and 7× lift — meaning every investigator hour is seven times more productive than an unguided review queue at identical operational cost.
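Capacity-based banding is a quantile cut rather than a fixed score threshold. A minimal sketch, assuming the project's 3% high band and an illustrative 12% medium band:

```python
import numpy as np

def risk_bands(scores, high_frac=0.03, medium_frac=0.12):
    """Bucket scored transactions into High / Medium / Low tiers sized by
    review capacity (fractions of traffic), not by fixed score thresholds.
    The 3% high band matches the project; the 12% medium band is illustrative."""
    hi_cut = np.quantile(scores, 1 - high_frac)
    med_cut = np.quantile(scores, 1 - high_frac - medium_frac)
    return np.where(scores >= hi_cut, "High",
                    np.where(scores >= med_cut, "Medium", "Low"))

scores = np.random.default_rng(1).random(100_000)  # stand-in model scores
bands = risk_bands(scores)
```

Sizing bands to investigator capacity means the review queue never overflows, regardless of how the score distribution drifts.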
Production MLOps — MLflow tracks every experiment. SHAP LinearExplainer generates feature attributions across a 262,000-dimensional feature space for each scored transaction. Evidently AI produces an interactive drift monitoring dashboard (live link above) comparing training and production distributions. DVC ensures the 11-stage pipeline is fully reproducible via a single make pipeline command. 50 pytest unit tests validate the full system.
| Core ML | MLOps | Language |
| --- | --- | --- |
| PyTorch, SHAP | MLflow, Evidently AI, DVC, pytest | Python |
Most retention programmes treat all at-risk customers equally. This system answers a more precise question: which customers are worth retaining, by how much, and what is the optimal budget allocation to maximise return? Built on 4,933 customers representing a £12.1M revenue corpus, it combines probabilistic CLV forecasting, machine learning churn classification, and integer-programming budget optimisation into a single automated decision pipeline.
| Metric | Result |
| --- | --- |
| Churn model AUC-ROC | 0.816 |
| CLV forecast Spearman rank correlation | 0.57 |
| Revenue lift, top-decile customers | 6.3× |
| Net ROI at the £200 budget frontier | 44.5× |
| Revenue from top 10% of customers | 62% |
CLV Forecasting — A BG/NBD + Gamma-Gamma probabilistic engine predicts each customer's expected revenue over a 180-day holdout. Spearman rank correlation of 0.57 confirms strong ranking fidelity between predicted CLV and realised revenue. Top-decile customers deliver a 6.3× revenue lift over the portfolio mean, validating the model as a reliable capital allocation guide.
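The two validation numbers quoted above (rank fidelity and top-decile lift) can be reproduced on synthetic data. The sketch below assumes lognormal spend and a noisy CLV forecast; it is not the project's BG/NBD code:

```python
import numpy as np
from scipy.stats import spearmanr

def top_decile_lift(predicted_clv, realised_revenue):
    """Revenue lift of the top 10% of customers ranked by predicted CLV,
    relative to the portfolio mean."""
    order = np.argsort(predicted_clv)[::-1]
    k = max(1, len(order) // 10)
    return realised_revenue[order[:k]].mean() / realised_revenue.mean()

rng = np.random.default_rng(2)
realised = rng.lognormal(3.0, 1.0, 5000)            # heavy-tailed spend
predicted = realised * rng.lognormal(0.0, 0.5, 5000)  # noisy CLV forecast

rho, _ = spearmanr(predicted, realised)
lift = top_decile_lift(predicted, realised)
```

Even with multiplicative forecast noise, a rank correlation well above zero is enough for the top decile to capture a disproportionate share of revenue, which is the property the allocator exploits.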
Churn Classification — Four classifiers (Random Forest, XGBoost, LightGBM, Logistic Regression) are benchmarked via 5-fold stratified cross-validation. The champion Random Forest achieves AUC-ROC = 0.816 and AUC-PR = 0.883. SHAP attribution identifies recency and transaction frequency as the dominant churn drivers — insight that was adopted directly into retention targeting rules.
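A minimal version of the benchmarking harness, using two scikit-learn models and synthetic features as stand-ins for the project's four candidates and engineered RFM features:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in features; the project uses engineered RFM/behavioural features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.6, 0.4], random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
candidates = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

# Mean AUC-ROC across the 5 stratified folds for each candidate
results = {name: cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
           for name, model in candidates.items()}
champion = max(results, key=results.get)
```

Stratified folds keep the churn rate stable across splits, so the AUC comparison between candidates is like-for-like.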
Budget Optimisation — A PuLP/CBC 0-1 Knapsack allocator returns the optimal customer set to target under any spend constraint. At the £200 efficiency frontier, net ROI reaches 44.5×. A 500-draw Monte Carlo simulation stress-tests assumptions and delivers a full P5–P95 confidence band for every budget scenario, removing guesswork from retention investment decisions.
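The allocator solves a standard 0-1 knapsack. The project formulates it in PuLP and solves it with CBC; the pure-Python dynamic programme below is an exact equivalent for small, integer-cost instances (the costs and values here are made up):

```python
def knapsack_01(costs, values, budget):
    """Exact 0-1 knapsack via dynamic programming: pick the customer subset
    maximising expected retained value under an integer spend budget."""
    n = len(costs)
    best = [0.0] * (budget + 1)
    keep = [[False] * (budget + 1) for _ in range(n)]
    for i in range(n):
        for b in range(budget, costs[i] - 1, -1):  # descending: each item used once
            cand = best[b - costs[i]] + values[i]
            if cand > best[b]:
                best[b] = cand
                keep[i][b] = True
    chosen, b = [], budget  # back-track the chosen set
    for i in range(n - 1, -1, -1):
        if keep[i][b]:
            chosen.append(i)
            b -= costs[i]
    return best[budget], sorted(chosen)

# Four hypothetical customers: (retention cost £, expected retained value £)
value, chosen = knapsack_01([40, 60, 100, 30], [120.0, 200.0, 260.0, 90.0], 200)
```

Note the optimum here is not the greedy value-per-pound ordering, which is exactly why an exact solver (CBC in the project) is worth the extra machinery.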
Pipeline — Fully automated 11-step scoring pipeline (ingest → clean → features → CLV → churn → optimise → evaluate → segment → cohort → insights → sensitivity). MLflow experiment tracking and GitHub Actions CI across Python 3.9 and 3.11. An 8-segment RFM taxonomy reveals a Gini coefficient of 0.73 — the top 10% of customers drive 62% of total revenue.
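The Gini figure comes from the sorted revenue distribution; a short sketch of the formula (not the project's pipeline code):

```python
import numpy as np

def gini(values):
    """Gini coefficient of a non-negative revenue distribution:
    0 = perfectly even, approaching 1 = revenue concentrated in a few customers."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    ranks = np.arange(1, n + 1)
    return float((2 * ranks - n - 1) @ x / (n * x.sum()))

# Illustrative heavy-tailed revenue distribution
concentration = gini(np.random.default_rng(4).lognormal(3.0, 1.2, 5000))
```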
| Core ML | Optimisation | MLOps | Language |
| --- | --- | --- | --- |
| scikit-learn, XGBoost, LightGBM, BG/NBD + Gamma-Gamma | PuLP / CBC | MLflow, GitHub Actions | Python |
Member churn is a critical operational risk for Australian superannuation funds. ConnectIntelligence predicts which members are at risk of disengaging — and explains why — so that engagement teams can intervene with targeted strategies rather than blanket campaigns. The system is deployed as a production-grade real-time API, returning a churn probability and behavioural persona classification for any member within 200 milliseconds.
| Metric | Result |
| --- | --- |
| Weighted accuracy on held-out test data | 0.86 |
| End-to-end API inference latency | < 200 ms |
| Behavioural risk personas via K-Means clustering | 4 |
Model — XGBoost classifier with systematic feature selection, hyperparameter tuning via cross-validation, and class-imbalance handling. Achieves 0.86 weighted accuracy on the held-out test set. SHAP values are computed for every high-risk prediction, providing each engagement team member with a traceable, plain-language explanation of which factors drove the risk score — building trust and enabling targeted action.
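The plain-language explanation layer can be sketched as a thin formatter over per-feature SHAP attributions. The feature names and values below are hypothetical, not the project's schema:

```python
def explain_risk(shap_contributions, top_k=3):
    """Turn per-feature SHAP attributions for one member into a short
    plain-language explanation, ranked by absolute impact on the risk score."""
    ranked = sorted(shap_contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)[:top_k]
    parts = [f"{name} ({'raises' if val > 0 else 'lowers'} risk, {val:+.2f})"
             for name, val in ranked]
    return "Top drivers: " + "; ".join(parts)

# Hypothetical attributions for one high-risk member
msg = explain_risk({"months_since_last_login": 0.31,
                    "contribution_frequency": -0.12,
                    "age_band": 0.04,
                    "balance_trend": -0.22})
```

Ranking by absolute value means the explanation surfaces the strongest drivers in either direction, not just the risk-raising ones.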
Segmentation — K-Means clustering identifies four distinct behavioural risk personas from the member population. Each persona maps to a tailored intervention playbook, improving engagement team effectiveness by focusing effort on the right strategy for the right member type rather than applying one-size-fits-all outreach.
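A minimal sketch of the persona step, assuming standardised behavioural features and the project's four clusters (the feature set here is synthetic):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic behavioural features (e.g. login recency, contribution rate,
# tenure, balance) with four loose underlying groups
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 4)) + rng.integers(0, 4, 1000)[:, None]

X_std = StandardScaler().fit_transform(X)  # scale so no feature dominates
personas = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_std)
```

Standardising before K-Means matters: distance-based clustering would otherwise be dominated by whichever feature has the largest raw scale.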
Deployment — The full system is containerised with Docker and served via FastAPI. End-to-end inference — from raw member input to scored output — completes in under 200 milliseconds, enabling real-time integration with operational dashboards and CRM systems. A live interactive demo is available via the link above.
| Core ML | Deployment | Language |
| --- | --- | --- |
| XGBoost, K-Means, SHAP | Docker, FastAPI | Python |
Financial due diligence on ASX-listed companies is time-consuming and error-prone when done manually across dense annual reports. FinGraph replaces keyword search and manual reading with an agentic GraphRAG pipeline — combining a structured knowledge graph of extracted financial entities with LLM-powered reasoning — so analysts can ask complex questions in plain language and receive answers grounded in traceable, source-linked evidence from the original documents.
Headline metrics: retrieval latency reduction · LLM answer accuracy improvement · 50+ financial entities auto-extracted per report.
Architecture — A custom NLP extraction pipeline parses ASX annual reports and maps 50+ financial entities (companies, executives, subsidiaries, financial metrics, risk factors) into a Neo4j graph database. At query time, LlamaIndex traverses the graph rather than performing vector-only semantic search, dramatically improving precision on multi-hop financial questions. Groq handles LLM inference for speed.
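Why graph traversal beats vector-only search on multi-hop questions can be shown with a toy in-memory graph. The entities, relations, and the Cypher comment below are illustrative, not extracted from real filings:

```python
from collections import deque

# Toy slice of the knowledge graph; entity names are hypothetical.
# Roughly equivalent Cypher: MATCH p = (c:Company {name: $a})-[*..3]-(e) RETURN p
graph = {
    "AcmeCorp":   [("HAS_SUBSIDIARY", "AcmeMining")],
    "AcmeMining": [("HAS_DIRECTOR", "J. Smith")],
    "J. Smith":   [("DIRECTOR_OF", "OtherCo")],
    "OtherCo":    [],
}

def multi_hop(start, target, max_hops=3):
    """Breadth-first search over typed edges; returns the relation path
    linking two entities, or None if no path exists within max_hops."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        if len(path) >= max_hops:
            continue
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None

# "How is AcmeCorp connected to OtherCo?" — a question vector search
# struggles with, because no single sentence links the two entities.
path = multi_hop("AcmeCorp", "OtherCo")
```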
Accuracy and Trust — A citation-style verification layer attaches the source sentence, page, and document to every LLM response. This grounds answers in the original filings and sharply reduces hallucination risk — a critical requirement for financial analysis where accuracy directly affects investment decisions. Analysts can trace every claim back to its exact source with a single click.
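The verification layer amounts to carrying source metadata alongside every answer. A minimal sketch with illustrative field names and content:

```python
from dataclasses import dataclass

@dataclass
class CitedAnswer:
    """An LLM answer bundled with its grounding evidence; the field names
    here illustrate the verification layer, not the project's exact schema."""
    answer: str
    source_sentence: str
    page: int
    document: str

    def render(self):
        return (f"{self.answer}\n  [source: {self.document}, "
                f"p.{self.page}: \"{self.source_sentence}\"]")

# Illustrative example, not real filing data
cited = CitedAnswer(
    answer="Net profit after tax rose year on year.",
    source_sentence="NPAT increased 12% to $45.2m.",
    page=14,
    document="FY2023 Annual Report",
)
summary = cited.render()
```

Making the citation a mandatory field, rather than an optional decoration, is what forces every answer to be traceable.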
Interface — A Streamlit application allows analysts to interrogate any ASX-listed company's filings in plain English. The system returns structured, evidence-backed answers alongside a visual knowledge graph of entity relationships.
| AI / LLM | Graph DB | Interface | Language |
| --- | --- | --- | --- |
| LlamaIndex, Groq | Neo4j | Streamlit | Python |
| Category | Tools |
| --- | --- |
| Machine Learning & AI | PyTorch, XGBoost, LightGBM, scikit-learn, SHAP |
| MLOps & Deployment | MLflow, DVC, Evidently AI, Docker, FastAPI, GitHub Actions |
| Data Engineering | Azure Data Factory, SQL / PL-SQL |
| Cloud & Databases | Azure, Neo4j |
| Visualisation & BI | Power BI, Streamlit |
| Languages | Python, SQL |
Associate Consultant — ERP Data Developer · PM-Partners · Sydney, NSW · Feb 2026 – Present
Designing and maintaining automated daily ingestion pipelines in Azure Data Factory for HR and Payroll units, replacing error-prone manual exports with validated end-to-end refreshes. Reduced manual effort by 40%, cut data refresh latency from hours to under 30 minutes (75% reduction), and elevated reporting accuracy by 30% through automated SQL/PL-SQL referential integrity validation across high-volume ERP production datasets.
Data Science & ML Intern · Hightech Mastermind Pty Ltd · Sydney, NSW · Feb 2025 – Apr 2025
Implemented a champion–challenger model selection workflow (scikit-learn, XGBoost) that boosted forecasting performance 15% over baseline. Containerised all ML workflows with Docker and automated deployment through GitHub Actions CI/CD. Delivered stakeholder Power BI dashboards paired with SHAP interpretability reports, accelerating executive decision cycles by 25%.
Bachelor of Software Engineering — Data Science & Artificial Intelligence · Torrens University Australia · 2021 – 2024
Professional Year Program — Information Technology · QIBA, Sydney · 2024 – 2025
Google Advanced Data Analytics Professional Certificate · Google · 2025
Deep Learning Specialisation · DeepLearning.AI · 2025
Machine Learning Specialisation · Stanford University / DeepLearning.AI · 2024