Vanguard A/B Test Analysis

🧪 Project Overview

This project analyzes the results of a digital A/B experiment conducted by Vanguard to evaluate whether a redesigned online user interface improves client experience and process completion. The experiment compares a traditional interface (Control group) with a redesigned version (Test group).

The analysis focuses on measuring differences between the Test and Control groups using key performance indicators (KPIs) such as completion rate, time spent on each process step, and error-related behavior. These metrics are used to assess whether the redesigned interface leads to a more efficient and intuitive client journey.

In addition to the core experiment evaluation, the project includes exploratory analyses of client demographics and behavior. These analyses examine relationships between client tenure, number of accounts, balances, engagement metrics, and support interactions in order to provide additional context around client behavior and potential secondary effects of the redesign.


🛠️ Tools

  • Python (pandas, numpy, scipy, matplotlib, seaborn)
  • SQL
  • Tableau
  • Jupyter Notebook
  • Git & GitHub

📁 Repository Structure

The repository contains the following folders and files:

data_raw

Contains the original datasets provided for the project. These files are kept unchanged to ensure traceability, reproducibility, and transparency throughout the analysis process.

data_clean

Includes cleaned and processed datasets used for analysis. The cleaning steps applied to the raw data are documented in the notebooks.

figures

Contains visual assets generated during the project, including:

  • Charts and plots from exploratory data analysis (EDA).
  • Diagrams created in Miro.
  • Entity Relationship Diagrams (ERDs) used for SQL analysis and database design understanding.

These visuals support both the analytical process and the final presentation of insights.

notebook

Holds the Jupyter notebooks used throughout the project. These notebooks include:

  • Data cleaning and preprocessing.
  • Exploratory data analysis (EDA).
  • Metric and KPI calculations.
  • Hypothesis testing.
  • Data visualizations and interpretations.

They represent the core analytical workflow of the project.

slides_and_tableau

Contains the final presentation materials and interactive visualizations, including:

  • The final project presentation in PDF format.
  • Tableau visualizations used to explore key metrics, compare the Test and Control groups, and support analytical findings.

This folder bridges the technical analysis with the storytelling and communication of results.

📅 Project Log

Day 1 – Dataset Discovery & Initial EDA

  • Loaded all datasets into Python.
  • Explored dataset structure, data types, and key variables.
  • Performed initial exploratory analysis using pandas.
  • Identified potential data quality issues.

Day 2 – Data Cleaning & Client Analysis

  • Cleaned datasets and addressed missing or inconsistent values.
  • Merged datasets required for analysis.
  • Analyzed client demographics to identify primary users of the process.
  • Compared client age groups and tenure.
  • Conducted initial client behavior analysis.
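The cleaning-and-merge step above can be sketched roughly as follows. The frames and column names (`client_id`, `clnt_age`, `Variation`) are illustrative placeholders; the actual schemas and cleaning decisions are documented in the project notebooks.

```python
import pandas as pd

# Hypothetical stand-ins for the raw datasets; real column names
# and values are documented in the notebooks, not here.
demo = pd.DataFrame({
    "client_id": [1, 2, 3, 4],
    "clnt_age": [34.0, None, 58.0, 41.0],
})
experiment = pd.DataFrame({
    "client_id": [1, 2, 3],
    "Variation": ["Test", "Control", None],
})

# Drop clients with no assigned experiment group.
experiment = experiment.dropna(subset=["Variation"])

# Fill missing ages with the median as one simple imputation choice.
demo["clnt_age"] = demo["clnt_age"].fillna(demo["clnt_age"].median())

# Inner merge keeps only clients present in both tables.
clients = demo.merge(experiment, on="client_id", how="inner")
print(len(clients))  # 2 clients survive the merge
```

An inner merge is the conservative choice here: any client missing either demographics or a group assignment cannot contribute to the group comparison anyway.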

Day 3 – Performance Metrics Definition

  • Reviewed KPI and metrics concepts.
  • Defined key success metrics for the experiment.
  • Calculated completion rates for Test and Control groups.
  • Analyzed time spent on each step of the process.
  • Compared performance between the new and old designs.
  • Reviewed error-related behavior across process steps.
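A minimal sketch of the completion-rate and time-per-step calculations, assuming a hypothetical web-log layout (`client_id`, `Variation`, `process_step`, `date_time`) and a final step named `"confirm"`; the project's actual step names and columns live in the notebooks.

```python
import pandas as pd

# Toy web-log sample standing in for the real dataset.
logs = pd.DataFrame({
    "client_id":    [1, 1, 1, 2, 2],
    "Variation":    ["Test", "Test", "Test", "Control", "Control"],
    "process_step": ["start", "step_1", "confirm", "start", "step_1"],
    "date_time": pd.to_datetime([
        "2017-04-01 10:00", "2017-04-01 10:02", "2017-04-01 10:05",
        "2017-04-01 11:00", "2017-04-01 11:04",
    ]),
})

# Completion rate: share of each group's clients who reached "confirm".
reached = logs[logs["process_step"] == "confirm"].groupby("Variation")["client_id"].nunique()
visitors = logs.groupby("Variation")["client_id"].nunique()
completion_rate = reached.reindex(visitors.index, fill_value=0) / visitors
print(completion_rate)

# Time per step: gap between consecutive timestamps within each client.
logs = logs.sort_values(["client_id", "date_time"])
logs["step_seconds"] = logs.groupby("client_id")["date_time"].diff().dt.total_seconds()
print(logs.groupby("process_step")["step_seconds"].mean())
```

The `reindex(..., fill_value=0)` guards against a group in which no client ever reached the final step, which would otherwise drop out of the ratio silently.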

Day 4 – Hypothesis Testing

  • Defined hypotheses related to completion rate differences.
  • Conducted statistical tests to evaluate significance.
  • Began analysis of whether the observed increase met the 5% threshold.
  • Selected an additional hypothesis to test.
  • Started evaluating experiment design effectiveness.
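The significance test on completion rates can be sketched as a two-proportion z-test using scipy (one of the project's listed tools). The counts below are placeholders for illustration, not the experiment's actual figures.

```python
import numpy as np
from scipy.stats import norm

# Placeholder counts, NOT the real experiment numbers.
completions = np.array([620, 550])   # Test, Control clients who finished
visitors = np.array([1000, 1000])    # Test, Control clients who started

# Two-proportion z-test: H0 says the completion rates are equal.
p1, p2 = completions / visitors
p_pool = completions.sum() / visitors.sum()
se = np.sqrt(p_pool * (1 - p_pool) * (1 / visitors[0] + 1 / visitors[1]))
z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))        # two-sided p-value
print(f"z = {z:.2f}, p = {p_value:.4f}")

# Separately, check whether the observed lift clears the 5-percentage-point
# threshold mentioned in the analysis.
lift = p1 - p2
print(f"lift = {lift:.1%}, clears 5 pp threshold: {lift >= 0.05}")
```

Note that statistical significance and the 5% threshold are separate questions: a lift can be significant yet still fall short of the threshold, which is why the log treats them as distinct analysis steps.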

Day 5 – Hypothesis Testing & Experiment Evaluation

  • Completed hypothesis testing analyses.
  • Completed additional hypothesis testing.
  • Evaluated experiment duration and limitations.
  • Identified additional data that could improve the analysis.

Day 6 – Tableau Metrics & Data Preparation

  • Defined metrics to be visualized in Tableau.
  • Prepared and exported cleaned datasets for Tableau.
  • Imported data into Tableau.
  • Planned dashboard structure based on KPIs.
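The export step is straightforward since Tableau reads flat CSV files directly; a sketch with a placeholder KPI frame and file name:

```python
import pandas as pd

# Placeholder KPI summary; the real cleaned exports live in data_clean/.
kpis = pd.DataFrame({
    "Variation": ["Test", "Control"],
    "completion_rate": [0.62, 0.55],   # illustrative values only
})

# A plain UTF-8 CSV without the index is all Tableau needs.
kpis.to_csv("kpis_for_tableau.csv", index=False)
```

Dropping the index avoids an unnamed extra column appearing as a spurious field in Tableau's data pane.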

Day 7 – Tableau Dashboard Development

  • Built Tableau dashboards showing A/B test results.
  • Visualized completion rates, time spent, and error rates.
  • Added demographic filters (age, gender).
  • Integrated EDA visuals to provide context.
  • Refined dashboards for clarity and storytelling.

Day 8 – Project Refinement

  • Reviewed all previous analyses and outputs.
  • Cleaned and organized notebooks and code.

Day 9 – Presentation Preparation

  • Created project presentation following provided guidelines.
  • Structured slides to clearly communicate insights.
  • Prepared visual and analytical narratives.
  • Ensured all deliverables were complete and consistent.

Day 10 – Final Review & Presentation

  • Performed final checks against project requirements.
  • Reviewed README and repository structure.
  • Delivered the final presentation.
  • Submitted all required links and materials.
