
QENCS: Quantum-Enhanced Neuro-Coaching System

QENCS is a research prototype that tests whether a 9-qubit variational quantum classifier (VQC) can detect cognitive focus lapses from pre-computed EEG band-power features. The stack combines a PennyLane + PyTorch training pipeline, a FastAPI backend, and a Next.js frontend.

This is not a clinical system, not a real-time validated EEG product, and not a demonstration of quantum advantage. The latest publish pass completed the missing experiments and the result is still a negative-but-honest research outcome: the quantum model remains close to chance and does not clearly beat simple classical baselines.


Latest Publish Update

The repo now includes all three pre-publication experiments:

  • Adam vs QNG on the same 2,000-sample stratified subset for 50 epochs
  • Full-dataset Adam training on all 12,811 available samples
  • A 3-layer MLP baseline trained on the exact same split and scaler as the VQC

Artifacts generated by the latest run:

  • data/training_results.json
  • data/feature_scaler.pkl
  • data/quantum_focus_model_v2.pth
  • linkedin_images/results_table.png
  • linkedin_images/loss_curve.png
  • linkedin_images/architecture_card.png

The LinkedIn cards are now generated from results on disk via scripts/generate_linkedin_images.py, so the visuals stay in sync with the actual experiment output.


What The Project Actually Does

Goal

Predict the predefinedlabel column (a binary focus-lapse label) from EEG-derived features:

  • 0 = no lapse
  • 1 = lapse

Dataset

  • Source: data/processed_eeg.csv
  • Total rows: 12,811
  • Full-dataset class distribution: 6,662 class-0 / 6,149 class-1
  • 2k experiment subset: 1,000 class-0 / 1,000 class-1

Input Features

Nine features are used:

  • Delta
  • Theta
  • Alpha1
  • Alpha2
  • Beta1
  • Beta2
  • Gamma1
  • Gamma2
  • FocusRatio

FocusRatio is engineered in scripts/data_processing.py, and all nine features are scaled to [0, pi] with a MinMaxScaler fit on the training split only.
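The split-then-scale step described above can be sketched as follows. This is an illustrative version, not the exact code in scripts/quantum_model.py; the function name split_and_scale is hypothetical, while the feature names, label column, seed, and [0, pi] range come from this README:

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

FEATURES = ["Delta", "Theta", "Alpha1", "Alpha2",
            "Beta1", "Beta2", "Gamma1", "Gamma2", "FocusRatio"]

def split_and_scale(df, label_col="predefinedlabel", seed=42):
    """Stratified 80/20 split, then fit the [0, pi] scaler on the
    training split only to avoid leaking test statistics."""
    X = df[FEATURES].to_numpy(dtype=np.float64)
    y = df[label_col].to_numpy()
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=seed)
    scaler = MinMaxScaler(feature_range=(0, np.pi)).fit(X_tr)  # train only
    return scaler.transform(X_tr), scaler.transform(X_te), y_tr, y_te, scaler
```

Fitting the scaler on the training split only is what makes the held-out metrics honest: the test rows are transformed with statistics they never influenced.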


Model And Training Setup

Quantum Model

Component            Value
-------------------  -------------------------
Qubits               9
Ansatz depth         4 layers
Quantum layer        StronglyEntanglingLayers
Weight tensor shape  (4, 9, 3)
Quantum parameters   108
Feature embedding    AngleEmbedding
Measurements         9 x expval(PauliZ)
Readout              Linear(9, 1) logits layer
Inference            sigmoid + threshold 0.5
Simulator            PennyLane default.qubit

Shared Training Configuration

Setting        Value
-------------  -------------------------------------------------
Epochs         50
Batch size     32
Learning rate  0.01
Loss           BCEWithLogitsLoss(pos_weight=2.0)
Split          stratified 80/20
Seed           42
Scaler         MinMaxScaler(feature_range=[0, pi]) on train only
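The pos_weight=2.0 setting doubles the loss on missed lapses, which is consistent with the high-recall, over-predicting behavior the VQC shows. A small PyTorch check of what that weight does:

```python
import torch

# pos_weight=2.0 penalizes a missed lapse (class 1) twice as hard
# as a false alarm on class 0, biasing the model toward recall.
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor(2.0))

logits = torch.tensor([0.0])                      # sigmoid(0.0) = 0.5
loss_pos = loss_fn(logits, torch.tensor([1.0]))   # true lapse:    2 * -log(0.5)
loss_neg = loss_fn(logits, torch.tensor([0.0]))   # true no-lapse:     -log(0.5)
```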

Added In This Update

  • QNGOptimizer(approx="block-diag") comparison run
  • full-dataset Adam run
  • MLP baseline: Linear(9,18) -> ReLU -> Linear(18,18) -> ReLU -> Linear(18,1)
  • JSON-safe full loss histories for Adam, QNG, full-dataset Adam, and MLP
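The MLP baseline listed above is small enough to write out in full; a sketch of the stated architecture:

```python
import torch

# Classical baseline: 9 -> 18 -> 18 -> 1 with ReLU activations,
# trained with the same BCEWithLogitsLoss, split, and scaler as the VQC.
mlp = torch.nn.Sequential(
    torch.nn.Linear(9, 18), torch.nn.ReLU(),
    torch.nn.Linear(18, 18), torch.nn.ReLU(),
    torch.nn.Linear(18, 1),   # logits
)
```

At 541 trainable parameters it is a few times larger than the VQC's 108 quantum parameters plus readout, which makes it a reasonable same-budget comparison point rather than an overpowered one.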

Final Experiment Results

All numbers below come directly from data/training_results.json.

1. Adam vs QNG on the 2,000-sample subset

Metric      Adam     QNG       Delta
----------  -------  --------  ------------
Accuracy    0.5225   0.5000    -0.0225
Precision   0.5116   0.5000    -0.0116
Recall      0.9900   1.0000    +0.0100
F1 Score    0.6746   0.6667    -0.0079
Train time  113.6 s  5885.2 s  51.8x slower

2. 2k subset vs full dataset with Adam

Metric      2k subset  Full dataset  Delta
----------  ---------  ------------  -----------
Accuracy    0.5225     0.5033        -0.0192
Precision   0.5116     0.4912        -0.0204
Recall      0.9900     0.9797        -0.0103
F1 Score    0.6746     0.6544        -0.0202
Train time  113.6 s    1017.3 s      9.0x longer

3. VQC vs SVM vs MLP on the 2k subset

Metric      VQC (Adam)  SVM     MLP
----------  ----------  ------  ------
Accuracy    0.5225      0.5250  0.5150
Precision   0.5116      0.5176  0.5079
Recall      0.9900      0.7350  0.9650
F1 Score    0.6746      0.6074  0.6655
Train time  113.6 s     0.06 s  1.1 s
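As a sanity check, the reported F1 scores are internally consistent with their precision/recall pairs via F1 = 2PR / (P + R):

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Reproduce the table's F1 column from its precision and recall columns:
f1(0.5116, 0.9900)   # VQC (Adam) -> ~0.6746
f1(0.5176, 0.7350)   # SVM        -> ~0.6074
f1(0.5079, 0.9650)   # MLP        -> ~0.6655
```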

Direct Interpretation

  • QNG did not make a meaningful difference. It ended slightly worse than Adam on F1 and took 51.8x longer.
  • The full dataset did not improve the VQC meaningfully at the same hyperparameters. In this run it was slightly worse on F1 while taking 9.0x longer.
  • The MLP nearly matched the VQC. The gap between VQC Adam (0.6746) and MLP (0.6655) is too small to support a strong quantum claim.
  • The VQC still shows a high-recall / low-precision pattern. It tends to over-predict the lapse class instead of learning a clean decision boundary.

Bottom line: this repo now has a more rigorous experimental story, but it still does not show clear quantum advantage on this dataset.


Training Pipeline

data/EEG_data.csv
  -> scripts/data_processing.py
  -> data/processed_eeg.csv
  -> scripts/quantum_model.py
      -> data/feature_scaler.pkl
      -> data/quantum_focus_model_v2.pth
      -> data/training_results.json
  -> scripts/generate_linkedin_images.py
      -> linkedin_images/*.png

Key scripts:

  • scripts/data_processing.py: builds processed_eeg.csv
  • scripts/quantum_model.py: runs Adam, QNG, full-data Adam, SVM, and MLP
  • scripts/generate_linkedin_images.py: regenerates the publish cards from the latest JSON results

Repository Structure

Data / Research

  • data/processed_eeg.csv
  • data/training_results.json
  • data/feature_scaler.pkl
  • data/quantum_focus_model_v2.pth
  • linkedin_images/

Backend

  • backend/main.py
  • backend/requirements.txt

Frontend

  • web-app/

Supporting Docs

  • docs/deployment_guide.md
  • validation_report.md

Run Locally

1. Prepare the processed dataset

python3 scripts/data_processing.py

2. Run the full experiment suite

python3 scripts/quantum_model.py

Outputs:

  • data/feature_scaler.pkl
  • data/quantum_focus_model_v2.pth
  • data/training_results.json

3. Regenerate the LinkedIn summary cards

python3 scripts/generate_linkedin_images.py

Outputs:

  • linkedin_images/results_table.png
  • linkedin_images/loss_curve.png
  • linkedin_images/architecture_card.png

4. Run the backend

cd backend
pip install -r requirements.txt
python3 main.py

Backend URL:

  • http://localhost:8000

5. Run the frontend

cd web-app
npm install
npm run dev

Frontend URL:

  • http://localhost:3000

Frontend Environment Variable

Variable             Default                Description
-------------------  ---------------------  -------------------
NEXT_PUBLIC_API_URL  http://localhost:8000  FastAPI backend URL
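To override the default, the variable can be set via Next.js's standard env-file convention (the file path is an assumption based on that convention, not a file shipped in this repo):

```shell
# web-app/.env.local -- point the frontend at a non-default backend
NEXT_PUBLIC_API_URL=http://localhost:8000
```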

Tech Stack

Backend

  • Python 3.11
  • FastAPI 0.128.5
  • Uvicorn 0.40.0
  • PennyLane 0.43.1
  • PyTorch 2.9.0
  • scikit-learn 1.5.1
  • pandas 2.2.2
  • NumPy 2.0.1

Frontend

  • Next.js 15.1.9
  • React 19
  • Three.js / React Three Fiber
  • Recharts
  • Framer Motion
  • Tailwind CSS
  • TypeScript

Current Limitations

  • No strong predictive performance yet. All current models remain close to chance on held-out data.
  • No quantum advantage claim is justified. The VQC is not clearly better than the MLP and is dramatically slower than classical baselines.
  • Hyperparameters are still simple. No systematic sweep over learning rate, depth, threshold, class weighting, or calibration has been done yet.
  • Simulator only. Everything runs on default.qubit; there is no quantum hardware result in this repo.
  • Real EEG streaming is still scaffolded. The frontend/backend integration is not yet a validated live EEG pipeline.
  • No subject-specific personalization. The current setup is a global model with a shared threshold, not a calibrated per-user system.

Recommended Next Steps

  • Tune learning rate and depth instead of assuming QNG fixes convergence
  • Add stronger classical baselines such as Random Forest and XGBoost
  • Evaluate threshold calibration and PR / ROC behavior, not just point metrics
  • Test subject-aware splits to avoid optimistic leakage patterns
  • Replace mock EEG input with validated live acquisition if real-time use is a goal
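The threshold-calibration step above can be sketched with scikit-learn's precision_recall_curve. The helper best_f1_threshold is hypothetical, not code in this repo:

```python
import numpy as np
from sklearn.metrics import precision_recall_curve

def best_f1_threshold(y_true, probs):
    """Scan the PR curve for the probability threshold that maximizes F1,
    instead of assuming the default 0.5 cutoff."""
    prec, rec, thr = precision_recall_curve(y_true, probs)
    f1 = 2 * prec * rec / np.clip(prec + rec, 1e-12, None)
    best = np.argmax(f1[:-1])   # final PR point has no associated threshold
    return thr[best], f1[best]
```

For a high-recall / low-precision model like the current VQC, a calibrated threshold above 0.5 is the cheapest way to test whether the score distribution carries more signal than the point metrics suggest.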

Honest Status

This project is still useful as a software and research prototype:

  • the data pipeline works
  • the VQC training pipeline is reproducible
  • the backend/frontend stack is wired up
  • the experiment reporting is now much more complete

But the current model outcome is still a null result. That honesty is part of the value of the repo.

About

Quantum-Enhanced Neuro-Coaching System (QENCS): a research prototype for EEG-based focus-lapse detection using a variational quantum classifier (VQC) with a 4-layer entangling ansatz.
