QENCS is a research prototype that tests whether a 9-qubit variational quantum classifier (VQC) can detect cognitive focus lapses from pre-computed EEG band-power features. The stack combines a PennyLane + PyTorch training pipeline, a FastAPI backend, and a Next.js frontend.
This is not a clinical system, not a real-time validated EEG product, and not a demonstration of quantum advantage. The latest publish pass completed the missing experiments and the result is still a negative-but-honest research outcome: the quantum model remains close to chance and does not clearly beat simple classical baselines.
The repo now includes all three pre-publication experiments:
- Adam vs QNG on the same 2,000-sample stratified subset for 50 epochs
- Full-dataset Adam training on all 12,811 available samples
- A 3-layer MLP baseline trained on the exact same split and scaler as the VQC
Artifacts generated by the latest run:
- `data/training_results.json`
- `data/feature_scaler.pkl`
- `data/quantum_focus_model_v2.pth`
- `linkedin_images/results_table.png`
- `linkedin_images/loss_curve.png`
- `linkedin_images/architecture_card.png`
The LinkedIn cards are now generated from results on disk via `scripts/generate_linkedin_images.py`, so the visuals stay in sync with the actual experiment output.
Predict a predefined label from EEG-derived features:

- `0` = no lapse
- `1` = lapse
- Source: `data/processed_eeg.csv`
- Total rows: 12,811
- Full-dataset class distribution: 6,662 class-0 / 6,149 class-1
- 2k experiment subset: 1,000 class-0 / 1,000 class-1
Nine features are used:
`Delta`, `Theta`, `Alpha1`, `Alpha2`, `Beta1`, `Beta2`, `Gamma1`, `Gamma2`, `FocusRatio`
`FocusRatio` is engineered in `scripts/data_processing.py`, and all nine features are scaled to [0, pi] with a `MinMaxScaler` fit on the training split only.
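The scaling step described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: random arrays stand in for the nine band-power features, and the key point is that the scaler is fit on the training split only before transforming both splits.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Random stand-in for the nine band-power features and binary labels
rng = np.random.default_rng(42)
X = rng.random((100, 9))
y = rng.integers(0, 2, 100)

# Stratified 80/20 split, seed 42, as in the training config
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Fit on the training split ONLY, then apply to both splits,
# mapping each feature into [0, pi] for angle embedding
scaler = MinMaxScaler(feature_range=(0, np.pi))
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)
```

Fitting on the train split only avoids leaking test-set statistics into the scaling, which matters for honest held-out metrics.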
| Component | Value |
|---|---|
| Qubits | 9 |
| Ansatz depth | 4 layers |
| Quantum layer | StronglyEntanglingLayers |
| Weight tensor shape | (4, 9, 3) |
| Quantum parameters | 108 |
| Feature embedding | AngleEmbedding |
| Measurements | 9 x expval(PauliZ) |
| Readout | Linear(9, 1) logits layer |
| Inference | sigmoid + threshold 0.5 |
| Simulator | PennyLane default.qubit |
| Setting | Value |
|---|---|
| Epochs | 50 |
| Batch size | 32 |
| Learning rate | 0.01 |
| Loss | BCEWithLogitsLoss(pos_weight=2.0) |
| Split | stratified 80/20 |
| Seed | 42 |
| Scaler | MinMaxScaler(feature_range=[0, pi]) on train only |
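The training configuration above can be sketched as a standard PyTorch loop. A plain linear layer stands in for the VQC so the sketch runs without PennyLane; the real pipeline trains the quantum model with batches of 32 over 50 epochs.

```python
import torch

torch.manual_seed(42)  # seed from the config table

# Stand-in model; the real run plugs the 9-qubit VQC in here
model = torch.nn.Linear(9, 1)
loss_fn = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor(2.0))
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

# Toy batch of scaled features and binary labels
X = torch.rand(64, 9)
y = torch.randint(0, 2, (64, 1)).float()

for epoch in range(5):  # the real run uses 50 epochs over batches of 32
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

# Inference: sigmoid + threshold 0.5
preds = (torch.sigmoid(model(X)) > 0.5).int()
```

`pos_weight=2.0` up-weights the lapse class in the loss, which is one plausible contributor to the high-recall / low-precision pattern discussed below.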
The latest run also adds:

- `QNGOptimizer(approx="block-diag")` comparison run
- Full-dataset Adam run
- MLP baseline: `Linear(9,18) -> ReLU -> Linear(18,18) -> ReLU -> Linear(18,1)`
- JSON-safe full loss histories for Adam, QNG, full-dataset Adam, and MLP
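The MLP baseline described above is small enough to state in full. A sketch of the stated architecture (the repo's actual training code may differ in details):

```python
import torch

# 3-layer MLP baseline from the experiment list:
# Linear(9,18) -> ReLU -> Linear(18,18) -> ReLU -> Linear(18,1)
mlp = torch.nn.Sequential(
    torch.nn.Linear(9, 18),
    torch.nn.ReLU(),
    torch.nn.Linear(18, 18),
    torch.nn.ReLU(),
    torch.nn.Linear(18, 1),  # logits, same readout convention as the VQC
)
```

At 541 trainable parameters it is a few times larger than the VQC's 108 quantum parameters, but still tiny by classical standards, which makes it a fair sanity-check baseline rather than a strong one.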
All numbers below come directly from `data/training_results.json`.
| Metric | Adam | QNG | Delta |
|---|---|---|---|
| Accuracy | 0.5225 | 0.5000 | -0.0225 |
| Precision | 0.5116 | 0.5000 | -0.0116 |
| Recall | 0.9900 | 1.0000 | +0.0100 |
| F1 Score | 0.6746 | 0.6667 | -0.0079 |
| Train time | 113.6 s | 5885.2 s | 51.8x slower |
| Metric | 2k subset | Full dataset | Delta |
|---|---|---|---|
| Accuracy | 0.5225 | 0.5033 | -0.0192 |
| Precision | 0.5116 | 0.4912 | -0.0204 |
| Recall | 0.9900 | 0.9797 | -0.0103 |
| F1 Score | 0.6746 | 0.6544 | -0.0202 |
| Train time | 113.6 s | 1017.3 s | 9.0x longer |
| Metric | VQC (Adam) | SVM | MLP |
|---|---|---|---|
| Accuracy | 0.5225 | 0.5250 | 0.5150 |
| Precision | 0.5116 | 0.5176 | 0.5079 |
| Recall | 0.9900 | 0.7350 | 0.9650 |
| F1 Score | 0.6746 | 0.6074 | 0.6655 |
| Train time | 113.6 s | 0.06 s | 1.1 s |
- QNG did not make a meaningful difference. It ended slightly worse than Adam on F1 and took 51.8x longer.
- The full dataset did not improve the VQC meaningfully at the same hyperparameters. In this run it was slightly worse on F1 while taking 9.0x longer.
- The MLP nearly matched the VQC. The gap between VQC Adam (0.6746) and MLP (0.6655) is too small to support a strong quantum claim.
- The VQC still shows a high-recall / low-precision pattern. It tends to over-predict the lapse class instead of learning a clean decision boundary.
Bottom line: this repo now has a more rigorous experimental story, but it still does not show clear quantum advantage on this dataset.
```text
data/EEG_data.csv
  -> scripts/data_processing.py
  -> data/processed_eeg.csv
  -> scripts/quantum_model.py
  -> data/feature_scaler.pkl
  -> data/quantum_focus_model_v2.pth
  -> data/training_results.json
  -> scripts/generate_linkedin_images.py
  -> linkedin_images/*.png
```
Key scripts:
- `scripts/data_processing.py`: builds `processed_eeg.csv`
- `scripts/quantum_model.py`: runs Adam, QNG, full-data Adam, SVM, and MLP
- `scripts/generate_linkedin_images.py`: regenerates the publish cards from the latest JSON results
- Data and artifacts: `data/processed_eeg.csv`, `data/training_results.json`, `data/feature_scaler.pkl`, `data/quantum_focus_model_v2.pth`, `linkedin_images/`
- Backend: `backend/main.py`, `backend/requirements.txt`
- Frontend: `web-app/`
- Docs: `docs/deployment_guide.md`, `validation_report.md`
Run the pipeline:

```bash
python3 scripts/data_processing.py
python3 scripts/quantum_model.py
```

Outputs:

- `data/feature_scaler.pkl`
- `data/quantum_focus_model_v2.pth`
- `data/training_results.json`

Regenerate the publish cards:

```bash
python3 scripts/generate_linkedin_images.py
```

Outputs:

- `linkedin_images/results_table.png`
- `linkedin_images/loss_curve.png`
- `linkedin_images/architecture_card.png`
```bash
cd backend
pip install -r requirements.txt
python3 main.py
```

Backend URL: `http://localhost:8000`
```bash
cd web-app
npm install
npm run dev
```

Frontend URL: `http://localhost:3000`
| Variable | Default | Description |
|---|---|---|
| `NEXT_PUBLIC_API_URL` | `http://localhost:8000` | FastAPI backend URL |
- Python 3.11
- FastAPI 0.128.5
- Uvicorn 0.40.0
- PennyLane 0.43.1
- PyTorch 2.9.0
- scikit-learn 1.5.1
- pandas 2.2.2
- NumPy 2.0.1
- Next.js 15.1.9
- React 19
- Three.js / React Three Fiber
- Recharts
- Framer Motion
- Tailwind CSS
- TypeScript
- No strong predictive performance yet. All current models remain close to chance on held-out data.
- No quantum advantage claim is justified. The VQC is not clearly better than the MLP and is dramatically slower than classical baselines.
- Hyperparameters are still simple. No systematic sweep over learning rate, depth, threshold, class weighting, or calibration has been done yet.
- Simulator only. Everything runs on `default.qubit`; there is no quantum hardware result in this repo.
- Real EEG streaming is still scaffolded. The frontend/backend integration is not yet a validated live EEG pipeline.
- No subject-specific personalization. The current setup is a global model with a shared threshold, not a calibrated per-user system.
- Tune learning rate and depth instead of assuming QNG fixes convergence
- Add stronger classical baselines such as Random Forest and XGBoost
- Evaluate threshold calibration and PR / ROC behavior, not just point metrics
- Test subject-aware splits to avoid optimistic leakage patterns
- Replace mock EEG input with validated live acquisition if real-time use is a goal
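The threshold-calibration item above can be made concrete with scikit-learn. This is a sketch on synthetic scores, not repo code: it sweeps the precision-recall curve and picks the threshold that maximizes F1 instead of assuming 0.5.

```python
import numpy as np
from sklearn.metrics import precision_recall_curve, roc_auc_score

# Toy scores standing in for the model's sigmoid outputs on held-out data
rng = np.random.default_rng(42)
y_true = rng.integers(0, 2, 200)
y_score = np.clip(y_true * 0.2 + rng.random(200) * 0.8, 0.0, 1.0)

# Threshold-free summary of ranking quality
auc = roc_auc_score(y_true, y_score)

# Sweep all operating points; precision/recall have one extra
# trailing entry beyond the thresholds array
precision, recall, thresholds = precision_recall_curve(y_true, y_score)
f1 = 2 * precision * recall / np.clip(precision + recall, 1e-9, None)
best_threshold = thresholds[np.argmax(f1[:-1])]
```

For a model with the VQC's high-recall / low-precision profile, this kind of sweep would show whether any threshold trades recall for enough precision to beat the F1 at 0.5.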
This project is still useful as a software and research prototype:
- the data pipeline works
- the VQC training pipeline is reproducible
- the backend/frontend stack is wired up
- the experiment reporting is now much more complete
But the current model outcome is still a null result. That honesty is part of the value of the repo.