9 changes: 9 additions & 0 deletions README.md
@@ -240,6 +240,15 @@

Xanadu-style job conversion example:

```bash
bash examples/hardware_integration/run.sh
```

Aurora/QCA/GKP fixture conversions:

```bash
bash examples/hardware_integration/run_public_datasets.sh
```

For large real datasets, use converter streaming/chunk flags:
`--stream --shot-start <N> --max-shots <K> [--append-out]`.

Replay converted NDJSON through the C++ adapter:

```bash
95 changes: 92 additions & 3 deletions docs/hardware-integration.md
@@ -29,13 +29,26 @@ Key fields:
Each line is one `DecodeRequest` JSON object.
See `schemas/decoder_io_example.ndjson` for examples.
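As a quick sanity check before replaying a file, the NDJSON can be parsed line by line with the standard library. This sketch assumes only what the format above states, that each non-empty line is one JSON object; the helper name `iter_decode_requests` is illustrative, not part of the toolkit:

```python
import json

def iter_decode_requests(path):
    """Yield one parsed DecodeRequest dict per NDJSON line, skipping blank lines."""
    with open(path) as fh:
        for lineno, line in enumerate(fh, start=1):
            line = line.strip()
            if not line:
                continue
            obj = json.loads(line)  # raises ValueError on a malformed line
            if not isinstance(obj, dict):
                raise ValueError(f"line {lineno}: expected a JSON object")
            yield obj
```

Running this over a converter output and counting the yielded objects gives a cheap shot-count check before the C++ replay step.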

### Xanadu job conversion helper
### Xanadu dataset conversion helper

Use the built-in converter example to transform Xanadu-style job outputs into
`DecodeRequest` NDJSON:
Use the built-in converter example to transform Xanadu data into
`DecodeRequest` NDJSON.

Supported source modes:

- `xanadu_job_json`: legacy job payloads with `output`/`samples`.
- `aurora_switch_dir`: Aurora decoder-demo batch directory with
`switch_settings_qpu_*.npy` (or `.json` in fixture mode).
- `shot_matrix`: generic shot arrays from `.json`, `.npy`, or `.npz`
(covers QCA `samples.npy`).
- `count_table_json`: count-compressed outcomes (`sample` + `count`), useful for
GKP-style exports.

Legacy job JSON:

```bash
python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format xanadu_job_json \
--input /path/to/xanadu_job.json \
--mapping /path/to/your_mapping.json \
--out examples/results/hardware_integration/decoder_requests.ndjson
```

@@ -47,9 +60,85 @@

Quick demo:

```bash
bash examples/hardware_integration/run.sh
```

Aurora / QCA / GKP fixture demos:

```bash
bash examples/hardware_integration/run_public_datasets.sh
```

Real Aurora decoder-demo batch:

```bash
python3 -m pip install numpy

python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format aurora_switch_dir \
--stream \
--input /path/to/decoder_demo/signal/batch_0 \
--mapping examples/hardware_integration/xanadu_aurora_mapping_example.json \
--out examples/results/hardware_integration/decoder_requests_aurora.ndjson \
--aurora-binarize \
--max-shots 20000 \
--progress-every 5000
```

Real QCA sample matrix:

```bash
python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format shot_matrix \
--stream \
--input /path/to/fig3a/samples.npy \
--mapping examples/hardware_integration/xanadu_qca_mapping_example.json \
--out examples/results/hardware_integration/decoder_requests_qca.ndjson \
--max-shots 50000 \
--progress-every 10000
```

To chunk large QCA files, repeat the conversion with a shifted `--shot-start`, adding `--append-out` on every run after the first:

```bash
python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format shot_matrix \
--stream \
--input /path/to/fig3a/samples.npy \
--mapping examples/hardware_integration/xanadu_qca_mapping_example.json \
--out examples/results/hardware_integration/decoder_requests_qca.ndjson \
--shot-start 0 \
--max-shots 200000

python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format shot_matrix \
--stream \
--input /path/to/fig3a/samples.npy \
--mapping examples/hardware_integration/xanadu_qca_mapping_example.json \
--out examples/results/hardware_integration/decoder_requests_qca.ndjson \
--append-out \
--shot-start 200000 \
--max-shots 200000
```
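The two-step pattern above generalizes to a loop. This sketch builds one converter invocation per chunk using only the flags documented here; the helper name `chunk_commands` and the default chunk size are illustrative:

```python
import sys

CONVERTER = "examples/hardware_integration/convert_xanadu_job_to_decoder_io.py"

def chunk_commands(input_path, mapping, out_path, total_shots, chunk=200_000):
    """Build one converter command per chunk of shots.

    The first chunk writes the output file; every later chunk shifts
    --shot-start and adds --append-out so results accumulate in order.
    """
    cmds = []
    for start in range(0, total_shots, chunk):
        cmd = [sys.executable, CONVERTER,
               "--source-format", "shot_matrix",
               "--stream",
               "--input", input_path,
               "--mapping", mapping,
               "--out", out_path,
               "--shot-start", str(start),
               "--max-shots", str(min(chunk, total_shots - start))]
        if start > 0:
            cmd.append("--append-out")
        cmds.append(cmd)
    return cmds
```

Run the commands in order, e.g. `for cmd in chunk_commands(...): subprocess.run(cmd, check=True)`; `check=True` stops the loop before a failed chunk can leave a gap in the appended output.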

Count-compressed GKP outcomes:

```bash
python3 examples/hardware_integration/convert_xanadu_job_to_decoder_io.py \
--source-format count_table_json \
--input /path/to/gkp_outcome_counts.json \
--mapping examples/hardware_integration/xanadu_gkp_mapping_example.json \
--out examples/results/hardware_integration/decoder_requests_gkp.ndjson
```
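For reference, expanding a count table by hand takes only a few lines. The `sample`/`count` field names follow the description above; the exact file layout assumed here (a top-level list of records) is illustrative and may differ from a given export:

```python
import json

def expand_count_table(path):
    """Expand count-compressed outcomes into individual shots.

    Assumes a JSON list of records, each holding a per-mode `sample`
    and an integer `count` of how often that outcome occurred.
    """
    with open(path) as fh:
        records = json.load(fh)
    shots = []
    for rec in records:
        shots.extend([rec["sample"]] * int(rec["count"]))
    return shots
```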

The mapping file controls how measured modes are converted to syndrome events.
See `examples/hardware_integration/xanadu_syndrome_mapping_example.json`.

Large-data controls:

- `--stream`: use memory-mapped loading where possible (notably `.npy`).
- `--shot-start`: skip the first `N` expanded shots.
- `--max-shots`: cap this run to `K` shots.
- `--append-out`: append to an existing NDJSON file.
- `--progress-every`: print progress every `M` written requests.

### Replay NDJSON through LiDMaS+ adapter

Use the C++ CLI replay mode to decode each NDJSON `DecodeRequest` line and write
58 changes: 58 additions & 0 deletions docs/releases/v1.2.0-rc.1.md
@@ -0,0 +1,58 @@
# LiDMaS+ v1.2.0-rc.1

Release date: 2026-03-17

Tag: `v1.2.0-rc.1`
Package version: `1.2.0rc1`

This release candidate focuses on Xanadu hardware-data integration and large-dataset conversion reliability.

## Highlights

- Added multi-format Xanadu data conversion into `decoder_io` NDJSON:
- legacy job JSON (`output`/`samples`)
- Aurora decoder-demo switch-setting batches
- QCA/Borealis shot matrices (`.json`, `.npy`, `.npz`)
- count-compressed outcome tables (GKP-style exports)
- Added C++ replay workflow integration for converted NDJSON requests.
- Added streaming/chunked conversion controls for heavy datasets:
- `--stream`
- `--shot-start`
- `--max-shots`
- `--append-out`
- `--progress-every`
- Added runnable fixtures/scripts for Aurora/QCA/GKP conversion and replay validation.

## Included Changes Since v1.1.4

- `feat(hardware): add decoder_io NDJSON replay CLI and Xanadu integration example` (`431b0c9`)
- `feat(hardware): add streaming Xanadu dataset integration` (`a544e2b`)

Comparison:
`https://github.com/DennisWayo/lidmas_cpp/compare/v1.1.4...v1.2.0-rc.1`

## Validation Performed

- Python compile check:
- `python3 -m py_compile examples/hardware_integration/convert_xanadu_job_to_decoder_io.py`
- Fixture conversions:
- `bash examples/hardware_integration/run.sh`
- `bash examples/hardware_integration/run_public_datasets.sh`
- Replay checks:
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_aurora.ndjson`
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_qca.ndjson`
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_gkp.ndjson`
- Result: replay completed with `errors=0` on fixture datasets.

## Notes

- `.npy/.npz` conversion paths require NumPy (`pip install numpy`).
- This is an RC intended for real-data validation at scale before final `v1.2.0`.
- No breaking changes are expected for existing workflows.

## Upgrade / Usage

- Use this RC tag for validation runs:
- `git checkout v1.2.0-rc.1`
- For large QCA/Aurora inputs, prefer streaming/chunking:
- `--stream --shot-start <N> --max-shots <K> [--append-out]`
58 changes: 58 additions & 0 deletions docs/releases/v1.2.0-rc.2.md
@@ -0,0 +1,58 @@
# LiDMaS+ v1.2.0-rc.2

Release date: 2026-03-17

Tag: `v1.2.0-rc.2`
Package version: `1.2.0rc2`

This release candidate focuses on Xanadu hardware-data integration and large-dataset conversion reliability.

## Highlights

- Added multi-format Xanadu data conversion into `decoder_io` NDJSON:
- legacy job JSON (`output`/`samples`)
- Aurora decoder-demo switch-setting batches
- QCA/Borealis shot matrices (`.json`, `.npy`, `.npz`)
- count-compressed outcome tables (GKP-style exports)
- Added C++ replay workflow integration for converted NDJSON requests.
- Added streaming/chunked conversion controls for heavy datasets:
- `--stream`
- `--shot-start`
- `--max-shots`
- `--append-out`
- `--progress-every`
- Added runnable fixtures/scripts for Aurora/QCA/GKP conversion and replay validation.

## Included Changes Since v1.1.4

- `feat(hardware): add decoder_io NDJSON replay CLI and Xanadu integration example` (`431b0c9`)
- `feat(hardware): add streaming Xanadu dataset integration` (`a544e2b`)

Comparison:
`https://github.com/DennisWayo/lidmas_cpp/compare/v1.1.4...v1.2.0-rc.2`

## Validation Performed

- Python compile check:
- `python3 -m py_compile examples/hardware_integration/convert_xanadu_job_to_decoder_io.py`
- Fixture conversions:
- `bash examples/hardware_integration/run.sh`
- `bash examples/hardware_integration/run_public_datasets.sh`
- Replay checks:
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_aurora.ndjson`
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_qca.ndjson`
- `bash examples/hardware_integration/replay.sh examples/results/hardware_integration/decoder_requests_gkp.ndjson`
- Result: replay completed with `errors=0` on fixture datasets.

## Notes

- `.npy/.npz` conversion paths require NumPy (`pip install numpy`).
- This is an RC intended for real-data validation at scale before final `v1.2.0`.
- No breaking changes are expected for existing workflows.

## Upgrade / Usage

- Use this RC tag for validation runs:
- `git checkout v1.2.0-rc.2`
- For large QCA/Aurora inputs, prefer streaming/chunking:
- `--stream --shot-start <N> --max-shots <K> [--append-out]`
2 changes: 1 addition & 1 deletion examples/README.md
@@ -27,7 +27,7 @@ LIDMAS_SKIP_PY_DEPS=1 ./examples/hybrid_threshold/run.sh
- `decoder_comparison/`: same sweep across multiple decoders.
- `failure_debug/`: stress run and failure-dump capture workflow.
- `plot_only/`: publication-grade plotting from existing CSV files.
- `hardware_integration/`: convert Xanadu-style job output JSON to LiDMaS+ decoder IO NDJSON.
- `hardware_integration/`: convert Xanadu datasets (Aurora/QCA/GKP/job JSON) to LiDMaS+ decoder IO NDJSON.


## Central Results Folder