feat: Robust Analysis (nodes 02a, 02b, 02c and 04b) and Standardized Plotting Framework#9

Open
shanto268 wants to merge 45 commits into qua-platform:main from Quantum-Elements:main

Conversation

@shanto268

Updated the codebase to work with qua-libs PR 325.

shanto268 and others added 14 commits May 30, 2025 15:34
…-resspec-node

Interactive plot via plotly added to Node 02a
- Implement TraceConfig, LayoutConfig, and PlotConfig Pydantic models
- Create create_plotly_figure function for generating interactive plots
- Support multi-qubit grid layouts with consistent styling
- Add hover templates and customizable trace properties
- Enable both raw data and fitted curve visualization
feat: Add standard_plotter module for configuration-driven plotting
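The configuration models named in these commits can be pictured roughly as follows. This is a dependency-free dataclass sketch (the PR itself uses Pydantic BaseModel), and every field name here is an illustrative assumption, not the PR's actual schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TraceConfig:
    # Illustrative fields: which dataset variables feed the trace and how it is styled
    x_source: str = "frequency"
    y_source: str = "amplitude"
    name: str = "raw"                 # legend label
    mode: str = "lines"               # plotly-style trace mode
    hover_template: Optional[str] = None

@dataclass
class PlotConfig:
    # One config describes a whole figure: raw-data traces plus fitted-curve traces
    title: str = ""
    raw_traces: List[TraceConfig] = field(default_factory=list)
    fit_traces: List[TraceConfig] = field(default_factory=list)

cfg = PlotConfig(
    title="02a resonator spectroscopy",
    raw_traces=[TraceConfig(name="raw")],
    fit_traces=[TraceConfig(name="fit", mode="lines")],
)
```

A `create_plotly_figure(ds, cfg)`-style function would then iterate over these trace configs to build each subplot.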
@CLAassistant

CLAassistant commented Jun 24, 2025

CLA assistant check
All committers have signed the CLA.

@nulinspiratie requested a review from Copilot June 26, 2025 13:45
Contributor

Copilot AI left a comment

Pull Request Overview

A concise summary of the changes to support the new qua-libs PR 325, adding robust analysis metrics and a unified plotting framework.

  • Introduces standard_plotter.py to generate both Plotly and Matplotlib figures from raw and fit datasets.
  • Extends grids.py with PlotlyQubitGrid and iteration helpers for Plotly subplot layouts.
  • Enhances analysis modules: fitting.py now returns R² values and feature_detection.py computes new metrics (peak counts, SNR, asymmetry, skewness, artifact detection).

Reviewed Changes

Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.

Summary per file:
- qualibration_libs/plotting/standard_plotter.py: New unified plotting functions create_plotly_figure and create_matplotlib_figure.
- qualibration_libs/plotting/grids.py: Added PlotlyQubitGrid class and plotly_grid_iter iterator to mirror the physical qubit layout.
- qualibration_libs/plotting/__init__.py: Exported the new plotting functions and grid helpers.
- qualibration_libs/analysis/fitting.py: Updated apply_fit to return fit parameters plus R²; cleaned up import order and removed old comments.
- qualibration_libs/analysis/feature_detection.py: Added a helper for Savitzky-Golay windowing and multiple new feature-detection routines.
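The R² value that the updated apply_fit is described as returning is the standard coefficient of determination; a minimal sketch (the helper name and signature are illustrative assumptions, not the PR's API):

```python
import numpy as np

def r_squared(y: np.ndarray, y_fit: np.ndarray) -> float:
    """Coefficient of determination of a fitted curve against the raw data."""
    ss_res = float(np.sum((y - y_fit) ** 2))        # residual sum of squares
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))   # total sum of squares
    return 1.0 - ss_res / ss_tot if ss_tot > 0 else float("nan")

y = np.array([1.0, 2.0, 3.0, 4.0])
perfect = r_squared(y, y)  # a perfect fit gives exactly 1.0
```

Returning this alongside the fit parameters lets downstream nodes flag low-quality fits with a single threshold.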
Comments suppressed due to low confidence (3)

qualibration_libs/plotting/standard_plotter.py:61

  • Only the first PlotConfig is used even if multiple configs are passed; either enforce a single config or extend support to iterate over all provided configurations.
    config = plot_configs[0] # For now, assume one config for the whole figure.

qualibration_libs/plotting/standard_plotter.py:72

  • [nitpick] The loop index variable i is ambiguous; consider renaming to idx or subplot_idx for clarity.
    for i, name_dict in plotly_grid_iter(grid):

qualibration_libs/analysis/feature_detection.py:30

  • The docstring for peaks_dips still references the old default prominence_factor=5; update it to reflect the new default of 2.
def peaks_dips(

Comment thread qualibration_libs/analysis/fitting.py
Comment thread qualibration_libs/analysis/feature_detection.py Outdated
@TheoLaudatQM TheoLaudatQM self-requested a review June 27, 2025 08:34
Collaborator

@TheoLaudatQM left a comment

Preliminary review for resonator_spectroscopy_1D only

Comment on lines +184 to +210
def _opx_bandwidth_artifact(arr, prominence, window=20, exclusion=3, artifact_prominence_factor=2.0):
    # Baseline correction (ALS)
    baseline_window = _get_savgol_window_length(len(arr), 51, 3)
    baseline = savgol_filter(arr, window_length=baseline_window, polyorder=3) if baseline_window > 0 else arr
    arr_bc = arr - baseline
    # Smoothing
    smooth_window = _get_savgol_window_length(len(arr_bc), 21, 3)
    smoothed = savgol_filter(arr_bc, window_length=smooth_window, polyorder=3) if smooth_window > 0 else arr_bc
    # Noise estimation from residual
    noise = np.std(arr_bc - smoothed)
    # Find main dip (minimum)
    main_idx = np.argmin(smoothed)
    # Define window around main dip, excluding center
    start = max(0, main_idx - window)
    end = min(len(smoothed), main_idx + window + 1)
    exclusion_start = max(start, main_idx - exclusion)
    exclusion_end = min(end, main_idx + exclusion + 1)
    # Search for local maxima (peaks) in the window, excluding the dip center
    search_region = np.concatenate([
        smoothed[start:exclusion_start],
        smoothed[exclusion_end:end]
    ])
    if len(search_region) == 0:
        return np.array([False])
    # Find peaks in the search region
    peaks, properties = find_peaks(search_region, prominence=artifact_prominence_factor * noise)
    return np.array([len(peaks) > 0])
Collaborator


I am not sure that I understand how this function works at a high level. Based on its name, I am guessing that it is looking for the 5 MHz FWHM dip with a single outlier at the minimum, or something similar. However, when analyzing dataset #668, the opx_bandwidth_artifact flag is raised to True, whereas the only visible problem is related to the SNR. Even more, by tuning the smoothing window, peaks can be detected, but the analysis fails due to this flag being raised.
Can you please elaborate?

Author


In this commit and 6cbf4ea I have added fits based on phase data as you suggested, and peaks in the #668 datasets are now automatically discovered

Collaborator

@TheoLaudatQM Jul 10, 2025


Right, but this doesn't explain what this function tries to detect: what features are identified as opx_bandwidth_artifacts?
I am insisting on this because with dataset #544, for instance, the flag 'opx_bandwidth_artifact' is set to True for all qubits whereas there is clearly no limitation regarding the OPX bandwidth, and it seems that it is not even part of _determine_resonator_outcome anymore...

Author


You are absolutely correct - I have simplified the checks and gotten rid of this buggy flag in the latest PR :)

Comment thread qualibration_libs/analysis/feature_detection.py Outdated
Comment thread qualibration_libs/analysis/feature_detection.py Outdated
Comment thread qualibration_libs/analysis/feature_detection.py Outdated
Comment thread qualibration_libs/analysis/feature_detection.py
Comment thread qualibration_libs/analysis/feature_detection.py
Comment on lines +106 to +107
baseline_window = _get_savgol_window_length(len(arr), 51, 3)
baseline = savgol_filter(arr, window_length=baseline_window, polyorder=3) if baseline_window > 0 else arr
Collaborator


This seems to remove the background, but then how come the peaks from dataset #86 are not properly identified?
Could it be because of the filtering window?

Author


As discussed in the meeting earlier, we have designed everything to work for fitting single resonators.

One can certainly extend the present analysis scheme to fit multiple resonators.

Collaborator


I understand, but there are two things here:

  1. Several peaks are identified, so the analysis is marked as failed: this is good.
  2. If I disable the failing assertion, the fitting routine did converge and found a very broad peak, as it seems to have converged on the background instead of the most prominent peak. This is the part I don't understand, considering that the baseline should be removed by the savgol filter so that the most prominent peak should still be identified. Can you please explain what I am missing here?
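The baseline-removal argument in this thread can be checked on a toy signal. In this sketch a low-order polynomial fit stands in for the Savitzky-Golay baseline so that only numpy is needed; after subtracting the slow background, the narrow dip still sets the minimum:

```python
import numpy as np

# Toy signal: a slow linear drift plus one narrow resonance dip
x = np.linspace(-1.0, 1.0, 401)
background = 0.5 * x + 2.0
dip = -1.0 * np.exp(-((x - 0.2) ** 2) / 0.001)
signal = background + dip

# Estimate and remove the slowly varying background
coeffs = np.polyfit(x, signal, 1)
baseline = np.polyval(coeffs, x)
corrected = signal - baseline

# After correction, the most prominent dip still dominates the minimum
main_idx = int(np.argmin(corrected))
```

If the fit nevertheless locks onto the background, the likely culprit is the filter window being wide relative to the feature, exactly the point raised above.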

Author


The decisions we made here were biased towards not choosing a resonator when many are present, but now that we have a better understanding of the deliverable, we have addressed this issue in the latest PR.

Comment thread qualibration_libs/plotting/grids.py Outdated
…it 1D resonator spec using phase data as well

- Added consistent preprocessing functions for peak detection in feature_detection.py.
- Introduced circle_fit_s21_resonator_model function in fitting.py for S21 circle fitting.
- Updated models.py to include S21Resonator class for fitting 1D resonator spectroscopy data.
- Refactored peak detection logic to utilize new preprocessing functions, improving code clarity and maintainability.
Comment thread qualibration_libs/analysis/feature_detection.py Outdated
Comment on lines +199 to +202
_DELAY_FIT_PERCENTAGE = 10
_PHASES_THRESHOLD_PERCENTAGE = 80
_STD_DEV_GAUSSIAN_KERNEL = 5
_PHASE_ELEMENTS = 5
Collaborator


Same comment here, are the users supposed to play with these parameters, and if so, how can they know how they impact the fit.
For instance on dataset #544, qC2 is systematically failing, but it is very hard to understand why.

Author


Addressed in the latest PR after the fruitful discussions.

Comment thread qualibration_libs/analysis/feature_detection.py Outdated
Comment on lines +26 to +46
"""
Apply consistent preprocessing to signal data for peak detection.

This function performs baseline correction using ALS, applies smoothing with
Savitzky-Golay filter, and estimates noise from the residual. All functions
use identical preprocessing parameters to ensure consistency.

Parameters
----------
arr : np.ndarray
Input signal array to preprocess

Returns
-------
Dict[str, np.ndarray]
Dictionary containing:
- 'smoothed': Baseline-corrected and smoothed signal
- 'noise': Estimated noise level from residual
- 'baseline': Original baseline estimate
- 'baseline_corrected': Signal with baseline removed
"""
Contributor


We typically use Google-style docstrings, so it makes sense to keep that in this pull request as well.
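As an illustration of the Google style the reviewer requests, applied to a hypothetical preprocessing helper (the function name and signature are assumptions, not the PR's actual API):

```python
def preprocess_signal(arr):
    """Apply baseline correction and smoothing to a 1D signal.

    Illustrative Google-style docstring; the name and signature are
    assumptions for the sake of the example.

    Args:
        arr: Input signal array to preprocess.

    Returns:
        dict: Keys 'smoothed', 'noise', 'baseline' and 'baseline_corrected'
            mapping to the processed arrays.
    """
    ...
```

The Args/Returns sections replace the NumPy-style Parameters/Returns headers used in the quoted docstring above.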

Author


Noted. I will make sure the final PR will have the Google-style docstrings

Comment thread qualibration_libs/analysis/models.py Outdated
Comment on lines +259 to +260
print(f"An error occurred during fitting: {e}")
traceback.print_exc()
Contributor


Let's avoid print statements. Instead, let's use the logging module.
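A minimal sketch of the suggested pattern; `fit_with_logging` is an illustrative wrapper, not a function from the PR. `logger.exception` records both the message and the full traceback, replacing the print/traceback.print_exc pair:

```python
import logging

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.ERROR)  # route records to stderr

def fit_with_logging(fit_func, *args, **kwargs):
    """Run a fit, logging any failure with its traceback instead of printing."""
    try:
        return fit_func(*args, **kwargs)
    except Exception:
        # logger.exception must be called inside an except block;
        # it attaches the active traceback to the log record
        logger.exception("An error occurred during fitting")
        return None

def _failing_fit():
    raise ValueError("fit did not converge")

ok = fit_with_logging(lambda: 42)        # succeeds, returns the fit result
failed = fit_with_logging(_failing_fit)  # logs the traceback, returns None
```

Callers can then control verbosity centrally through the logging configuration rather than editing library code.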

Author


I have not addressed it since we are no longer using this model and I will delete all this code in the final PR

Collaborator


What do you call "final PR"? Isn't it this one?

Author


please disregard that nomenclature 😅


plt.tight_layout(rect=[0, 0, 1, 0.95]); plt.show()

def _s21_model(self, frequencies: NDArray, resonance: float, q_loaded: float, q_coupling: float, phi: float = 0.0, amplitude: float = 1.0, alpha: float = 0.0, tau: float = 0.0) -> NDArray:
Contributor


These methods are too dense. It becomes difficult to understand what they are actually doing. They should, first of all, have docstrings and comments. We should also consider moving them to separate helper functions.

Author


Very fair comments.

I have not addressed it since we are no longer using this model and I will delete all this code in the final PR

Comment thread qualibration_libs/plotting/grids.py Outdated
Comment on lines +163 to +215
if grid_names:
    if isinstance(grid_names, str):
        grid_names = [grid_names]
    grid_indices = [
        tuple(map(int, re.findall(r"-?\d+", grid_name)))
        for grid_name in grid_names
    ]
else:
    grid_indices = [
        tuple(map(int, re.findall(r"-?\d+", str(ds.qubit.values[q_index]))))
        for q_index in range(ds.qubit.size)
    ]

if len(grid_indices) > 1:
    grid_name_mapping = dict(zip(grid_indices, ds.qubit.values))
else:
    try:
        grid_name_mapping = dict(zip(grid_indices, [str(ds.qubit.values[0])]))
    except Exception:
        grid_name_mapping = dict(zip(grid_indices, [str(ds.qubit.values)]))

grid_row_idxs = [idx[1] for idx in grid_indices]
grid_col_idxs = [idx[0] for idx in grid_indices]
min_grid_row = min(grid_row_idxs)
max_grid_row = max(grid_row_idxs)
min_grid_col = min(grid_col_idxs)
n_rows = max_grid_row - min_grid_row + 1
n_cols = max(grid_col_idxs) - min_grid_col + 1

# Build grid order using the same coordinate mapping logic as QubitGrid
# This ensures consistent layout between matplotlib and plotly versions
grid_order = []
name_dicts = []

for row in range(n_rows):
    for col in range(n_cols):
        # Use the same coordinate transformation as QubitGrid:
        # grid_row = max(grid_row_idxs) - row
        # grid_col = col + min_grid_col
        grid_row = max_grid_row - row
        grid_col = col + min_grid_col

        qubit_name = grid_name_mapping.get((grid_col, grid_row))
        grid_order.append(qubit_name)

        # Only add to name_dicts if qubit exists at this position
        if qubit_name is not None:
            name_dicts.append({ds.qubit.name: qubit_name})

self.n_rows = n_rows
self.n_cols = n_cols
self.grid_order = grid_order
self.name_dicts = name_dicts
Contributor


There's too much functionality in the init method. We should consider moving it to a separate private method.
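One way to follow this suggestion is to have `__init__` only store its inputs and delegate the layout computation to a private helper. A skeleton sketch (the helper body is elided; attribute names mirror the diff above):

```python
class PlotlyQubitGrid:
    """Sketch of the suggested refactor: a thin __init__ plus a private
    layout helper. The helper body here is a placeholder, not the PR's
    actual implementation."""

    def __init__(self, ds, grid_names=None):
        self.ds = ds
        self.grid_names = grid_names
        self._compute_layout()

    def _compute_layout(self):
        # Placeholder for the index-parsing and row/column mapping logic
        # shown in the diff; a real implementation would derive these
        # from the dataset's qubit coordinates.
        self.n_rows = 1
        self.n_cols = 1
        self.grid_order = []
        self.name_dicts = []

grid = PlotlyQubitGrid(ds=None)
```

This keeps the constructor readable and makes the layout logic independently testable.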

Author


you are totally right. addressed in the latest PR

Comment thread qualibration_libs/plotting/grids.py Outdated
Comment on lines +218 to +223
def plotly_grid_iter(grid: 'PlotlyQubitGrid') -> Iterator:
    """
    Generator to iterate over the PlotlyQubitGrid, yielding (subplot_index, name_dict) for each qubit.
    """
    for i, name_dict in enumerate(grid.name_dicts):
        yield i, name_dict
Contributor


I think it would make more sense to use the iter method instead.
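A minimal sketch of the suggestion: move the iteration onto the class via `__iter__` instead of a module-level generator (class body trimmed to the relevant parts):

```python
class PlotlyQubitGrid:
    """Trimmed sketch showing only the iteration protocol; name_dicts
    stands in for the attribute built in the diff above."""

    def __init__(self, name_dicts):
        self.name_dicts = name_dicts

    def __iter__(self):
        # Yields (subplot_index, name_dict), matching plotly_grid_iter
        return iter(enumerate(self.name_dicts))

grid = PlotlyQubitGrid([{"qubit": "q0"}, {"qubit": "q1"}])
pairs = list(grid)  # iterate directly: for i, name_dict in grid: ...
```

Callers then write `for i, name_dict in grid:` with no separate helper function to import.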

Author


addressed in the latest PR

from quam_builder.architecture.superconducting.qubit import AnyTransmon


class TraceConfig(BaseModel):
Contributor


This needs a docstring explaining its purpose. The same applies to the classes below.

Author


Addressed in the latest PR

Comment thread qualibration_libs/plotting/grids.py Outdated
"""
Computes grid shape and qubit ordering for Plotly subplots based on a dataset's qubit coordinates.

Unlike QubitGrid, this class does not create or manage any plotting objects or axes. Instead, it provides the necessary information (number of rows, columns, and qubit ordering) to facilitate the arrangement of subplots in Plotly figures.
Contributor


I'm a bit confused by the naming here. If this class doesn't actually handle any plotting objects, it might be clearer to remove "Plotly" from the name. Right now, having both a QubitGrid and a PlotlyQubitGrid feels a bit redundant, especially if their responsibilities overlap. Would it make sense to consolidate the functionality into a single class?

Author


You are totally right. I have addressed it in the latest PR

fit_traces: List[TraceConfig] = Field(default_factory=list)


def create_plotly_figure(
Contributor


If I look in the QUA-Libs PR, I see that only one of the nodes (02a) uses this function. It should either be a common function used by all of the new refactored nodes, or it should not be a standardized function residing in Qualibration-libs.

Author


Yes, you are totally right.

I think the confusion stems from the fact that I didn't upload the application note to GitHub (it was shared over Slack). I only implemented the "proper" structure for node 02a and left the other 3 nodes in a mostly backwards-compatible flow, but still kept those plotly methods that I wrote while developing.

Nevertheless, the latest PR addresses this issue.

shanto268 and others added 23 commits July 4, 2025 20:56
- Removed unused overlay abstraction (812 lines of dead code):
  - Deleted overlays.py (507 lines)
  - Deleted overlay_backends.py (305 lines)

- Replaced hardcoded values with constants:
  - Added matplotlib tight_layout constants to PlotConstants
  - Replaced hardcoded coordinate strings with CoordinateNames
  - Added FREQ_FULL constant to CoordinateNames
  - Added mv_to_v() to UnitConverter and replaced hardcoded /1000
  - Imported UnitConverter in plotly_engine

All tests passing with no regressions.
- Removed 15 unused imports across main engine files:
  - plotly_engine.py: QubitGrid, 4 unused exceptions, RobustStatistics, extract_trace_data
  - matplotlib_engine.py: numpy, LineStyles, 4 unused exceptions, DataExtractor
  - base_engine.py: QubitGrid

No duplicate imports or commented code found.
All tests passing.
- Added common methods to BaseRenderingEngine:
  - _validate_overlay_fit() for centralized fit validation
  - _extract_overlay_parameters() for unified parameter extraction
  - _get_frequency_range() for common frequency range extraction

- Refactored overlay methods in both engines to use base class:
  - Plotly: Updated all overlay methods to use base utilities
  - Matplotlib: Removed 4 redundant helper methods

Net reduction of ~150 lines while improving maintainability.
All tests passing.
Using parallel sub-agents:

1. Data Extraction Consolidation:
   - Added common data preparation methods to data_utils module
   - Updated engines to use DataExtractor.extract_qubit_data() consistently
   - Removed manual qubit data extraction patterns

2. Grid/Subplot Unification:
   - Added 6 methods to base class for unified grid/layout management:
     - _create_grid_layout() - centralized grid creation
     - _calculate_figure_dimensions() - unified figure sizing
     - _get_subplot_spacing() - standardized spacing logic
     - _generate_subplot_titles() - centralized title generation
     - _calculate_colorbar_positions() - unified colorbar positioning
     - _create_subplot_title() - single subplot title helper
   - Both engines now use consistent grid creation patterns

Net change: +555 additions, -232 deletions (better organized)
All tests passing.
…lution for changing fit constants + addressed cosmetic changes raised in PR
refactor: cleaned up __init__ of grids and added __iter__
