feat: Robust Analysis (nodes 02a, 02b, 02c and 04b) and Standardized Plotting Framework #9
shanto268 wants to merge 45 commits into qua-platform:main
Conversation
…r handling interactive plotly plots
…-resspec-node Interactive plot via plotly added to Node 02a
- Implement TraceConfig, LayoutConfig, and PlotConfig Pydantic models
- Create create_plotly_figure function for generating interactive plots
- Support multi-qubit grid layouts with consistent styling
- Add hover templates and customizable trace properties
- Enable both raw data and fitted curve visualization
feat: Add standard_plotter module for configuration-driven plotting
Feature/node 02b c fix
…ctive plots for SOW task 2
Pull Request Overview
A concise summary of the changes to support the new qua-libs PR 325, adding robust analysis metrics and a unified plotting framework.
- Introduces standard_plotter.py to generate both Plotly and Matplotlib figures from raw and fit datasets.
- Extends grids.py with PlotlyQubitGrid and iteration helpers for Plotly subplot layouts.
- Enhances analysis modules: fitting.py now returns R² values and feature_detection.py computes new metrics (peak counts, SNR, asymmetry, skewness, artifact detection).
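For reference, the R² value returned alongside fit parameters is the standard coefficient of determination. A minimal pure-Python sketch (not the actual apply_fit implementation, just the formula it reports):

```python
def r_squared(y, y_fit):
    """Coefficient of determination: 1 - SS_res / SS_tot.

    A value of 1.0 means the fit reproduces the data exactly;
    values near 0 (or negative) indicate a poor fit.
    """
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_fit))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```
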
Reviewed Changes
Copilot reviewed 5 out of 6 changed files in this pull request and generated 2 comments.
Show a summary per file
| File | Description |
|---|---|
| qualibration_libs/plotting/standard_plotter.py | New unified plotting functions create_plotly_figure and create_matplotlib_figure. |
| qualibration_libs/plotting/grids.py | Added PlotlyQubitGrid class and plotly_grid_iter iterator to mirror physical qubit layout. |
| qualibration_libs/plotting/__init__.py | Exported new plotting functions and grid helpers. |
| qualibration_libs/analysis/fitting.py | Updated apply_fit to return fit parameters plus R²; cleaned up import order and removed old comments. |
| qualibration_libs/analysis/feature_detection.py | Added helper for Savitzky–Golay windowing and multiple new feature-detection routines. |
Comments suppressed due to low confidence (3)
qualibration_libs/plotting/standard_plotter.py:61
- Only the first PlotConfig is used even if multiple configs are passed; either enforce a single config or extend support to iterate over all provided configurations.

```python
config = plot_configs[0]  # For now, assume one config for the whole figure.
```

qualibration_libs/plotting/standard_plotter.py:72
- [nitpick] The loop index variable i is ambiguous; consider renaming to idx or subplot_idx for clarity.

```python
for i, name_dict in plotly_grid_iter(grid):
```
qualibration_libs/analysis/feature_detection.py:30
- The docstring for peaks_dips still references the old default prominence_factor=5; update it to reflect the new default of 2.

```python
def peaks_dips(
```
TheoLaudatQM left a comment

Preliminary review for resonator_spectroscopy_1D only
```python
def _opx_bandwidth_artifact(arr, prominence, window=20, exclusion=3, artifact_prominence_factor=2.0):
    # Baseline correction (ALS)
    baseline_window = _get_savgol_window_length(len(arr), 51, 3)
    baseline = savgol_filter(arr, window_length=baseline_window, polyorder=3) if baseline_window > 0 else arr
    arr_bc = arr - baseline
    # Smoothing
    smooth_window = _get_savgol_window_length(len(arr_bc), 21, 3)
    smoothed = savgol_filter(arr_bc, window_length=smooth_window, polyorder=3) if smooth_window > 0 else arr_bc
    # Noise estimation from residual
    noise = np.std(arr_bc - smoothed)
    # Find main dip (minimum)
    main_idx = np.argmin(smoothed)
    # Define window around main dip, excluding center
    start = max(0, main_idx - window)
    end = min(len(smoothed), main_idx + window + 1)
    exclusion_start = max(start, main_idx - exclusion)
    exclusion_end = min(end, main_idx + exclusion + 1)
    # Search for local maxima (peaks) in the window, excluding the dip center
    search_region = np.concatenate([
        smoothed[start:exclusion_start],
        smoothed[exclusion_end:end]
    ])
    if len(search_region) == 0:
        return np.array([False])
    # Find peaks in the search region
    peaks, properties = find_peaks(search_region, prominence=artifact_prominence_factor * noise)
    return np.array([len(peaks) > 0])
```
I am not sure that I understand how this function works at a high level. Based on its name, I am guessing that it is looking for the 5 MHz FWHM dip with a single outlier at the minimum, or something similar. However, when analyzing dataset #668, the opx_bandwidth_artifact flag is raised to True, whereas the only visible problem is related to the SNR. Moreover, by tuning the smoothing window, peaks can be detected, but the analysis still fails due to this flag being raised.
Can you please elaborate?
Right, but this doesn't explain what this function tries to detect: which features are identified as opx_bandwidth_artifacts?
I am insisting on this because with dataset #544, for instance, the 'opx_bandwidth_artifact' flag is set to True for all qubits whereas there is clearly no limitation regarding the OPX bandwidth, and it seems that it is not even part of _determine_resonator_outcome anymore...
You are absolutely correct - I have simplified the checks and gotten rid of this buggy flag in the latest PR :)
```python
baseline_window = _get_savgol_window_length(len(arr), 51, 3)
baseline = savgol_filter(arr, window_length=baseline_window, polyorder=3) if baseline_window > 0 else arr
```
This seems to remove the background, but then how come the peaks from dataset #86 are not properly identified?
Could it be because of the filtering window?
As discussed in the meeting earlier, we have designed everything to work for fitting single resonators.
One can certainly extend the present analysis scheme to fit multiple resonators.
I understand, but there are two things here:
- Several peaks are identified, so the analysis is marked as failed: this is good.
- If I disable the failing assertion, the fitting routine did converge but found a very broad peak, as it seems to have converged on the background instead of the most prominent peak. This is the part that I don't understand, considering that the baseline should be removed by the savgol filter so that the most prominent peak should still be identified. Can you please explain what I am missing here?
The decisions we made for this was biased towards not choosing a resonator when many are present but now that we have better understanding of the deliverable we have addressed this issue in the latest PR
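To illustrate the baseline-removal step being debated here, a toy sketch using a centered moving average as a stand-in for savgol_filter (pure Python, not the library code): with a well-chosen window, the background is subtracted and the dip remains the extremum; an overly long window instead leaves background structure in the corrected signal.

```python
def moving_average_baseline(arr, window):
    """Crude baseline estimate: centered moving average (stand-in for savgol_filter)."""
    n = len(arr)
    half = window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(arr[lo:hi]) / (hi - lo))
    return out

def baseline_corrected(arr, window):
    """Subtract the estimated baseline so peaks/dips ride on a flat background."""
    base = moving_average_baseline(arr, window)
    return [a - b for a, b in zip(arr, base)]
```
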
…it 1D resonator spec using phase data as well
- Added consistent preprocessing functions for peak detection in feature_detection.py.
- Introduced circle_fit_s21_resonator_model function in fitting.py for S21 circle fitting.
- Updated models.py to include S21Resonator class for fitting 1D resonator spectroscopy data.
- Refactored peak detection logic to utilize new preprocessing functions, improving code clarity and maintainability.
```python
_DELAY_FIT_PERCENTAGE = 10
_PHASES_THRESHOLD_PERCENTAGE = 80
_STD_DEV_GAUSSIAN_KERNEL = 5
_PHASE_ELEMENTS = 5
```
Same comment here: are users supposed to play with these parameters, and if so, how can they know how they impact the fit?
For instance, on dataset #544, qC2 is systematically failing, but it is very hard to understand why.
addressed in the latest PR after the fruitful discussions
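One way to make such constants discoverable is to surface them as a documented, user-facing configuration object. A hypothetical sketch (the class name and attribute descriptions are my guesses at intent, not the actual code):

```python
from dataclasses import dataclass

@dataclass
class PhaseFitConfig:
    """Tunable parameters for the phase-based resonator fit (names hypothetical).

    Attributes:
        delay_fit_percentage: Percent of the trace edges used to fit the cable delay.
        phases_threshold_percentage: Percentile threshold applied to the unwrapped phase.
        std_dev_gaussian_kernel: Width (in samples) of the Gaussian smoothing kernel.
        phase_elements: Number of edge samples used to estimate the phase offset.
    """
    delay_fit_percentage: int = 10
    phases_threshold_percentage: int = 80
    std_dev_gaussian_kernel: int = 5
    phase_elements: int = 5
```

Documenting each knob where it is declared lets a user see at a glance which parameter to adjust when a qubit like qC2 fails.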
| """ | ||
| Apply consistent preprocessing to signal data for peak detection. | ||
|
|
||
| This function performs baseline correction using ALS, applies smoothing with | ||
| Savitzky-Golay filter, and estimates noise from the residual. All functions | ||
| use identical preprocessing parameters to ensure consistency. | ||
|
|
||
| Parameters | ||
| ---------- | ||
| arr : np.ndarray | ||
| Input signal array to preprocess | ||
|
|
||
| Returns | ||
| ------- | ||
| Dict[str, np.ndarray] | ||
| Dictionary containing: | ||
| - 'smoothed': Baseline-corrected and smoothed signal | ||
| - 'noise': Estimated noise level from residual | ||
| - 'baseline': Original baseline estimate | ||
| - 'baseline_corrected': Signal with baseline removed | ||
| """ |
We typically use Google-style docstrings, so it makes sense to keep that in this pull request as well.
Noted. I will make sure the final PR will have the Google-style docstrings
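For reference, the NumPy-style docstring above rewritten in Google style might look like this (the function name is a placeholder; the body is elided):

```python
def preprocess_signal(arr):
    """Apply consistent preprocessing to signal data for peak detection.

    Performs baseline correction, Savitzky-Golay smoothing, and noise
    estimation from the residual, using identical parameters throughout.

    Args:
        arr (np.ndarray): Input signal array to preprocess.

    Returns:
        Dict[str, np.ndarray]: Dictionary with keys 'smoothed', 'noise',
            'baseline', and 'baseline_corrected'.
    """
    ...
```
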
```python
print(f"An error occurred during fitting: {e}")
traceback.print_exc()
```
Let's avoid print statements. Instead, let's use the logging module.
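A minimal sketch of the suggested replacement, wrapping the fitting call (the wrapper name is hypothetical): `logging.Logger.exception` logs at ERROR level and appends the full traceback, covering both the print and the traceback.print_exc() call.

```python
import logging

logger = logging.getLogger(__name__)

def fit_with_logging(fit_fn, *args):
    """Run a fitting callable, logging failures instead of printing them."""
    try:
        return fit_fn(*args)
    except Exception:
        # logger.exception records the message at ERROR level together with
        # the traceback, replacing print(...) / traceback.print_exc().
        logger.exception("An error occurred during fitting")
        return None
```
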
I have not addressed it since we are no longer using this model and I will delete all this code in the final PR
What do you call "final PR"? Isn't it this one?
please disregard that nomenclature 😅
```python
plt.tight_layout(rect=[0, 0, 1, 0.95]); plt.show()


def _s21_model(self, frequencies: NDArray, resonance: float, q_loaded: float, q_coupling: float, phi: float = 0.0, amplitude: float = 1.0, alpha: float = 0.0, tau: float = 0.0) -> NDArray:
```
These methods are too dense. It becomes difficult to understand what they are actually doing. They should, first of all, have docstrings and comments. We should also consider moving them to separate helper functions.
Very fair comments.
I have not addressed it since we are no longer using this model and I will delete all this code in the final PR
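To illustrate the suggested split, the S21 model could live in a documented module-level helper. This sketch uses the standard notch-resonator form and mirrors the parameter names of the method signature above; it is an assumption about the model, not the actual library code (shown scalar-valued for simplicity, where the real code is vectorized over a frequency array):

```python
import cmath
import math

def s21_notch(f, resonance, q_loaded, q_coupling,
              phi=0.0, amplitude=1.0, alpha=0.0, tau=0.0):
    """Complex S21 of a notch-type resonator at frequency f.

    Environment term: amplitude * e^{i*alpha} * e^{-2*pi*i*f*tau}
    Resonator term:   1 - (Ql / |Qc|) * e^{i*phi} / (1 + 2i*Ql*(f/fr - 1))
    """
    env = amplitude * cmath.exp(1j * alpha) * cmath.exp(-2j * math.pi * f * tau)
    res = 1 - (q_loaded / abs(q_coupling)) * cmath.exp(1j * phi) / (
        1 + 2j * q_loaded * (f / resonance - 1)
    )
    return env * res
```

Splitting the environment and resonator terms into named intermediates (or separate helpers) makes each physical contribution individually testable.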
```python
if grid_names:
    if isinstance(grid_names, str):
        grid_names = [grid_names]
    grid_indices = [
        tuple(map(int, re.findall(r"-?\d+", grid_name)))
        for grid_name in grid_names
    ]
else:
    grid_indices = [
        tuple(map(int, re.findall(r"-?\d+", str(ds.qubit.values[q_index]))))
        for q_index in range(ds.qubit.size)
    ]

if len(grid_indices) > 1:
    grid_name_mapping = dict(zip(grid_indices, ds.qubit.values))
else:
    try:
        grid_name_mapping = dict(zip(grid_indices, [str(ds.qubit.values[0])]))
    except Exception:
        grid_name_mapping = dict(zip(grid_indices, [str(ds.qubit.values)]))

grid_row_idxs = [idx[1] for idx in grid_indices]
grid_col_idxs = [idx[0] for idx in grid_indices]
min_grid_row = min(grid_row_idxs)
max_grid_row = max(grid_row_idxs)
min_grid_col = min(grid_col_idxs)
n_rows = max_grid_row - min_grid_row + 1
n_cols = max(grid_col_idxs) - min_grid_col + 1

# Build grid order using the same coordinate mapping logic as QubitGrid
# This ensures consistent layout between matplotlib and plotly versions
grid_order = []
name_dicts = []

for row in range(n_rows):
    for col in range(n_cols):
        # Use the same coordinate transformation as QubitGrid:
        # grid_row = max(grid_row_idxs) - row
        # grid_col = col + min_grid_col
        grid_row = max_grid_row - row
        grid_col = col + min_grid_col

        qubit_name = grid_name_mapping.get((grid_col, grid_row))
        grid_order.append(qubit_name)

        # Only add to name_dicts if qubit exists at this position
        if qubit_name is not None:
            name_dicts.append({ds.qubit.name: qubit_name})

self.n_rows = n_rows
self.n_cols = n_cols
self.grid_order = grid_order
self.name_dicts = name_dicts
```
There's too much functionality in the init method. We should consider moving it to a separate private method.
you are totally right. addressed in the latest PR
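A minimal sketch of the suggested refactor (the helper name is hypothetical; the heavy logic is stubbed out here, whereas the real implementation would hold the index-parsing and coordinate-mapping code from the diff above):

```python
class PlotlyQubitGrid:
    """Computes grid shape and qubit ordering for Plotly subplots."""

    def __init__(self, ds, grid_names=None):
        # __init__ only stores the computed layout; all the work happens
        # in a private helper, keeping the constructor readable.
        layout = self._compute_grid_layout(ds, grid_names)
        self.n_rows = layout["n_rows"]
        self.n_cols = layout["n_cols"]
        self.grid_order = layout["grid_order"]
        self.name_dicts = layout["name_dicts"]

    @staticmethod
    def _compute_grid_layout(ds, grid_names):
        """Parse qubit coordinates and build the subplot layout (stubbed here)."""
        return {"n_rows": 1, "n_cols": 1, "grid_order": [], "name_dicts": []}
```
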
```python
def plotly_grid_iter(grid: 'PlotlyQubitGrid') -> Iterator:
    """
    Generator to iterate over the PlotlyQubitGrid, yielding (subplot_index, name_dict) for each qubit.
    """
    for i, name_dict in enumerate(grid.name_dicts):
        yield i, name_dict
```
I think it would make more sense to use the __iter__ method instead.
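Folding the free function into the class could look like this sketch (the simplified constructor is hypothetical, shown only to make the example self-contained):

```python
class PlotlyQubitGrid:
    def __init__(self, name_dicts):
        self.name_dicts = name_dicts

    def __iter__(self):
        """Yield (subplot_index, name_dict) for each qubit,
        replacing the free plotly_grid_iter function."""
        return iter(enumerate(self.name_dicts))
```

Callers would then write `for idx, name_dict in grid:` directly instead of `for idx, name_dict in plotly_grid_iter(grid):`.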
```python
from quam_builder.architecture.superconducting.qubit import AnyTransmon


class TraceConfig(BaseModel):
```
This needs a docstring explaining its purpose. The same applies to the classes below.
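A docstring along these lines would help. Illustrated with a stdlib dataclass stand-in since the actual models subclass pydantic.BaseModel, and with field names that are illustrative rather than taken from the diff:

```python
from dataclasses import dataclass, field

@dataclass
class TraceConfig:
    """Configuration for a single plotted trace.

    Describes how one data series is drawn: which dataset variables feed
    the x/y axes and any per-trace styling. Consumed by the figure-building
    functions when populating each subplot.
    """
    x_source: str = "freq"    # dataset coordinate for the x axis (illustrative)
    y_source: str = "IQ_abs"  # dataset variable for the y axis (illustrative)
    style: dict = field(default_factory=dict)  # trace styling overrides
```
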
| """ | ||
| Computes grid shape and qubit ordering for Plotly subplots based on a dataset's qubit coordinates. | ||
|
|
||
| Unlike QubitGrid, this class does not create or manage any plotting objects or axes. Instead, it provides the necessary information (number of rows, columns, and qubit ordering) to facilitate the arrangement of subplots in Plotly figures. |
I’m a bit confused by the naming here. If this class doesn’t actually handle any plotting objects, it might be clearer to remove “Plotly” from the name. Right now, having both a QubitGrid and a PlotlyQubitGrid feels a bit redundant, especially if their responsibilities overlap. Would it make sense to consolidate the functionality into a single class?
You are totally right. I have addressed it in the latest PR
```python
fit_traces: List[TraceConfig] = Field(default_factory=list)


def create_plotly_figure(
```
If I look in the QUA-Libs PR, I see that only one of the nodes (02a) uses this function. It should either be a common function used by all of the new refactored nodes, or it should not be a standardized function residing in Qualibration-libs.
Yes, you are totally right.
I think the confusion stems from the fact that I didn't upload the application note to GitHub (it was shared over Slack). I only implemented the "proper" structure for node 02a and left the other three nodes in a mostly backwards-compatible flow, but still kept those plotly methods that I wrote while developing.
Nevertheless, the latest PR addresses this issue.
… for all the four nodes
… data utilities, and base class consolidation
- Removed unused overlay abstraction (812 lines of dead code):
  - Deleted overlays.py (507 lines)
  - Deleted overlay_backends.py (305 lines)
- Replaced hardcoded values with constants:
  - Added matplotlib tight_layout constants to PlotConstants
  - Replaced hardcoded coordinate strings with CoordinateNames
  - Added FREQ_FULL constant to CoordinateNames
  - Added mv_to_v() to UnitConverter and replaced hardcoded /1000
  - Imported UnitConverter in plotly_engine

All tests passing with no regressions.
- Removed 15 unused imports across main engine files:
  - plotly_engine.py: QubitGrid, 4 unused exceptions, RobustStatistics, extract_trace_data
  - matplotlib_engine.py: numpy, LineStyles, 4 unused exceptions, DataExtractor
  - base_engine.py: QubitGrid

No duplicate imports or commented code found. All tests passing.
- Added common methods to BaseRenderingEngine:
  - _validate_overlay_fit() for centralized fit validation
  - _extract_overlay_parameters() for unified parameter extraction
  - _get_frequency_range() for common frequency range extraction
- Refactored overlay methods in both engines to use base class:
  - Plotly: Updated all overlay methods to use base utilities
  - Matplotlib: Removed 4 redundant helper methods

Net reduction of ~150 lines while improving maintainability. All tests passing.
Using parallel sub-agents:
1. Data Extraction Consolidation:
- Added common data preparation methods to data_utils module
- Updated engines to use DataExtractor.extract_qubit_data() consistently
- Removed manual qubit data extraction patterns
2. Grid/Subplot Unification:
- Added 6 methods to base class for unified grid/layout management:
- _create_grid_layout() - centralized grid creation
- _calculate_figure_dimensions() - unified figure sizing
- _get_subplot_spacing() - standardized spacing logic
- _generate_subplot_titles() - centralized title generation
- _calculate_colorbar_positions() - unified colorbar positioning
- _create_subplot_title() - single subplot title helper
- Both engines now use consistent grid creation patterns
Net change: +555 additions, -232 deletions (better organized)
All tests passing.
…or and utility classes
…ized, dynamic, and scalable architecture
…lution for changing fit constants + addressed cosmetic changes raised in PR
Addressing comments from [PR 09](qua-platform#9)
refactor: cleaned up __init__ of grids and added __iter__
Working on pr fixes
Updated codebase needed to work with qua-libs PR 325