A comprehensive, step-by-step color correction pipeline for digital images. This package integrates flat-field correction (FFC), gamma correction (GC), white balance (WB), and color correction (CC) into a unified, user-friendly workflow. After training on a reference image with a color checker chart (and optionally a white-field image for FFC), the learned corrections can be applied to any new image captured under the same conditions; no color chart is required for subsequent images.
This package builds upon a previous package, ML_ColorCorrection_tool.
A UI version of this package can be found at ColorCorrectionPackage_UI.
• Flat-Field Correction (FFC)
Automatically detects the "white" background image (or lets you crop it manually), fits an n-degree 2D surface describing the light distribution in the FOV, and extrapolates it to the full image.
• Saturation Check / Extrapolation
Identifies and fixes saturated patches on the chart before proceeding, ensuring accurate downstream corrections.
• Gamma Correction (GC)
Fits an optimal polynomial mapping (up to a configurable degree) between measured neutral-patch intensities and reference values, and applies it to the entire image.
• White Balance (WB)
Diagonal white-balance correction using the neutral patches of the color checker. Computes a diagonal gain matrix and applies it to the entire image.
• Color Correction (CC)
Two methods:
- Conventional ("conv"): configurable polynomial expansion with the Finlayson 2015 method; produces a 3×n matrix that can be applied to the entire image.
- Custom ("ours"): machine learning with linear regression, PLS regression, or neural networks; produces a model that can be applied to the entire image.
• Predict on New Images
Once models are saved, apply FFC → GC → WB → CC in sequence to any new photograph; no chart needed.
• ⚡ Hardware Acceleration (Numba/CUDA)
Automatic detection of CPU parallelism and CUDA at import time. Numba-JIT kernels accelerate sRGB→Lab conversion (via precomputed LUTs), 3-D LUT trilinear interpolation for CC prediction, and FFC. Falls back transparently to NumPy when Numba CUDA is unavailable. Delivers up to 1.50× speedup (avg 1.41× over v1.3.4) with no code changes required.
• 📦 Batch Prediction (predict_images())
Applies the full FFC → GC → WB → CC pipeline to a list of images in parallel using a ThreadPoolExecutor. Accepts file paths or pre-loaded arrays, and an optional progress_callback for real-time progress tracking.
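For intuition, the diagonal white-balance step above can be sketched in a few lines of NumPy. This is a minimal illustration, not the package's actual implementation; the function name diagonal_wb and the sample patch values are hypothetical:

```python
import numpy as np

def diagonal_wb(img, neutral_rgb):
    """Von Kries-style diagonal white balance: scale each channel so the
    measured neutral patch maps to equal R = G = B."""
    neutral_rgb = np.asarray(neutral_rgb, dtype=np.float64)
    gains = neutral_rgb.mean() / neutral_rgb  # per-channel gains (the diagonal matrix)
    return np.clip(img * gains, 0.0, 1.0)

# A neutral patch measured with a slight blue cast becomes neutral after correction
patch = np.full((2, 2, 3), [0.48, 0.52, 0.55])
balanced = diagonal_wb(patch, neutral_rgb=[0.48, 0.52, 0.55])
# balanced now has equal R, G, B values at every pixel
```

The same gains, estimated once from the chart's neutral patches, can then be applied to any image captured under the same illuminant.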
The ColorCorrectionPipeline package includes the following key components:
ColorCorrectionPipeline/
├── __init__.py          # Package exports
├── __version__.py       # Version information
├── pipeline.py          # Main ColorCorrection class
├── models.py            # Model definitions and persistence
├── config.py            # Configuration management
├── constants.py         # Package constants
├── core/                # Core algorithms
│   ├── __init__.py
│   ├── accel.py         # Hardware acceleration (Numba CPU/CUDA kernels)
│   ├── color_spaces.py  # Color space conversions
│   ├── correction.py    # Correction algorithms
│   ├── metrics.py       # Quality metrics (ΔE)
│   ├── transforms.py    # Image transformations
│   └── utils.py         # Utility functions
├── flat_field/          # Flat-Field Correction module
│   ├── __init__.py
│   ├── correction.py    # FFC implementation
│   └── models/          # Pre-trained models (included in package)
│       ├── __init__.py
│       └── plane_det_model_YOLO_512_n.pt  # YOLO model for automatic white plane detection
└── io/                  # I/O utilities
    ├── __init__.py
    ├── readers.py       # Image readers
    └── writers.py       # Image writers
Note: The YOLO model (plane_det_model_YOLO_512_n.pt) is automatically included when you install the package, so you don't need to download or specify the model path separately.
Install directly from PyPI:
pip install ColorCorrectionPipeline
numba is installed automatically. Hardware acceleration (CPU parallelism and CUDA, if an NVIDIA GPU is present) is detected and enabled at import time; no extra install flags or code changes are needed.
For the latest features or development:
# Clone the repository
git clone https://github.com/collinswakholi/ColorCorrectionPackage.git
cd ColorCorrectionPackage
# Install in editable mode with development dependencies
pip install -e ".[dev]"
Requirements:
• Python: 3.8 or higher
• Operating System: Windows, macOS, Linux
• Memory: Minimum 4GB RAM (8GB recommended for large images)
• GPU: Optional (CUDA-compatible GPU for accelerated processing)
The package automatically installs the following dependencies:
Core Dependencies:
• numpy - Numerical computing
• scipy - Scientific computing
• scikit-learn - Machine learning algorithms
• opencv-python, opencv-contrib-python - Computer vision
• torch - Deep learning framework
• ultralytics - YOLO object detection
• numba - JIT-compiled CPU/CUDA kernels for accelerated image processing
Image Processing:
• scikit-image - Image processing algorithms
• colour-science - Color science computations
• colour-checker-detection - Color checker detection
Visualization & Analysis:
• matplotlib, plotly, seaborn - Plotting and visualization
• pandas - Data manipulation
• statsmodels - Statistical modeling
Development & Testing:
• pytest - Testing framework
Verify your installation:
import ColorCorrectionPipeline
from ColorCorrectionPipeline import ColorCorrection
Below is a simple example of how to use the package:
import os
import cv2
import numpy as np
import pandas as pd
from ColorCorrectionPipeline import ColorCorrection, Config
from ColorCorrectionPipeline.core.utils import to_float64
# ─────────────────────────────────────────────────────────────────────────────
# 1. File paths
# ─────────────────────────────────────────────────────────────────────────────
IMG_PATH = "Data/Images/Sample_1.JPG" # Image containing color checker
WHITE_PATH = "Data/Images/white.JPG" # Optional White background image for FFC
TEST_IMAGE_PATH = "Data/Images/Image_1.JPG" # Optional New image for prediction
# Output directory (only used if config.save=True)
SAVE_PATH = os.path.join(os.getcwd(), "results")
# ─────────────────────────────────────────────────────────────────────────────
# 2. Load images and convert to RGB float64 in [0,1]
# ─────────────────────────────────────────────────────────────────────────────
img_bgr = cv2.imread(IMG_PATH)
img_rgb = to_float64(img_bgr[:, :, ::-1])  # BGR → RGB, float64 in [0, 1]
white_bgr = cv2.imread(WHITE_PATH)
test_bgr = cv2.imread(TEST_IMAGE_PATH)
test_rgb = to_float64(test_bgr[:, :, ::-1])  # BGR → RGB, float64 in [0, 1]
img_name = os.path.splitext(os.path.basename(IMG_PATH))[0]
# ─────────────────────────────────────────────────────────────────────────────
# 3. Configure per-stage parameters
# ─────────────────────────────────────────────────────────────────────────────
ffc_kwargs = {
"manual_crop": False, # Optional, for manual white plane ROI selection
"show": False, # Whether to show intermediate plots
"bins": 50, # Number of bins used for sampling the intensity profile of the white plane
"smooth_window": 5, # Window size for smoothing the intensity profile
"get_deltaE": True, # Whether to calculate and return deltaE (CIEDE2000)
"fit_method": "pls", # can be linear, nn, pls, or svm, default is linear
"interactions": True, # Whether to include interactions in the polynomial expansion
"max_iter": 1000, # Maximum number of iterations
"tol": 1e-8, # Tolerance for stopping criterion
"verbose": False, # Whether to print verbose output
"random_seed": 0, # Random seed
}
# Gamma Correction (GC) kwargs:
gc_kwargs = {
"max_degree": 5, # Maximum polynomial degree for fitting gamma profile
"show": False, # Whether to show intermediate plots
"get_deltaE": True, # Whether to calculate and return deltaE (CIEDE2000)
}
# White Balance (WB) kwargs:
wb_kwargs = {
"show": False, # Whether to show intermediate plots
"get_deltaE": True, # Whether to calculate and return deltaE (CIEDE2000)
}
# Color Correction (CC) kwargs:
cc_kwargs = {
'cc_method': 'ours', # 'conv' or 'ours'
'method': 'Finlayson 2015', # conventional method (used only when cc_method == 'conv')
'mtd': 'nn', # custom model type (used only when cc_method == 'ours'): 'linear', 'nn', or 'pls'
'degree': 2, # degree of polynomial to fit
'max_iterations': 10000, # max iterations for fitting
'random_state': 0, # random seed
'tol': 1e-8, # tolerance for fitting
'verbose': False, # whether to print verbose output
'param_search': False, # whether to use parameter search
'show': False, # whether to show plots
'get_deltaE': True, # whether to compute deltaE
'n_samples': 50, # number of samples to use for parameter search
# Only used if mtd == 'pls' (leave commented out otherwise):
# 'ncomp': 1, # number of PLS components
# The following keys are only used if mtd == 'nn':
'hidden_layers': [64, 32, 16], # hidden layer sizes for neural network
'learning_rate': 0.001, # learning rate for neural network
'batch_size': 16, # batch size for neural network
'patience': 10, # patience for early stopping
'dropout_rate': 0.2, # dropout rate for neural network
'optim_type': 'adam', # optimizer type for neural network
'use_batch_norm': True, # whether to use batch normalization
}
# ─────────────────────────────────────────────────────────────────────────────
# 4. Build Config and run the Training Pipeline
# ─────────────────────────────────────────────────────────────────────────────
config = Config(
do_ffc=True, # Change to False if you don't want to run FFC
do_gc=True, # Change to False if you don't want to run GC
do_wb=True, # Change to False if you don't want to run WB
do_cc=True, # Change to False if you don't want to run CC
save=False, # Change to True if you want to save models + CSVs
save_path=SAVE_PATH, # Directory for saving outputs (models & CSV)
check_saturation=True, # Change to False if you don't want to check if color chart patches are saturated
REF_ILLUMINANT=None, # Defaults to D65; supply np.ndarray if needed
FFC_kwargs=ffc_kwargs,
GC_kwargs=gc_kwargs,
WB_kwargs=wb_kwargs,
CC_kwargs=cc_kwargs,
)
cc = ColorCorrection() # Initialize ColorCorrection class
metrics, corrected_imgs, errors = cc.run(
Image=img_rgb,
White_Image=to_float64(white_bgr[:, :, ::-1]), # Optional; converted to RGB float like the main image
name_=img_name,
config=config,
)
# Convert metrics (dict) → pandas.DataFrame for display
metrics_df = pd.DataFrame.from_dict(metrics)
print("Per-patch and summary metrics for each stage:\n", metrics_df.head())
# ─────────────────────────────────────────────────────────────────────────────
# 5. Predict on a New Image (no color-checker required)
# ─────────────────────────────────────────────────────────────────────────────
test_results = cc.predict_image(test_rgb, show=True)
# ─────────────────────────────────────────────────────────────────────────────
# 6. Batch-predict multiple images in parallel
# ─────────────────────────────────────────────────────────────────────────────
def on_progress(done, total, name):
print(f"[{done}/{total}] finished: {name}")
batch_results = cc.predict_images(
images=["Data/Images/Image_1.JPG", "Data/Images/Image_2.JPG"],
show=False,
max_workers=4,
on_progress=on_progress,
)
# batch_results is a list of dicts, one per image, with keys: FFC, GC, WB, CC
To run the example above, you need:
- A photograph with a color checker chart: Data/Images/Sample_1.JPG
- An optional matching white-field image (for FFC): Data/Images/white.JPG
- The YOLO model for detecting the white plane (automatically included in the package): ColorCorrectionPipeline/flat_field/models/plane_det_model_YOLO_512_n.pt
- Another optional image (no chart required) to test the learned corrections: Data/Images/Image_1.JPG
The ColorCorrectionPipeline delivers significant improvements in color accuracy and consistency. Below are sample results demonstrating the effectiveness of the complete correction pipeline:
Raw images straight from the camera showing color cast, vignetting, and inconsistent color reproduction:
Same images after applying the complete FFC → GC → WB → CC pipeline, showing improved color accuracy, uniform illumination, and consistent color reproduction:
Key Improvements:
• ✅ Eliminated vignetting and illumination non-uniformities (FFC)
• ✅ Corrected gamma response for accurate neutral tones (GC)
• ✅ Achieved neutral white balance under the reference illuminant (WB)
• ✅ Accurate color reproduction matching reference standards (CC)
• ✅ Consistent results across multiple images captured under the same conditions
Typical results after full pipeline correction achieve ΔE < 2.0 for most images, with many achieving ΔE < 1.2.
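For intuition about what a ΔE number means, here is a minimal sketch using the simpler CIE76 formula (plain Euclidean distance in CIELAB), rather than the CIEDE2000 metric the package actually reports; the function name delta_e76 and the Lab values are illustrative:

```python
import numpy as np

def delta_e76(lab1, lab2):
    """CIE76 colour difference: Euclidean distance in CIELAB space.
    (The package reports the perceptually more uniform CIEDE2000.)"""
    return np.linalg.norm(np.asarray(lab1, float) - np.asarray(lab2, float), axis=-1)

# Per-patch measured vs. reference Lab values (illustrative numbers)
measured  = np.array([[52.0, 1.2, -0.8], [70.5, -2.1, 3.0]])
reference = np.array([[51.0, 0.0,  0.0], [71.0, -1.0, 2.0]])
de = delta_e76(measured, reference)
print(de.mean())  # average ΔE over patches; values below ~2 are barely perceptible
```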
We welcome contributions! Please see our contributing guidelines below:
- Fork and Clone
git clone https://github.com/collinswakholi/ColorCorrectionPackage.git
cd ColorCorrectionPackage
- Create Development Environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -e ".[dev]"
- Run Tests
pytest tests/
- Code Style
black .
- Submit a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
If you use this package in your research, please cite:
@software{colorcorrectionpipeline,
author = {Wakholi, Collins and Rippner, Devin A.},
title = {ColorCorrectionPipeline: A stepwise color-correction pipeline},
url = {https://github.com/collinswakholi/ColorCorrectionPackage},
version = {1.4.3},
year = {2026}
}
We would like to gratefully acknowledge:
β’ Devin A. Rippner for invaluable technical guidance
β’ ORISE for fellowship support
β’ USDA-ARS for funding and research opportunities
Made with ❤️ by Collins Wakholi
For bug reports and feature requests, please open an issue on GitHub.