CT-BaB: Certified Training with Branch-and-Bound (e.g., for Lyapunov-stable Neural Control)

CT-BaB is a generally formulated framework of verification-aware neural network training (a.k.a. certified training) with training-time branch-and-bound. It aims to train verification-friendly neural networks with stronger verifiable guarantees that hold for an entire input region-of-interest. As a case study, we demonstrate this framework on an application of learning Lyapunov-stable neural network-based controllers in nonlinear dynamical systems, where the controllers can be verified to satisfy the Lyapunov asymptotic stability within a region-of-attraction.
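
To make the verification target concrete, here is a minimal sketch (not from the CT-BaB codebase; the system and Lyapunov candidate below are our own toy choices) of the conditions a verifier must certify for a discrete-time system x_{t+1} = f(x_t): the Lyapunov function V is positive away from the equilibrium and strictly decreases along trajectories.

```python
import numpy as np

def f(x):
    # A simple stable linear system, used purely for illustration.
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    return A @ x

def V(x):
    # Toy quadratic Lyapunov candidate V(x) = ||x||^2.
    return float(x @ x)

def lyapunov_conditions_hold(x, eps=1e-9):
    # V must be positive away from the equilibrium and must strictly
    # decrease along trajectories: V(f(x)) - V(x) < 0.
    return V(x) > eps and V(f(x)) - V(x) < 0

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, size=(1000, 2))
print(all(lyapunov_conditions_hold(x)
          for x in samples if np.linalg.norm(x) > 1e-3))
```

Sampling only spot-checks these conditions; the point of CT-BaB is to train models for which a verifier can certify them over the entire region-of-interest.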

CT-BaB yields verification-friendly models that can be verified more efficiently at test time while achieving stronger verifiable guarantees with a larger ROA. On the largest output-feedback 2D Quadrotor system in our experiments, CT-BaB reduces verification time by more than 11× relative to the previous state-of-the-art baseline while achieving a 164× larger ROA.

More details in our paper:

Certified Training with Branch-and-Bound for Lyapunov-stable Neural Control. Zhouxing Shi, Haoyu Li, Cho-Jui Hsieh, and Huan Zhang.

Setup

Python 3.12+ is recommended.

Install dependencies:

git clone --recursive https://github.com/Verified-Intelligence/alpha-beta-CROWN.git
(cd alpha-beta-CROWN/auto_LiRPA; pip install -e .)
(cd alpha-beta-CROWN/complete_verifier; pip install -r requirements.txt)
pip install -r requirements.txt

Usage

Training

A configuration file CONFIG_FILE and an output directory OUTPUT_DIR must be specified. Training arguments are loaded from the configuration file and may be overridden on the command line. Configuration files for each supported system are provided in configs/. To train a model, run:

python train.py --dir $OUTPUT_DIR --config $CONFIG_FILE

Key options:

  • dynamics: Name of the dynamical system (e.g., pendulum, pendulum_small_torque, pendulum_output_feedback, path_tracking, path_tracking_small_torque, cartpole, pvtol, quadrotor2d, and quadrotor2d_output_feedback)
  • lower_limit, upper_limit: Lists of values to specify the region-of-interest
  • max_init_size: Maximum size of each initial sub-region

Modeling:

  • lyapunov_func: Type of Lyapunov function (quadratic or nn)
  • lyapunov_R_rows: Number of rows in quadratic Lyapunov parameter matrix
  • lyapunov_width, lyapunov_depth: Width and depth for NN Lyapunov function
  • controller_width, controller_depth: Width and depth of controller network
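
As a sketch of how the quadratic option might be parameterized (our assumption, suggested by the lyapunov_R_rows option rather than confirmed by the repository): V(x) = ||R x||^2 for a learnable matrix R, which is positive semidefinite by construction for any R.

```python
import numpy as np

# Hypothetical quadratic Lyapunov parameterization suggested by the
# `lyapunov_R_rows` option: V(x) = ||R x||^2 = x^T (R^T R) x, which is
# nonnegative for any matrix R and zero at the equilibrium x = 0.

def make_quadratic_lyapunov(R):
    R = np.asarray(R, dtype=float)
    def V(x):
        z = R @ np.asarray(x, dtype=float)
        return float(z @ z)
    return V

R = np.array([[1.0, 0.5], [0.0, 2.0]])  # 2 rows -> lyapunov_R_rows = 2
V = make_quadratic_lyapunov(R)
print(V(np.zeros(2)))  # 0.0 at the equilibrium
```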

Region-of-attraction (ROA):

  • rho_size: Input size for sampling data points to compute the ROA objective
  • rho_ratio: Ratio of sampled data points desired to be within ROA
  • rho_penalty: Strength of regularization for ROA size
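
One way to read these options (our interpretation, not the repository's exact objective): with the sublevel set {x : V(x) ≤ ρ} as the ROA estimate, training can encourage a target fraction of points sampled from the region-of-interest to fall inside it.

```python
import numpy as np

# Illustrative sketch (our interpretation of `rho_size` / `rho_ratio`):
# estimate the fraction of points sampled uniformly from the
# region-of-interest that fall inside the sublevel set {x : V(x) <= rho}.

def roa_fraction(V, lower, upper, rho, n_samples=10_000, seed=0):
    rng = np.random.default_rng(seed)
    xs = rng.uniform(lower, upper, size=(n_samples, len(lower)))
    inside = np.array([V(x) <= rho for x in xs])
    return inside.mean()

V = lambda x: float(x @ x)  # toy quadratic Lyapunov function
frac = roa_fraction(V, lower=[-1.0, -1.0], upper=[1.0, 1.0], rho=1.0)
print(frac)  # close to pi/4 ≈ 0.785 (unit disk inside the unit square)
```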

Training hyperparameters:

  • batch_size, lr: Batch size and learning rate
  • steps: Total training steps
  • margin: Margin for the output constraint

Checkpointing:

  • dir: Output directory for checkpoints and logs
  • hf: Optional HuggingFace model hub path for uploading

Evaluation

The evaluation stage evaluates the trained model on the training domains, exports unverified input subregions, and computes the ROA. Suppose CHECKPOINT_NAME is the checkpoint you want to evaluate (e.g., OUTPUT_DIR/1000 for the checkpoint saved at step 1000); then run:

python train.py --dir $OUTPUT_DIR --config $CONFIG_FILE \
    --eval --load $CHECKPOINT_NAME.pt --load_domains $OUTPUT_DIR/domains.pt

You may adjust --batch_size and --eval_roa_ticks to fit or maximize the utilization of your GPU memory. This command reports the proportion of verified training domains, exports the unverified ones, and reports the ROA size (both the raw ROA size and the size scaled by the ROI size).

Verification

Finally, run a complete verifier (α,β-CROWN, configured to use auto_LiRPA with massive input-space branch-and-bound) on the exported unverified domains. A verification configuration file must be provided; the ones we used for each supported system are in verification/. Run:

PYTHONPATH=$(pwd) python -u alpha-beta-CROWN/complete_verifier/abcrown.py \
    --config $VERIFICATION_CONFIG_FILE --load_model $CHECKPOINT_NAME.pt \
    --csv_name $CHECKPOINT_NAME/specs_unverified/spec.csv \
    --input_split_presplit_domains $CHECKPOINT_NAME/specs_unverified

Reference

@article{shi2024ctbab,
  title={Certified Training with Branch-and-Bound for Lyapunov-stable Neural Control},
  author={Shi, Zhouxing and Li, Haoyu and Hsieh, Cho-Jui and Zhang, Huan},
  journal={arXiv preprint arXiv:2411.18235},
  year={2024}
}
