Releases: jkitchin/discopt
discopt v0.3.0
Highlights
discopt 0.3.0 is a feature-heavy release that broadens the solver surface and adds two entirely new modeling capabilities on top of the v0.2.5 MINLP core:
- `discopt.mo` -- multi-objective optimization via scalarization (weighted sum, AUGMECON2 ε-constraint, weighted Tchebycheff, NBI, NNC) with a `ParetoFront` container and hypervolume / IGD / spread / ε-indicator quality metrics. Every scalarized subproblem inherits the full MINLP / differentiable / global-solver stack.
- `discopt.doe` -- model-based design of experiments. Identifiability, estimability, and profile-likelihood diagnostics; five model-discrimination criteria (HR, BF, JR, MI, DT) with sequential-design and Akaike-weight model selection; batch / parallel experimental design.
- AMP -- Adaptive Multivariate Partitioning global MINLP solver with the soundness guarantee `LB_k <= global_opt <= UB_k` at every iteration.
- SUSPECT-style convexity detector with sound certificates feeding the convex NLP fast path, tighter αBB underestimators, and MO scalarization fast paths.
- Claude Code skills + CLI installer: 20 expert agents, plus `discopt install-skills` to drop them into `~/.claude/skills/`.
- ripopt 0.6.1 → 0.7.0 with a number of correctness fixes (HiGHS LP/QP false optimality on wide bounds, single-solve starting points, RLT cuts on bilinear terms with no auxiliary index, residuals using all array observations).
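To make the scalarization idea concrete, here is a minimal, self-contained sketch of weighted-sum scalarization (not discopt's API -- the function and objective names are illustrative): sweeping the weight over [0, 1] and minimizing each scalarized subproblem traces out the convex part of a two-objective Pareto front.

```python
import numpy as np

def weighted_sum_front(f1, f2, xs, weights):
    """Trace a Pareto front by minimizing w*f1 + (1-w)*f2 over a grid.

    Purely illustrative, not discopt's API.  Weighted-sum scalarization
    recovers every point of a *convex* Pareto front as w sweeps [0, 1];
    nonconvex fronts need ε-constraint / Tchebycheff-style methods.
    """
    front = []
    for w in weights:
        vals = w * f1(xs) + (1.0 - w) * f2(xs)
        x_star = xs[np.argmin(vals)]          # minimizer of the scalarized problem
        front.append((float(f1(x_star)), float(f2(x_star))))
    return front

f1 = lambda x: x**2            # first objective, minimized at x = 0
f2 = lambda x: (x - 2.0)**2    # second objective, minimized at x = 2
xs = np.linspace(-1.0, 3.0, 4001)
front = weighted_sum_front(f1, f2, xs, np.linspace(0.0, 1.0, 11))
```

Each returned pair is a nondominated trade-off: w = 0 yields the f2-optimal point, w = 1 the f1-optimal point, and intermediate weights interpolate along the front.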
The companion manuscript at `manuscript/discopt.org` is updated with sections covering the convexity certifier, AMP, the DOE diagnostics, and multi-objective optimization, exported to `manuscript/discopt.pdf` (36 pages).
This release skips the never-tagged 0.2.6 and folds its draft entries into 0.3.0. The full changelog is below.
[0.3.0] - 2026-04-24
This release skips the never-tagged 0.2.6 and folds its draft entries into 0.3.0 along with the post-0.2.6 feature and infrastructure work.
Added
- `discopt.mo` -- multi-objective optimization (feat(mo): multi-objective optimization via scalarization). Weighted-sum, AUGMECON2 ε-constraint, weighted-Tchebycheff, NBI, and NNC scalarizations; ideal/nadir payoff-table utilities; `ParetoFront` container; hypervolume / IGD / spread / ε-indicator quality metrics under `discopt.mo.indicators`.
- `discopt.doe` -- model-based design of experiments. Identifiability + estimability + profile-likelihood analysis (feat(doe): identifiability + estimability + profile likelihood, #48); model-discrimination criteria + selection + sequential-design loop (feat(doe): model discrimination, #49, #50); batch / parallel experimental design (feat(doe): batch / parallel experimental design).
- AMP -- Adaptive Multivariate Partitioning global MINLP solver (feat(amp), #44). Iterates MILP relaxation -> NLP subproblem -> partition refinement with the soundness guarantee `LB_k <= global_opt <= UB_k` at every iteration.
- SUSPECT-style convexity detector with sound certificates (#46). Structural convexity / concavity / monotonicity proofs for use by the convex NLP fast path and `discopt.mo` reformulations.
- Claude Code skills + CLI installer (feat(cli): ship Claude Code skills in package + discopt install-skills; feat(skills): 20 discopt feature / algorithm expert agents). 20 expert agents shipped in the package and installable into a user's `~/.claude/skills/` via `discopt install-skills`.
- Crucible knowledge base tracked in git (feat(crucible): track wiki, bib, and 3 new articles in git).
- Zenodo metadata and refined manuscript sections (#47).
- `RELEASE.md` -- authoritative release checklist documenting the procedure for cutting a discopt release.
- `CHANGELOG.md` in Keep a Changelog format.
- Local `cargo-fmt` pre-commit hook so Rust formatting is enforced alongside `ruff` and `mypy`.
Changed
- `ripopt` workspace dependency `0.6.1` -> `0.7.0` (via `0.6.2`; `Cargo.toml`, `Cargo.lock`). The `0.6.2` step transitively updated `rmumps` `0.1.0` -> `0.1.1`; the `0.7.0` step adapted `crates/discopt-python/src/ripopt_bindings.rs` to the new `NlpProblem` trait signatures: evaluation methods (`objective`, `gradient`, `constraints`, `jacobian_values`, `hessian_values`) now take an explicit `new_x: bool` flag and return `bool` (success / evaluation failure), matching Ipopt's TNLP contract. Added match arms for the new `SolveStatus::Acceptable`, `SolveStatus::EvaluationError`, and `SolveStatus::UserRequestedStop` variants, surfaced as `"acceptable"` / `"evaluation_error"` / `"user_requested_stop"` on the Python side; `acceptable` maps to `SolveStatus.OPTIMAL` (KKT residuals within Ipopt's relaxed acceptable-level tolerances).
- `_solve_continuous` (pure-continuous NLP fast path) now promotes the default `nlp_solver="ipm"` to `"ipopt"` for single-problem solves. The pure-JAX IPM's acceptable-tolerance check only covers variable-bound complementarity, so on problems with unbounded variables plus inequality constraints it could terminate at a non-KKT point and report OPTIMAL. Ipopt is more reliable for single solves; the JAX IPM remains the default for B&B subproblems.
- `differentiable_solve` and `differentiable_solve_l3` default backend changed from `"ipm"` to `"ipopt"` for the same reason.
- `solver` now routes pure-MILP problems through HiGHS MIP with a B&B fallback (fix(solver): route MILP through HiGHS MIP with B&B fallback).
- DAE collocation perf (perf(dae): vectorize collocation and fix sparse Jacobian for NMPC warm solves). Vectorized collocation residuals; sparse Jacobian assembly fixed so NMPC warm-start solves don't densify.
- `manuscript/discopt.tex` is no longer tracked -- it is generated from `manuscript/discopt.org`.
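The `new_x` flag in the TNLP-style contract exists so an evaluator can share expensive per-point work across objective, gradient, and constraint calls at the same `x`. A minimal Python sketch of that caching pattern (hypothetical class and method names, not the ripopt bindings themselves):

```python
class CachedNlp:
    """TNLP-style evaluator sketch: new_x=True signals that x changed since
    the last call, so point-dependent caches must be recomputed; each method
    returns True on success and False to signal an evaluation failure."""

    def __init__(self):
        self._x = None
        self._common = None   # work shared by objective / gradient calls

    def _refresh(self, x, new_x):
        if new_x or self._common is None:
            self._x = list(x)
            # expensive shared computation, done once per point
            self._common = sum(v * v for v in self._x)

    def objective(self, x, new_x, out):
        self._refresh(x, new_x)
        out[0] = self._common          # f(x) = sum(x_i^2)
        return True                    # success, per the contract

    def gradient(self, x, new_x, out):
        self._refresh(x, new_x)        # new_x=False reuses the cache
        for i, v in enumerate(self._x):
            out[i] = 2.0 * v
        return True

nlp = CachedNlp()
fx = [0.0]
nlp.objective([1.0, 2.0], True, fx)    # first call at this point
g = [0.0, 0.0]
nlp.gradient([1.0, 2.0], False, g)     # same point: cache is reused
```

Returning `False` instead of raising is what lets the solver react to evaluation failures (the `SolveStatus::EvaluationError` path above) rather than aborting.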
Fixed
- Jupyter Book docs build with zero warnings. Cleaned up RST-formatting issues in module docstrings across `benchmarks/`, `modeling/`, `ro/`, `solvers/`, `doe/`, and `mo/`; suppressed autoapi import-resolution warnings for the compiled `discopt._rust` extension; escaped `**kwargs` parameter entries to keep Sphinx from parsing the leading `**` as inline strong.
- HiGHS LP/QP false optimality on wide bounds: `solvers/qp_highs.py` and `solvers/lp_highs.py` now clip any bound with magnitude `>= 1e15` to `highspy.kHighsInf` before passing to HiGHS. Bounds like discopt's default `+/-9.999e19` fall just below HiGHS's internal infinity threshold (`1e20`) and caused HiGHS to return false-optimal solutions on convex QPs with unbounded variables.
- Single-solve starting point: `_solve_continuous` now clips the default starting point to `+/-10` (respecting actual bounds) instead of the previous `+/-100`, preventing Ipopt from exploding on exp/log NLPs with one-sided large bounds.
- Stationary-point starting point: fully unbounded variables (`|lb| > 1e15` and `|ub| > 1e15`) now start at `0.5` instead of the midpoint of `0`. Zero is a stationary point of many periodic functions (e.g. cos) and of even functions generally; starting at `0.5` lets first-order NLP methods pick a descent direction and escape local maxima of the objective. Same fix applied in `_jax/differentiable.py::_safe_x0`.
- `_solve_qp_highs` and `_solve_qp_jax` now set `SolveResult.convex_fast_path = True` when solving a detected convex QP directly, matching the semantics of the convex NLP fast path.
- Cutting planes with bilinear terms (#35): `_jax/cutting_planes.py::generate_rlt_cuts` no longer emits unsound inequalities when a bilinear term has no auxiliary `w_index`.
- fix(estimate): `discopt.estimate` now uses all array observations in residuals and Fisher information, instead of dropping all but the first row.
- fix(ci): cleared a clippy `collapsible_match` lint and repaired the T24 `vmap` path after the MILP rerouting change.
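The wide-bound clipping described above can be sketched in a few lines. This is an illustrative stand-in, not the code in `qp_highs.py` / `lp_highs.py`: `math.inf` substitutes for `highspy.kHighsInf` so the sketch has no HiGHS dependency, and the threshold constant name is invented.

```python
import math

BOUND_INF_THRESHOLD = 1e15   # magnitude above which a bound is treated as infinite

def clip_bounds(lbs, ubs, inf=math.inf):
    """Map near-infinite finite bounds to the solver's infinity sentinel.

    Illustrative sketch of the fix above: a finite bound like +/-9.999e19
    sits just below HiGHS's internal infinity cutoff (1e20), so HiGHS treats
    it as a genuine bound and can return false-optimal points on convex QPs
    with unbounded variables.  Anything with magnitude >= 1e15 is passed as
    infinity instead (`inf` stands in for `highspy.kHighsInf`).
    """
    def clip(v):
        if v >= BOUND_INF_THRESHOLD:
            return inf
        if v <= -BOUND_INF_THRESHOLD:
            return -inf
        return v
    return [clip(v) for v in lbs], [clip(v) for v in ubs]

# The default +/-9.999e19 bounds become true infinities; finite bounds pass through.
lb, ub = clip_bounds([-9.999e19, 0.0], [9.999e19, 1.0])
```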
Full changelog: v0.2.5...v0.3.0