Merged

25 commits
65fafcb
Implement NarySum and NaryProduct for flat expression trees (#110)
daggbt Feb 10, 2026
4049f08
feat: lazy VectorBinaryOp materialization & vectorized gradient compi…
daggbt Feb 10, 2026
9346106
Fix static analysis issues and optimize MatrixSum.evaluate (#112)
daggbt Feb 10, 2026
e4ba2b0
feat: Implement Gradient Pre-allocation (#89) and Constant Hessian De…
daggbt Feb 10, 2026
4418113
feat: constant hessian detection for linear combinations (#90) (#114)
daggbt Feb 17, 2026
3c1f6c3
perf: optimize AffineGradientPattern with structured metadata (#91, #…
daggbt Mar 12, 2026
8d58f9f
Add sparsity pattern analysis for gradients and Jacobians (#116)
daggbt Mar 12, 2026
8cb13c3
feat: compile sparse gradients and Jacobians (#95) (#117)
daggbt Mar 12, 2026
5432d9f
feat: add subject_to_matrix() for sparse LP constraints (#96) (#118)
daggbt Mar 12, 2026
53282ea
feat: wire Variable.obj into solvers, fix pyright warnings (#97) (#119)
daggbt Mar 12, 2026
0e306dd
test: add 26 tests for Solution enhancements (issue #98) (#120)
daggbt Mar 12, 2026
c400f22
test: add 38 tests for vector/matrix polishing (issue #99) (#121)
daggbt Mar 12, 2026
9a86f7b
feat: add MILP solver via scipy.optimize.milp (#100) (#122)
daggbt Mar 13, 2026
df31386
Add VariableDict, MILP examples, and sparse benchmark plots (#123)
daggbt Mar 13, 2026
3d53b47
feat: enable incremental model modification (#104) (#124)
daggbt Mar 13, 2026
4df0072
feat: add solver progress callbacks and time limits (Issue #105) (#125)
daggbt Mar 13, 2026
93a1d4c
feat: implement LP format export (#106) (#126)
daggbt Mar 13, 2026
c75724e
Update docs and benchmarks for v1.3.0 (#127)
daggbt Apr 7, 2026
ad64fdc
chore: bump version to 1.3.0
daggbt Apr 9, 2026
a4bdc99
chore: update uv.lock
daggbt Apr 12, 2026
929a939
chore: update repo URLs from daggbt to optyx-dev org
daggbt Apr 12, 2026
fa34244
fix: resolve ruff lint errors (unused imports, ambiguous variables)
daggbt Apr 12, 2026
03c52a5
style: run ruff format on all source files
daggbt Apr 12, 2026
bf3dda9
fix: correct return type in Solution.get() for pyright
daggbt Apr 12, 2026
d7f739b
Polish sparse solver path and benchmark docs
daggbt Apr 21, 2026
9 changes: 8 additions & 1 deletion .gitignore
@@ -61,4 +61,11 @@ diagnose*.py
test_scalability.py
test_scipy_scalability.py
# Profiling scripts (local analysis only)
profile_*.py
profile_*.py

dense*.py

setup_*.py

benchmarks_be*.txt
benchmarks_af*.txt
42 changes: 40 additions & 2 deletions CHANGELOG.md
@@ -5,6 +5,44 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [1.3.0] - 2026-04-07

### Added
- **Mixed-Integer Linear Programming (MILP)**: `BinaryVariable`, `IntegerVariable`, and `VectorVariable(domain="binary"|"integer")` with automatic routing to SciPy's HiGHS backend via `scipy.optimize.milp()`.
- **`VariableDict`**: Dict-indexed variables keyed by strings, with `.prod(costs)` weighted sums and `.sum(keys=subset)` aggregation.
- **Sparse LP support**: `Problem.subject_to(A @ x <= b)` supports matrix blocks directly, with `as_matrix(...)` enabling `scipy.sparse` inputs for industrial-scale LP.
- **`as_matrix(storage="auto"|"dense"|"sparse")`**: explicit storage override for matrix blocks, including automatic CSR conversion for large low-density dense arrays.
- **Solver callbacks**: `SolverProgress` dataclass and `callback=` parameter on `solve()` for progress monitoring and early termination.
- **Time limits**: `time_limit=` parameter on `solve()` for wall-clock budgets.
- **LP export**: `Problem.write("model.lp")` exports linear/quadratic models in LP format.
- **Solution serialization**: `Solution.to_dict()`, `Solution.to_json()`, and `Solution.from_json()` for logging and auditing.
- **`Expression.between(lb, ub)`**: Range constraints in one call.
- **Generator support in `subject_to()`**: `prob.subject_to(x[i] >= 0 for i in range(n))`.
- **`Problem` context manager**: `with Problem() as p: ...` syntax.
- **`Problem.remove_constraint()`**: Incremental model modification.
- **`Problem.reset()`**: Clear solver cache and warm-start state.
- **Warm starts**: Re-solves automatically use the previous solution as starting point.
- **Per-element array bounds**: `VectorVariable("x", 3, lb=np.array([0, 0.5, 0.2]))`.
- **Fancy indexing**: `x[[0, 2, 5]]` returns a subset vector expression.
- **`Solution.mip_gap`** and **`Solution.best_bound`**: MILP optimality reporting.
- **`Solution.is_optimal`** and **`Solution.is_feasible`**: Convenience status checks.
- **`SolverStatus.TERMINATED`**: New status for callback-initiated stops.
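
The MILP features above route to SciPy's HiGHS backend via `scipy.optimize.milp()`. As a sketch of what that call looks like underneath (plain SciPy only, no Optyx API), here is the binary-knapsack pattern the new `domain="binary"` variables express:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Binary knapsack: maximize total value subject to a weight budget of 30.
values = np.array([10, 20, 15, 25, 30])
weights = np.array([5, 10, 8, 12, 15])

res = milp(
    c=-values,  # milp() minimizes, so negate to maximize
    constraints=LinearConstraint(weights.reshape(1, -1), ub=30),
    integrality=np.ones(5),  # 1 marks each variable as integer
    bounds=Bounds(0, 1),     # integer in [0, 1] == binary
)
print(-res.fun)  # optimal value: 60.0 (items 0, 1, 4)
```

The Optyx layer described above builds exactly this kind of call from the expression tree; the example itself uses only public SciPy names.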

### Changed
- **VectorGradientPattern**: Detects expressions with vectorizable gradient structure (∇f = Ax + b) for O(1) gradient compilation.
- **NarySum / NaryProduct**: Flatten deep loop-built trees to O(1) depth, keeping memory layout flat regardless of sequential assignment length.
- **VectorBinaryOp**: Preserves vector structure for element-wise operations.
- **Sparse Jacobian compilation**: Reduces memory from O(m×n) to O(nnz) for constraint Jacobians.
- **Sparse NLP routing**: large sparse constrained NLPs are routed to `trust-constr`, matrix blocks now flow through SciPy's linear-constraint API, and batched sparse Jacobians are compiled lazily only when that path is used.
- **SciPy objective gradients**: sparse objective structure is now evaluated with O(nnz) derivative work while returning dense vectors directly to SciPy.
- **DotProduct fast-path**: O(1) variable extraction for `x.dot(x)` expressions.
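
The affine-gradient idea behind the pattern entries above (detecting ∇f = Ax + b so gradient evaluation becomes one matvec) can be sketched in plain NumPy. This is a conceptual illustration under the stated quadratic form, not Optyx internals:

```python
import numpy as np

# For f(x) = 0.5 x^T Q x + c^T x with symmetric Q, the gradient is the
# affine map grad f(x) = Q x + c: extract (Q, c) once at compile time,
# then every gradient call is a single matvec instead of a tree walk.
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
Q = A + A.T                      # symmetrize so grad = Qx + c holds exactly
c = rng.normal(size=n)

f = lambda x: 0.5 * x @ Q @ x + c @ x
grad = lambda x: Q @ x + c       # O(n^2) per evaluation

# Sanity check against central finite differences.
x0 = rng.normal(size=n)
h = 1e-6
fd = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h) for e in np.eye(n)])
assert np.allclose(grad(x0), fd, atol=1e-4)
```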

### Performance
- **LP Overhead**: ~1.1x vs raw SciPy (near parity for VectorVariable)
- **CQP Overhead**: ~1.2-2.2x vs raw SciPy (with exact Jacobians)
- **Cache Speedup**: 2x-900x for repeated solves (larger problems benefit more)
- **Rosenbrock**: 0.83x — exact gradients help complex optimization landscapes

## [1.2.4] - 2026-02-08

### Fixed
@@ -131,8 +169,8 @@ This patch release completes the v1.2.0 release. Due to an incomplete merge, sev
- Cache invalidation is now centralized via `Problem._invalidate_caches()`.

### Performance
- **LP Overhead**: ~0.94-1.15x vs raw SciPy (near parity)
- **NLP Overhead**: ~1.4-2.2x vs raw SciPy with gradients
- **LP Overhead**: ~1.1x vs raw SciPy (near parity for VectorVariable)
- **CQP Overhead**: ~1.2-2.2x vs raw SciPy (with exact Jacobians)
- **Cache Speedup**: 2x-900x for repeated solves (larger problems benefit more)
- **Rosenbrock**: 0.83x - exact gradients help complex optimization landscapes

55 changes: 41 additions & 14 deletions README.md
@@ -5,10 +5,10 @@
[![PyPI](https://img.shields.io/pypi/v/optyx.svg)](https://pypi.org/project/optyx/)
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](LICENSE)
[![CI](https://github.com/daggbt/optyx/actions/workflows/ci.yml/badge.svg)](https://github.com/daggbt/optyx/actions/workflows/ci.yml)
[![Docs](https://img.shields.io/badge/docs-online-blue.svg)](https://daggbt.github.io/optyx/)
[![CI](https://github.com/optyx-dev/optyx/actions/workflows/ci.yml/badge.svg)](https://github.com/optyx-dev/optyx/actions/workflows/ci.yml)
[![Docs](https://img.shields.io/badge/docs-online-blue.svg)](https://optyx-dev.github.io/optyx/)

📚 **[Documentation](https://daggbt.github.io/optyx/)** · 🚀 **[Quickstart](https://daggbt.github.io/optyx/getting-started/quickstart.html)** · 💡 **[Examples](https://daggbt.github.io/optyx/examples/portfolio.html)**
📚 **[Documentation](https://optyx-dev.github.io/optyx/)** · 🚀 **[Quickstart](https://optyx-dev.github.io/optyx/getting-started/quickstart.html)** · 💡 **[Examples](https://optyx-dev.github.io/optyx/examples/portfolio.html)**

<table>
<tr>
@@ -79,11 +79,11 @@ Optyx is young and opinionated. It's **not** a replacement for specialized tools:

| Need | Use Instead |
|------|-------------|
| MILP at scale | Pyomo, OR-Tools, Gurobi |
| Large-scale MILP with custom branching | Pyomo, OR-Tools, Gurobi |
| Convex guarantees | CVXPY |
| Maximum performance | Raw solver APIs |

But if you want readable optimization code that just works for most problems, Optyx might be for you.
Optyx does support MILP (via HiGHS), sparse LPs with 100k+ variables, and solver callbacks—but if you need industrial-grade MIP with cutting planes, a dedicated solver is the right choice.

---

@@ -93,7 +93,7 @@
pip install optyx
```

Requires Python 3.12+, NumPy ≥2.0, SciPy ≥1.6.
Requires Python 3.12+, NumPy ≥2.0, SciPy ≥1.7.

---

@@ -152,6 +152,27 @@ df = gradient(f, x)  # Symbolic: 3x² + 4x - 5
print(df.evaluate({"x": 2.0})) # 15.0
```
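
Assuming `f` in the snippet above is x³ + 2x² − 5x (the antiderivative consistent with the printed derivative; the README excerpt does not show its definition), a central-difference check in plain NumPy reproduces the same value:

```python
# f(x) = x^3 + 2x^2 - 5x, whose exact derivative is 3x^2 + 4x - 5.
f = lambda x: x**3 + 2 * x**2 - 5 * x

h = 1e-6
fd = (f(2.0 + h) - f(2.0 - h)) / (2 * h)  # central difference at x = 2
print(fd)  # ≈ 15.0, matching df.evaluate({"x": 2.0})
```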

### Mixed-Integer Programming

```python
from optyx import BinaryVariable, VectorVariable, Problem
import numpy as np

# Binary knapsack: select items to maximize value within weight limit
n = 5
x = VectorVariable("x", n, domain="binary")
values = np.array([10, 20, 15, 25, 30])
weights = np.array([5, 10, 8, 12, 15])

solution = (
Problem()
.maximize(x.dot(values))
.subject_to(x.dot(weights) <= 30)
.solve()
)
# Automatically routes to HiGHS MILP solver
```

---

## Features at a Glance
@@ -160,37 +181,43 @@
|---------|-------------|
| **Natural syntax** | `x + y >= 1` instead of constraint dictionaries |
| **Automatic gradients** | Symbolic differentiation—no manual derivatives |
| **Smart solver selection** | HiGHS for LP, SLSQP/BFGS for NLP |
| **Fast re-solve** | Cached compilation, up to 900x speedup |
| **Smart solver selection** | HiGHS for LP/MILP, SLSQP/BFGS for NLP |
| **Mixed-integer programming** | `BinaryVariable`, `IntegerVariable`, automatic MILP routing |
| **Vector & matrix variables** | `VectorVariable`, `MatrixVariable`, `VariableDict` for scalable models |
| **Sparse LP support** | `subject_to(A @ x <= b)` with `as_matrix(..., storage="auto"\|"dense"\|"sparse")` — 100k+ variables |
| **Solver callbacks** | Monitor progress, enforce time limits, early termination |
| **LP format export** | `Problem.write("model.lp")` for interop with other solvers |
| **Solution serialization** | `to_json()` / `from_json()` for logging and auditing |
| **Fast re-solve** | Cached compilation + warm starts, up to 900x speedup |
| **Debuggable** | Inspect expression trees, understand your model |
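
The 100k+-variable sparse-LP claim in the table rests on SciPy's HiGHS backend accepting `scipy.sparse` constraint matrices directly. A raw-SciPy sketch of that path (smaller `n` for brevity; the `subject_to`/`as_matrix` names above are the Optyx layer over exactly this kind of call):

```python
import numpy as np
from scipy.optimize import linprog
from scipy.sparse import eye

# Maximize sum(x) subject to x_i + x_{i+1} <= 1 along a chain.
# A is bidiagonal, so storage is O(nnz) = O(n), not O(n^2).
n = 500
A = (eye(n - 1, n, k=0) + eye(n - 1, n, k=1)).tocsr()
res = linprog(
    c=-np.ones(n),                # linprog minimizes, so negate
    A_ub=A, b_ub=np.ones(n - 1),
    bounds=(0, 1),
    method="highs",               # HiGHS accepts sparse A_ub natively
)
print(-res.fun)  # optimal value: 250.0 (e.g. x = 0.5 everywhere)
```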

See the [documentation](https://daggbt.github.io/optyx/) for the full API reference, tutorials, and real-world examples.
See the [documentation](https://optyx-dev.github.io/optyx/) for the full API reference, tutorials, and real-world examples.

---

## What's Next

Optyx is actively evolving:

- **Vector/Matrix variables** — Handle thousands of decision variables cleanly
- **JIT compilation** — Faster execution for complex models
- **MIQP / MINLP support** — Quadratic and nonlinear MIP via native HiGHS or Gurobi
- **MPS format I/O** — Import and export MPS files for solver interop
- **More solvers** — IPOPT integration for large-scale NLP
- **Better debugging** — Infeasibility diagnostics and model inspection

See the [roadmap](https://daggbt.github.io/optyx/contributing.html) for details.
See the [roadmap](https://optyx-dev.github.io/optyx/contributing.html) for details.

---

## Contributing

```bash
git clone https://github.com/daggbt/optyx.git
git clone https://github.com/optyx-dev/optyx.git
cd optyx
uv sync
uv run pytest
```

Contributions welcome! See our [contributing guide](https://daggbt.github.io/optyx/contributing.html).
Contributions welcome! See our [contributing guide](https://optyx-dev.github.io/optyx/contributing.html).

---

9 changes: 3 additions & 6 deletions benchmarks/README.md
@@ -126,8 +126,8 @@ prob.subject_to(np.sum(x) >= 1)  # Sum constraint
| Category | Target | Status |
| -------------- | ----------------------------------------- | ------ |
| Validation | All problems converge to known optima | ✅ |
| LP overhead | < 1.5x vs SciPy linprog | ✅ ~0.94-1.15x |
| NLP overhead | < 3x vs raw SciPy (with gradients) | ✅ ~1.4-2.2x |
| LP overhead | < 1.5x vs SciPy linprog | ✅ ~1.1-1.6x |
| CQP overhead | < 3x vs raw SciPy (with Jacobians) | ✅ ~1.2-2.2x |
| Cache benefit | > 2x speedup on repeated solve | ✅ 2x-900x |
| Gradient error | < 1e-5 vs finite difference | ✅ < 1e-10 |

@@ -141,12 +141,9 @@ prob.subject_to(np.sum(x) >= 1)  # Sum constraint

## Updating Documentation

When regenerating benchmark plots, copy them to the docs folder:
When benchmark plots are regenerated, the runner syncs the resulting artifacts to the docs folder automatically:

```bash
# Regenerate plots; results are copied to docs/assets/benchmarks/ by the runner
uv run python benchmarks/run_benchmarks.py
```