stepik: https://stepik.org/a/268894
Professional implementation and mathematical explanation of the LSQR algorithm for solving large-scale sparse linear systems and least-squares problems.
This repository provides a complete course-style implementation of the LSQR algorithm, including:
- Mathematical derivation
- Connection to Conjugate Gradient
- Sparse matrix optimization
- Numerical stability analysis
- Python implementation from scratch
- Applications in large-scale optimization
Keywords: lsqr solver, lsqr algorithm, sparse linear solver, iterative least squares, large-scale linear systems, lsqr python implementation, numerical linear algebra, sparse matrix solver, least squares solver, cg-based least squares
LSQR solves the least squares problem:

$$\min_{x} \|Ax - b\|_2$$

Where:

- $$A \in \mathbb{R}^{m \times n}$$ (sparse or large matrix)
- $$b \in \mathbb{R}^m$$ (right-hand side vector)
- $$x \in \mathbb{R}^n$$ (unknown vector)
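A tiny dense instance makes the problem concrete. The sizes and matrices here are illustrative (LSQR targets problems too large or too sparse for a dense solve like `np.linalg.lstsq`):

```python
import numpy as np

# Small dense instance of min_x ||Ax - b||_2 (illustrative sizes only;
# LSQR is meant for cases where a dense factorization is infeasible).
rng = np.random.default_rng(42)
A = rng.standard_normal((50, 10))   # m = 50, n = 10: overdetermined
b = rng.standard_normal(50)

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)

# Optimality check: at the minimizer the residual is orthogonal
# to the range of A, i.e. A^T (b - A x*) = 0.
r = b - A @ x_star
assert np.allclose(A.T @ r, 0.0, atol=1e-10)
```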
LSQR is based on:
- Lanczos bidiagonalization
- Krylov subspace methods
- Connection to Conjugate Gradient on normal equations
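The Lanczos (Golub-Kahan) bidiagonalization at the heart of LSQR can be sketched as follows. This is a teaching version with no reorthogonalization or breakdown handling; the matrix and dimensions are illustrative:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization (teaching sketch,
    no reorthogonalization): returns matrices with orthonormal columns."""
    beta = np.linalg.norm(b)
    u = b / beta                  # beta_1 u_1 = b
    v = A.T @ u
    alpha = np.linalg.norm(v)
    v = v / alpha                 # alpha_1 v_1 = A^T u_1
    U, V = [u], [v]
    for _ in range(k - 1):
        u = A @ v - alpha * u     # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        beta = np.linalg.norm(u)
        u = u / beta
        v = A.T @ u - beta * v    # alpha_{j+1} v_{j+1} = A^T u_{j+1} - beta_{j+1} v_j
        alpha = np.linalg.norm(v)
        v = v / alpha
        U.append(u)
        V.append(v)
    return np.column_stack(U), np.column_stack(V)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
U, V = golub_kahan(A, b, 5)
# For a few steps the columns stay numerically orthonormal:
assert np.allclose(U.T @ U, np.eye(5), atol=1e-8)
assert np.allclose(V.T @ V, np.eye(5), atol=1e-8)
```

In exact arithmetic the two recurrences reduce `A` to a lower bidiagonal matrix; in floating point, production implementations add safeguards this sketch omits.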
Instead of solving the normal equations

$$A^T A x = A^T b$$

directly (which squares the condition number and is numerically unstable), LSQR solves the problem iteratively in a numerically stable way.
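The instability comes from the identity cond(AᵀA) = cond(A)²: forming the normal equations loses roughly half the available digits. A quick numerical check (the Vandermonde matrix here is just a convenient ill-conditioned example):

```python
import numpy as np

# Forming A^T A squares the condition number: cond(A^T A) = cond(A)^2.
A = np.vander(np.linspace(0.0, 1.0, 20), 8)   # mildly ill-conditioned
kA = np.linalg.cond(A)
kAtA = np.linalg.cond(A.T @ A)
assert kAtA > kA
assert np.isclose(kAtA, kA ** 2, rtol=1e-2)
```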
LSQR constructs the Krylov subspaces

$$\mathcal{K}_k(A^T A, A^T b) = \operatorname{span}\{A^T b, (A^T A)A^T b, \dots, (A^T A)^{k-1}A^T b\}$$

and minimizes the least squares residual over this subspace.
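The subspace minimization property can be checked by brute force: restrict x to a (normalized) power basis of the Krylov subspace and minimize over it. Because the subspaces are nested, the residual can only decrease as k grows. LSQR builds an orthonormal basis instead; this direct construction is only for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)

res, cols = [], []
v = A.T @ b                       # first direction of K_k(A^T A, A^T b)
for _ in range(6):
    cols.append(v / np.linalg.norm(v))
    K = np.column_stack(cols)     # basis of the current Krylov subspace
    y, *_ = np.linalg.lstsq(A @ K, b, rcond=None)   # min over x = K y
    res.append(np.linalg.norm(b - A @ K @ y))
    v = A.T @ (A @ v)             # next power-basis direction

# Nested subspaces => monotonically non-increasing residual norms:
assert all(res[i + 1] <= res[i] + 1e-8 for i in range(5))
```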
It avoids explicitly forming:

$$A^T A$$

which improves:

- Stability
- Memory efficiency
- Performance for sparse matrices
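The memory point is easy to demonstrate: a single dense row in an otherwise sparse A fills AᵀA completely. The pattern below is an illustrative worst case:

```python
import numpy as np

# Sparse A: a diagonal plus one dense row => 2n nonzeros total.
n = 200
A = np.zeros((n + 1, n))
A[np.arange(n), np.arange(n)] = 1.0   # diagonal block
A[n, :] = 1.0                          # one dense row

AtA = A.T @ A
# The dense row causes complete fill-in: A^T A = I + (all-ones matrix).
assert np.count_nonzero(A) == 2 * n
assert np.count_nonzero(AtA) == n * n
```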
LSQR is used in:
- Large-scale data fitting
- Image reconstruction
- Tomography
- Inverse problems
- Machine learning regularization
- Scientific computing
It is one of the standard industrial-grade least-squares solvers, available as `scipy.sparse.linalg.lsqr` in SciPy and `lsqr` in MATLAB.
```
lsqr-solver-course/
│
├── README.md
├── LICENSE
├── CITATION.cff
├── requirements.txt
│
├── src/
│   ├── lsqr_solver.py
│   ├── bidiagonalization.py
│   └── operators.py
│
├── examples/
│   └── demo.py
│
├── docs/
│   ├── theory.md
│   └── convergence.md
│
├── images/
│   └── convergence_plot.png
│
└── index.html
```
A clean structure improves:

- Discoverability
- Professional appearance
- Research credibility
```python
import numpy as np

def lsqr(A, b, tol=1e-8, max_iter=1000):
    """Minimal LSQR (Paige & Saunders): solves min ||Ax - b||_2 via
    Golub-Kahan bidiagonalization plus an implicit QR update.
    Breakdown cases (alpha or beta reaching 0) are omitted for clarity."""
    m, n = A.shape
    x = np.zeros(n)
    beta = np.linalg.norm(b)          # beta_1 u_1 = b
    u = b / beta
    v = A.T @ u
    alpha = np.linalg.norm(v)         # alpha_1 v_1 = A^T u_1
    v = v / alpha
    w = v.copy()
    phi_bar, rho_bar = beta, alpha
    for _ in range(max_iter):
        # Next Golub-Kahan bidiagonalization step
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        u = u / beta
        v = A.T @ u - beta * v
        alpha = np.linalg.norm(v)
        v = v / alpha
        # Givens rotation: implicit QR of the bidiagonal matrix
        rho = np.hypot(rho_bar, beta)
        c, s = rho_bar / rho, beta / rho
        theta = s * alpha
        rho_bar = -c * alpha
        phi, phi_bar = c * phi_bar, s * phi_bar
        x = x + (phi / rho) * w       # update solution estimate
        w = v - (theta / rho) * w     # and the search direction
        if abs(phi_bar) < tol:        # phi_bar estimates ||b - A x||
            break
    return x
```

Install dependencies:

```
pip install -r requirements.txt
```

Run the example:

```
python examples/demo.py
```

To visualize convergence, add:
- Residual norm vs iteration
- Convergence curve
- Krylov subspace dimension growth
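The plotting snippet below assumes a `residual_history` list of per-iteration residual norms. One way to collect it is sketched here; a plain gradient (Landweber) iteration on the normal equations stands in for the solver so the snippet runs on its own:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 20))
b = rng.standard_normal(100)

# Record ||b - A x_k|| at every iteration. The update below is a
# simple Landweber step, not LSQR itself; it is only a stand-in
# to make the residual-tracking pattern self-contained.
x = np.zeros(A.shape[1])
step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: 1 / sigma_max(A)^2
residual_history = []
for _ in range(200):
    r = b - A @ x
    residual_history.append(np.linalg.norm(r))
    x += step * (A.T @ r)

# With a step below 1/sigma_max^2 the residual norm is non-increasing:
assert residual_history[-1] < residual_history[0]
```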
Example:

```python
import matplotlib.pyplot as plt

# residual_history: the ||b - A x_k|| values recorded during the run
plt.plot(residual_history)
plt.xlabel("Iteration")
plt.ylabel("Residual Norm")
plt.title("LSQR Convergence")
plt.show()
```