LSQR is an iterative method for solving large, sparse linear systems of equations and linear least-squares problems, including under- or over-determined and rank-deficient systems. It uses the Lanczos bidiagonalization process to provide a robust alternative to the conjugate gradient method, offering better numerical stability.


LSQR Solver — Sparse Linear System Course


🚀 Professional implementation and mathematical explanation of the LSQR algorithm for solving large-scale sparse linear systems.


🔥 Project Overview

This repository provides a complete course-style implementation of the LSQR algorithm, including:

  • Mathematical derivation
  • Connection to Conjugate Gradient
  • Sparse matrix optimization
  • Numerical stability analysis
  • Python implementation from scratch
  • Applications in large-scale optimization

Keywords


lsqr solver
lsqr algorithm
sparse linear solver
iterative least squares
large scale linear systems
lsqr python implementation
numerical linear algebra
sparse matrix solver
least squares solver
cg based least squares


📚 Mathematical Background

LSQR solves the least squares problem:

$$ \min_x ||Ax - b||_2 $$

Where:

  • $$A \in \mathbb{R}^{m \times n}$$ (sparse or large matrix)
  • $$b \in \mathbb{R}^m$$
  • $$x$$ — unknown vector
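
For readers who want to experiment before diving into the derivation, SciPy ships a production implementation, `scipy.sparse.linalg.lsqr`. A minimal sketch (the problem size, density, and tolerances below are arbitrary choices for illustration):

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Over-determined sparse system: m > n
m, n = 200, 50
A = sparse_random(m, n, density=0.2, random_state=0, format="csr")
x_true = rng.standard_normal(n)
b = A @ x_true                      # consistent right-hand side

# lsqr returns the solution plus diagnostics (stop flag, iterations, norms)
x, istop, itn, r1norm = lsqr(A, b, atol=1e-10, btol=1e-10)[:4]
print(r1norm)                       # residual norm; close to zero here
```

Because `b` is constructed to lie in the range of `A`, the residual norm reported by the solver is essentially zero.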

🔵 Algorithm Foundation

LSQR is based on:

  • Lanczos bidiagonalization
  • Krylov subspace methods
  • Connection to Conjugate Gradient on normal equations
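
The first ingredient, Golub-Kahan (Lanczos) bidiagonalization, can be sketched in a few lines. This dense toy version (written for this page, not taken from `src/`) builds orthonormal bases U and V and a lower-bidiagonal B satisfying A V = U B:

```python
import numpy as np

def golub_kahan(A, b, k):
    """Run k steps of Golub-Kahan bidiagonalization.

    Produces U (m x (k+1)) and V (n x k) with orthonormal columns and a
    lower-bidiagonal B ((k+1) x k) such that A @ V == U @ B.
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))

    # Initialization: beta_1 u_1 = b, alpha_1 v_1 = A^T u_1
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    v = A.T @ U[:, 0]
    alpha = np.linalg.norm(v)
    V[:, 0] = v / alpha

    for j in range(k):
        B[j, j] = alpha
        # beta_{j+1} u_{j+1} = A v_j - alpha_j u_j
        u = A @ V[:, j] - alpha * U[:, j]
        beta = np.linalg.norm(u)
        U[:, j + 1] = u / beta
        B[j + 1, j] = beta
        if j + 1 < k:
            # alpha_{j+1} v_{j+1} = A^T u_{j+1} - beta_{j+1} v_j
            v = A.T @ U[:, j + 1] - beta * V[:, j]
            alpha = np.linalg.norm(v)
            V[:, j + 1] = v / alpha

    return U, B, V

# Quick check on a small dense problem
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 12))
b = rng.standard_normal(30)
U, B, V = golub_kahan(A, b, 8)
print(np.allclose(A @ V, U @ B))   # True
```

Note that only matrix-vector products with A and Aᵀ appear, which is exactly what makes LSQR attractive for sparse matrices.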

Instead of solving the normal equations

$$ A^T A x = A^T b $$

directly, which is numerically unstable, LSQR solves the same least-squares problem iteratively in a stable way.
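
The instability comes from condition-number squaring: $\kappa(A^T A) = \kappa(A)^2$, so forming the normal equations can turn a mildly ill-conditioned problem into a hopeless one. A quick numerical check (the singular-value spectrum below is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a matrix with a prescribed condition number of 1e5
m, n = 100, 20
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -5, n)            # singular values from 1 down to 1e-5
A = (U * s) @ V.T

print(np.linalg.cond(A))             # ~1e5 by construction
print(np.linalg.cond(A.T @ A))       # ~1e10: the square
```

With $\kappa(A) = 10^5$, the normal-equations matrix already has $\kappa \approx 10^{10}$, close to the limit of double precision.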


⚡ Core Idea

LSQR constructs Krylov subspaces:

$$ \mathcal{K}_k(A^T A, A^T b) $$

and minimizes the least squares residual inside this subspace.

It avoids explicitly forming:

$$ A^T A $$

which improves:

✅ Stability
✅ Memory efficiency
✅ Performance for sparse matrices
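
A practical consequence: since LSQR only ever needs the products $Av$ and $A^T u$, it can run completely matrix-free. The sketch below (using SciPy's `LinearOperator`; the finite-difference operator is just an illustrative choice) solves a least-squares problem without storing any matrix at all:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, lsqr

n = 100

# First-difference operator D of shape (n-1, n): (D x)_i = x[i+1] - x[i],
# applied without ever materializing a matrix.
def matvec(x):
    return x[1:] - x[:-1]

def rmatvec(y):                 # the action of D^T
    out = np.zeros(n)
    out[:-1] -= y
    out[1:] += y
    return out

D = LinearOperator((n - 1, n), matvec=matvec, rmatvec=rmatvec)

# Ask for a vector whose consecutive differences are all 1
b = np.ones(n - 1)
x = lsqr(D, b, atol=1e-12, btol=1e-12, iter_lim=2000)[0]
print(np.allclose(x[1:] - x[:-1], 1.0))   # True
```

The solver never sees a matrix, only the two callables, so memory stays O(m + n) regardless of how large the operator is.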


🧠 Why This Project Is Important

LSQR is used in:

  • Large-scale data fitting
  • Image reconstruction
  • Tomography
  • Inverse problems
  • Machine learning regularization
  • Scientific computing

It is one of the standard industrial-grade solvers for large sparse least-squares problems; a production implementation ships with SciPy as scipy.sparse.linalg.lsqr.


πŸ— Project Structure


lsqr-solver-course/
│
├── README.md
├── LICENSE
├── CITATION.cff
├── requirements.txt
│
├── src/
│   ├── lsqr_solver.py
│   ├── bidiagonalization.py
│   └── operators.py
│
├── examples/
│   └── demo.py
│
├── docs/
│   ├── theory.md
│   └── convergence.md
│
├── images/
│   └── convergence_plot.png
│
└── index.html

Clean structure improves:

✔ Discoverability
✔ Professional appearance
✔ Research credibility


🐍 Example — Simple LSQR Skeleton

import numpy as np

def lsqr(A, b, tol=1e-8, max_iter=1000):
    """Minimal LSQR (Paige & Saunders, 1982): Golub-Kahan bidiagonalization
    combined with Givens rotations to update the solution at each step."""
    m, n = A.shape
    x = np.zeros(n)

    # Initialize the bidiagonalization: beta*u = b, alpha*v = A^T u
    beta = np.linalg.norm(b)
    if beta == 0:
        return x
    u = b / beta
    v = A.T @ u
    alpha = np.linalg.norm(v)
    if alpha == 0:
        return x
    v = v / alpha

    w = v.copy()
    phi_bar = beta        # running estimate of the residual norm ||b - A x||
    rho_bar = alpha

    for _ in range(max_iter):
        # Next bidiagonalization step
        u = A @ v - alpha * u
        beta = np.linalg.norm(u)
        if beta > 0:
            u = u / beta
        v = A.T @ u - beta * v
        alpha = np.linalg.norm(v)
        if alpha > 0:
            v = v / alpha

        # Givens rotation eliminates the subdiagonal entry beta
        rho = np.hypot(rho_bar, beta)
        c, s = rho_bar / rho, beta / rho
        theta = s * alpha
        rho_bar = -c * alpha
        phi = c * phi_bar
        phi_bar = s * phi_bar

        # Update solution and search direction
        x = x + (phi / rho) * w
        w = v - (theta / rho) * w

        if abs(phi_bar) < tol:    # converged: residual estimate below tol
            break

    return x
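
Whichever implementation you use (the sketch above, or SciPy's solver, which stands in here so the snippet is self-contained), a quick sanity check is agreement with a dense QR-based solver on a small, well-conditioned problem:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(42)
A = rng.standard_normal((100, 10))   # small, dense, well-conditioned
b = rng.standard_normal(100)

x_lsqr = lsqr(A, b, atol=1e-12, btol=1e-12)[0]
x_ref = np.linalg.lstsq(A, b, rcond=None)[0]

# The two solutions should agree to high precision
print(np.max(np.abs(x_lsqr - x_ref)))
```

On easy problems like this, LSQR reaches essentially the same answer as a direct method in a handful of iterations.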

🚀 Installation

pip install -r requirements.txt

Run example:

python examples/demo.py

📊 Visualization (Recommended)

Useful plots to add:

  • Residual norm vs iteration
  • Convergence curve
  • Krylov subspace dimension growth

Example:

import matplotlib.pyplot as plt

# residual_history: the values ||b - A x_k|| recorded at each iteration
plt.semilogy(residual_history)
plt.xlabel("Iteration")
plt.ylabel("Residual Norm")
plt.title("LSQR Convergence")
plt.show()
