This Python repository contains an implementation of the finite difference method for solving the Hamilton-Jacobi-Bellman (HJB) equation associated with the importance sampling (IS) problem for diffusion processes.
## Setting

### Importance sampling problem
Consider the stochastic process in $\mathbb{R}^d$ following the controlled dynamics for a given potential landscape $U_\text{pot}$:

$$\mathrm{d}X_s^u = \left(-\nabla U_\text{pot}(X_s^u) + \sigma \, u(X_s^u)\right) \mathrm{d}s + \sigma \, \mathrm{d}B_s, \quad X_0^u = x.$$

We want to estimate the quantity

$$\Psi(x) \coloneqq \mathbb{E}\left[\exp\left(-\int_0^\tau f(X_s^u)\, \mathrm{d}s - g(X_\tau^u)\right) M_\tau^u\right],$$

where $\tau$ is the first hitting time of the set $D \subset \mathbb{R}^d$ and $M^u$ is the corresponding exponential martingale provided by the Girsanov formula.
Every control $u \in \mathcal{U}$ provides an unbiased estimator of $\Psi(x)$. We want to find the control $u^* \in \mathcal{U}$ that minimizes the variance of the importance sampling estimator.
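For context, a standard result in this setting (stated here as background; it is assumed rather than quoted from this repository) is that the variance-minimizing control is, in fact, a zero-variance control given by the logarithmic gradient of the quantity of interest, where $\sigma$ denotes the diffusion coefficient of the dynamics:

```latex
% Well-known zero-variance identity from the stochastic optimal control
% formulation of importance sampling (background, not the repo's statement):
u^*(x) = \sigma \, \nabla \log \Psi(x)
% Hence computing \Psi on the domain also yields the optimal IS control.
```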
It is well known that the quantity we want to estimate satisfies the following boundary value problem (BVP) on the domain $\mathcal{O} \coloneqq \mathcal{D} \cap D^c$:
$$(\mathcal{L} -f(x)) \Psi(x) = 0 \quad \forall x \in \mathcal{O}, \quad \Psi(x) = \exp(-g(x)) \quad \forall x \in \partial{\mathcal{O}},$$
where $\mathcal{L}$ denotes the infinitesimal generator of the original, uncontrolled process, i.e. the case $u = 0$.
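To make the discretization concrete, here is a minimal 1d sketch of the approach, assuming the generator of the uncontrolled overdamped Langevin dynamics, $\mathcal{L}\Psi = -U'(x)\Psi' + \tfrac{\sigma^2}{2}\Psi''$, and central finite differences on a uniform grid. All names (`solve_hjb_1d`, `grad_U`, the grid parameters) are illustrative choices, not the repository's API:

```python
import numpy as np

def solve_hjb_1d(a, b, n, grad_U, f, g, sigma=1.0):
    """Solve (L - f) psi = 0 on O = (a, b) with psi = exp(-g) on the boundary,
    where L psi = -U'(x) psi' + (sigma^2 / 2) psi'', using central differences
    on a uniform grid with n interior points."""
    h = (b - a) / (n + 1)
    x = np.linspace(a, b, n + 2)      # full grid including boundary points
    xi = x[1:-1]                      # interior points
    diff = 0.5 * sigma**2 / h**2      # coefficient of the second difference
    drift = -grad_U(xi) / (2.0 * h)   # coefficient of the central first difference

    # Each interior row discretizes
    #   diff * (psi_{i+1} - 2 psi_i + psi_{i-1})
    #     + drift_i * (psi_{i+1} - psi_{i-1}) - f(x_i) psi_i = 0.
    lower = diff - drift              # coefficient of psi_{i-1}
    upper = diff + drift              # coefficient of psi_{i+1}
    A = np.zeros((n, n))
    np.fill_diagonal(A, -2.0 * diff - f(xi))
    A[np.arange(1, n), np.arange(0, n - 1)] = lower[1:]
    A[np.arange(0, n - 1), np.arange(1, n)] = upper[:-1]

    # Fold the known boundary values exp(-g) into the right-hand side.
    rhs = np.zeros(n)
    rhs[0] -= lower[0] * np.exp(-g(a))
    rhs[-1] -= upper[-1] * np.exp(-g(b))

    psi = np.empty(n + 2)
    psi[0], psi[-1] = np.exp(-g(a)), np.exp(-g(b))
    psi[1:-1] = np.linalg.solve(A, rhs)
    return x, psi
```

For a dense 1d grid this direct solve is adequate; a sparse tridiagonal solver would be the natural choice at larger sizes or in 2d. A quick sanity check: with $f \equiv 0$ and $g \equiv 0$ the exact solution is $\Psi \equiv 1$, which the scheme reproduces.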
## Contents
A finite difference method for the 1d and 2d cases, where the original stochastic dynamics follow the overdamped Langevin equation with a double-well potential.
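For reference, the uncontrolled dynamics in question can be simulated with a plain Euler-Maruyama scheme. The sketch below assumes the double-well potential $U(x) = (x^2 - 1)^2$ and the target set $D = [1, \infty)$; both are illustrative choices, not necessarily the ones used in the repository:

```python
import numpy as np

def sample_hitting_time(x0=-1.0, sigma=1.0, dt=1e-3, max_steps=200_000, rng=None):
    """Euler-Maruyama simulation of dX_t = -U'(X_t) dt + sigma dB_t with the
    double-well potential U(x) = (x^2 - 1)^2, run until X hits D = [1, inf).
    Returns the first hitting time tau (or inf if the step budget is exhausted)."""
    if rng is None:
        rng = np.random.default_rng(0)
    grad_U = lambda x: 4.0 * x * (x**2 - 1.0)  # U'(x) for U(x) = (x^2 - 1)^2
    x = x0
    for step in range(max_steps):
        if x >= 1.0:                           # first entry into D = [1, inf)
            return step * dt
        x += -grad_U(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return np.inf                              # did not hit within the budget
```

Such trajectories are what the IS estimator averages over; the rare-event difficulty (and hence the need for a good control $u$) grows as $\sigma$ shrinks and crossings of the potential barrier become exponentially unlikely.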