From Instantaneous to Predictive Control: A More Intuitive and Tunable MPC Formulation for Robot Manipulators
This repository accompanies the paper "From Instantaneous to Predictive Control: A More Intuitive and Tunable MPC Formulation for Robot Manipulators", submitted to RA-L. In particular, it contains the source code for the motivating example used throughout the paper, as well as some additional animations to better illustrate the main points.
An accompanying video is available here:
We consider solving the following optimal control problem (OCP) at every control interval of the model predictive controller (MPC):
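To make the receding-horizon principle concrete (solve a finite-horizon OCP, apply only the first control, then re-solve at the next interval), here is a minimal, self-contained sketch using a double integrator with a quadratic tracking cost. The system matrices, weights, and names are hypothetical and do not come from the paper's manipulator formulation.

```python
import numpy as np

def mpc_step(x0, x_ref, N=20, dt=0.05, q_pos=100.0, q_vel=1.0, r=0.01):
    """Solve a finite-horizon OCP for a double integrator and
    return the first control of the optimal sequence."""
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    n, m = 2, 1
    # Batch prediction: X = Sx @ x0 + Su @ U, stacked over k = 1..N
    Sx = np.zeros((N * n, n))
    Su = np.zeros((N * n, N * m))
    Ak = np.eye(n)
    for k in range(N):
        Ak = A @ Ak
        Sx[k * n:(k + 1) * n, :] = Ak
        for j in range(k + 1):
            Su[k * n:(k + 1) * n, j * m:(j + 1) * m] = (
                np.linalg.matrix_power(A, k - j) @ B)
    Q = np.kron(np.eye(N), np.diag([q_pos, q_vel]))
    R = r * np.eye(N * m)
    Xref = np.tile(x_ref, N)
    # Unconstrained QP: minimise (X - Xref)^T Q (X - Xref) + U^T R U
    H = Su.T @ Q @ Su + R
    g = Su.T @ Q @ (Sx @ x0 - Xref)
    U = np.linalg.solve(H, -g)
    return U[0]

# Closed loop: re-solve the OCP at every control interval
dt = 0.05
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
x = np.array([0.0, 0.0])
target = np.array([1.0, 0.0])
for _ in range(100):
    u = mpc_step(x, target, dt=dt)
    x = A @ x + B.flatten() * u
```

Solving the unconstrained QP in closed form keeps the sketch dependency-free; the actual implementation in this repository formulates the OCP with CasADi and solves it with Fatrop.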
To motivate the challenge of tuning the MPC controller, we compare three different objective functions. For all three objective functions, the terminal cost
For this objective function, the weighting matrices are chosen as
Clearly, for smaller values of
However, this simulation used the same model as the MPC controller, i.e. there was no model mismatch.
A more realistic scenario can be achieved by modifying the simulation to include a first-order model for the actuator dynamics.
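A minimal sketch of such an actuator model, assuming a first-order lag tau * du/dt = u_cmd - u between the commanded and realised input (the time constant and all names here are illustrative, not the paper's values):

```python
import numpy as np

def actuator_lag(u_cmd, u_actual, tau, dt):
    """One step of a first-order actuator model, tau * du/dt = u_cmd - u,
    using the exact zero-order-hold discretisation."""
    alpha = 1.0 - np.exp(-dt / tau)
    return u_actual + alpha * (u_cmd - u_actual)

# Step response: the realised input lags the commanded one
u = 0.0
tau, dt = 0.1, 0.01
history = []
for _ in range(50):
    u = actuator_lag(1.0, u, tau, dt)
    history.append(u)
```

After 0.5 s (five time constants) the realised input has essentially reached the commanded value; an MPC controller that ignores this lag effectively acts on a delayed plant, which is what degrades the very aggressive tunings.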
The results of repeating the previous experiment, but now with the actuator dynamics included, can be seen below.
Introducing the actuator dynamics causes the very fast response (
The quantitative results for this experiment can be found below, showing the unstable response for
Objective function B introduces a penalty on the time-derivative of the task error,
Clearly, the performance depends significantly on the horizon length: it improves as the horizon length increases and is poor for short horizons.
The purpose of this objective function is to achieve a closed-loop performance that is less dependent on the horizon length. In particular, it is desirable that the performance be good even for a short horizon length. This can be achieved by penalising the deviation of the error from a first-order response, rather than penalising the error directly.
In fact, this is the same as using objective function B, but with the following weighting matrices:
Therefore, this approach can simply be viewed as a way of choosing
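As a numerical illustration of this equivalence, the check below expands the squared residual of an assumed first-order target response e_dot = -e/tau and verifies that it matches a quadratic cost in (e, e_dot) with particular weighting matrices, including a cross term. The target dynamics and the resulting weights are assumptions for illustration; the paper gives the exact matrices.

```python
import numpy as np

def stage_cost_C(e, e_dot, tau, w=1.0):
    """Penalise deviation of the error from the (assumed) first-order
    target response e_dot = -e / tau."""
    r = e_dot + e / tau
    return w * float(r @ r)

def stage_cost_B(e, e_dot, Qe, Qed, Qcross):
    """Objective-B form: quadratic in (e, e_dot) with a cross term."""
    return float(e @ Qe @ e + e_dot @ Qed @ e_dot + 2.0 * e @ Qcross @ e_dot)

# Expanding ||e_dot + e/tau||^2 gives the (hypothetical) weights
# Qe = w/tau^2 I, Qed = w I, Qcross = w/tau I
tau, w = 0.5, 1.0
e = np.array([0.3, -0.1])
e_dot = np.array([-0.2, 0.4])
n = len(e)
Qe = (w / tau**2) * np.eye(n)
Qed = w * np.eye(n)
Qcross = (w / tau) * np.eye(n)
assert np.isclose(stage_cost_C(e, e_dot, tau, w),
                  stage_cost_B(e, e_dot, Qe, Qed, Qcross))
```

Under this reading, tuning reduces to choosing the time constant tau of the desired error response, which is more intuitive than tuning the individual weighting matrices directly.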
Previously it was shown that, using objective function C, the same closed-loop performance can be achieved for a wide range of horizon lengths. This raises the question: why use a long horizon if it gives the same performance as a short one? The answer can be found by modifying the problem formulation to include constraints on the joint positions, velocities and accelerations. When these constraints are present, the closed-loop performance improves as the horizon length increases, because the longer prediction horizon allows the MPC controller to better anticipate and avoid constraint violations. The results of the experiments with the added constraints are visualised below.
- Make use of the provided devcontainer and Dockerfile. To do this, it is necessary to install VS Code, Docker and the VS Code Dev Containers extension. Instructions on getting started with devcontainers are available here.
- Once VS Code and Docker are successfully installed, open VS Code in the root directory of this project.
code .
- VS Code will ask whether to reopen the workspace in a container. Say YES. If the prompt does not appear, press "F1" (or Ctrl-Shift-P) and search for "Reopen in Container". Make sure the workspace is opened in the directory of the tuning_mpc repository.
- The first time, the Docker image will be built. This takes some time (downloading a base image and compiling the dependencies). On subsequent starts, the container is up in a few seconds.
- The MPC controller requires CasADi and Fatrop (with the Spectool specification) installed from source. Instructions are available here and involve (1) creating a virtual environment; (2) installing CasADi from source; (3) installing Fatrop with Spectool.
- Once CasADi and Fatrop are installed, install the additional requirements:
pip install -r requirements.txt && pip install -e .
Running the following command reproduces all the graphs presented in the paper:
python paper_experiments.py
The resulting figures are stored in the "figures" directory, and the animations in the "animation" directory.










