Binary file added data/images/nn.png
1,340 changes: 0 additions & 1,340 deletions data/results/blueleg_beam_cube.csv

This file was deleted.

1,340 changes: 1,340 additions & 0 deletions data/results/blueleg_beam_cube1331.csv

Large diffs are not rendered by default.

524 changes: 0 additions & 524 deletions data/results/blueleg_beam_sphere.csv

This file was deleted.

524 changes: 524 additions & 0 deletions data/results/blueleg_beam_sphere515.csv

Large diffs are not rendered by default.

15 changes: 11 additions & 4 deletions lab_AI.md
@@ -17,13 +17,20 @@ In this lab you will learn:
We are going to need third-party libraries for this lab.

Click the button below to install them:
#python-button("-m pip install --target 'assets/labs/lab_AI/modules/site-packages' -r 'assets/labs/lab_AI/requirements.txt'")
#python-button(pyargs=["-m pip install --target", "assets/labs/lab_AI/modules/site-packages", "-r", "assets/labs/lab_AI/requirements.txt"])

This will install the following libraries:
```
#include(assets/labs/lab_AI/requirements.txt)
```

:::

#include(assets/labs/modules/camera_calibration.md)
#include(assets/labs/lab_AI/sections/3_scikit-learn.md)
#include(assets/labs/lab_AI/sections/1_dataset.md)
#include(assets/labs/lab_AI/sections/2_scikit-learn.md)
#include(assets/labs/lab_AI/sections/3_from_scratch.md)
#include(assets/labs/lab_AI/sections/4_pytorch.md)

## Appendix
#include(assets/labs/lab_AI/sections/change_dataset.md)
#include(assets/labs/lab_AI/sections/1_dataset.md)
#include(assets/labs/lab_AI/sections/change_dataset.md)
6 changes: 3 additions & 3 deletions lab_AI_dataset_generation.py
@@ -35,9 +35,9 @@ def __init__(self, emio, target, effector, assembly, shape, steps=20, direct=Fal

self.animationSteps = steps
self.animationStep = self.animationSteps
self.motorsAngle = [np.copy(self.emio.getChild(f'Motor{i}').JointActuator.value.value) for i in range(4)]
self.motorsAngle = [np.copy(self.emio.getChild(f'Motor{i}').JointActuator.value.value) for i in range(4)] if direct else [np.copy(self.emio.getChild(f'Motor{i}').JointActuator.angle.value) for i in range(4)]
self.motorsAngleGoals=self.targetsPosition[self.targetIndex]
self.motorStep = [(self.motorsAngleGoals[i]-self.emio.getChild(f'Motor{i}').JointActuator.value.value)/(self.animationSteps-1) for i in range(4)]
self.motorStep = [(self.motorsAngleGoals[i]-self.emio.getChild(f'Motor{i}').JointActuator.value.value)/(self.animationSteps-1) for i in range(4)] if direct else []

self.direct = direct

@@ -61,7 +61,7 @@ def onAnimateBeginEvent(self, _):
self.animationStep = self.animationSteps
self.effector.effectorGoal = [list(self.targetsPosition[self.targetIndex]) + [0, 0, 0, 1]]
self.targetReached = False
elif self.assembly.done:
elif self.direct and self.assembly.done:
self.animationStep -= 1
if self.targetIndex >= 0 and self.animationStep <= 0:
self.writeToCSVFile()
60 changes: 47 additions & 13 deletions sections/1_dataset.md
@@ -1,11 +1,46 @@
:::::: collapse Dataset Generation
<a id="datasets"></a>
:::::: collapse Datasets
## Datasets

We can use the simulation to generate the dataset.
Since we want to train a model that recovers the motors angles based on a desired 3D position of the end effector, the dataset will need these data:
The datasets used in this lab are CSV files containing the motor angles and the corresponding end-effector positions of Emio. They are located in the `assets/labs/lab_AI/data/results` folder. All datasets share the following fields:
- the four motor angles _m0_, _m1_, _m2_ and _m3_
- the 3D position of the effector _pos_

The datasets can then be used to train the model. In this lab, we will only use a simple 2-layer perceptron.

### Inverse Simulation
The inverse simulation is used to get the motor angles corresponding to a desired position of the robot's TCP (Tool Center Point).
To create the desired target positions, we sample points on geometric shapes (a cube or a sphere), with a ratio defining the distance between two points relative to the shape size; a sketch is given below.
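
As an illustration, here is a minimal sketch of ratio-based sampling for the cube case (the center and size values are hypothetical; the actual `dataset_generation.py` script may proceed differently):

```python
import numpy as np

def sample_cube(center, size, ratio):
    """Sample a regular grid of points inside an axis-aligned cube.

    `ratio` in ]0, 1[ is the spacing between two neighboring points
    relative to the cube size: the higher the ratio, the coarser the grid.
    """
    n = int(round(1.0 / ratio)) + 1                 # points per axis
    axis = np.linspace(-size / 2.0, size / 2.0, n)  # one axis of the grid
    grid = np.array(np.meshgrid(axis, axis, axis)).T.reshape(-1, 3)
    return grid + np.asarray(center)

# ratio=0.1 gives 11 points per axis, i.e. 11^3 = 1331 targets,
# matching the size of blueleg_beam_cube1331.csv.
targets = sample_cube(center=[0.0, -90.0, 0.0], size=80.0, ratio=0.1)
print(targets.shape)  # (1331, 3)
```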

Two datasets, created in simulation, are available:
- `blueleg_beam_cube1331.csv`: by sampling 1331 points in a cube
- `blueleg_beam_sphere515.csv`: by sampling 515 points in a sphere

They have been generated using the SOFA simulation of Emio, with the script `dataset_generation.py`.

You can take a look at `blueleg_beam_cube1331.csv`:
#open-button(file="assets/labs/lab_AI/data/results/blueleg_beam_cube1331.csv")

### Direct Simulation
Here, we explore the workspace by directly applying angle commands to the motors, then measuring the resulting TCP position in the simulation.

Two datasets, created in simulation, are available:
- `blueleg_beam_direct625.csv`: by combining **five** possible angles for each of the four motors, leading to $5^4 = 625$ points
- `blueleg_beam_direct2401.csv`: by combining **seven** possible angles for each of the four motors, leading to $7^4 = 2401$ points (see the sketch below)
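
A sketch of how these combinations could be enumerated (the angle bounds are placeholders, not Emio's real limits):

```python
import itertools
import numpy as np

# Hypothetical angle range; the real bounds depend on Emio's motors.
angles_per_motor = np.linspace(-0.5, 0.5, 5)  # five possible angles

# Cartesian product over the four motors: 5^4 = 625 combinations.
combinations = list(itertools.product(angles_per_motor, repeat=4))
print(len(combinations))  # 625
```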


You can take a look at `blueleg_beam_direct625.csv`:
#open-button(file="assets/labs/lab_AI/data/results/blueleg_beam_direct625.csv")

### Real Robot

Equivalent datasets were recorded on the Emio robot using a high-precision magnetic sensor:
- `blueleg_beam_real_cube2197.csv`: by sampling 2197 points in a cube, contains both the simulated and measured effector positions
- `blueleg_beam_real_sphere1018.csv`: by sampling 1018 points in a sphere, contains both the simulated and measured effector positions

These datasets were created by tracking the robot's tool center point (TCP) position with a _Polhemus_ magnetic tracker; they contain an extra column `Real Position` with the recorded tracked position.

You can take a look at `blueleg_beam_real_cube2197.csv`:
#open-button(file="assets/labs/lab_AI/data/results/blueleg_beam_real_cube2197.csv")

::::: exercise
**Dataset Generation SOFA Scene:**
@@ -20,14 +20,14 @@ Select the point generation method:
::: option direct
::::

Ratio of the sampling ]0, 1[ (the higher the coarser):
Ratio of the sampling $]0, 1[$ (the higher the coarser) for `sphere` and `cube` options:
#input("dataset_ratio", "Ratio to sample (the higher the coarser)", "0.08")

#runsofa-button("assets/labs/lab_AI/lab_AI_dataset_generation.py", "dataset_shape", "dataset_ratio")
#runsofa-button(file="assets/labs/lab_AI/lab_AI_dataset_generation.py", pyargs=["dataset_shape", "dataset_ratio"])

<br>

Here is an excerpt of the _blueleg_beam_sphere.csv_ dataset file that comes with this lab:
Here is an excerpt of the _blueleg_beam_sphere515.csv_ dataset file that comes with this lab:

```text
# extended ;1
@@ -39,12 +74,11 @@
# connector ;bluepart
# connector type ;rigid
Effector position;Motor angle
[-39.96175515 -90.41789743 -39.96175525];[-0.14670205865712832, 0.14670207392254797, 2.43823807942873, -2.438238056118855]
[-39.95720099 -90.4415037 -31.95609913];[0.1329811350050557, 0.13624487007045172, 2.29165178728331, -2.488099187824528]
[-39.95397373 -90.45505537 -23.95436099];[0.4217800556565714, 0.13599500085968805, 2.113863614076125, -2.5101582582004642]
[-39.9514583 -90.46332739 -15.96017202];[0.7233308263521361, 0.14326494422921077, 1.8979428979904553, -2.5098560718291005]
[-39.95029449 -90.46182801 -7.97640971];[1.0359369002803307, 0.15246389567464924, 1.640800631571854, -2.4992699487352867]
[-3.99504339e+01 -9.04556845e+01 -8.41130293e-05];[1.3485569542409783, 0.1566254899703859, 1.3478513803217718, -2.4934454150763674]
[-39.96175319 -90.41790123 -39.9617533 ];[-0.14671493815230305, 0.14671495863621398, 2.438245119379324, -2.438245064719189]
[-39.95721404 -90.44149463 -31.95606774];[0.13278020542251584, 0.1361462088456899, 2.291810947074616, -2.4880786839009525]
[-39.95398396 -90.45504974 -23.95431237];[0.42151032888874346, 0.13590327750354508, 2.114101371561853, -2.5101908380204403]
[-39.9514658 -90.46332978 -15.96010252];[0.7230089875813547, 0.14319760751651572, 1.8982513705899897, -2.5099090246584557]
[-39.95029801 -90.4618368 -7.97632692];[1.0355881112380285, 0.15242992354764623, 1.6411482937976718, -2.499306559350879]
```
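
Since the position column uses a NumPy-style array (space-separated) and the angle column a Python-style list (comma-separated), a standard CSV reader is not enough. Here is a minimal parsing sketch, assuming the two-column format shown above (the real-robot files have an extra `Real Position` column, which this sketch ignores):

```python
import numpy as np

def load_dataset(path):
    """Parse a lab dataset CSV into (positions, angles) NumPy arrays."""
    positions, angles = [], []
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines, '# key ;value' metadata, and the header row.
            if not line or line.startswith("#") or line.startswith("Effector"):
                continue
            pos_str, ang_str = line.split(";")[:2]  # keep the first two columns
            # Position: '[x y z]', whitespace-separated.
            positions.append([float(v) for v in pos_str.strip("[]").split()])
            # Angles: '[a, b, c, d]', comma-separated.
            angles.append([float(v) for v in ang_str.strip("[]").split(",")])
    return np.array(positions), np.array(angles)

X, y = load_dataset("assets/labs/lab_AI/data/results/blueleg_beam_sphere515.csv")
print(X.shape, y.shape)  # (515, 3) (515, 4)
```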

:::::
194 changes: 194 additions & 0 deletions sections/2_scikit-learn.md
@@ -0,0 +1,194 @@
:::: collapse An MLP for robotics with scikit-learn

**Goal**: better understand how neural networks can be used for robotics and simulation

You are going to train an MLP based on different datasets. It is important to understand how those were generated and how they are used in the context of MLP training and evaluation.

First, you will use the _SOFA Robotics simulator_, an application specialized in soft-robot simulation.
In this lab, you will try to learn the inverse model of the robot: given a desired _end-effector position_ in space, the MLP should output the four _motor angles_ needed to actuate the robot. Applying these four angles to their respective motors (in _real life_ or in the _simulation_), the end effector should move to the desired position.

In the simulator, the **Plotting Window** shows the error between the desired position and the simulated position of the end effector after applying the angles: `error` is the Euclidean distance between the two positions, and `errorX`, `errorY`, `errorZ` are its projections along the X-, Y-, and Z-axes.
In real life, when sending the motor angles to the real robot, we can measure the effect of the new angles thanks to the camera. This is the error called `camera_to_target_error`.
Another useful measure is the $r^2$ score of the MLP, which is continuously computed over all previous targets.
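
As a rough illustration, these error metrics amount to the following (a sketch, not the simulator's actual code):

```python
import numpy as np

def position_errors(desired, simulated):
    """Euclidean error between two 3D positions and its per-axis parts."""
    delta = np.asarray(simulated, dtype=float) - np.asarray(desired, dtype=float)
    error = float(np.linalg.norm(delta))        # overall Euclidean distance
    error_x, error_y, error_z = np.abs(delta)   # projections on X, Y, Z
    return error, error_x, error_y, error_z

print(position_errors([-40.0, -90.0, -30.0], [-39.2, -90.5, -29.1]))
```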

To train the MLP, you will use datasets. A dataset is composed of desired end-effector positions and the matching motor angles. How these datasets are generated can vary; this is described in the [previous Datasets section](#datasets).

A summary of this is in the diagram below:

![](assets/labs/lab_AI/data/images/context_diagram.png)

### Train the Model and Test it

In this part, we will use scikit-learn to train an MLP. Scikit-learn is an open-source Python library that provides tools for a wide range of machine learning tasks, including classification, regression, clustering, and dimensionality reduction. Among other things, it provides the [MLPRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor) class that we will use to create our first MLP.

We will see in further depth how training works in the next section, but to get a first grasp of the **training process**, here is a high-level description: **the goal is to optimize the weights and biases of the neural network so that it fits our training dataset.**

To do that, it follows this high-level algorithm:

$$
\begin{align*}
1. & \text{ The parameters of the layers (weights and biases) are randomly initialized} \\
& \text{For each row of the dataset:} \\
& \hskip1em 2. \text{ The input (position) from the dataset is passed through the MLP} \\
& \hskip1em 3. \text{ Get the MLP output and the dataset output (the ground truth)} \\
& \hskip1em 4. \text{ Calculate the gradient of the loss on the output data} \\
& \hskip1em 5. \text{ Backpropagate the gradients through the layers} \\
& \hskip1em 6. \text{ Update the weights and biases of the layers using gradient descent} \\
& \hskip1em 7. \text{ Loop back to step 2 until the maximum number of iterations or epochs is reached}
\end{align*}
$$
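
To make steps 2-6 concrete, here is a deliberately tiny sketch of one gradient-descent update for a single linear layer with a squared-error loss (a toy illustration, not the internals of scikit-learn):

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: random initialization of one linear layer (3 inputs -> 4 outputs).
W = rng.normal(scale=0.1, size=(3, 4))
b = np.zeros(4)
lr = 0.01  # learning rate

x = np.array([0.5, -1.0, 0.2])             # input: one 3D position
y_true = np.array([0.1, 0.0, 2.4, -2.4])   # ground truth: four motor angles

# Steps 2-3: forward pass, to be compared with the ground truth.
y_pred = x @ W + b

# Step 4: gradient of the squared-error loss at the output.
grad_out = 2.0 * (y_pred - y_true)

# Steps 5-6: backpropagate to the parameters and apply gradient descent.
W -= lr * np.outer(x, grad_out)
b -= lr * grad_out
```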

#### Create an MLP and train it
Scikit-learn comes with its own implementation of an [MLP regressor](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor).

You can use it here for a quick exploration of the architecture needed.

Several (hyper-)parameters can be tuned. Here are some of them:
- the sizes of the layers
- the activation function for all neurons (`identity`, `logistic`, `tanh`, `relu`)
- the solver/optimizer for the gradient descent (`lbfgs`, `sgd`, `adam`)
- the batch size (for the `sgd` and `adam` solvers): the number of samples processed before the weights and biases are updated
- the maximum number of iterations (epochs for the `sgd` and `adam` solvers)

```python
from sklearn.neural_network import MLPRegressor

# Create an MLP with one hidden layer of 100 neurons and the 'adam' optimizer,
# training for at most 500 iterations.
mlp = MLPRegressor(hidden_layer_sizes=(100,), solver='adam', max_iter=500)

# Train the model: X_train is the dataframe of features, y_train the targets.
mlp.fit(X_train, y_train)
```

In the code above, since our features are the components of a 3D position, the MLP has 3 inputs. Since we want the angles of the 4 motors, its output is 4 values.
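
For completeness, here is how `X_train` and `y_train` could be prepared (random placeholders stand in for a parsed dataset; see the parsing sketch in the Datasets section):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: N desired effector positions (3 features each);
# y: N corresponding motor-angle vectors (4 targets each).
rng = np.random.default_rng(0)
X = rng.random((515, 3))
y = rng.random((515, 4))

# Hold out 20% of the rows to evaluate the model on unseen data.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)
```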

::: exercise
**Exercise 1**

1. Create an MLP with two hidden layers of _128_ nodes each, training for _20000_ epochs, in `modules/sklearn_MLP.py` (a minimal sketch follows this exercise):
#open-button(file="assets/labs/lab_AI/modules/sklearn_MLP.py")

2. Train it:
#python-button(file="assets/labs/lab_AI/train_model.py", pyargs=["scikit-learn", "assets/labs/lab_AI/data/results/blueleg_beam_sphere515.csv"])

The trained model is saved to `data/results/model_sklearn.joblib`.

Note that we used the dataset called `blueleg_beam_sphere515.csv`. This is because we generated it using **an inverse model** (to be presented next time) of Emio configured with the **blue legs**, the **beam** model, and data points sampled on a **sphere**.

:::
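
If you are unsure where to start, the regressor definition could look like this (a minimal sketch; the exact contents of `modules/sklearn_MLP.py` may differ):

```python
from sklearn.neural_network import MLPRegressor

# Two hidden layers of 128 nodes each, training for up to 20000 iterations.
mlp = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=20000)
```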

#### Evaluate the model

In machine learning, a common evaluation metric is the $r^{2}$ score, or coefficient of determination. Essentially, it measures the proportion of the variance in the dependent variable that is predictable from the independent variables of the model.

A high value indicates that the model fits the data well.

The general mathematical definition is:

$$
\def\ssres{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}
\def\sstot{\sum_{i=1}^{n} (y_i - \bar{y})^2}

r^2 = 1 - \frac{\ssres}{\sstot}

\\[1em]

\begin{array}{ll}
\text{where:} \\
\quad y_i & \text{observed data point} \\
\quad \hat{y}_i & \text{predicted value} \\
\quad \bar{y} & \text{mean of the observed data points}
\end{array}
$$
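
For intuition, the score can be reproduced by hand and checked against scikit-learn (a sketch with made-up numbers):

```python
import numpy as np
from sklearn.metrics import r2_score

y_obs = np.array([1.0, 2.0, 3.0, 4.0])   # observed data points
y_hat = np.array([1.1, 1.9, 3.2, 3.8])   # predicted values

ss_res = np.sum((y_obs - y_hat) ** 2)         # residual sum of squares
ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)  # total sum of squares
r2 = 1.0 - ss_res / ss_tot

assert np.isclose(r2, r2_score(y_obs, y_hat))
print(r2)  # 0.98
```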


##### Without the simulation
You can use the [MLPRegressor.score](https://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPRegressor.html#sklearn.neural_network.MLPRegressor.score) method to calculate the coefficient of determination on the test data.

```python
# r^2 score of the model's predictions on the held-out test data
mlp.score(X_test, y_test)
```

::: exercise
**Exercise 2**

Let's see the performance of our model. Calculate the score of the model by pressing the button below:

#python-button(file="assets/labs/lab_AI/evaluate_model.py", pyargs=["scikit-learn", "assets/labs/lab_AI/data/results/blueleg_beam_cube1331.csv", "assets/labs/lab_AI/data/results/model_sklearn.joblib"])

*Note*: we are testing the model on another dataset: *blueleg_beam_cube1331.csv*

You should get a score that is quite low. This is mostly due to the fact that the MLP uses `relu` as its activation function, while the dataset contains many negative values because of where the reference frame of Emio is located.

1. To avoid this problem, use the `logistic` activation function in `modules/sklearn_MLP.py`, train and calculate the score again:
#open-button(file="assets/labs/lab_AI/modules/sklearn_MLP.py")

2. Train again
#python-button(file="assets/labs/lab_AI/train_model.py", pyargs=["scikit-learn", "assets/labs/lab_AI/data/results/blueleg_beam_cube1331.csv"])

3. Calculate the $r^2$ score again
#python-button(file="assets/labs/lab_AI/evaluate_model.py", pyargs=["scikit-learn", "assets/labs/lab_AI/data/results/blueleg_beam_cube1331.csv", "assets/labs/lab_AI/data/results/model_sklearn.joblib"])

:::


##### With the SOFA simulation
Now that you have a theoretically good-enough model, let's use it in simulation!

The trained model will be used to compute the robot's inverse kinematics; that is, for a desired position in space, the MLP will provide the corresponding motor angles. This is the foundation of control and motion planning in robotics.
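
Conceptually, the scene queries the trained model like this (a sketch; the target coordinates are hypothetical and the SOFA integration is omitted):

```python
import joblib
import numpy as np

# Load the MLP trained in Exercise 1.
mlp = joblib.load("data/results/model_sklearn.joblib")

# Desired end-effector position -> predicted motor angles (inverse model).
target = np.array([[-40.0, -90.0, -30.0]])  # one hypothetical 3D target
angles = mlp.predict(target)[0]             # four motor angles to apply
print(angles)
```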

In the **Plotting** window, you can see the $r^2$ score calculated over the last points.

::: exercise
**Exercise 3**

Use your model in the SOFA scene.

---

***First test: Manual Position Control of the Robot***

Using the sliders in the _My Robot_ window, you can control the desired/target effector position along x, y, and z.
This allows you to manually test different robot configurations, and for each one, measure the error between:
- the desired position,
- the simulated model position (which we'll discuss next time), and
- the position measured by the camera.

| ![](assets/labs/lab_AI/data/images/Pos3_EmioTest.png){width=90%} | ![](assets/labs/lab_AI/data/images/Pos1_EmioTest.png){width=90%} | ![](assets/labs/lab_AI/data/images/Pos2_EmioTest.png){width=90%} |
|:--:|:--:|:--:|


#runsofa-button(file="assets/labs/lab_AI/lab_AI_test.py", pyargs=["scikit-learn", "data/results/model_sklearn.joblib", "notargets", "0.4"])


**Questions**
1. Is the error between the desired position and the simulated position the same for every desired position?
2. How does the error vary with respect to the position measured by the camera (`camera_to_target_error`)?
3. At this stage, can you provide a first analysis of the errors?

---

***Second Test: More Systematic***
Here, we propose a systematic scan of positions in the form of a grid of points evenly spaced on a plane. The white dots are the targets (desired positions) and the red dots are the positions of the end effector after applying the angles output by the MLP.

#runsofa-button(file="assets/labs/lab_AI/lab_AI_test.py", pyargs=["scikit-learn", "data/results/model_sklearn.joblib", "plane", "ratio_sklearn"])

By default, the spacing is set to `0.1`, meaning the spacing is given by the plane size divided by 10.
To change this spacing, you can enter a number in $]0, 1[$:
#input("ratio_sklearn", "Ratio for sampling", "0.1")

**Questions**
1. After letting the simulation run through all the targets, what conclusions can you draw?
2. Now it's your turn! What strategy can you apply to improve the learning? (Not mandatory, but see some possibilities in the next sections.)

---

**Additional note:**
This is similar to the previous simulation, but here you can visualize the entire set of points used for training.

#runsofa-button(file="assets/labs/lab_AI/lab_AI_test.py", pyargs=["scikit-learn", "data/results/model_sklearn.joblib", "plane", "ratio_sklearn", "data/results/blueleg_beam_sphere515.csv"])

| ![](assets/labs/lab_AI/data/images/Workspace.png){width=90%} |
|:--:|

::::