This project explores reinforcement learning (RL) applied to robotic manipulation using a Kinova Gen3 robot, under two conditions:
- NoRTSim/: Training with deterministic, fixed-timestep simulation.
- RTSim/: Training with real-time simulation, introducing dynamics noise and delay.
Pretrained models are available in the Test/ directory for evaluation purposes.
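To make the two conditions concrete, here is a minimal sketch (not code from this repository) contrasting deterministic fixed-timestep stepping with real-time, wall-clock-paced stepping; in the real-time case, OS scheduling jitter is what introduces the timing noise the paper exploits:

```python
import time

def step_fixed(n_steps, dt=1 / 240):
    """NoRTSim-style stepping: the simulated clock advances by exactly
    dt per step, so rollouts are fully deterministic."""
    t = 0.0
    for _ in range(n_steps):
        t += dt  # physics step would go here
    return t

def step_realtime(n_steps, dt=1 / 240):
    """RTSim-style stepping (sketch): pace each step against the wall
    clock, so sleep/scheduling jitter perturbs the effective timing."""
    start = time.perf_counter()
    for i in range(n_steps):
        target = start + (i + 1) * dt
        time.sleep(max(0.0, target - time.perf_counter()))
    return time.perf_counter() - start
```

The fixed variant always advances the same simulated duration, while the real-time variant takes at least the requested wall-clock time but never exactly, which is the source of the intrinsic stochasticity studied here.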
This work is associated with the paper cited at the end of this README.
Much inspiration and implementation guidelines were taken from OpenAI Spinning Up.
- `NoRTSim/` – Training with deterministic (fixed-timestep) simulation.
- `RTSim/` – Training with real-time simulation.
- `Test/` – Evaluation of pretrained agents.
Each environment contains its own `main.py` as the entry point.
Note: The `kinova_sim/` modules differ across environments and are not unified, as each has custom URDFs and control logic.
- Clone the repository:

  ```shell
  git clone https://github.com/amsoufi/RL_Sim2Real_RealTimeSim.git
  cd RL_Sim2Real_RealTimeSim
  ```
Install the requirements: This project was developed and tested with Python 3.6
pip install -r requirements.txt
- Train an agent:
  - Without real-time simulation:

    ```shell
    cd repo/NoRTSim
    python main.py
    ```

  - With real-time simulation:

    ```shell
    cd repo/RTSim
    python main.py
    ```
- Test a pretrained agent:

  ```shell
  cd repo/Test
  python main.py
  ```

  ➡️ Choosing Different Models: Inside `Test/main.py`, you can manually change which pretrained model to load by editing the call:

  ```python
  ac = torch.load('Agents/ppo_model_RTR.pt')
  ```

  Select the appropriate model from `Agents/`.
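If you switch between checkpoints often, a small helper can map condition names to files. This is only a sketch: apart from `ppo_model_RTR.pt`, any entries you add to the mapping are assumptions about what your `Agents/` directory actually contains.

```python
import os

AGENTS_DIR = "Agents"

# Hypothetical mapping from training condition to checkpoint filename;
# only 'RTR' appears in this README — extend with your own checkpoints.
MODEL_FILES = {
    "RTR": "ppo_model_RTR.pt",
}

def model_path(condition):
    """Return the checkpoint path for a named training condition."""
    try:
        return os.path.join(AGENTS_DIR, MODEL_FILES[condition])
    except KeyError:
        raise ValueError(
            f"Unknown condition {condition!r}; choices: {sorted(MODEL_FILES)}"
        ) from None

# In Test/main.py you would then load the agent with:
#   ac = torch.load(model_path("RTR"))
```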
You can modify simulation properties for domain randomization by editing:
- The robot URDF files inside `kinova_sim/resources/`
- The code inside `kinova_sim/resources/robot.py`

Variables like `J1`, `J2`, etc., can be exposed and modified dynamically.
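As an illustration, here is one way such randomization could be sampled per episode. The parameter names and ranges below are hypothetical, not taken from the repository; the sampled values would then be pushed into the simulator (e.g. from `robot.py`) before stepping.

```python
import random

# Hypothetical per-joint randomization ranges — neither the names nor the
# bounds come from this repository; tune them for your own robot model.
RANGES = {
    "mass_scale": (0.8, 1.2),       # multiplier on each link's URDF mass
    "joint_damping": (0.01, 0.1),   # damping coefficient per joint
    "joint_friction": (0.0, 0.05),  # Coulomb friction per joint
}

def sample_dynamics(n_joints, rng=random):
    """Sample one randomized dynamics configuration for n_joints joints."""
    return [
        {name: rng.uniform(lo, hi) for name, (lo, hi) in RANGES.items()}
        for _ in range(n_joints)
    ]

# Example: draw a fresh configuration at the start of each training episode.
episode_dynamics = sample_dynamics(n_joints=7)  # the Kinova Gen3 has 7 joints
```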
This project is licensed under the MIT License. See the LICENSE file for details.
If you find this project useful, please cite:
A. M. S. Enayati et al., "Facilitating Sim-to-Real by Intrinsic Stochasticity of Real-Time Simulation in Reinforcement Learning for Robot Manipulation", IEEE, 2023.