███████╗██████╗ ██████╗ ██████╗ ███████╗
██╔════╝██╔══██╗██╔════╝ ██╔═══██╗██╔════╝
█████╗ ██████╔╝██║ ███╗██║ ██║███████╗
██╔══╝ ██╔══██╗██║ ██║██║ ██║╚════██║
███████╗██║ ██║╚██████╔╝╚██████╔╝███████║
╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚══════╝
ERGOS is a low-cost, open-source platform for dynamic humanoid robotics research. It provides a ROS 2-based control environment that integrates serial servo hardware with reinforcement learning (RL) techniques. The primary goal of ERGOS is to develop and optimize dynamic, stable gait patterns for bipedal systems, demonstrating advanced movement capabilities on an economical, self-contained hardware platform. Here is a photo of ERGOS's current design:
- Hardware
  - mechanical/ — CAD, renders
  - electrical/ — schematics, PCB, BOM
- Software
  - software/ — firmware, ROS 2 packages, tools, simulations
- Docs
  - docs/ — BOM, images, datasheets
(Planned and In Progress)
- Custom humanoid mechanical design
- Belt-driven actuation using HTD-3M pulleys (18T–30T) for joint-specific torque amplification
- 3D-printed structural housings optimized for low cost and rapid iteration
- Modular joint architecture to support future actuator and transmission upgrades
- Custom PCB featuring an ESP32-S3 for real-time servo communication
- Custom full-duplex to half-duplex TTL transceiver for serial servo bus control
- On-board power management from a 3S 8400 mAh LiPo battery
- Custom power and communication harness backbone for distributed serial servos
- Jetson Orin Nano as the primary compute module, interfacing with the MCU for control and perception workloads
- Reinforcement learning-based locomotion control for stable, adaptive walking
- Simulation-first training pipeline with planned sim-to-real transfer onto physical hardware
- Architecture designed to support experimentation with different control formulations and reward structures
- Vision system integration for obstacle detection and terrain awareness
- Planned fusion of perception outputs into locomotion and navigation policies
- On-board large language model (LLM) for voice-based command interpretation
- Natural-language interface for high-level behavior control and human–robot interactions
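The Feetech STS servos planned for the bus speak a simple framed serial protocol (header, ID, length, instruction, parameters, checksum). As a hedged sketch of the packet the ESP32-S3 firmware, or a host-side tool, would assemble, the helper below builds a WRITE instruction frame. Python stands in for the eventual firmware language, and the goal-position register address (0x2A) is an assumption to be verified against the STS3235/STS3250 datasheet.

```python
def sts_write_packet(servo_id: int, address: int, data: bytes) -> bytes:
    """Build a Feetech STS-style WRITE instruction packet.

    Frame: 0xFF 0xFF | ID | LEN | INSTR | PARAMS... | CHECKSUM,
    where LEN = number of params + 2 (instruction + checksum bytes)
    and CHECKSUM is the inverted low byte of the sum of everything
    after the two-byte header.
    """
    WRITE = 0x03  # WRITE instruction code
    params = bytes([address]) + data
    length = len(params) + 2
    body = bytes([servo_id, length, WRITE]) + params
    checksum = (~sum(body)) & 0xFF
    return b"\xff\xff" + body + bytes([checksum])

# Example: command servo ID 1 to position 2048 ticks. Register 0x2A is
# assumed to be the goal-position register; check the STS datasheet.
packet = sts_write_packet(1, 0x2A, (2048).to_bytes(2, "little"))
```

The same helper covers other registers (torque enable, speed limits) by changing the address and payload, which keeps the bus-side firmware small.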
Figure 1. High-level system architecture of the ERGOS humanoid platform. The system is powered by a centralized 3S LiPo battery, which feeds a custom power distribution and regulation stage. This stage supplies both the actuator subsystem and the main compute unit. High-level perception, planning, and reinforcement learning policies are executed on the Jetson Orin Nano within a ROS 2-based control stack. Control commands are transmitted to an ESP32-S3 real-time controller, which handles low-level actuation and interfaces directly with modular servo daisy chains. Sensor inputs, including depth vision, inertial measurements, and foot force sensing, are processed on the Jetson and integrated into the control pipeline. Detailed mechanical, electrical, and software architectures are documented in subsystem-specific sections.
=== WIP ===
- Phase 1 — Hardware Creation: Finalize the CAD design and the PCB for servo control and power distribution.
- Phase 2 — Reinforcement Learning Simulation: Develop a simulation environment for dynamic walking using RL. (Likely frameworks: PyTorch, a custom physics simulation, or Isaac Gym.)
- Phase 3 — Real-World Testing: Transfer the trained model to the Jetson Orin Nano for live walking experiments.
- Phase 4 — Vision and Speech Integration: Add camera input for object detection and an LLM for high-level control.
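To make the Phase 2 interface concrete, here is a minimal, dependency-free sketch of the environment API an RL trainer would consume. The dynamics below are placeholders, not a gait model; the real environment would sit on top of a physics simulator such as Isaac Gym, and the class and reward here are illustrative assumptions.

```python
import math
import random

class ToyBipedEnv:
    """Toy stand-in for the planned walking environment.

    Only the interface matters here: reset() -> observation,
    step(action) -> (observation, reward, done), which is the loop
    shape most RL trainers expect.
    """

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)
        self.t = 0
        self.torso_pitch = 0.0

    def reset(self):
        self.t = 0
        self.torso_pitch = 0.0
        return [self.torso_pitch]

    def step(self, action: float):
        # Placeholder dynamics: the action nudges torso pitch, noise perturbs it.
        self.torso_pitch += 0.1 * action + self.rng.uniform(-0.01, 0.01)
        self.t += 1
        reward = -abs(self.torso_pitch)  # reward staying upright
        done = self.t >= 100 or abs(self.torso_pitch) > math.pi / 4
        return [self.torso_pitch], reward, done

# A random-policy rollout, as a trainer's data-collection loop would run it.
env = ToyBipedEnv(seed=42)
obs = env.reset()
total = 0.0
done = False
while not done:
    action = env.rng.uniform(-1.0, 1.0)
    obs, reward, done = env.step(action)
    total += reward
```

Fixing this interface early lets the training code, reward experiments, and the eventual sim-to-real harness evolve independently of the simulator choice.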
- Programming Language: Python (planned)
- Frameworks: TBD (PyTorch or TensorFlow for RL; possible ROS 2 integration)
- Hardware: Jetson Orin Nano, ESP32-S3, Feetech STS3235/STS3250 smart servos
- Design Tools: SolidWorks (mechanical), Altium Designer (PCB)
(To be updated as the project matures)
Planned sections:
- Hardware assembly (non-trivial; to be detailed in the Mechanical section)
- Updated BOM
- Installation requirements (JetPack SDK, Python environment, dependencies)
- Simulation setup (RL environment)
- Model training and deployment instructions
- Hardware configuration and servo connection mapping
- WIP
This project is open source and released under the MIT License.
See the LICENSE file for more information.
Bryan Heddle
Mechatronics & AI Systems Engineering Student — Western University
Special thanks to the open-source robotics community for the tools, documentation, and inspiration that contributed to this project’s foundation.
Project sponsors:
