Joanneum Research Releases Robot AI Platform Robo-Gym Version 1.0.0

Joanneum Research's Institute for Robotics and Mechatronics has released version 1.0.0 of robo-gym, an open-source framework for developing reinforcement learning (RL) AI for robot control. The release includes a new obstacle avoidance environment, support for all Universal Robots cobot models, and improved code quality.

The release was announced on the Robot Operating System (ROS) discussion forum. The robo-gym framework is based on ROS and uses the Gazebo physics engine and the OpenAI Gym interface for simulation, enabling researchers to develop their RL algorithms in a simulated environment and transfer them to real-world robots with minimal updates. The release adds support for ROS Noetic, the latest version of the ROS platform, and also adds support for Python versions greater than 3.6. The release contains nearly 500 commits, including bug fixes and improvements to logging and debugging capabilities.

Reinforcement learning is a branch of machine learning that deals with agents interacting with their environment. In contrast to problems such as natural language processing (NLP) or computer vision (CV), where AI models only transform data, an agent trained with RL seeks to achieve some goal state by sensing the world, performing a series of actions, and receiving a reward signal in response to its actions. In 2013, AI company DeepMind applied deep learning techniques to RL, producing an agent that could successfully play classic Atari video games. The company was later acquired by Google, and its continuing work in RL led to the development of AlphaGo, an AI system that defeated the best human players in the game of Go.
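The sense-act-reward loop described above can be made concrete with a minimal, self-contained sketch that is not from robo-gym or any of the systems mentioned: tabular Q-learning on a toy corridor, where the agent learns purely from the reward signal which action to take in each state.

```python
import random

# Toy corridor: states 0..4; the agent starts at 0 and is rewarded at state 4.
N_STATES = 5
ACTIONS = [-1, +1]  # move left, move right

def step(state, action):
    """Apply an action, returning (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

# Tabular Q-learning: estimate the value of each (state, action) pair.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2
random.seed(0)

for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state, reward, done = step(state, action)
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned greedy policy: the best action from each non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After training, the greedy policy moves right from every state, even though the reward is only ever observed at the far end of the corridor, illustrating how RL propagates a sparse goal signal back through intermediate states.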

Many of these game-playing AI systems developed with RL do not interact with the physical world; for example, AlphaGo does not actually physically play Go, but instead prints out moves for a human operator to implement. RL systems for controlling real-world robots, however, must often take into account the unpredictable nature of the world, such as noisy sensors, as well as the mechanical dynamics of the robot hardware. At the same time, performing RL training using real physical robots would risk damage to the robot in the early stages of training, and would also be prohibitively time-consuming. To mitigate these problems, many RL-for-robotics platforms perform the bulk of their training using simulation engines.

Joanneum Research first developed robo-gym in 2020 and described the system in a paper presented at the International Conference on Intelligent Robots and Systems (IROS), one of the premier academic conferences on robotics. The robo-gym framework uses ROS to provide an abstraction and control layer for the robot, whether real or simulated. To simulate the environment, robo-gym uses the Gazebo 3D physics simulator. The high-level interface to the system uses the OpenAI Gym interface, a popular framework for RL research. In their IROS paper, the developers used robo-gym to train an AI to solve two different tasks via simulation, and successfully ran the AI on real robots without further training.
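The OpenAI Gym interface mentioned above standardizes the agent-environment contract around two calls: reset() returns an initial observation, and step(action) returns an observation, a reward, a done flag, and an info dictionary. The sketch below implements that contract for a hypothetical one-dimensional reaching task; the class and policy are illustrative stand-ins, not robo-gym's actual environments or API, but the same driving loop is what lets frameworks like robo-gym swap a simulated backend for a real robot without changing agent code.

```python
class SimulatedReachEnv:
    """Toy stand-in for a robot-reaching task using the Gym reset/step contract.

    Illustrative only; not robo-gym's real environment classes.
    """

    def __init__(self):
        self.pos = 0.0
        self.target = 3.0
        self.steps = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self.pos, self.steps = 0.0, 0
        return self.pos

    def step(self, action):
        """Apply the commanded move; return (obs, reward, done, info)."""
        self.pos += action
        self.steps += 1
        dist = abs(self.target - self.pos)
        reward = -dist                        # dense reward: closer is better
        done = dist < 0.1 or self.steps >= 50  # reached target or timed out
        return self.pos, reward, done, {}

# The standard Gym-style agent loop: identical whether the environment
# wraps a physics simulator or a driver for real hardware.
env = SimulatedReachEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    action = 1.0 if obs < env.target else -1.0  # trivial hand-coded policy
    obs, reward, done, _ = env.step(action)
    total += reward
```

Because the agent only ever touches reset() and step(), an RL algorithm trained against the simulated backend can, in principle, be pointed at a real robot exposing the same interface, which is the sim-to-real transfer the robo-gym paper demonstrates.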

The initial release of robo-gym ran on Python 3.5 and ROS Kinetic, and included drivers for two physical robots: the MiR100 mobile robot and the UR10 collaborative industrial robot. The latest release drops support for ROS Kinetic and Python 3.5, requiring a minimum of Python 3.6 and ROS Melodic. It includes additional robot drivers for all Universal Robots models: UR3, UR3e, UR5, UR5e, UR10, UR10e, and UR16e.

The use of simulated environments for RL development is a popular research topic. In 2019, Acutronic Robotics released their gym-gazebo2 toolkit, another RL platform based on ROS and Gazebo. More recently, several organizations, including MIT, Facebook, and the Allen Institute for AI, have released simulation environments and related challenges to spur development. DeepMind recently open-sourced their AndroidEnv platform for developing RL agents that run on Android mobile devices. DeepMind has also submitted a paper to the Artificial Intelligence journal, outlining their hypothesis that RL is sufficient to produce artificial general intelligence.

The robo-gym source code is available on GitHub.
