
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


From Robot Simulation to the Real World


Summary

Louise Poubel gives an overview of Gazebo's architecture with examples of projects using Gazebo, and describes how to bridge virtual robots to their physical counterparts.

Bio

Louise Poubel is a software engineer at Open Robotics working on free and open source tools for robotics, like the robot simulator Gazebo and the Robot Operating System (ROS).

About the conference

QCon.ai is a practical AI and machine learning conference bringing together software teams working on all aspects of AI and machine learning.

Transcript

Poubel: Let's get started. About six years ago, there was this huge robotics competition going on. The stakes were really high, the prizes were in the millions of dollars, and robots had to do tasks like this: driving vehicles in a disaster scenario, handling the tools that a human would handle in that kind of scenario, and also traversing some tough terrain. The same robot had to do these tasks one after the other, in sequence. There were teams from all around the world competing and, as you can imagine, those tasks were really hard at the time (those pictures are from the finals in 2015), and they're still tough for robots to do today.

The competition didn't start right there with the physical robots. It actually had a first phase inside simulation. The robots had to do the same things in simulation: drive a little vehicle inside the simulator, handle tools inside the simulator just like they would later on in the physical competition, and traverse some tough terrain. The way the competition was structured, the teams that did best in this simulated competition would be granted a physical robot to compete in the physical competition later. So teams that couldn't afford their own physical robots, or didn't have a mechanical design of their own, could just use the robots they got from the competition.

You can imagine the stakes were really high; these robots cost millions of dollars, and it was a fierce competition in the simulation phase as well, which started in 2013. Teams were being very creative about how they solved things inside the simulation, and some teams had very interesting solutions to some of the problems. You can see that this is a very creative solution; it works, and it got the team qualified, but there is one very important little detail: you can't do that with the physical robot. The arms are not strong enough to withstand the robot's weight like that, and the hands are actually very delicate, so you can't be banging them on the floor like this.

You would never try to do this with the physical robot, but they did it in simulation and they qualified to compete later on with the physical robot. It's not like they didn't know. It's not like they tried this with the real robot and broke a million-dollar machine. They knew that there is a gap between the reality of the simulation and the reality of the physical world, and there always will be.

Today, I'll be talking to you a little bit about this process of going from simulation to real life, to the real robot interacting with the physical world, and some of the things we have to be aware of when we make this transition: when we train things in simulation and then put the same code we developed in simulation inside the real robot. We have to be aware of the compromises made during the simulation, and of the simplifying assumptions that were made while designing that simulation.

I'll be talking about this in the context of a simulator called Gazebo, which is where I'm running this presentation right now. It's a simulator that has been around for over 15 years; it's open source and free, and people have been using it for a variety of different use cases all around the world. The reason I'm focusing on Gazebo is that I am one of the core developers, and have been for the past five years. I'm a software engineer at Open Robotics, and that's why I'll be focusing on Gazebo today. This picture here is from my master's thesis, back when I still dealt with physical robots, not just robots that are virtual inside the screen. Later on, I'll also talk a little bit about my experience working on that, when I used simulation first and then went to the physical robot.

At Open Robotics, we work on open source software for robots; Gazebo is one of the projects. Another project that we have, which some people here may have heard of, is ROS, the Robot Operating System, and I'll mention it a little bit later as well. We are a team of around 30 people all around the world; I'm right here in California, in the headquarters, and that's where I work from. All of us are split between Gazebo, ROS, and some other projects, and everything we do is free and open source.

Why Use Simulation?

For people here who are not familiar with robotics simulation, you may be wondering: why would you even use simulation? Why not do your whole development directly on the physical robot, since that's the final goal and you want to control that physical robot? There are many different reasons; I selected a few that I think are important for a crowd like this one, interested in AI. The first important reason is that you get very fast iterations when dealing with a simulation that lives entirely inside your computer.

Imagine you're dealing with a drone that is flying one kilometer away inside a farm, and every time you change one line of code, you have to fly the drone; the drone falls, and you have to run and pick it up, fix it, and then get it flying again. That doesn't scale. Everybody who's a software engineer knows that you don't get things right the first time; you keep trying and keep tweaking your code. With simulation, you can iterate much more quickly than you could on a physical robot.

You can also spare the hardware: hardware can be very expensive, and mistakes can be very expensive too. If you have a one-million-dollar robot, you don't want to wear out its parts, and you don't want to risk it falling and breaking parts all the time. In simulation, robots are free; you just reset, and the robot is back in one piece. There is also the safety aspect: if you're developing on a physical robot and you're not sure exactly what the robot is going to do yet, you're in danger, depending on the size of the robot, what it is doing, and how it is moving in that environment. It's much safer to do the risky things in simulation first, and then go to the physical robot.

Related to all of this is scalability. In simulation, it's free: you can have 1,000 simulations running in parallel, while having 1,000 physical robots training and doing things in parallel costs much more money. Your whole team might share a single robot, and if all your developers are trying to use the same robot, they are not going to move as fast as if each of them were working in a separate simulation.

When Simulation is Being Used

When are people using simulation? I think the part most people here will be interested in is machine learning training. For training, you usually need thousands or millions of repetitions for your robot to learn how to perform a task, and you don't want to do that on real hardware, for all the reasons I mentioned before. This is a big one, and people are using Gazebo and other simulators for this goal. Besides that, there's also development: good old-fashioned robotics, sending commands to the robot to make it do what you want, follow a line, or pick up an object using some computer vision.

People were doing all this development in simulation for the reasons I gave before, but there's also prototyping. Sometimes you don't even have the physical robot yet, and you want to create the robot in simulation first, see how things work, and tweak the physical parameters of the robot even before you manufacture it. There's also testing: a lot of people are already running CI on their robot code, so every time you make a change, maybe nightly or at every pull request, you run that simulation to see if your robot's behavior is still what it should be.

What You Can Simulate

What can people simulate inside Gazebo? These are some examples that I took from the ignitionrobotics.org website, which is a website where you can get free models to use in robotic simulation. You can see that there are some ground vehicles here; all these examples are wheeled, but you can also have legged robots, either bipeds with two legs, or quadrupeds, or any other kind of legged robot. You can see that there are some smaller robots, self-driving cars with sensors, and some other form factors. There are also flying robots, with fixed wings or quadcopters, hexacopters, you name it. Some more humanoid-like robots: this one is from NASA, and this one is the PR2 robot. This one is on wheels, but you could have a robot like the Atlas that I showed before, which had legs. Besides these, there are also people simulating industrial robots and underwater robots. All sorts of robots are being simulated inside Gazebo.

It all starts with how you describe your model. For all those models I showed you before, I showed you the visual appearance, and you may think, "This is just a 3D mesh." There's so much more to it. For the simulation, you need all the physics information about the robot, like its dynamics: where is the center of mass, what's the friction between each part of the robot and the external world, how bouncy is it, where exactly are the joints connected, are they springy? All this information has to be embedded in that robot model.

All those models I showed you before are described in a format called the Simulation Description Format, SDF. This format doesn't just describe the robot; it also describes everything else in your scene. Everything in the world, from the visual appearance, to where the lights are positioned, to the characteristics of the lights and the colors; every single thing, whether there is wind, whether there is a magnetic field, everything inside your simulation world is described using this format. It's an XML format, so everything is described with XML tags: you have a tag for the specular color of your materials, and you have a tag for the friction of your materials.
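To make that concrete, here is a minimal, illustrative SDF fragment for a one-link model; the tag names are from the SDF specification, but the values are made up for this example:

```xml
<!-- Illustrative only: a one-link model carrying the physics
     information the simulator needs, not just the visual mesh. -->
<model name="box_bot">
  <link name="chassis">
    <inertial>
      <mass>1.0</mass>              <!-- kg -->
      <pose>0 0 0.05 0 0 0</pose>   <!-- center-of-mass offset -->
    </inertial>
    <collision name="collision">
      <geometry><box><size>0.2 0.2 0.1</size></box></geometry>
      <surface>
        <friction><ode><mu>0.6</mu></ode></friction>
        <bounce><restitution_coefficient>0.1</restitution_coefficient></bounce>
      </surface>
    </collision>
    <visual name="visual">
      <geometry><box><size>0.2 0.2 0.1</size></box></geometry>
      <material><specular>0.1 0.1 0.1 1</specular></material>
    </visual>
  </link>
</model>
```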

But there's only so far you can go with XML; sometimes you need more flexibility to express more complex behavior, some more complex logic. For that, you use C++ plugins. Gazebo provides a variety of different interfaces that you can use to change things in simulation. On the rendering side, you can write a C++ plugin that implements different visual characteristics, making things blink in ways that you wouldn't be able to do with just the XML. The same goes for the physics: you can implement different sensor noise models that you wouldn't be able to express with just the SDF description.

The main programming interface to Gazebo is C++ right now, but I'll talk a little bit later about how you can use other languages to interact with simulation in very meaningful ways as well.
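As a rough sketch of what such a plugin looks like, here is a minimal model plugin against the classic Gazebo C++ API; the upward force and the link lookup are illustrative, not something from the talk:

```cpp
#include <functional>
#include <gazebo/gazebo.hh>
#include <gazebo/physics/physics.hh>
#include <ignition/math/Vector3.hh>

namespace gazebo
{
  // Applies a small upward force to the model's canonical link
  // at every physics update step.
  class HoverPlugin : public ModelPlugin
  {
    public: void Load(physics::ModelPtr _model, sdf::ElementPtr /*_sdf*/) override
    {
      this->model = _model;
      // Register a callback that fires before each world update.
      this->conn = event::Events::ConnectWorldUpdateBegin(
          std::bind(&HoverPlugin::OnUpdate, this));
    }

    private: void OnUpdate()
    {
      // Illustrative value: 1 N straight up.
      this->model->GetLink()->AddForce(ignition::math::Vector3d(0, 0, 1.0));
    }

    private: physics::ModelPtr model;
    private: event::ConnectionPtr conn;
  };

  // Makes the plugin loadable from an SDF <plugin> tag.
  GZ_REGISTER_MODEL_PLUGIN(HoverPlugin)
}
```

The compiled shared library is then referenced from the model's SDF with a plugin tag, so the same description mechanism ties the two together.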

Physics

When people think about robot simulation, the first thing they think about is the physics: how is the robot colliding with other things in the world? How is gravity pulling the robot down? That's indeed the most important part of the simulation. In Gazebo, unlike some other simulators, we don't implement our own physics engine. Instead, we have an abstraction layer that other people can use to integrate other physics engines. Right now, if you download the latest version of Gazebo, which is Gazebo 10, you're going to get the four physics engines that we support at the moment. The default is the Open Dynamics Engine, ODE, but we also support DART, Bullet, and Simbody. These are all external projects that are also open source, but they are not part of the core Gazebo code.

With this abstraction layer, you describe your world only once: you write your SDF file once, you write your C++ plugins once, and at run time you choose which physics engine to run with. Depending on your use case, you might prefer one or the other, according to how many robots you have and the kinds of interactions you have between objects, whether you're doing manipulation or more robot locomotion. All these things will affect which physics engine you choose to use in Gazebo.
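In the world's SDF, that run-time choice shows up as the type attribute of the physics tag (classic Gazebo can also take the engine as a command-line option). A hedged example with illustrative step settings:

```xml
<!-- Selecting the engine: ode, bullet, simbody, or dart. -->
<physics name="default_physics" type="bullet">
  <max_step_size>0.001</max_step_size>                 <!-- 1 ms step -->
  <real_time_update_rate>1000</real_time_update_rate>  <!-- aim for real time -->
</physics>
```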

Let's look a little bit at my little assistant for today. This is NAO. Let's see some of the characteristics of the physics simulation that you should be aware of when you're planning to use simulation and then bring the code to the physical world. Some of the simplifying assumptions you can see are, for example: if I visualize the collisions of the model here (let me make it transparent), you see these orange boxes; they are what the physics engine is actually seeing. The physics engine doesn't care about these blue and white parts, so for collision purposes, it's only calculating these boxes. It's not that you couldn't do it with the more complex shape; it would just be very computationally expensive and not really worth it. It really depends on your final use case.

If you're really interested in the details of how the parts collide with each other, then you want to use a more complex mesh, but for most use cases, you're only interested in whether the robot really bumped into something, and for that, an approximation is much better: you gain so much in simulation performance. You have to be aware of this before you put the code on a physical robot, and you have to be aware of how much you can tune it. Depending on what you're using the robot for, you may want to choose these collisions a little bit differently.

Some things you can see here, for example: I didn't put collisions on the fingers; the fingers just go through here. If you're doing manipulation, you obviously need collisions for the fingers, but if you're just making the robot play soccer, for example, you don't care about the collisions of the fingers; just remove them and gain a little bit of performance in your simulation. You can also see here, for example, that the collision is actually hitting this box, but if you hide the collision shapes and only look at the complex part itself, it looks like the robot is floating a little bit. For most use cases, you really want the simplified shapes, but you have to keep that in mind before you go to the physical robot.
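In the SDF, this simplification is just the visual and collision elements of a link pointing at different geometries; a sketch, with a hypothetical mesh URI and made-up sizes:

```xml
<!-- Detailed mesh for the eye, cheap primitive for the physics engine. -->
<link name="forearm">
  <visual name="visual">
    <geometry>
      <mesh><uri>model://my_robot/meshes/forearm.dae</uri></mesh>
    </geometry>
  </visual>
  <collision name="collision">
    <geometry><box><size>0.25 0.08 0.08</size></box></geometry>
  </collision>
</link>
```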

Another simplification you usually make: let's take a look at the joints and at the center of mass of the robot. This is the center of mass for each of the parts of the robot, and you can see here the axes of the joints. Here on the neck, you can see that there is a joint up there that lets the neck go up and down, and then the neck can turn like this. I think the robot has a total of 25 joints, and this description is made to spec: this is what the perfect robot would be like, and that's what you put in simulation. In reality, your physically manufactured robot is going to deviate a lot from this. The joints are not going to be perfectly aligned on both sides of the robot; one arm is going to be a little heavier than the other; the center of mass may not be exactly in the center. Maybe the battery moved inside it and it's a little to the side. If you train your algorithms with a perfect robot inside the simulation, once you take that to the physical robot, if it's overfitting to the perfect model, it's not going to work on the real one.

One thing people usually do is randomize this a little while training their algorithms. For each iteration, you move the center of mass a little bit, you decrease and increase the masses a little bit, you change the joints, you change all the parameters of the robot. The idea is not that you're going to find the real robot, because that doesn't exist: each physical robot is manufactured differently, and from one to the other, they're going to be different. Even one robot will change over time; it loses a screw and suddenly the center of mass has shifted. The idea of randomization is not to find the real world; it's to be robust to a range of variation, so that once you put your code on a real robot, the real robot falls somewhere within that range.
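As a minimal sketch of the idea (not any Gazebo API, just plain C++): draw perturbed values that a launch script could substitute into the SDF's mass and friction tags before each training run. The ranges here are illustrative:

```cpp
// Domain-randomization sketch: print a perturbed <mass> and <mu>
// for the next simulation instance.
#include <iostream>
#include <random>

int main()
{
  std::mt19937 gen{std::random_device{}()};
  std::uniform_real_distribution<double> mass(0.95, 1.05);    // +/-5% of 1 kg
  std::uniform_real_distribution<double> friction(0.8, 1.2);  // around mu = 1

  std::cout << "<mass>" << mass(gen) << "</mass>\n"
            << "<mu>" << friction(gen) << "</mu>\n";
}
```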

These are some of the interesting things; there are a bunch of others. There's inertia, which is nice to look at too, but for that the robot shouldn't be transparent anymore. Here is a little clip from my master's thesis. I did it with a NAO robot, and I did most of the work inside simulation. Only when I had it working in simulation did I go and put the code on the real robot. A good rule of thumb is: if it works in simulation, it may work on the real robot; if it doesn't work in simulation, it most probably is not going to work on the real robot. So at least you can rule out all the approaches that wouldn't work.

By the time I got here, I had put enough tolerances in the code, and I had tested it a lot with the physical robot as well, because it's important to periodically test on the physical robot too, so I was confident that the algorithm was working. You can see that there are someone's hands there in case something goes wrong, and this is mainly because of something we just hadn't modeled in simulation. On the physical robot, I was putting so much strain on one of the feet all the time, because I was trying to balance, that the motors in the ankles were getting very hot, and the robot comes with a built-in safety mechanism where it just screams, "Motor hot," and turns off all of its joints. The poor thing had its forehead all scratched, so the hands are there for that kind of situation.

Sensors

Let's talk a little bit about sensors. We talked about physics: how your robot interacts with the world, how you describe its dynamics and kinematics. But how about how the robot consumes information from the world in order to make decisions? Gazebo supports over 20 different types of sensors: cameras, GPS, IMUs, you name it. If you can put it on a robot, we support it one way or another. It's important to know that, by default, the simulation is going to give you very perfect data. It's always good to modify the data a little bit, to add that randomization, to add noise, so that your data is not so perfect.

Let's go back to NAO. It has a few sensors right now; let's take a look at the cameras. I put two cameras in: one camera with noise and one camera with a perfect image. You can see the difference between them. This one is perfect; it doesn't have any noise; it's basically what you're seeing through the user interface of the simulator. And here, you can see that this one has noise; I put in a little bit too much. If you have a camera on a real robot with this much noise, maybe you should buy a new camera. I put some noise here, and you can also see there is some distortion, because real cameras have a little bit of a fisheye effect, or the opposite, so you always have to take that into account. I did all of this by passing parameters in XML; these are things Gazebo just provides for you. But if your lens has a different kind of distortion, or you want to implement a different kind of noise (this is very simple Gaussian noise), you can always write a C++ plugin for it.
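Those XML parameters look roughly like this in the camera's SDF; the numbers are illustrative, and exact tags can shift between Gazebo versions:

```xml
<sensor name="head_camera" type="camera">
  <update_rate>30</update_rate>
  <camera>
    <horizontal_fov>1.047</horizontal_fov>
    <image><width>640</width><height>480</height></image>
    <noise>
      <type>gaussian</type>   <!-- simple Gaussian pixel noise -->
      <mean>0.0</mean>
      <stddev>0.01</stddev>
    </noise>
    <distortion>              <!-- fisheye-like radial/tangential terms -->
      <k1>-0.25</k1><k2>0.12</k2><k3>0.0</k3>
      <p1>-0.00028</p1><p2>-0.00005</p2>
      <center>0.5 0.5</center>
    </distortion>
  </camera>
</sensor>
```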

Let's take a look at another sensor. The camera was a rendering sensor; we use the rendering engine to collect all that information. But there are also physical sensors, like an altimeter. I put this ball bouncing here, and it has an altimeter; we can take a look at the vertical position. I also made it quite noisy, so you can see that the data is not perfect. If I hadn't put noise there, it would just look like a perfect parabola, because that's what the simulator is doing: calculating everything perfectly for you. This is more like what you would get from a real sensor. I also set the update rate very low, so the graph looks better. The simulation is running at 1,000 Hz and, in theory, you could get data at 1,000 Hz, but then you have to ask: would your real sensor give you data at that rate, and would it have some delay? You can tweak all these little things in the simulation.
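A hedged sketch of such an altimeter in SDF; the exact spelling of the noise tags differs a little across SDF versions, and the numbers are illustrative:

```xml
<sensor name="ball_altimeter" type="altimeter">
  <update_rate>10</update_rate>  <!-- far below the 1,000 Hz physics rate -->
  <altimeter>
    <vertical_position>
      <noise type="gaussian">
        <mean>0.0</mean>
        <stddev>0.05</stddev>  <!-- meters -->
      </noise>
    </vertical_position>
  </altimeter>
</sensor>
```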

Interfaces

Another thing to think about is interfaces. When you're programming your physical robot, depending on the robot you're using, it may provide an SDK or some APIs that you can use to program it, maybe from the manufacturer, maybe something else. But then, how do you do the same in simulation? You want to write all your code once, train in simulation, and then just flip a switch so that the same code is acting on the real robot. You don't want to write two separate code bases and duplicate the logic in two places.

One common way people do this is using ROS, the Robot Operating System, which, as I mentioned earlier, is also an open source project that we maintain at Open Robotics. ROS provides a bunch of tools, a common communication layer, and libraries so you can debug your robots in a unified way, and it has integration with the simulation. You can control your robot in simulation, then switch to a physical robot, and you're controlling that physical robot with the same code. It's very convenient, and ROS offers a variety of language interfaces: you can use JavaScript, Java, Python; it's not limited to C++ like Gazebo is. The interface between ROS and Gazebo is C++, but once you're using ROS, you have access to all those other languages.
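As a sketch of what "write once, run on both" looks like: a minimal roscpp node that publishes velocity commands. The cmd_vel topic name is a common ROS convention, and whether a Gazebo plugin or a real motor driver is listening is invisible to this code:

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Twist.h>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "drive_forward");
  ros::NodeHandle nh;
  ros::Publisher pub = nh.advertise<geometry_msgs::Twist>("cmd_vel", 1);

  ros::Rate rate(10);  // 10 Hz command stream
  while (ros::ok())
  {
    geometry_msgs::Twist cmd;
    cmd.linear.x = 0.2;  // creep forward at 0.2 m/s
    pub.publish(cmd);
    rate.sleep();
  }
  return 0;
}
```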

Let's look at some examples of past projects we've done with Gazebo in which we had both a simulation and a physical-world component. This is a project called HAPTIX that happened a few years ago, and it was about controlling this prosthetic hand here. We developed the simulation, and it had the same interface for controlling the hand in simulation and the physical hand. In this case, we were using MATLAB: you could just send the commands in MATLAB, and the hand in simulation would perform the same way as the physical hand. We improved a lot of the surface-contact behavior that you need for successful grasping inside simulation.

That was one project. Another one was a competition as well; it was called SASC, and it was a tag competition between two swarms of drones. They could be fixed-wing or quadcopters, or a mix of the two. Each team had up to 50 drones. Imagine how it would have been to practice that in the physical world, with all those drones flying: for every single thing you want to try, you have to go collect all those drones. It's just not feasible.

The first phase of the competition was in simulation; teams were competing in the cloud. We had the simulation running in the cloud, and they would control their drones with the same controls they would eventually use in the physical world. Once they had practiced enough, they had the physical competition, with swarms of drones playing tag in the real world; that's what the picture on the right shows.

This one was the Space Robotics Challenge, which happened a couple of years ago. It was hosted by NASA using this robot here, which is called Valkyrie; it's a NASA robot, also known as Robonaut 5. The final goal of Valkyrie is to go to Mars and prepare the environment there before humans arrive. The competition was all set on Mars, and you can see that the simulation here was set up on Mars: a red planet, a red sky, and the robot had to perform tasks just like the ones it's expected to do in the future.

Twenty teams from all around the world competed in the cloud. In this case, we weren't only simulating the physics and the sensors; we were also simulating the kind of communication you would have with Mars. You would have a delay, and you would have very limited bandwidth; these were all part of the challenge and the competition. What is super cool is that the winner of the competition had only interacted with the robot through simulation up until then. Once he won, he was invited to a lab where they had reconstructed some of the tasks from the simulation. This is funny: they constructed in the real world something that we had created for simulation, instead of going the other way around. It took him only one day to get the code he used to win the competition in the virtual world to make the physical robot do the same thing.

This is another example of a robot that uses Gazebo and also has a physical counterpart. It was developed by a group in Spain called Acutronic Robotics, and they integrated Gazebo with OpenAI Gym to train this robot. I think it can be extended to other robots performing tasks in simulation: the robot is trained in simulation, and then you can take what was learned and put the model inside the physical robot to perform the same task.

Now that you know a lot about Gazebo, let me tell you that we are currently rewriting it. As I mentioned earlier, Gazebo is over 15 years old, and there is a lot of room for improvement. We want to make use of more modern capabilities, like running simulation distributed across machines in the cloud, and we want to use more modern rendering technology, like physically based rendering and ray tracing, to get more realistic images from the camera sensors.

We're in the process of taking Gazebo, which is currently one huge monolithic code base, and breaking it into smaller reusable libraries. It will have a lot of new features. The physics abstraction is going to be more flexible, so you can just write a physics plugin and use a different physics engine with Gazebo. The same goes for the rendering engine: we're making a plugin interface so you can write plugins to interface with any rendering engine you want. Even if you have access to a proprietary one, you can just write a plugin and easily interface with it. There are a bunch of other improvements coming, and this is what I've been spending most of my time on recently. That's it. I hope I got you a little bit excited about simulation, and thank you.

Questions & Answers

Participant 1: You mentioned putting in all that randomization to train the models. I don't have too much of a robotics background, so could you shed some light on what kind of models you mean, and what you mean by training those models?

Poubel: What I meant by those randomizations is in the description of your world, where you literally have, in your XML, mass equals 1 kilogram: in the next simulation, instead of a mass of 1 kilogram, you can put 1.01 kilograms. You can change the positions of the masses a little bit. Every time you load the simulation, you can have it be a little bit different from the one before, and when you're training your algorithms, running 1,000, 10,000, or 100,000 simulations, having the model not be the same in every single one is going to make your final solution, your final model, much more robust to these variations. Once you move to the physical robot, it's going to be that much more robust.

Participant 2: Thanks for the talk. As a follow-up to the previous question, does that mean that if you use no randomization, the simulation is completely deterministic?

Poubel: Mostly, yes. There are still sometimes some numerical errors, and there are some places where we use a random number generator, where you can set the seed and make it deterministic. But there's always a little bit of difference, especially since we sometimes use asynchronous mechanisms, so depending on the order the messages arrive in, you may get a slightly different result.

Moderator: I was wondering if there are tips or tricks for using Gazebo in a continuous integration environment? Is that done often?

Poubel: Yes, a lot of people are running CI and running Gazebo in the cloud. The first thing is to turn off the user interface; you're not going to need it. There is a headless mode: right now, I'm running two processes, one for the back end and one for the front end, and you don't need the front end when you're running tests. Gazebo comes with a test library that we use to test Gazebo itself and that some people use to test their own code. It's based on gtest, so you get pass/fail, and you can set expectations; you can say, "I expect my robot to hit the wall at this time," or, "I expect the robot not to have any disturbances during this whole run." These are some of the things you'll use if you run it in CI.
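A minimal sketch of what such a pass/fail expectation looks like with gtest; the distance query here is a hypothetical stand-in for whatever you would actually measure from the headless simulation run:

```cpp
#include <gtest/gtest.h>

// Hypothetical stand-in: a real test would query the simulation state
// (for example, through Gazebo's test fixtures) after a headless run.
static double FinalDistanceToWall() { return 0.5; }

TEST(RobotBehavior, StopsBeforeWall)
{
  EXPECT_GT(FinalDistanceToWall(), 0.1);  // stayed at least 10 cm clear
}

int main(int argc, char** argv)
{
  ::testing::InitGoogleTest(&argc, argv);
  return RUN_ALL_TESTS();
}
```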

Participant 3: What kinds of real-world effects does Gazebo support, like wind or heat, things like that? Does it handle them, or do we have to specify everything ourselves?

Poubel: I didn't get exactly what kind of real-world simulation you mean.

Participant 4: In a simulation of the real world, you have lots of physical effects, from heat to wind. Does Gazebo have a library for those, or do we have to specify pretty much the whole simulation environment ourselves?

Poubel: A lot comes with Gazebo. Gazebo ships with support for wind, for example, but it's a very simple wind model: a global wind, always blowing in the same direction. If you want a more complex wind model, you would have to write your own C++ plugin to make that happen, or maybe import wind data from different software.

We try to provide the basics and an example of each physical phenomenon. There is buoyancy if you're underwater; there is lift-drag for fixed wings. We have the most basic things that apply to most use cases, and if you need something specific, you can always tweak: either download the code, or start a plugin from scratch and tweak the parameters. It's all done through plugins; you don't need to compile Gazebo from source.


Recorded at:

Jun 28, 2019
