Joe, Florian and Sebastian on the Indy Autonomous Challenge


The Technical University of Munich has won the Indy Autonomous Challenge, a competition for self-racing vehicles. In this podcast Roland Meertens discusses the event itself, what makes it challenging, and the approach TU Munich took with Joe Speed of Apex.AI, Florian Sauerbeck, and Sebastian Huch. We discuss the importance of simulation, the limits of hardware, and how Docker helps bridge the gap between the two. We end the podcast by discussing the role of open source software when taking on such a challenge.

Key Takeaways

  • The Indy Autonomous Challenge lets race cars drive autonomously at high speeds
  • Simulation sped up the development of the algorithms in the absence of the actual vehicles
  • It's important to be aware of the limits of your hardware, and replaying data helps you find these
  • Docker helped bridge the gap between simulation and real hardware by enabling the same software to run in both environments
  • Open source software was key to winning this race. The final stack was made open source.

Transcript

Roland Meertens: Hello, and welcome to the InfoQ podcast. My name is Roland Meertens, editor for AI and machine learning at InfoQ and product manager at Annotell. Today I will host the podcast and I will be talking with Joe, Sebastian, and Florian about the Indy Autonomous Challenge and how they managed to win it. Joe, Sebastian, and Florian, welcome to the InfoQ podcast. Could you maybe introduce yourself to the listeners and tell them about what the Indy Autonomous Challenge is?

Joe Speed: Sure, happy to. So, Joe Speed. I'm a technical advisor for the Indy Autonomous Challenge, which is an amazing university challenge for autonomous racing.

Florian Sauerbeck: My name is Florian. I was part of the TUM Autonomous Motorsport team. We managed to win the challenge in the end, and my main responsibilities were the mapping and localization part.

Sebastian Huch: And I'm Sebastian. I'm a colleague of Florian, also part of the TUM Autonomous Motorsport team. My main responsibility in our team was the perception, mainly the object detection.

The Indy Autonomous Challenge [01:19]

Roland Meertens: All right, maybe we can get started with you, Joe. Maybe you can say a bit about what this Indy Autonomous Challenge is.

Joe Speed: It's an amazing program. So, a lot of this is anecdotal. Sebastian Thrun, who is very much the godfather of modern autonomous driving, he had won the DARPA Grand Challenge. He was out at Indy and had commented something like, "Some of the things happening in autonomy are not that exciting to me anymore, but if this, if the Indy 500 was autonomous, that would be interesting. I would love that." And that's kind of the genesis: just that conversation is how the state of Indiana and many others, the Indy Motor Speedway and the universities, sponsors, and technology companies, all came together to create the Indy Autonomous Challenge, which was set up as a one and a half million dollar prize challenge for autonomous racing. The first phase ran in simulation; Florian and Sebastian can talk more about that. And then it culminated with the event, the race on October 23rd, last month at the Indy Motor Speedway. But it turns out there's so much interest and momentum behind this that it has legs beyond that. I'll tell you more later about what's next.

Roland Meertens: So these are self-racing vehicles, but how fast should I think? How fast are they going?

Joe Speed: Well, there's a couple of things there. One is what the vehicle is capable of, and then what the autonomy software and the autonomy kit (all the sensors, actuators and drive-by-wire) can deal with. And a whole different topic is the actual track conditions, right? Racing on a cold track in October is really not the same as racing at the end of May when it's nice and hot and the rubber's very sticky.

Roland Meertens: So what speed should I roughly think about?

Joe Speed: Well, we have not yet done a maximum speed test on it. The vehicle's designed for something in the neighborhood of 180, 185 miles per hour. In normal trim, Indy Lights cars are good for 200 miles per hour, but we've done some things to the vehicles to make them more durable. We can't really be replacing engines after each race in this challenge, so we've got some very reliable power plants in these things.

Roland Meertens: So if I think about the Indy race, it's just driving in one direction around the oval, right? What actually makes this challenging? Maybe that's something which especially Florian can talk about.

Florian Sauerbeck: I think the main challenge for us was when we started this whole project. The first time I heard of it was basically at our Christmas party here at the end of 2019, when COVID wasn't a topic yet. Then things started to get serious around May to June, when we actually started thinking about how we would approach this project. We didn't know what the race would look like in the end. How many competitors would we have? How many cars would hit the track? What speeds did we have to think about? We didn't even know the exact sensor setup or the compute setup of the car. Everything was still in discussion and things were changing from week to week. So we still had to make plans, had to think about what our software would look like, without actually knowing what the challenges would be in the end.

Building the simulator [04:31]

Roland Meertens: How do you attack such a problem where, at the start, you don't really know what you will be facing?

Sebastian Huch: We first built a team. On average, we were 12 PhD students, but all of us had many, many students, undergrads, that were supporting us. First, within our team of the 12 PhD students, we split up the modules that we had to cover. Of course, first we started with the perception, and next would be the localization and, in parallel, the prediction, the planning and the control. Then we assigned people to each task, basically based on their PhD projects. The first step was that we developed the algorithms we needed, and then we tested them in simulation. We built our own hardware-in-the-loop simulator here at our lab, and we could test the entire software stack that we developed over the last one or two years here, which was a pretty crucial part.

Sebastian Huch: So this was our first step. And I think the main challenge here was, so we started in simulation, and back then we thought, okay, maybe the biggest challenge would be that we have to develop all those algorithms. But one month ago, just a few weeks or days before the actual race happened, this was not a challenge anymore. The biggest challenge at the end was more like, okay, as Florian already mentioned, we had no experience with the tires, with the car in general, so we had to find the limits at the end. So it was not only about the software, it was also about race strategies and the temperatures of the tires; as I mentioned before, no one had experience with that. So this was a very interesting ride over the last one or two years.

Roland Meertens: And in terms of the race, are there multiple cars on the race track?

Joe Speed: That was always the intent and the plan. You kind of have to back up a bit. We had to get the first car built, delivered and through break-in testing at the racetrack, and with COVID and other challenges that got delayed. The delay of that delays building the additional cars. So really, these guys went into this October 23rd race with fewer track days for testing than everyone expected.

Roland Meertens: But you mentioned that you had a hardware-in-the-loop simulator. Did you have to build this yourself or was this already available?

Florian Sauerbeck: So the whole team we formed for the Indy Autonomous Challenge is basically based on the team that did Roborace before. So we had an existing simulation. We had some existing algorithms that are also available on GitHub already. And this was our starting point. We didn't start from zero, we already had something, also in terms of simulation, but the whole perception topic was new at the Indy Autonomous Challenge. So we also had to develop sensor simulation, environment simulation, and we also had to change the vehicle dynamics models because the cars are entirely new, and also the interactions. Right now, there are 10 computers connected to our hardware-in-the-loop simulation, so we can run the full software stack 10 times and basically race against our own software.

The sensors and compute power [07:33]

Roland Meertens: And in terms of car, what kind of car do you actually get? What kind of sensors are on there? How many computers are there on the car itself?

Sebastian Huch: Well, on the car itself there's mainly one computer that the teams could use, and we could decide what is running on this computer. And in terms of sensors, we had everything that you need if you think of an autonomous car. We had three LiDAR sensors and six cameras in total; both of them were covering a 360-degree field of view. Then we had three radars, one to the front, two to the side. We obviously had GPS; we had two GPS systems, just for redundancy. In the end we also had tire temperature sensors. As I mentioned before, tire temperature was a huge factor here, but we didn't look into those tire temperatures until we really hit the high speeds, because at low speeds it's not very important. So those were only important one or two weeks before the actual race.

Joe Speed: So building a car for this application, you come into a lot of new things, right? The car is an adaptation of the Dallara Indy Lights IL-15; this one is called the AV-21. They make the monocoque in Parma, Italy. They fly them to Indianapolis, where they're assembled; Juncos Racing does all the assembly. Then they take them to AutonomouStuff, who installs the autonomous driving kits. They bring it back to do all the final suspension break-in. And the thing is, really none of the autonomy kit on here was designed for racing.

Joe Speed: The normal applications for this are robo-taxis and things going slightly slower. And then you get into issues like the tires don't start to heat up until 120 miles per hour. Until this point, none of the teams had been driving over a hundred miles per hour. So you have cold days, you don't have tire heaters, you have to go do some warmup laps, but you have to go over 120 or the tires don't warm up. So we're really getting into a lot of new ground here.

Roland Meertens: How did you start approaching the building of this vehicle? So you mentioned that you took something which is normally used for self-driving cars and you modified an existing race vehicle, I understand?

Joe Speed: Clemson had the lead on putting together car number one, and it would be the template for the rest of the fleet. They did it with a lot of iteration and feedback from the teams. So TUM and other teams gave feedback like: I like that sensor, I don't like this one. Feedback about where to place things, what field of view they would want for cameras, whether or not they want stereo cameras, and all of these things. But then a lot of it is not until you start getting time on the track that you start to figure things out.

You start to figure out that rear-facing radar interferes with the front-facing radar of the other cars, because they're all running on the same frequency. You start to learn things like GNSS antennas will require special isolation mounts. You don't learn these things until you're on the track. And honestly, if we had gotten car number one on the track doing laps back in January, and then iterated from there and delivered the entire fleet at the Indy 500 on May 29th, I think what you would have seen is all the cars on the track on October 23rd. But it's just difficult. It's a difficult challenge even in good times, and COVID did not help.

The approach to this challenge [12:35]

Roland Meertens: Just getting back maybe to the initial question. So what kind of approaches work here? Are you just doing waypoint following using GPS, or do you do some end-to-end deep neural network, or what kind of things are the go-to methods when you're building a racing vehicle?

Florian Sauerbeck: It's something in between, I would say. From the beginning, we wanted to develop a racing software stack that's capable of doing multi-vehicle racing and racing against others, defending position, overtaking. Just with waypoint following, that wouldn't be possible. So we designed software that also takes into account the other cars, that does some predictions on where the other cars will go in the future, and then plans its maneuvers. Of course there is the globally optimized race line that our software calculates, and it says this is the quickest way to go around the track if there is nothing else in front of you. But if there are other cars, the car will have to make some decisions. Sometimes it's better to stay behind another vehicle; sometimes it's better to overtake on the inside or on the outside. We thought about this all the time and we developed a software stack that is capable of making those decisions and bringing them onto the track.
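To make the kind of decision logic Florian describes more concrete, here is a minimal sketch of a cost-based maneuver selector. All names, cost terms, and numbers are illustrative assumptions, not the TUM implementation; in a real stack, the time loss and predicted gap would come from rolling each candidate maneuver forward against the predicted opponent trajectories.

```python
from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    time_loss: float  # seconds lost versus the globally optimized raceline
    min_gap: float    # smallest predicted distance to another car, in meters

def select_maneuver(candidates, safety_gap=2.0):
    """Pick the fastest maneuver whose predicted gap stays safe."""
    safe = [m for m in candidates if m.min_gap >= safety_gap]
    if not safe:
        # Nothing is safe enough: fall back to following the car ahead.
        return Maneuver("stay_behind", time_loss=float("inf"), min_gap=safety_gap)
    return min(safe, key=lambda m: m.time_loss)

# Example: hypothetical costs for the three options Florian mentions.
options = [
    Maneuver("stay_behind", time_loss=0.8, min_gap=15.0),
    Maneuver("overtake_inside", time_loss=0.1, min_gap=1.2),  # fast but too close
    Maneuver("overtake_outside", time_loss=0.3, min_gap=4.5),
]
print(select_maneuver(options).name)  # -> overtake_outside
```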

Roland Meertens: And you mentioned you were already racing against other universities in simulation?

Florian Sauerbeck: Exactly. Some simulation races were also part of the challenge. To qualify for the final event on October 23, we had to do simulation races, and all of the teams had to show that they were capable of doing those things. And also at Lucas Oil Raceway, the other track we were testing on, we showed this on the real track with the real cars: we and some of the other teams did actual overtaking and avoiding of other cars.

Bridging the simulation gap with Docker [12:35]

Roland Meertens: In terms of racing in simulation versus racing in the real world, what kind of problems did you find? I heard that you got the cars relatively late?

Sebastian Huch: Exactly. The simulation race, I guess, was around May this year, and then afterwards, I think we got the cars around August or September. So in between, we had a little bit of time to adjust all the algorithms to the real world. The perception was not part of the simulation race; our car basically got the exact positions of the other cars in the simulation. So when we first got the real car and went on a real track, we of course had to test our entire perception stack. We did that in our own simulation, our own hardware-in-the-loop simulator here at our lab.

But then again, there's a huge difference between the kind of perfect world of the simulation and the real car, where you have a lot of noise in the sensors. So there was a huge gap here between the simulation and the real car. And not only in the sensors; some interfaces were different as well. So there was quite limited time to move the software from the simulation to the real car. But I guess in the end we did a pretty good job, because we were using Docker, and with Docker it was quite easy: we could just take the Docker containers that we had already used in simulation and deploy them more or less directly on the real car.

Roland Meertens: So how did you end up with this Docker solution? Did you try multiple things or how did you get to that?

Sebastian Huch: We had different modules, as I mentioned before. So basically we had the entire perception stack, then we had the prediction, and we had planning and control. Those were our main modules, and basically every single module was running in its own Docker container. They were just communicating via ROS: in every Docker container there were multiple ROS nodes running, and the communication was just the normal publisher/subscriber scheme of ROS.

Roland Meertens: So you kind of took a microservices approach on a race car, on embedded hardware.

Sebastian Huch: Kind of, exactly.
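As a rough illustration of this pattern, here is a minimal sketch of one such module, assuming ROS 2 with rclpy. The topic names and the String message type are invented for the example; each module (perception, prediction, planning, control) would run a node like this inside its own Docker container, typically on the host network so that DDS discovery works across containers.

```python
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class PredictionNode(Node):
    """A stand-in 'prediction' module: subscribe to the upstream module,
    publish results for the downstream one."""

    def __init__(self):
        super().__init__('prediction')
        # Consume the perception module's output...
        self.sub = self.create_subscription(
            String, '/perception/objects', self.on_objects, 10)
        # ...and publish this module's result for the planner.
        self.pub = self.create_publisher(String, '/prediction/trajectories', 10)

    def on_objects(self, msg):
        out = String()
        out.data = f'predicted from: {msg.data}'
        self.pub.publish(out)

def main():
    rclpy.init()
    rclpy.spin(PredictionNode())

if __name__ == '__main__':
    main()
```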

The importance of open source software [14:33]

Roland Meertens: Interesting. And so how did it work with this base vehicle software working group? How did you decide in the end what the teams got access to and what they could manage, etc.?

Joe Speed: It's an open competition, so they can use any software they want: commercial, open source, homegrown, anything at all. And back at the beginning of the challenge, the teams were working using some ROS, and also a lot of commercial software. But what I saw... I think the IAC organizers might have been of a mind that what we'll do is build this physical car, give it to the teams, and wish them good luck; they're very smart, they will figure out the software. And from my experience, I was thinking, I don't think that's enough. It might be enough for TUM, but for the other universities that just was not going to work. So TUM team leader Alex, myself, Gina O'Connell, Neil Puthuff, and Josh Whitley of the Autoware Foundation put together this IAC base vehicle software work group.

So Alex represented all the universities and what they need, and the rest of us worked with the open source community and member companies getting contributions. We assembled this stack for the teams. Nobody was required to use it, but it was kind of a reference implementation. The idea was: let's develop a stack for the teams with which the car can do a yellow-flag lap, and then it's up to the teams to take that and make it go race.

So that was Open Robotics' ROS 2. We got them from Dashing up to Foxy, with the Eclipse Foundation middleware, so that's things like Cyclone DDS, with Zenoh for the V2X over the Cisco radios. Eclipse iceoryx is built into that in Galactic, so that's the zero-copy thing; we'll talk about that later. And then the Autoware Foundation with things like LiDAR localization, the drive-by-wire interface and all of that. I think where we arrived is that all the teams used ROS 2. Most used Foxy with Cyclone. Some teams used the Autoware autonomous driving packages with that. And then TUM actually did a bit of an upgrade for a competitive advantage.

Sebastian Huch: Well, I think what you want us to say, Joe, is that we used ROS 2 Galactic, right? At first we also used ROS 2 Foxy, but at some point we saw that we had some problems, especially while recording data with the rosbags that are implemented in ROS 2. And here we had a huge speed-up with ROS 2 Galactic compared to ROS 2 Foxy.

Joe Speed: The rosbag2 is known to be broken in Foxy. And so your friends at Apex.AI, Dejan and a lot of them, are from TUM, so they are close to the TUM team. So Apex got ADLINK and Bosch and Tier IV to all chip in engineering support and funding for this company Robotec.ai in Warsaw, who does autonomous driving for Volvo trucks. They renovated rosbag2 and made it six times faster in Galactic so that it could record everything in the IAC race car. Because you're talking about six cameras at up to 150 frames per second, plus 3D flash LiDAR. So I've got all these high-bandwidth sensors and I've got to write them to disk, because otherwise I don't have my test data. I don't have my training data.
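To give an idea of what such recording looks like in code, here is a minimal sketch using rosbag2's Python API (rosbag2_py) roughly as it ships with ROS 2 Galactic. The bag name and topic are illustrative, and a small String message stands in for the far larger PointCloud2 and camera messages discussed here.

```python
import rosbag2_py
from rclpy.serialization import serialize_message
from std_msgs.msg import String

writer = rosbag2_py.SequentialWriter()
writer.open(
    rosbag2_py.StorageOptions(uri='track_run', storage_id='sqlite3'),
    rosbag2_py.ConverterOptions(input_serialization_format='cdr',
                                output_serialization_format='cdr'),
)
# Register the topic before writing to it.
writer.create_topic(rosbag2_py.TopicMetadata(
    name='/lidar/points', type='std_msgs/msg/String',
    serialization_format='cdr'))

msg = String(data='stand-in for a large PointCloud2 message')
timestamp_ns = 0  # normally the sensor message's stamp, in nanoseconds
writer.write('/lidar/points', serialize_message(msg), timestamp_ns)
```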

Roland Meertens: So it's really about the limit in writing speed of your hardware.

Joe Speed: Well, it's two things. There's the software and how efficient it is at capturing and writing. Robotec did some clever things with double buffering: gather all the little messages together and then do one write to disk. Doing a small number of large writes is more efficient than doing a large number of small writes. And then ADLINK, we basically tried to give the fastest hardware we could. So every car has three terabytes of what was, at the time, the fastest NVMe SSDs we could get. I'm now looking at doing a refresh actually with the FireCuda PCIe Gen4, the world's fastest NVMe SSD. So I'm talking to Seagate about doing a refresh of that.
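A toy sketch of the batching idea Joe describes: gather many small messages in memory and flush them as one large sequential write. Real double buffering would alternate between two buffers so recording never pauses during a flush; this is only an illustration, not Robotec.ai's actual code.

```python
import threading

class BatchedWriter:
    """Accumulate small messages and flush them as one large write."""

    def __init__(self, path, flush_bytes=8 * 1024 * 1024):
        self._file = open(path, 'ab')
        self._buf = bytearray()
        self._flush_bytes = flush_bytes
        self._lock = threading.Lock()

    def write(self, message: bytes):
        with self._lock:
            self._buf += message
            if len(self._buf) >= self._flush_bytes:
                # One big sequential write instead of thousands of tiny ones.
                self._file.write(self._buf)
                self._buf = bytearray()

    def close(self):
        with self._lock:
            self._file.write(self._buf)  # flush whatever is left
            self._file.close()
```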

Roland Meertens: Okay. And that enabled you to get all the data you needed for off-board processing or off-board analysis? In what way does it help you?

Sebastian Huch: Exactly. During one run, which is usually around 20 minutes, I think we recorded on average maybe 20 gigs of just pure raw data. This was mainly the data from the LiDARs, so the point clouds, which were quite large. And we used those point clouds: we basically tested our algorithms offline with the point clouds that we recorded. But we also recorded all the GPS data, of course, so that we could optimize our trajectories.

Florian Sauerbeck: In addition to the closed-loop simulation we did before, with limited reality in the sensor data, it also allowed us to do some kind of open-loop data replay with our software. And for that, we had the actual data from the actual racecourse.
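A minimal sketch of such an open-loop replay, again assuming rosbag2_py: read the recorded point clouds back out of a bag and push them through the perception code offline. The topic name is illustrative and process_point_cloud is a hypothetical stand-in for the real perception entry point.

```python
import rosbag2_py
from rclpy.serialization import deserialize_message
from sensor_msgs.msg import PointCloud2

def process_point_cloud(cloud):
    # Hypothetical stand-in for running the detection pipeline on one cloud.
    print(cloud.width * cloud.height, 'points')

reader = rosbag2_py.SequentialReader()
reader.open(
    rosbag2_py.StorageOptions(uri='track_run', storage_id='sqlite3'),
    rosbag2_py.ConverterOptions(input_serialization_format='cdr',
                                output_serialization_format='cdr'),
)
while reader.has_next():
    topic, raw, t_ns = reader.read_next()
    if topic == '/lidar/points':
        process_point_cloud(deserialize_message(raw, PointCloud2))
```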

Roland Meertens: So that way you can match the simulation with the actual event. Is that correct?

Sebastian Huch: Yeah, exactly. This was one point: once we got the real data, we could adjust our simulation and adjust its sensor models, adjusting the noise models of our LiDAR sensors. But we could also use the real data directly. For example, we used a neural network for the object detection of the other race cars, and here we could just use the data to train this neural network.

Joe Speed: I have a theory, but... Florian, Sebastian, tell me if I'm right or crazy. Object detection is important, but object classification is not because if there is an object, it is a race car.

Sebastian Huch: Exactly. And then we also had this object evasion competition at the end of the high-speed runs. And there, there was only one kind of object; there were only those pylons on the track. So the classification was not important for us. Once there was an object, we knew, okay, we have to evade this object, we have to drive around this object, and we don't care if it is a car or a pylon, we just must not drive into this object.

Joe Speed: So these pylons were these giant inflatables that they put out on the racetrack. The car has to evade around these big inflatables that they put out blocking lanes.

Testing in the real world [20:45]

Roland Meertens: And then what kind of approach do you take? Do you just take a normal deep neural network based on vision or how do you train this? Where do you get your training data?

Sebastian Huch: We used a neural network which was based on the LiDAR data, on the point clouds, and we used it only for the object detection of the other cars, not for the pylons. For the pylons, we used a normal clustering algorithm. For the object detection of the other race cars with the neural network, we started with an initial training on the data that we generated in our simulation here. Those were those kind of perfect point clouds; we implemented some very simple noise models, but those point clouds were still pretty perfect. We used those for an initial training, and then, once we had some good results in simulation, we took the state of this neural network and used it on the real point clouds that we had recorded before with the rosbags. Once the neural network detected something there, we could take those detections, fine-tune them manually, and then use those newly generated labels and point clouds as a new base for our new dataset. This was kind of an iterative process, so we did many, many loops here.
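In outline, the iterative loop Sebastian describes might look like the sketch below. Every helper here is a hypothetical stand-in: the real work is training a LiDAR object detector and having humans correct its pseudo-labels.

```python
def train_detector(dataset):
    # Hypothetical: fit a point-cloud detector on (cloud, labels) pairs.
    return lambda cloud: ['detected boxes for ' + str(cloud)]

def manually_refine(detections):
    # Hypothetical: a human fine-tunes the detector's pseudo-labels.
    return detections

def iterative_training(sim_dataset, real_clouds, rounds=3):
    model = train_detector(sim_dataset)            # 1. initial training on simulation
    dataset = list(sim_dataset)
    for _ in range(rounds):
        pseudo = [model(c) for c in real_clouds]   # 2. detect on real recordings
        labels = manually_refine(pseudo)           # 3. correct labels by hand
        dataset += list(zip(real_clouds, labels))  # 4. grow the dataset
        model = train_detector(dataset)            # 5. retrain
    return model
```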

Joe Speed: There were some learnings here too. With LiDAR, they publish what the range is for different conditions and reflectivities. And I had pointed out to Clemson and the IAC over a year ago that LiDARs don't like flat black, because black absorbs. And they're like, "Oh, that's interesting." I say, "Yeah, you should use something bright, reflective." But the IAC folks, I think the state of Indiana, they love... The car looks like it's Darth Vader's race car.

It looks like the Batmobile with all this carbon and black, and it looks amazing. And I was like, "Yeah, but LiDAR doesn't like that." So what they finally realized when they started doing testing is that there are a couple of issues. One is the cars were flat black. The other is, if you see these race cars, they're basically shaped like a stealth fighter. Stealth fighters are designed to not reflect, so there are no flat surfaces facing the sensor. And it's for different reasons: stealth fighters do it for evasion, the race car does it for aerodynamics, but it gives you the same result. So that's why you saw, one day, all the cars change from black to highly reflective white.

Roland Meertens: So you were basically driving under the radar, literally. One other thing I saw was that you actually had a spin-out during testing, leading up to the race, right?

Florian Sauerbeck: Exactly. That was actually the Thursday before the race, just two days before the final event. This was when we were not thinking about software architecture anymore; it was going more into what you would imagine racing is: thinking about tire temperatures, track conditions and hitting the limit of what the cars are capable of. We were increasing the speed, and the same day, PoliMOVE from Milano had a spin-out in turn one. We were somehow quite confident that we might be able to go a bit quicker than they were. Then the same thing happened to us, right after turn one: the car spun out at around 220 kilometers per hour. We were at the control station and we could just hear the sound of the tires, and we were just waiting for the impact, but it didn't come. We checked the data later on, and the car stopped moving around 50 centimeters away from the wall.

So we were super lucky that the car didn't get any damage that day. And we also got some data in that was extremely important for us in the end, because we knew where the limits of the tires and of the car were, at least on this day with those conditions.

Sebastian Huch: Maybe to add what happened here: we started with a lower speed, I'm not exactly sure, maybe it was around 50 meters per second, and then we increased the speed every half a lap by one or two meters per second. And then, as I mentioned, at one point the car did a pretty nice 360 turn. But we knew that half a lap before, everything had worked fine, so we knew exactly the limit for this configuration and for those tire temperatures, which was pretty useful, because this was actually the last track day. It was the Thursday before the race, and on Friday there was only rain, so it was the last track day that we could use before the actual race. And then we had around 48 hours to figure out what speed and what strategy we wanted to use on the actual race day.
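As a toy illustration of that ramp, assuming the starting speed is known to be safe and with a stand-in stability check in place of actually driving half a lap:

```python
def find_speed_limit(start_mps=50.0, step_mps=1.5,
                     car_is_stable=lambda v: v < 61.0):  # ~220 km/h, stand-in
    """Raise the target speed step by step until the car loses stability."""
    last_stable = start_mps  # assumed safe
    v = start_mps
    while car_is_stable(v):  # in reality: drive half a lap and observe
        last_stable = v
        v += step_mps
    return last_stable       # the highest speed known to be safe
```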

Roland Meertens: This is basically kind of chaos engineering to make sure that the car crashes so that you can learn to prevent these crashes.

Sebastian Huch: Well, kind of, but it was not our intention to actually spin the car. I think this was also the maximum speed that we wanted to drive on this day, so we wouldn't have increased it anymore. But in the end, I guess it paid off to know the limit before the actual race, because then we could plan ahead which speeds we could safely drive on race day.

Joe Speed: The TUM team made adjustments, and then Juncos Racing, Lauren and team, made adjustments to the car: some changes to the suspension, changing the rear aero package for a little more downforce on the tail. They all did that Thursday night. And then Friday, it rained all day. And it raining all day is not just the loss of a track day; it also washed all the rubber off the track. So now you have a race track that's not as sticky as it was the day before. And that's what they went into the Saturday, into the Autonomous Challenge, with.

Roland Meertens: So you have to lower your speed based on data you have from completely different circumstances.

Florian Sauerbeck: Exactly, this was one of the countermeasures. We did try to increase the speed, but we also tried to get more temperature into the tires by doing some acceleration and braking on the straights. We also made some last adjustments to the controller, so we hoped that we could get higher limits in terms of lateral acceleration than we had the day before.

Roland Meertens: And you mentioned before that you took some software packages off the internet, off GitHub. How does this work? Can I build my own self-racing car?

Florian Sauerbeck: I think you can, if you have enough time and motivation and a car to deploy your software on. I think it's definitely possible. There is a huge community, and it was important for all of the teams, because something like this, within this short amount of time, wouldn't be possible if you couldn't build on something. So the whole open source thing was extremely important for all teams. And we also want to pay back to the community and open-source the software we developed for this challenge.

Joe Speed: So the TUM team, it's amazing. They announced their intention that they were going to, and then when they won, they did: they open-sourced everything in the car the day after. Immediately, they open-sourced everything in the car. And so now you have people like Johannes Betz, who is a co-founder of the TUM racing team. He's now at UPenn, where he runs F1TENTH. So he's working to take the TUM software stack, integrate it with the free open source SVL simulator, which is written in Unity like the TUM simulator, and then put that up on AWS, where any university in the world can work with it and learn. So we're really looking to take this IAC and open it up, so that not just the ones who qualified but all the universities in the world can get involved. It's an incredibly exciting thing.

And F1TENTH is this brilliant thing where you take a 1/10th-scale remote control race car, you put a Jetson and a camera on it, and then you teach it to go race. So we have this idea: if you think of the Indy Autonomous Challenge as the pinnacle of autonomous racing, how do we make it accessible so that any student, even high school students, anywhere in the world can learn and get into this? So think about these: if we can get the TUM stack, the whole IAC software stack, running in these little 1/10th-scale race cars. And from that, it goes into SAE Formula Student Driverless, things like evGrand Prix autonomous, which is a brilliant racing league: you take a $4,000 electric go-kart and you make it autonomous, right?

So very accessible compared to a million-dollar Indy Autonomous Challenge vehicle. But I think the whole thing's going to do amazing things for the community. There's already a long list of improvements that have happened to ROS and Cyclone and Zenoh and Autoware and all these open source packages that are taught and used at universities, improvements that have happened because of TUM and these teams working with the community and using the software.

Roland Meertens: Did you have to file a lot of issues on GitHub saying: your software breaks as soon as I go faster than 100 miles an hour?

Joe Speed: Well, yes, not that exact case, but yeah. So the open source community, we live on a steady diet of pull requests and GitHub issues, and that's what the community thrives on. And TUM was very helpful in feeding us.

Roland Meertens: After the Indy Autonomous Challenge, what's next? What's the next thing you guys will be working on, or what's the next big thing we have to look at?

Sebastian Huch: As Joe already mentioned, there's a follow-up event planned, and this event will be held in Las Vegas at the beginning of January as part of CES. There will be a multi-vehicle race with the teams that participated here in the Indy Autonomous Challenge, and the race will be on January 7th. We plan to participate in this challenge as well. We are already preparing everything so that we can use the software from the last few months again; we of course have to make some adjustments. We already implemented the new map of the Las Vegas Motor Speedway in our own simulation, so that we can again test everything here in simulation before we go on the actual raceway.

Getting started with robotics [30:42]

Roland Meertens: Let's say that I'm currently a software engineer working with banking systems or something. How can I get started? How can I transition into robotics? What's the best way?

Florian Sauerbeck: I think the most important thing is to get your hands on an actual robot. There are lots of projects on the internet and open source projects you can find. And I think the most important thing is just to get started, to look into something. Of course, there are some theoretical basics you need to know about robotics, but the most important thing is the application: how does all of this work together? How do the sensors work? How do you get the data, and how do you use it? And I think the best way to start is just to start your own little project.

Joe Speed: And the instructions for building these scale-model race cars are all open source, so you can just download the instructions; they're all online. You can go order all the bits you need from people like Mouser and Amazon and others. And it's really an eclectic league. You've got F1TENTH with Johannes Betz, Rahul, Venkat and Madhur of the University of Virginia. You've got Donkey Car and DIY Robocars with Chris Anderson and all his friends out at Circuit Launch in Berkeley. There's JetBot, JetRacer, DeepRacer. So there are so many options, at all different price points and all different performance levels, for anyone from high school on up. And that's one of the things that's lovely: the biggest F1TENTH community is in Stuttgart, and these are not all students. These are automotive engineers that are bored and want to do something fun. So it's great to see all the community and people from industry collaborating on this.

Roland Meertens: All right, that sounds great. Thank you very much for joining the InfoQ podcast, thank you for listening, and have a great day.

Florian Sauerbeck: Thank you for having us!

Sebastian Huch: Thanks for having us here.
