
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


When Machine Learning Can't Replace the Human


Summary

Pamela Gay explores how creative software solutions let scientists explore the solar system. She looks at the case of an asteroid: a 500-meter-across rock named Bennu. While unimpressive in size, this orbiting rubble pile has posed a challenge to its mission team: how can a safe spot to get a sample be found quickly on an object with half a million hazards? Answer: use humans as part of the algorithm.

Bio

Pamela Gay is a senior scientist at the Planetary Science Institute, a 2018 Podcasting Hall of Fame inductee, and recipient of the 2019 Isaac Asimov Science Award. She is an astronomer, technologist, and creative focused on using new media to communicate astronomy and space science. She is a co-host of the award-winning Astronomy Cast podcast.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Gay: As an astronomer, I have to admit, my day-to-day life is sitting at home writing software to help us better understand our universe. Then, as a communicator of science, it just makes me so excited to come out here and tell you about the kind of stuff I get to do. As an astronomer, I use data: images, spectra, photos, but taken with cameras that are sometimes orbiting our world and other planets, moons, asteroids. For a lot of my career, everything I wanted to study, everything I wanted to learn, I could do with software, a database, and sometimes some really clumsy linked lists, because that was C in the 90s.

Along the way though, I got curious about all these other areas of science that are different from mine. It was from the planetary-science community, where I've somehow migrated over the years, that I learned there are people - such as the folks who are today mapping out the classic planet Pluto - whose way of analyzing the geological features on this world is to sit around tables with a screen and a Wacom tablet. They draw by hand what they perceive to be the boundaries between different kinds of glaciers, different kinds of mountains, different features on this distant world. This is science by hand, because humans and software don't know what to make of Pluto, but the humans can at least guess.

There's a lot of science that works this way. One of the most disturbing things I learned is that there is a brilliant scientist, Stuart Robbins, who, as his PhD work at the University of Colorado, drew three million circles - again, with a Wacom tablet; go Wacom - three million circles on thousands and thousands of images of the surface of Mars. This ended up leading to a catalog of 600,000 craters. The reason he had to draw so many circles is that he had to periodically remap regions to make sure that his bias hadn't changed over time. He had to map things at small scales, at big scales, at in-between scales, bridge across all of these, and have overlap between his images. Three million circles got him a PhD.

As someone who never met a problem that an algorithm and a database couldn't solve, at least in science, at the time I wanted to think there had to be a better way. There was, but the answer wasn't any of the things that we wanted it to be. It wasn't computer vision, it wasn't neural nets, it isn't machine learning. It's people. It turns out that the way that we have done science in the past isn't scalable.

Once upon a time long long ago, by which I mean up until around 2000, but going all the way back to like zero, professors, scientists would work by themselves, work with one or two undergrads. Herschel worked with his sister. Science was done in tiny collaborations. It was fine. The Mariner 2 spacecraft sent back 64 bytes of data. That was kind of cool, but not a lot of information to crunch.

Citizen Science

Today, we're dealing with things that are measured in gigabytes, terabytes, and petabytes. Students aren't a scalable resource. They're also not available 24/7. They have to go take exams, and they ask to study. They sometimes have lives and want to sleep. It turns out we need something that will be there, year on year, hour on hour, helping us solve all these complicated visual problems. What I've started doing, what I was part of founding back in 2008, is a movement to reach out to the world and say, "Can you come help us? Can you help us do science, please?"

This isn't a new idea. This is the concept of citizen science. It's been called amateur astronomy at various points in the past, amateur bird-watching. We've gotten rid of the word "amateur" and added in "citizen scientist," because "amateur" implies you're not as good, while "citizen" just implies you're probably not paid. True story.

Citizen science is different from just crowdsourcing. You can crowdsource the name of a boat, and this is how you get Boaty McBoatface. When we're trying to do our science, we're actually trying to get people helping us discover things about our universe, about the world around us, that we have no other way of knowing. The contributions these people are giving us are fundamentally generating new knowledge. Citizen science isn't citizen science if people are going out and replicating - as some of you may have done with the last eclipse, back in 2017 - replicating earlier observations of the corona, or replicating the images that were taken to confirm relativity is real. Relativity is real, please stop asking. Citizen science is generating new knowledge.

Since 2008, I've worked with a number of projects. In 2011, I got the opportunity to start my own platform for citizen science. This is CosmoQuest. This is a platform that was designed very specifically with the idea that I can go up to anyone and say, "Come join me. Come map other worlds." I can be Tom Sawyer saying, "I have this amazing fence. No fence has been as glorious as this fence and we need to paint it white. I am so lucky to paint it white, and you guys, you don't get to paint it white." Then, you'll ask, "Can we help paint?" Soon enough, I have you painting the fence for me. With citizen science, it would be really easy to be Tom Sawyer. It would be really easy to be out there saying, "I've got all this amazing science data," and, "what an amazing opportunity I have for you to come map out these other worlds."

The truth is, citizen science is asking you to do the work that we can't make our undergraduates do. I need to be honest about that, and I need to be respectful of the hours and hours, and sometimes years over the course of a lifetime, that people are contributing to help us. In building CosmoQuest, we have worked very hard to build a place where all of our citizens, all of our community members, our volunteers, are people that we treat the same way we would treat students we like. I love the lag on that one as you thought about it. We do Twitch streams where we will sit and do citizen science with people. There's no task we will ask you to do that we won't do ourselves. We will go online and livestream creating our software. We want to build a community of peers, recognizing that we all start somewhere. We all grow, and we progress, and we learn, and we can do this together.

One of the big problems that we've been trying to solve with CosmoQuest is the Moon. It doesn't seem like the Moon should be a huge problem to solve. After all, people went and walked on it before I was born. They haven't made it back, and it turns out that, while we can send amazing spacecraft to visit it and take amazing images, measuring out these images is really hard.

When I say we have good images of the Moon, I mean we can look at the Moon and see the dark trails left on the surface as astronauts traipsed around. Those squirrely tracks were made by people before I was born. Not bitter. We can see this in these images. Our images have a resolution of tens of centimeters per pixel. This means that, if any of you were to lie down and assume the snow-angel position on the surface of the Moon, we would see you as a tiny splotch that we wouldn't recognize as human, but we could see you as a fleck on the Moon in this imagery.

One of the things we would really like to do is be able to map out this world and find where the safe place is to land a spacecraft. Where are the scientifically interesting places that we have got to go and explore?

Toward this end, back in 2012, we launched a project called Moon Mappers that asked people to do what Stuart had done for Mars, except for the Moon. We asked people to help us identify every feature that was about 6 meters across or bigger. We started chewing away at one small, roughly 200-meter, piece of the Moon at a time. We wanted to make sure that what we were doing was valid. This is always the problem you have: testing to make sure your inputs from strangers are as good as the inputs you need. We were counting on strangers.

We did two things. The first was we took eight professional crater counters - this is a career; there are people out there who have been counting craters. I laughed when I heard this too. There are people out there who have been counting craters, marking craters, drawing circles on worlds - on pictures of worlds - since before the Apollo missions. They're still doing it. We took eight of them, ranging in age from Stuart Robbins, who had just finished his PhD with Mars, to Clark Chapman, who had mapped the Moon for Apollo. We took all eight of these scientists and we gave them a section of the Moon, and we said, "Go forth. Use your favorite software; mark everything that is above this limiting size, all the way up, in the data." Then, we took this same region of the Moon and we partitioned it up into small, tractable sections, 450 by 450 pixels, because that fits nicely on a screen. We said, "Humanity of the world, please come draw circles on the Moon. We know it's boring, but it needs to be done. We love you." They came, not in millions, but in the tens of thousands, and that's enough.
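To make that partitioning step concrete, here is a minimal sketch, in Python, of how a large lunar mosaic might be cut into 450-by-450-pixel tiles. The overlap parameter is my assumption for illustration - a little overlap keeps craters that straddle a tile boundary visible in at least one tile - and this is not CosmoQuest's actual pipeline code.

```python
import numpy as np

def tile_image(image: np.ndarray, tile_size: int = 450, overlap: int = 45):
    """Cut a large 2-D image into fixed-size, slightly overlapping tiles.

    Returns ((row, col), tile) pairs so each tile's marks can later be
    mapped back to mosaic coordinates. Edge remainders smaller than a
    full tile are simply dropped in this sketch.
    """
    step = tile_size - overlap
    tiles = []
    for row in range(0, image.shape[0] - tile_size + 1, step):
        for col in range(0, image.shape[1] - tile_size + 1, step):
            tiles.append(((row, col), image[row:row + tile_size,
                                            col:col + tile_size]))
    return tiles

# A synthetic 2000x2000 stand-in for a mosaic yields a 4x4 grid of tiles.
mosaic = np.zeros((2000, 2000), dtype=np.uint8)
print(len(tile_image(mosaic)))  # 16
```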

We had 15 people view every image that we gave the public. Of our 8 professional scientists, we made 2 also use the CosmoQuest software to mark the surface, so they had to do everything twice. It turned out that the professionals using our software hit the exact mean of how many craters of different sizes were in the images, which tells us our software isn't screwed up - good feeling. Then, when we compared the professionals to the volunteers, we found that there's a lot more noise in the volunteer data. That's the data that's on the bottom, all this myriad of red circles.

There was, surprisingly, a lot of noise in the data from the professionals too. When we combined the data so that all those 15 people's marks became one aggregate mark, requiring each feature to have been seen by at least six people, suddenly we had a master catalog that we could compare to a similar master catalog from our professionals. When we compared these catalogs of thousands of marks, there was a 1:1.01 ratio of marks between the two of them. This was less than the difference between individual professionals. This actually tells us that we're better off working with a group of volunteers than we are working with a single professional. We know NASA and the National Science Foundation aren't going to pay us to have multiple professionals. We bribed these eight people; they were not paid, they got a publication.
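The combining step she describes - collapsing 15 people's circles into one aggregate mark and keeping only features seen by at least 6 annotators - can be sketched as a greedy cluster-and-threshold pass. The matching tolerance and the greedy clustering below are illustrative assumptions; the published Moon Mappers analysis used its own aggregation scheme.

```python
import math

def build_catalog(marks, min_votes=6, tol=0.5):
    """Aggregate crater marks from many annotators into one catalog.

    marks: iterable of (annotator_id, x, y, radius).
    Two marks are treated as the same crater when their centers are
    closer than `tol` times the larger of the two radii. Clusters seen
    by fewer than `min_votes` distinct annotators are rejected as
    probable false positives.
    """
    clusters = []  # each cluster is a list of (annotator_id, x, y, r)
    for ann, x, y, r in marks:
        for c in clusters:
            cx = sum(m[1] for m in c) / len(c)
            cy = sum(m[2] for m in c) / len(c)
            cr = sum(m[3] for m in c) / len(c)
            if math.hypot(x - cx, y - cy) < tol * max(r, cr):
                c.append((ann, x, y, r))
                break
        else:
            clusters.append([(ann, x, y, r)])
    return [
        (sum(m[1] for m in c) / len(c),  # mean x
         sum(m[2] for m in c) / len(c),  # mean y
         sum(m[3] for m in c) / len(c))  # mean radius
        for c in clusters
        if len({m[0] for m in c}) >= min_votes
    ]
```

With 15 views per image, a 6-vote threshold tolerates both the annotator who skips small craters and the one who circles hills, which is why the aggregate can beat any individual.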

We have something that works, but it's not entirely scalable. The universe is maybe even infinite in size, and that small pale blue dot in this image, that's all of humanity. How do you ask all the world to help us understand something so much greater than we are, one hand-drawn circle at a time?

Dealing with Continuous Change

I'm working on the Moon and other things: Moon, Mars, Mercury. Vesta - it doesn't begin with the letter M. For all the effort that we've put in across all the years, 7 years now of getting data, we've done the tiniest percent of the Moon's surface. The Moon isn't that big a problem in the grand scheme of data. It doesn't change that fast. Sure, it occasionally gets hit by falling rock - the sky is falling when you're in outer space; we call the falling things asteroids - but new craters are getting generated very slowly, so we don't really worry. This is a tractable problem; we should eventually be able to finish the Moon. But our own star, the Sun, changes instant to instant. As we try to map it out, as we try to understand space weather and the solar flares that periodically eat spacecraft and knock out power grids, as we work to understand it, we're getting back 1.5 terabytes of data per day. We have to figure out how we can handle this.

We have individual projects, like the Event Horizon Telescope, that took our first-ever image of the light coming from objects near a black hole. The Event Horizon Telescope took 64 gigabytes of data per second. That was one petabyte of data, processed by one fabulous woman. It was not a fast process.

The worst is yet to come. The Large Synoptic Survey Telescope is currently being built in the Atacama Desert in Chile. This telescope is going to image the entire visible southern sky, and the little bit of the northern sky that it can see, every 4 days. They are estimating an average of 15 terabytes of data per night, of which there will be on the order of a million unknown objects triggering alerts that we have to figure out how to deal with. These are the things in the sky that flicker, that flare, that move.

When Keith said he wants a love as constant as a star, he seemed to forget they explode. This telescope is going to be mapping out those explosions and finding the asteroids that could potentially do to us what happened to the dinosaurs. We need to be able to separate out those two different things and process them in the right way.

There is data literally raining down on our heads. This is a screen capture I took about an hour ago of what was, at that time, currently being looked at by the Deep Space Network. This is a set of receivers, basically giant satellite dishes, scattered at three different sites on the Earth so that they can capture our entire sky and communicate with all the different space probes, satellites, and explorers that we have. What I loved was, quite by accident, I caught the network at a moment when it was talking both with the New Horizons probe, which is out in the inner part of the outer solar system, and also with the Voyager missions, which have left the solar system. At any given moment, across the world, telescopes on the ground, in the sky, and scattered on other worlds are capturing petabytes of data that we need to figure out how to deal with.

Citizen Science vs Machine Learning

For now, I just want to map the Moon. While I have a lot of help from my closest few thousand friends on the internet, that's not really enough. The answer everyone keeps coming to me with is, "Machine learning." I'd like to tell you that we have great training data once you combine all of these marks. A lot of conversations here have focused on, "How do we deal with the bias in our data?" One of the cool things about doing citizen science is that we can assume the wrong answers are probably unique - or we trained our humans wrong - while the right answers will get marked by multiple people. This means that when I have that one person who's just, "I hate marking little craters, there's too many of them" - true story, that's our professionals - or when I have volunteers who look at the massive craters on the Moon that are ancient and worn down and go, "I don't know if that's real or if that's a hill or a valley," well, they may miss those. Putting everyone together, though, we snag the missing objects because they're caught by other people. We reject the false positives because they're not replicated by other people. We have great training data.

When we take half of the images we've already marked - a fairly contiguous region - hand them to our machine-learning algorithms, and say, "Go, machine learning," then compare the outcome to what the citizen scientists had done on those same images, it's pretty good. The problem is that different areas of the Moon have very different soils. When we ask our algorithms to do an area where the surface is a little bit different, it's like, "No, I will not. Nope, you're on your own." It nopes us rather hard and ignores many things and makes us bitter.
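That "nope" is the classic domain-shift failure that a random train/test split hides, because neighboring tiles share the same soil and lighting. Here is a hedged sketch of the evaluation she implies: hold out whole regions rather than random tiles, then score detections against the citizen-science consensus per region. The `detect` and `match` callables are placeholders for whatever detector and scoring scheme you actually use.

```python
def evaluate_by_region(tiles_by_region, train_region, detect, consensus, match):
    """Score a crater detector only on regions it never trained on.

    tiles_by_region: dict region -> list of (tile_id, tile image)
    detect:    trained model, tile -> list of detections
    consensus: dict tile_id -> citizen-science catalog for that tile
    match:     (detections, catalog) -> (true_pos, false_pos, false_neg)
    """
    scores = {}
    for region, tiles in tiles_by_region.items():
        if region == train_region:
            continue  # never score on the terrain the model has seen
        tp = fp = fn = 0
        for tile_id, tile in tiles:
            t, f, n = match(detect(tile), consensus[tile_id])
            tp, fp, fn = tp + t, fp + f, fn + n
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        scores[region] = (precision, recall)
    return scores
```

A detector that looks great on the held-in terrain but collapses on the held-out highlands is the quantitative version of being noped.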

In addition to the algorithm periodically just noping us, we're also dealing with the fact that the Moon is not a perfect laboratory environment. Sometimes our spacecraft is looking out at a slight angle; sometimes the things we're looking at aren't on a perfect surface, so we end up viewing them at a slight angle. Again, our software is just, "No. Sorry." If I'm trying to land a tiny spacecraft in a field like this, this is not going to keep my spacecraft safe.

The Moon likes to eat spacecraft. This was learned by Israel, in April. It was learned by India, a couple of months ago. China's doing well there, on the far side of the Moon, roving away. Landing on other objects is hard enough even with the best maps. We need to have the best maps. There are so many reasons that the software struggles. Our human beings struggle too, but we learn faster.

This is the exact same field with the Sun in two different places. Our Sun has this annoying habit of moving through the sky. This blights all of us with time zones, and it blights our machine learning with changing shadows. Because of the extreme texture on the Moon, you can end up with this as you go from the Sun not straight overhead, on the left, to the Sun straight overhead, on the right. That's the exact same place.

Then, the Moon just sometimes says, "I'm going to play with you." This amazing pattern is generated by magnetic fields. As grains of material on the Moon have, through various processes, been kicked up into the very thin atmosphere, magnetic fields move those particles around, causing particles of different compositions to preferentially land in different places, essentially sand-painting the surface of the Moon.

We still continue to ask people, "Can you come map the Moon?" We keep going. The thing is, just because machine learning can't solve the Moon today doesn't mean that it won't become possible tomorrow. This is where I have a dream. It's a small dream, but it's a big universe that this small dream might be able to affect. Being here at this conference this week, I have to admit, you guys, and girls, and everything else - that was just awkward phrasing; gender is not binary and I'm still learning - all of you humans out there, you have made me feel that this is possible. This is what I want to do. I want to create a platform where volunteers come as they are, with the skills they have, and go through an informed-consent process where they understand what we're trying to do - where it's painting Tom Sawyer's fence, where it's trying to do something new, amazing, and creative - and how all the data they're creating is going to be used.

Once they've gone through this - my goal is to create a multimedia informed consent that levels you up the same way Portal does, where you don't realize the beginning of the game is really a tutorial - I want informed consent to feel like you're experiencing a story instead of just text that you click through because TL;DR.

Then, I want people who want to look at pictures to go look at pictures, annotate them, and help us see what is out there by loaning us your eyes. I want people who want to contribute code to help us improve our products, improve our features, improve our UI. Then, ultimately, I want to take these catalogs that we're producing and feed them to our machine learning, which in my world is like a happy robot that produces data for me. I know that's not real, but the icon exists, so I can do that.

I want to use that data to train the machine learning, and then, have what comes out of the machine learning go back to our citizen scientists to review the data, to swipe left if it's right, swipe right if it's wrong. I want to ultimately create a platform that makes the generation of new knowledge easier than it is today so that we can level up and do more science.
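As a sketch only, the loop she describes might be wired together like this. Every function passed in is a placeholder for a platform component that does not exist yet: volunteer marking, consensus building (like the `build_catalog` sketch above), model training, and swipe-style review.

```python
def human_in_the_loop(images, volunteer_marks, build_consensus,
                      train, propose, swipe_review, rounds=3):
    """One possible shape for the citizen-science / ML feedback loop.

    Volunteers seed a consensus catalog; a model trains on it and
    proposes new detections; humans accept or reject each proposal
    with a swipe; accepted proposals grow the catalog for retraining.
    """
    catalog = build_consensus(volunteer_marks(images))
    model = None
    for _ in range(rounds):
        model = train(catalog)                  # learn from verified marks
        proposals = propose(model, images)      # machine-suggested features
        verdicts = swipe_review(proposals)      # one human yes/no per proposal
        accepted = [p for p, ok in zip(proposals, verdicts) if ok]
        catalog = catalog + accepted            # only human-approved marks
    return model, catalog
```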

For years I was at institutions where open-source was not an option, and it made me "words." I've now switched institutions and I'm so excited to say I am supported in becoming part of the open-source community for the first time. I desperately want to say, "Come, let's code this, let's start tomorrow." I want to do that. First, I've got this asteroid. Its name is Bennu and I have to finish mapping it first. We're almost there.

Mapping Bennu

For the past 3 years, my team has been working with Bennu. We've been a contractor for the OSIRIS-REx spacecraft. Working with a spacecraft may sound amazing, but it's also really terrifying. Spacecraft have this habit of occasionally blowing up on launch, ceasing to function for reasons you don't know, or crashing into the surface of the object you're going to. This dichotomy between amazing and terrifying is one where you have to, as an astronomer, decide, "What is the level of risk I'm willing to live with?"

One of the things that we recognize is, we live in a universe that is trying to kill us. While we are generally protected by our glorious atmosphere that burns up incoming asteroids, and blocks gamma rays, and X-rays, and so many other wonderful things, our spacecraft are leaving our atmosphere, and it's scary out there.

I've worked really hard to build a career that maximizes the awesome and minimizes the terrifying. This is a result of a graduate-student experience that included things like this. This is a photo I took back in the days of film cameras that I scanned with a really bad scanner, so please forgive the quality. I was standing on the dome railing of the 107-inch telescope, which is 100 feet away from that dome that you see in this picture, when my undergraduate, who was out there with me, looked at her arm and said, "Isn't that the lightning rod?" We both realized the hair on our arms had gone up. We went inside through a set of double doors. I set up my camera to look outside just as this lightning went off. We were on the mountain for 22 nights. We got four nights that we could use the telescope. That light on the hill off in the distance, that's a forest fire.

I'm a really unlucky person when it comes to weather; I'm batting zero for three solar eclipses. I'm not the person you want to be with if you want to look at the sky. Being unlucky with weather is not enough to say, as I feel I am, that I am the person who defines the tail end of the bell curve of luck, because this also happened while I was working on my dissertation. Astro-E was an X-ray satellite, a collaboration between the U.S. and Japan, that I planned to use. It didn't blow up; it died in a much less dramatic and somehow more painful way. It simply had an engine that didn't fire right, so it didn't reach orbit, and the entire process led to the satellite just not working.

After I finished my PhD, which was done entirely as an observational astronomer getting my own data, I was, "I'm done, I'm using archival data. There are survey telescopes for a reason." That's a solid way to build a career. Then, I had people come to me and they said, "We have a problem, and citizen science can solve this problem. This problem is this little object that moves relative to the stars; its name is Bennu. We're going to send a tiny spacecraft to it - it's about 2.5 meters on a side, 2.5 meters tall, just way up there but still a tiny spacecraft. What we want to do is grab a rock and bring it home. Or at least a bunch of gravel, and bring it home."

The problem that they were looking at is, when they looked at their mission timeline - and we're having these discussions back in 2015-16 - our spacecraft was slated to make it to Bennu in January of 2019, this year. It went into orbit in December, but we'd start getting the good stuff, the good images, in January. Then, we'd have a full mosaic of the world by mid-May. By mid-July, we needed to have found the places that are actually safe for our spacecraft. We needed to survey this entire asteroid, and that's an amazingly awesome project.

Their original plan when they pitched the mission was, "We'll just get all of our students to do this," but the timeline had changed. This was during the school year, so sure, you just assign all the geology and planetary-science students in the United States to map an asteroid. It can be done, no big deal. Then, it was realized, "Wait, May 15th, students are taking finals, or they're gone for the summer. We have to solve this in the summer." That's ok, we have citizen science.

In September of 2016, when our spacecraft was slated to launch, this was my team of undergraduates and one very young senior programmer and my project manager. We piled into a minivan, and because we didn't want to skip too many classes for the students, we drove in one day on a Friday, except for one person whose prof wouldn't let him out of class. We drove all day from a suburb of St. Louis to the Space Coast and we watched our rocket launch with our spacecraft. I have never been more scared.

It made it. A year later, it was flying back past our world, getting a gravity assist to help it get a little further out into the right orbit to meet Bennu. It looked back and it showed us us. As it moved away, it caught Earth and the Moon. That's Earth and the Moon - that's us. This is our place in space; this is us hanging in the emptiness. To see these images and realize: we have a spacecraft, and this spacecraft is going to deploy an arm, and it's going to descend towards the surface of a world we have never seen up close. It is going to deploy what I can only explain as an angry vacuum cleaner, and it's going to churn up the surface and, as it draws in material, it's going to load up the sample containers. It will eventually fling them back at our world to be caught and used to do science that will help us understand the origins of our solar system, or understand the composition of other objects, or help us just figure out, "Is it worth it to go mine these things?" I'm part of figuring out how to do this safely.

We didn't know what Bennu would look like. We've seen asteroids, we've seen a bunch of them. They're small and all of them look different. Some of them look like dog bones. We didn't know what we'd be getting into. Based on the size of our object and where it is in the solar system, we figured it probably looks like Itokawa. This is a small asteroid that was visited by the Hayabusa spacecraft a number of years ago. We figured that the surface would be pretty gnarly but it would be a mix of boulders, and dust, and nice big areas that would be easy to land in, and we'd get our pick, we'd get to find the scientifically most awesome place of a whole lot of safe places to go.

We wrote all of our software assuming that, when we partition these images up into their highest-resolution samples, we'd have these images with two or three craters and maybe a dozen boulders and tens of rocks. We tested our software and it worked. Our spacecraft drew closer to Bennu. As we got closer, we realized there's nothing smooth about that object, "Oh my goodness. Help."

These are the things that we're thinking while trying to sound optimistic and encouraging. As we got closer, it only got worse; there's nothing smooth. Nothing smooth at all. Our goal had been to find 10-meter or larger smooth areas, 30 feet by 30 feet - nope. Our idea of having citizen scientists look at these images - we figured it's not too bad to mark three craters, tens of rocks, a dozen boulders. People will do this, no big deal; it takes about 10 minutes. This is our reality: on the order of 300 rocks per image. Dozens and dozens of boulders. Spacecraft-eating boulders.

We launched our project May 22nd, and we begged the internet. People joined us and they created memes. They started talking about rocks in the worst dad puns you have ever read. They started streaming on Twitch as though mapping an asteroid were a video game. In August, we announced that we had found four places big enough - they're smaller than what we wanted, but we think they're big enough - that we can safely descend into each, but we want to make sure. We've been taking higher-resolution data since August. We dropped our spacecraft into a lower orbit; we actually got closer to get better images. We had our 8 best citizen scientists ask to be part of a team of about 20 people who went through and marked these 4 regions. This is Nightingale; it's a small crater encompassed by a larger crater. This basically means a large rock came in and smashed the area to be kind of clear, then another rock came and smashed it further, like a giant hammer and a smaller hammer, and in the process created an area that was fairly flat. Ok, that works.

This is Kingfisher, it gets its name from a restaurant that the mission team in Tucson likes to go to - overly honest methods. Kingfisher is a bit terrifying because it's surrounded by boulders. It's about 4 meters across, 13 feet, and it is a crater. It turns out, if you smash an area, you can get a nice safe place. This is Osprey, again, a crater - we're consistent. This is Sandpiper, which is actually a flat place inside of a really big crater that's bigger than this image, but there's some boulders in the crater floor so we have to be careful with those.

We did it, we mapped a world. We didn't use machine learning; we couldn't. First of all, how the heck do you train an algorithm that fast when you don't know what your world is even going to look like? Second of all, our world is too tiny. We would need all of the data for the entire world as our training set, at which point, why bother? We used humans. We used over 3,000 people who came in and marked anywhere from one image to thousands of images. Only eight people actually did more than 1,000; we loved those eight people. Altogether, they marked over five million objects by hand, one mouse click or Wacom-tablet stroke at a time. I think Wacom owes me something, because our citizen scientists started buying these when they found out that's what the pros were using. Wacom, if you're out there, I could use a sponsor.

This is what we do: we help people learn. We explain the science; we explain how what they do is going to be used. We make them our partners. We invite them to help us explore these other worlds so that we can discover new things. I invite all of you to join me. The dream I told you about - I'm not going to say it starts now, because I still have some data reduction I need to do to finish Bennu. In December, we're grabbing our soil sample. In January, I start building that dream. I invite all of you to help. We have the saddest GitHub repository ever right now. It was produced while live streaming on Twitch over about 6 hours. It turns out coding and talking and streaming the first few times you do it is really hard, but it's what we're going to do. We're going to do our science out loud, in the public eye, and invite you to join us. We invite you to help us create this machine-learning future that requires humans and machines to work together to support one another; hopefully, not to create silence but to instead create knowledge and understanding.

Join us. We're CosmoQuestX on pretty much every platform - the X marks science, and also CosmoQuest was usually already taken. I'm Dr. Pamela Gay. I'm Star Stryder, spelled with a Y due to mistakes made in my 20s. I invite you to help us explore the universe. It takes a village to raise a child, and it takes a world to explore our universe.


Recorded at:

Dec 10, 2019
