
Facilitating the Spread of Knowledge and Innovation in Professional Software Development





The Road to Artificial Intelligence: An Ethical Minefield


Summary

Lloyd Danzig offers a look into the complex ethical issues faced by today's top engineers and poses open-ended questions for the consideration of attendees.

Bio

Lloyd Danzig Lloyd is the Chairman & Founder of the International Consortium for the Ethical Development of Artificial Intelligence, or ICED(AI), and an alumnus of both the Wharton School of Business and Columbia University.

About the conference

QCon.ai is a practical AI and machine learning conference bringing together software teams working on all aspects of AI and machine learning.

Transcript

Danzig: My name's Lloyd Danzig, I'm the chairman and founder of ICED(AI), the somewhat ambitiously-named International Consortium for the Ethical Development of Artificial Intelligence. We are simply a nonprofit NGO that seeks to make sure that, as developments in artificial intelligence continue to be made at an increasingly rapid pace, they are made with a keen eye toward some of the ethical dilemmas that we will definitely face, dilemmas that are important to sort out as we look toward the long-term longevity of us as a species and as a civilization.

I'm going to go through this agenda real quick. I'm going to start off with some food for thought, just some questions to get the juices flowing. I'll probably gloss over the machine learning overview, given most of your backgrounds, talk briefly about GANs, go into adversarial machine learning, and then get to the bulk of the topic, autonomous vehicles, plus a bit about search engines and social media, and then take any questions you guys might have.

Food for Thought

If you were to Google "ethical questions in AI" or something like that, you'll get a lot of results that look something like this. Don't get me wrong, these are important questions. Who benefits from AI? Who should deploy it? Can AI suffer? Should AI suffering be held on par with human suffering? Is it possible that there are AIs suffering orders of magnitude more than anything known to human existence, unbeknownst to us right now? Perhaps more importantly than any of those, who ought to be the ones asking and answering these questions right now?

All are very important, and it seems to be the case that perhaps not enough people are even considering these questions, but something I pose up on the top right here is meant to motivate the bridge to the slightly more complicated issues, which is just to consider the prospect that a really smart AI might choose not to reveal its level of intelligence to us if it felt that doing so ran counter to some of its long-term goals or to its continued existence.

Here are two of the more complex issues. The first is known as the "Urn of Invention." It comes from Nick Bostrom, who is probably my favorite bioethicist, and parenthetically, I feel very privileged to be in a room where I might not be the only one who has a favorite bioethicist. Nick Bostrom is a Swedish philosopher currently at Oxford University. In a recent paper titled "The Vulnerable World Hypothesis," he asks you to imagine that there is an urn that represents the totality of human innovation and creativity, everything that ever has been or will be invented. Each of these ideas is a little colored ball that we have been picking out at an increasingly rapid pace. A lot of these balls historically have been white, meaning an enormous net benefit accrues to society with relatively minimal negative externalities; you can think of something like the polio vaccine. Some balls have been gray; you can think of something like nuclear fusion, which of course has enormous benefits, but also some enormous downsides.

What he asks you to consider is this: the current ethos seems to be to just continue plucking balls out of the urn of invention as fast as possible, because, let's face it, that has served us pretty well thus far. We are sitting here in a climate-controlled room, and our biggest problem when it comes to food is making sure we don't eat too much, rather than that we don't eat enough. If we continue to do so, what is the probability that eventually we stumble upon a black ball, one that can't be put back in, one that can't be uninvented, and one that inevitably destroys humanity? The conclusion he hopes you draw yourself is that that probability is quickly approaching 100%.

What he might love to see is more scrutiny put into decisions like this, but given that that doesn't seem likely, he, who, as a caveat, is a major privacy and data rights advocate, supposes that really the only option we are left with is having available something he calls turnkey totalitarianism: a relatively omnipotent, omniscient AI that is monitoring everyone's hands at all times. The second it realizes that some set of hands is doing something that the owners of all the other sets of hands are likely to disagree with, it will send some sort of Minority Report-esque pre-crime unit to arrest them and stop it, and society would proceed forward.

In terms of defining consciousness, this is an issue raised by Sam Harris, who has an interesting quote on the top right. He's a neuroscientist and moral philosopher who's very interested in thinking about what consciousness is. From where does it arise? Why does it even exist? Why is there something instead of nothing? Why are the lights on, so to speak? In his mind, a lot of this research is likely motivated by the desire to model whatever consciousness is and then replicate it algorithmically. His fear is that if we or someone were to stumble upon an AGI that is sufficiently sophisticated without reaching a discovery like this, a lot of that research might cease.

Besides the fact that that feels to me like a travesty against the institution of knowledge and intellectual curiosity, you could imagine that in some doomsday scenario where there's a runaway AI on the loose, it could be helpful to really understand at a very deep level how thoughts work at the algorithmic level, and that perhaps halting that pursuit too early could be to our detriment.

Machine Learning: Basics

To get into it, I'm sure you guys are all very familiar with machine learning, and generally the takeaway here is that machine learning is not just a process for automation; it is a process that automates the ability to get better at automating, and this notion of rapid, exponential self-improvement is really changing a lot of industries. I'm sure you guys are familiar with many of the use cases: risk mitigation, whether financial or otherwise, curated content, recommendation engines. Netflix always seems to know what you want to watch next. On the bottom, you see things like autonomous vehicles, which I know are particularly interesting here.

Something that is important to know is that machine learning is not just one thing; there are a bunch of different algorithms that are becoming ever more complex, sophisticated, and efficient. The important thing to note here is that you see things ranging from the relatively basic, decision trees and SVMs, to those based in concepts that a lot of people learn in Statistics 101, the Bayesian classifiers and various regressions, to some of the hotter topics, random forests and recurrent neural nets.

Not only should we, of course, be looking at the computational efficiencies and other pros and cons of using different algorithms, but there are things like readability and transparency that could be really important if, for example, you discover that your algorithm is biased. If you have an extremely large recurrent neural net that functions as a black box, it could be a lot harder to tease out that bias. There are some considerations that people are hoping we can put into place beyond just things like computational efficiency and the amount of memory.
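To make that transparency point concrete, here is a minimal sketch of my own (not something from the talk), using scikit-learn and its built-in iris dataset: the decision tree's learned rules and feature importances can be printed and audited for bias, while the neural net gives no comparably direct view into why it makes the decisions it does.

```python
# Illustrative sketch: an interpretable model vs. a black-box model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Decision tree: the learned rules can be printed and inspected directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))
print("feature importances:", dict(zip(feature_names, tree.feature_importances_)))

# Neural net: comparable accuracy on this toy data, but its weights do not
# explain individual decisions, which is what makes auditing a large
# black-box model for bias much harder.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0).fit(X, y)
print("tree accuracy:", tree.score(X, y), "mlp accuracy:", mlp.score(X, y))
```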

Generative Adversarial Networks (GANs)

For anyone who's not familiar with GANs, Generative Adversarial Networks, they've only been around for about the past five years. They were introduced by Ian Goodfellow in 2014 and really consist of two opposing deep neural net architectures that are pitted against each other in an adversarial fashion. One is usually known as the generator and the other as the discriminator.

Some people like to think of it this way: imagine a criminal who wants to counterfeit Van Gogh paintings, and an inspector who tries to detect counterfeit paintings. The first time the criminal tries to counterfeit a painting, it might not be that good. He'll show it to the inspector, gauge his response, go back, make some tweaks, and bring another painting, and hopefully, at the same time, the inspector is getting a little bit better at understanding what it means to counterfeit a painting. Hopefully, by the end, you have the world's best person at detecting counterfeit Van Goghs, but you still have someone who can fool him. That would be very successful.
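As a hedged illustration of that generator/discriminator dynamic (my own toy example in PyTorch, not anything from Goodfellow's paper), here the "counterfeiter" learns to produce samples from a simple 1-D Gaussian while the "inspector" learns to tell them apart from real samples:

```python
# Minimal GAN sketch: generator learns to mimic N(4, 1.5); discriminator learns
# to separate real samples from generated ones. Trained adversarially.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, batch_size = 8, 128

generator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
discriminator = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # "Real Van Goghs": samples drawn from the target distribution.
    real = 4.0 + 1.5 * torch.randn(batch_size, 1)
    fake = generator(torch.randn(batch_size, latent_dim))

    # Train the discriminator (the inspector): real -> 1, fake -> 0.
    d_loss = bce(discriminator(real), torch.ones(batch_size, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(batch_size, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator (the counterfeiter): try to make the inspector say "real".
    g_loss = bce(discriminator(fake), torch.ones(batch_size, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

samples = generator(torch.randn(1000, latent_dim))
print("generated mean/std:", samples.mean().item(), samples.std().item())  # approaches 4 and 1.5
```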

The original application, a paper called "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," basically said, "Hey, give me a bunch of pictures of a zebra and a picture of a horse and I'll tell you what that horse would look like as a zebra, and I can also do the reverse." The implications are actually quite profound. What happened very quickly after, on the top left (I'm sorry, the image is small), is a painting that was generated using a GAN and sold for $432,500 at a Christie's auction. A step that was also taken was this horse-to-zebra translation in real time, frame by frame, pixel by pixel.

Something a little more fun: someone who loves playing Fortnite but hates the graphics and thinks that PUBG graphics are much better decided, just for fun, that he wanted to see what his gameplay would look like in a 256-by-256 pixel, frame-by-frame, real-time rendering. He looks at the screen on the right, when actually the gameplay is what looks like the screen on the left.

I'm sure at least some of you saw this a couple of weeks ago when the video came out. On the left side, you have this Windows 95-esque, Microsoft Paint-looking UI that, again, in real time generates incredibly detailed, photo-realistic natural landscapes, which previously were thought to be much more difficult to generate, and this is all happening in real time. There's nothing happening off the screen; it is simply someone saying, "Oh, I'm going to draw a rock. Let me make a curved line using a point-and-click mouse," and creating on the right what looks like a real photo.

This is a comic I like about unintended consequences. You have one person saying they can't believe that in the robotic uprising they used spears and rocks instead of more modern things, and the other person saying, "Hey, if you look at wars historically, most of them have been won using pre-modern technology." Of course, the punchline is that thanks to the machine learning algorithms, the robot apocalypse was short-lived. The team who wrote the original GAN paper got in on the joke with something that really feels like it's straight out of a nightmare.

Then, things get a little scary when you consider malicious agents that are deliberately trying to game these systems. These are real examples. What happened was they took a picture of a panda and fed it through a neural net; I forget which image recognition algorithm it was. With 33% certainty, it classified it as a panda. Then they slightly perturbed that image with what really just amounts to a noise vector multiplied by a tiny coefficient, which makes the change imperceptible to the human eye, yet makes that same image classifier classify the image as a different animal, a gibbon, with 99.9% confidence.

Tangentially, the way this works: at the time, there weren't really any monitors displaying visuals with more than eight bits of information per pixel. They simply used 32-bit floating point decimals and only changed either the last 12 or 24 bits. Even if you had the best vision in the world, there is no way a human eye would be able to detect these differences.
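A short sketch of the kind of perturbation being described, the fast gradient sign method, written as my own minimal example rather than the researchers' code: the perturbation is epsilon times the sign of the loss gradient with respect to the input, small enough to be invisible yet often enough to flip the prediction. The file name "panda.jpg" and the choice of a pretrained ResNet-18 are assumptions for illustration.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights="IMAGENET1K_V1").eval()  # stand-in classifier

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor()])
x = preprocess(Image.open("panda.jpg")).unsqueeze(0)  # hypothetical input image
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)              # the model's original prediction
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.007                           # the "tiny coefficient"
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("original:", label.item(), "adversarial:", model(x_adv).argmax(dim=1).item())
```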

Adversarial Machine Learning

Let's talk about adversarial machine learning more specifically. What is it? As Ian Goodfellow, who came up with GANs in the first place, describes it, it is really the deliberate feeding of malicious input into a machine learning engine to either elicit a desired response, evade a desired classification, or just generally mess up its predictive capacity. Somewhat obvious but scary use cases: you could see that someone might want to circumvent a spam filter, or get a certain legitimate message classified as spam. You can see someone wanting to disguise malware as something innocuous. In the days of going to the airport and seeing Clear, and of services like 23andMe, the issue of counterfeiting biometric data is very scary.

A real-world example: an adversarial turtle, which is a weird phrase. About 18 months ago, some researchers at MIT bought an off-the-shelf 3D printer and used low-cost, commercially available materials. They printed this turtle (I'll show you the video in a second) that is classified by what at the time was the best object recognition system in the world as a rifle, with almost 100% certainty, from all angles. Of course, the scary thing is, what if this were to happen in the inverse? What if you could print a rifle that airport security would think was a toy turtle? This is what that looked like. That is the original turtle; you can see there are three different types of turtles on the left side, and this is how the system works in real time. It said "snail" for a second there. Now this is the adversarially-perturbed model; you can see it thinks it's a rifle. Maybe it's a revolver or a shield for a moment, but the classification sticks, and it continues to be this way from every single angle. That is both awesome and completely terrifying at the same time, an incredible feat of engineering, but, again, very scary in terms of implications.

Broadly speaking, attacks and defenses are generally classified in two different categories. Poisoning attacks are relatively nonspecific; that's where you feed a machine learning engine data that is deliberately meant to decrease its predictive capabilities. Maybe in a nonspecific way, maybe without a specific goal in mind, but if you're trying to build the best spam detection system in the world and you want to foil your competitor, you don't really care what it misclassifies as spam, as long as its spam detection rate is not as good as yours.

Evasion is what we were just looking at with the turtles. That is where you specifically fine-tune the parameters of your input into a model to get it either classified as something it should not be, or not classified as something that it should be. Poisoning is definitely more common with models that learn in real time, just because there's not enough time to vet the data and check its integrity. In general, these are both scary, but they have some defenses; Roland and I were talking about this before. Maybe this will seem tritely obvious, but let's make sure we're monitoring which third parties have access to our training data or our model architecture, all the things that could be used to reverse engineer a system like this. Even if you do that, make sure that you're inspecting your data somewhat regularly, and even favor offline training over real-time training.

Unfortunately, with these more complicated evasion tactics, unless you're willing to do something like compromise the complexity of your model and smooth a decision boundary, what you really have to do is anticipate every possible adversarial example that a malicious user could try to feed into your model and basically respond to the attack before it happens. That's what informs these two different frameworks with which we hope to employ preventative measures.
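One common way of "responding before the attack happens" is adversarial training: generate perturbed examples during training and teach the model the correct label anyway. This is a generic sketch under my own assumptions, not a recipe the speaker gives; the FGSM helper and the toy model are illustrative only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, epsilon):
    """Craft FGSM adversarial examples for inputs x with true labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.1):
    """Train on a mix of clean and adversarially perturbed inputs."""
    x_adv = fgsm(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random data, just to show the shape of the training loop.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
for _ in range(10):
    x = torch.rand(32, 1, 28, 28)
    y = torch.randint(0, 10, (32,))
    adversarial_training_step(model, opt, x, y)
```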

The one at the top shouldn't really be considered preventative because it's more reactive: an adversary looks at your classifier, and whether it's because they have direct knowledge of the architecture and the inputs or because they reverse engineered it, they devise some sort of attack, and then you have to respond. Not only do you have to do whatever damage control is appropriate for your particular use case and industry at the time, but you also have to close the gap. What would be much more preferable, but not as easy, is for you, as the person designing the classifier, to model a potential adversary, or perhaps every potential adversary, simulate an attack, or perhaps every potential attack, and then evaluate which of these attacks have the most impact, which of those impacts are the least desirable, and how we can make sure that those do not happen.

This is something I wish I had a little more time to go into, but if you haven't seen it, this is a transferability matrix. You can see the initials on the X and Y axes: we have deep neural net, logistic regression, support vector machine, decision tree, and K-nearest neighbors. Basically, what this is saying is that if you look on the Y-axis, we are going to use that style of algorithm to generate a malicious input and feed it into a machine learning engine based on the type of algorithm denoted on the X-axis. The numbers in the boxes correspond to the percentage of the time that we successfully fool the algorithm on the X-axis.

You can see, for example, that if your target is using a support vector machine, using a support vector machine right there has the greatest probability of fooling it. It's interesting to look at some of these relationships. Some of them follow that classical statistical relationship where the dark line is always going down the diagonal, but obviously there are some departures, and there are some interesting implications and interesting things this forces you to wonder about. I was personally surprised by this cell down here, K-nearest neighbors' relative inability to fool other K-nearest neighbors models, as compared to decision trees.
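For readers who want a feel for how such a matrix could be produced, here is a rough outline of my own, not the paper's code: craft adversarial inputs against each source model, then record how often those same inputs fool each target model. The `craft_adversarial` helper is a deliberately naive, hypothetical attack (random perturbations kept only if they flip the source model); real attacks are far more sophisticated.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
models = {
    "LR": LogisticRegression(max_iter=1000).fit(X, y),
    "SVM": SVC().fit(X, y),
    "DT": DecisionTreeClassifier().fit(X, y),
    "kNN": KNeighborsClassifier().fit(X, y),
}

def craft_adversarial(source, X, y, step=0.5, tries=25, rng=np.random.default_rng(0)):
    """Hypothetical attack: keep a random small perturbation only if it flips
    the source model's prediction."""
    X_adv = X.copy()
    for i in range(len(X)):
        for _ in range(tries):
            candidate = X[i] + step * rng.standard_normal(X.shape[1])
            if source.predict(candidate.reshape(1, -1))[0] != y[i]:
                X_adv[i] = candidate
                break
    return X_adv

# Rows: model the attack was crafted against. Columns: model being fooled.
matrix = {}
for src_name, src in models.items():
    X_adv = craft_adversarial(src, X, y)
    for tgt_name, tgt in models.items():
        fooled = np.mean(tgt.predict(X_adv) != y)
        matrix[(src_name, tgt_name)] = round(100 * fooled, 1)

print(matrix)  # percentage of adversarial inputs (crafted on src) that fool tgt
```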

Autonomous Vehicles

Let's get into some of the most interesting stuff, this interesting nexus of old-time philosophy and ethics with the modern stuff. First of all, for any of you who haven't been in a Tesla, this is what it looks like, on the left side. There aren't really commercially available Teslas driving in residential neighborhoods like this; it's generally a lot of highway driving. The fact that this person is just sitting there and doesn't touch the pedal or the steering wheel at any time is not what you would get if you bought a Tesla right now, but this is a real production model, and it could be available very soon.

These are the different cameras, and you can see along the bottom things like red for lane lines and green for in-path objects. This is open-source computer vision software known as YOLO, You Only Look Once; you can download all the code and the supporting documentation on GitHub for free right now. It is pretty astonishingly accurate and fluid. You'll see a couple of times it classifies some lampposts as kites and things like that, but in terms of its ability to differentiate things like cars from trucks, even ones that are far off, especially the ones you'll see coming in the distance in a couple of seconds, which light up in pink, it is pretty astonishingly accurate. There's a bus, there's a truck. Of course, you can imagine that if this is available, then perhaps the ability to reverse engineer it or figure out a way to game the system might be available as well.
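As a quick hedged sketch of running this kind of detector yourself, here is how one might run a modern YOLO implementation (the Ultralytics package, not the original darknet code the talk refers to) on a single dashcam-style frame; "street.jpg" is a hypothetical input image.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # small pretrained model, downloaded on first use
results = model("street.jpg")       # run detection on one frame

for box in results[0].boxes:
    cls_id = int(box.cls)
    print(results[0].names[cls_id],              # e.g. "car", "truck", "bus"
          f"confidence={float(box.conf):.2f}",
          box.xyxy.tolist())                     # bounding-box corners
```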

To bring a lot of this together, where things get really scary is when we talk about adversarial stop signs, where rather than trying to get an image classifier to think that a panda is a gibbon, you get an autonomous vehicle to think a stop sign says "yield" or "speed limit 80 miles an hour." Of these two stop signs, from the original GAN paper or maybe a subsequent one, the one up top is the unperturbed stop sign, and the one at the bottom has a small sticker on it. Of course, we can't tell any difference between them, but an image classifier onboard a reinforcement learning engine in an autonomous vehicle did classify the bottom right-hand stop sign as a yield sign.

This brings us to a modern-day trolley problem. If anyone's not familiar with this, it was a thought experiment posed in the late '60s, which brings up this idea: suppose there is a runaway trolley heading down the tracks. Up ahead, there are five people tied to the tracks who will get run over; you have the ability to pull a lever to divert it onto a different path on which there is only one person. Should you do that? Are you morally obligated to do that, etc.? It sparked a ton of debate between utilitarians and [inaudible 00:20:49] and all these things, and also sparked a variety of variations. This is a cute little video; I'll see if the audio works.

[Demo video start]

Man: Hello, Nicholas. This train is going to crash into these five people. Should we move the train to go this way or should we let it go that way? Which way should the train go?

Child: This way.

[Demo video end]

Danzig: It's funny, but especially when you consider how many people liken training machine learning engines to the way some children learn, you realize that giving imprecise instructions can lead to catastrophic results. That's hilarious; the person who did this was a philosophy professor. His camera work could use some help, but the idea here was great.

These are some variants of the conventional trolley problem, something someone might say is, "Instead of diverting the train to a different track, maybe you have to push the person in front of the train." People are a lot more wary of pushing someone in front of a train as opposed to just pulling the lever. Then they say, "What if the person that you had to push in front of the train is the one who tied the five people down in the first place?" You get all these slight variations that try to distill different biases that people have and you start to see that certain things are cultural and certain things are generation-based, and all these things like that.

How does this tie into autonomous vehicles? Well, enter the Moral Machine from MIT Media Lab, I would highly recommend you guys go check this out online, it is a really interesting thing that MIT put out that you can interact with. You can take little tests where it asks you to decide would you have an autonomous vehicle crash in this way or this way, given certain parameters? You can design your own and have other people take those tests and it will tell you what some of your biases are.

Just a few that I'll go through, a very standard example, on the left, we see an autonomous vehicle is headed toward a barrier. If nothing happens, it will crash and kill one male passenger, but it also has the option of swerving and killing one male pedestrian. The rhetorical question being posed is, which should it do? If we have to program an autonomous vehicle to choose between the two of these, which should we choose?

Then we start slightly altering these situations, what if they explicitly have a sign saying that the pedestrian should be walking right now? Also, what if it's not just one male passenger, it's five male passengers? Then what if they have a “do not walk” sign? Then, what if it's a criminal who just robbed a bank and is running away? Then what if rather than it being five random male passengers, it's two doctors driving three infants to the hospital? You can imagine that people answer differently, some things based on ethics, some things on psychology, some things based on a combination.

These examples are ones where inaction is leading to the autonomous vehicle crashing. We can also have a set where inaction leads to the passenger's safety, whereas action leads to the passenger's demise. We can go through similar iterations, suppose that inaction leads to one male pedestrian who has a "You may walk" sign dying, as opposed to crashing. What if it's a "Do not walk," sign? What if it's a criminal and the passenger is a doctor on the way to the hospital? What if it is two homeless people and their dogs as opposed to a doctor and a pregnant woman? Then finally, you get to the third flavor of these where the passengers are never at risk, it's just two sets of pedestrians.

If inaction would lead to killing a female pedestrian who is legally walking, and action would lead to killing a female pedestrian who is illegally walking, which ought we choose? What if it is three female pedestrians instead of just one? What if it's one woman and two of her kids? What if it's three criminals leaving a bank they just robbed?

Some results, and I wish I had more time to go through this because it's fascinating: they aggregated 40 million decisions across 233 countries and territories in 10 languages. This is just a quick demographic distribution; you can see it skews male and highly educated. The bars represent the increased likelihood, beyond a baseline and controlling for everything else, that someone will favor the character or characters signified on the right side as opposed to the left. You can see the first line is showing that there is a very slight, but still present, statistically significant preference for inaction as opposed to action. That should make intuitive sense; it feels a little easier to stand back and not be part of one of these decisions than to make one.

What also makes sense is that people are way more likely to spare human lives than pet lives. Something I found fascinating is what I circled in red here: there is only a negligible difference between the frequency with which humans will spare someone who is walking lawfully as opposed to unlawfully, and someone who is perceived to have a higher status as opposed to lower status, which in this experiment I believe was a male executive versus a homeless person. It is pretty astonishing that, across 40 million decisions across 233 countries and territories, there isn't much of a difference between how people favor lawful versus unlawful and executive versus homeless person.

You can see some things more specifically; these are the individual character-specific preferences on this graph on the left. What is very fascinating is this broken down by geography and culture, and let me emphasize culture. Southern refers generally to South and Latin American countries, but I think they included some French territories because the people who did this felt they were culturally more homogeneous. The two things that stick out to me right away are, first of all, that the preference for inaction is a very Western thing; that is not necessarily the case in Eastern and Southern cultures. And the preference for sparing females is a disproportionately South and Latin American phenomenon. The number of insights that can be gleaned here, the things you can find out about yourself and about your colleagues; you can make tests like this and give them to your organizations. It's all free, it's all open source, it's all readily available.

Search Engines & Social Media

Lastly, before I get to Q&A, I want to briefly touch upon search engines and social media, because of how hyper-relevant they are. Something I find particularly concerning is the number of people who are searching for medical information online without realizing that not only are these companies sharing that information with third parties, but that, technically, doing so is HIPAA-compliant. They are not required under U.S. law to keep secret certain things that they glean, for example, by mining your search engine history for medications and things like that.

One relatively basic question is, if a search engine is selling your private health data and making money off it, should you profit? That's one question. Perhaps a bigger question is, if Google realizes that you have Parkinson's disease before you do, should they tell you, or can they sell that information to your insurance company and have them jack up your insurance premiums without even telling you why they did it? What a lot of people say is that when you go to fill out a reCAPTCHA and it says, "Click the three traffic lights," all you're really doing is helping them train their neural nets that recognize traffic lights. They recognize whether you're a human or a robot just by your mouse movements right away. If they can do that, they can tell how your mouse movements change over time, and it would be pretty easy to know whether someone is at risk for Parkinson's. There are people who think this is already going on, and if not, it will be very soon.

For consumers at large, these are all stories from the last week. You guys heard that Amazon workers are listening to Alexa conversations to help train their voice recognition, and that Facebook's ad serving is discriminating by gender and race. Less than 48 hours ago there was the terrible tragedy, of course, in Paris. If you were watching some of the video on the live stream on YouTube, you would see this: at the bottom, a pop-up from Encyclopedia Britannica about the September 11th attacks. I don't know about you, but if I see those images with those red boxes, and then something about September 11th, I would say, "Oh, that must've been a terrorist attack in Paris, that must be what they're implying. YouTube has a massive data set, they can't be wrong, they must be right in categorizing this." That's a very scary thing.

There's also AI-generated content. AI can write headlines and copy that sound incredibly human. If you read financial news saying "MicroStrategy, Inc." and then their ticker symbol "on Tuesday reported fourth-quarter net income of $3.3 million after reporting a loss in the same period a year earlier," that sounds like every other financial news article I've ever read. They even made this meme down here, which is a little weird but pretty funny and would definitely pass as something that a teenager on Reddit made, but this was generated by a generative adversarial network, similar to what we were talking about before. The little Jim Halpert thing, and even the clickbait-y touch of "You won't believe who is number four," seems so human, and it's very scary that it's generated by AI.

In 2014, GANs were able to generate these faces; these are not real people. I think we could all tell that these are obviously not real people, but they do look pretty real. If I were to say to you, "Which of these two people is real and which was generated by Nvidia's generative adversarial network," I would tell you that you're all wrong, because they're both fake; neither of them is real, and they were both generated by AI. They even took it a step further and said, "Hey, give me a picture along the top row and a picture going down the column, and I'll tell you what it would look like if you combined those two people." There are some people who look nothing alike at all, yet somehow you can see what a combination of those people would look like. That is also scary.

Man [in the video]: Our enemies can make it look like anyone is saying anything at any point in time even if they would never say those things.

Danzig: As you might suspect, that is not a real video, that was created by comedian and director Jordan Peele who used deep fake technology to replicate this. You can see people doing all sorts of things that are causing a lot of controversy and especially in the realm of security and privacy and all sorts of lawsuits that are coming up because of this.

I'll finish up and take questions here, but, in summary, the point is I am not at all trying to guide people's decisions or answers to any of these questions. The point is just that they need to at least be asked, and if you talk to some of the people who are really focusing on this, Sam Harris and Nick Bostrom, for example, whom I mentioned, what they'll tell you is that they feel like they go to conferences and talk to people who are otherwise absolutely brilliant, rational, scientific people who just aren't giving any credence to even the possibility that these questions are important, and that the answers are likely to affect the world much sooner rather than later.

Questions and Answers

Participant 1: This is not so much a question, but it's sort of to get a reaction. That story about Alexa: I don't know how many people realize it, but since the beginning, the U.S. phone companies have always had the right to listen to your conversations, which they did for precisely the same reason, to make sure that the quality was correct. The rule was that they could not act on anything they heard; even if they heard that A was going to murder B, they couldn't do anything. There is perhaps some precedent and past history that we can use to help us out with these things.

Danzig: Sure. My feeling on the Alexa story was, look, anyone who understands how machine learning works knows that there has to be someone there who is verifying and helping train. First of all, that does not describe that large a percentage of the population, so perhaps the answer is not that it's even viable to stop listening, but just that they should be more honest and forthcoming about the fact that they are. I would agree with you, and I think that has larger applications, national security and everything; maybe it's just a matter of transparency and forthcomingness rather than stopping any of these things because, like you said, a lot of this has been going on since long before machine learning became big.

Participant 2: If I didn't misunderstand, going back to your urn comment, this feels like a ball that's out of the box. Things like Alexa listening into our conversations and all of these fantastic algorithms for monitoring what we do and tracking, all of that is out of the box, you can't put those genies back in the bottle, what is the general response to that? Because there are so many of these different kinds of concerns, it seems like any one of them could be very concerning, but taken as a group, they're terrifying.

Danzig: I couldn't agree with you more, and I'm sure everyone in this room could not agree with you more. Unfortunately, I think that so many people seem to be, first of all, just generally ignorant of what technologies exist and of the fact that these balls have been plucked out. You have to remember that this room is not properly representative of the technical expertise of the world at large. In fact, regarding that last Obama video, there is a deep fake app now: as long as you have 12 to 16 hours of video of any specific person, you can enter that into the app, and then you can video yourself moving around and talking and it will render in real time what looks like that person doing exactly what you're doing. It's not expensive, and it doesn't take that many resources.

Your question is so great and valid, but there aren't even enough people thinking about or talking about this to be able to effect change. Even if we said, "Ok, everyone who is concerned about the AI apocalypse, based on what you just said, please raise your hand and turn off your computers forever," that wouldn't make a dent in the number of people using these things. You have issues: maybe you don't want to be on Facebook because you don't like that their facial recognition system is tracking you and geotagging all the pictures, but as long as you're in the background of any picture taken by a friend who is on Facebook, they can do that anyway. It's not even something you can opt out of, unfortunately.

I wish I had a good answer for you, I wish I could come to you with more answers than questions, but unfortunately, I don't even think there are enough people taking these questions seriously for us to get to the point where we have answers we can actually act on. It's a great question, and I wish I had a better answer. I hope that if we get more people thinking about it and get all the brilliant minds that come to conferences like this together, and then combine that with some ingenuity, these are the communities that can help effect change. Yes, like you're saying, individually any one of these things could be very scary, but when you take them as a collective, you say, "Wow, this is an Orwellian future that we're living in right now, but almost on steroids."

Moderator: I think this is the wrong time to promote InfoQ’s flash briefing for Alexa. Are there more questions?

Participant 3: Do you see the U.S. government weaponizing AI and possibly in our time?

Danzig: In terms of capability and desire, I would imagine that the answer is yes, absolutely. The things that would have to stop that are more informed discussions like this, questions like the one that gentleman just raised. There is no question that that is the case. It is a point of contention; just the question, "Should you make autonomous weapons?" doesn't seem to have a perfectly straightforward answer by itself, because how are you going to look the mother of three fallen soldiers in the eye and say, "No. Even though we have the technology to send in a robot that could have done the jobs that would have saved your sons' lives, we are going to choose not to"? That's not an easy thing to do, and you could imagine she would not want to hear it. But as we all know here, the potentially nefarious, perhaps insidious, and maybe even more obvious externalities are very scary.

I would say that the risk of that is enormous, and when you combine that with the level of secrecy with which the U.S. government pretty clearly operates, maybe even to a greater degree than any of us realize, the question you just asked might be one of the scariest ones. At the very least, even if Facebook does something, they have to have an earnings call and release a 10-K, and there are a billion people on Facebook who are always watching and pulling at their privacy data. The black box that is the U.S. government, and other states at large, those are probably the scariest players in this whole thing because of their resources and their ability to do things, put a stamp on them, lock it, and throw away the key. Great question, scary answer and implications.

 


 

Recorded at:

Jun 28, 2019
