Q&A on the Book The Driver in the Driverless Car

Key Takeaways

  • Technology is moving faster and faster, as measured by both the pace of development and the speed of adoption
  • We as a society generally do not deeply consider the impacts of powerful new technologies before allowing them to proliferate
  • Ideally, ordinary people can and should use simple frameworks to think about and assess new technologies
  • Specifically, these frameworks should ask basic questions: Is a technology more likely to create inequality? Will it create dependency? What are its risks versus rewards?
  • If we do not consider the impacts, then the technologies just “happen” to us and the consequences can be unpleasant or negative

The book The Driver in the Driverless Car by Vivek Wadhwa and Alex Salkever explores how technology is changing faster and faster, and what impact that can have on the future of our society. It aims to help anyone - technical or non-technical - frame decisions and thinking about rapidly developing technologies. Salkever and Wadhwa cover a wide variety of such technologies, including robotics, AI, quantum computing, and driverless cars.

InfoQ interviewed Wadhwa and Salkever about what the future can look like from a technological point of view, how to approach technology in a positive way, what tasks robots can and cannot do and what they will be able to do in the near future, the benefits that self-driving cars bring and the challenges of developing them, what developments are causing energy to become cheaper and cleaner, and what becomes possible with quantum computing.

InfoQ: What made you decide to write this book?

Vivek Wadhwa: This book came about from a simple observation. I noticed that even my techie friends in Silicon Valley were feeling overwhelmed by the pace of technological change. I also believe that the risk of letting technology just develop without thinking through the societal implications is a massive problem - look at the rise of Facebook and all the problems that came about because they refused to consider the privacy implications, or the implications of their tools being hijacked for genocide and hate speech. At the risk of sounding cliché, we can choose between a "Mad Max" society or a "Star Trek" society with our choices (or lack of choices) about technology.

Alex Salkever: I also felt this way and it made me wonder, could we write a book that tried to create a framework for people to think about this issue and help us collectively do a better job of deciding the future of our technology, rather than it just being decided for us?

InfoQ: For whom is it intended?

Salkever: This book is intended for anyone who cares about the future of the world and understands that technology will have a massive impact on that future. We wrote it for everyone.

Wadhwa: We specifically wanted to make this book accessible to people who may not be immersed in tech like we are in order to make this debate and the societal decisions something they could easily relate to.

InfoQ: You stated in the book that from a technological point of view, the future can look bright or become frightening and alienating. Can you elaborate?

Wadhwa: Take CRISPR and gene editing. In the future, millions and millions of children will survive otherwise fatal diseases because with CRISPR, we can now very affordably edit their genes. But we are already seeing instances where scientists are considering using CRISPR for eugenics. And so we face the specter of the rich - who can pay for CRISPR therapy for their kids - getting a permanent leg up on the rest of society at the genetic level! That’s horrifying!

Salkever: Or AI. AI is in the very early stages of development. But we already are allowing it to take over big chunks of critical decision-making processes that affect our lives - whether we get custody of a child, whether we are offered a mortgage, whether we are considered for a job. We know that AI is fundamentally biased in many cases because it is only as good as the data we have fed it, and the data has been encoded with baked-in biases (against gender, race, age or other categories). So AI can reinforce prejudice at a very hidden and fundamental level. AI can also allow us to do amazing things, such as learning a language very quickly with personalized teaching, very affordably. This service can be offered at a global scale for pennies.
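To make the baked-in-bias point concrete, here is a minimal, hypothetical sketch (in Python, not from the book): a plain logistic regression is trained on synthetic lending decisions that were historically biased against one group. The protected attribute is never shown to the model, yet a correlated proxy feature lets it reproduce the bias. All variable names and numbers below are illustrative assumptions.

```python
# Toy illustration (synthetic data, illustrative numbers -- not from the book):
# a model trained on biased historical decisions reproduces that bias,
# even though the protected attribute itself is never given to it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)            # protected attribute (e.g. 0 or 1)
income = rng.normal(50, 10, n)           # same income distribution for both groups
zip_risk = rng.normal(group, 0.5, n)     # proxy feature correlated with group

# Historical approvals were biased: group 1 was approved far less often
# than group 0 at the same income level.
approved = (income - 10 * group + rng.normal(0, 5, n)) > 45

# The model only ever sees income and the proxy -- never `group` itself.
X = np.column_stack([income, zip_risk])
model = LogisticRegression(max_iter=1000).fit(X, approved)

pred = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate = {pred[group == g].mean():.2f}")
# The learned model approves group 1 much less often than group 0, because the
# bias in the historical labels leaks back in through the correlated proxy.
```

The same mechanism applies whether the proxy is a postcode, a school name, or a prior record: removing the protected column does not remove the bias already encoded in the labels.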

InfoQ: How can we approach technology in a positive way, focused on possibilities while not ignoring the threats and risks?

Salkever: We specifically laid out a simple "Three Question" framework to guide people as they consider technologies. Roughly the questions are: Will new tech have the potential to benefit us all equally? Does it promote autonomy or dependence? What are the risks and rewards?

Wadhwa: Really, this comes down to asking the most basic and obvious questions as we encounter new technologies. But the most important thing is that we ask. If we do not ask, if we just adopt and pray that it will all turn out well, then we accept tremendous risks.

InfoQ: What tasks can robots do, and what's too difficult for them?

Salkever: Robots are good for three types of tasks: dirty, dangerous and dull jobs. Dirty jobs might be, for example, cleaning out oil pipelines. Dangerous jobs are things like bomb disposal, or drones inspecting communications towers, infrastructure, or rooftops for faults and damage. Dull jobs are things like delivering food in a hospital or dispensing medicine. Curiously, autonomous vehicles are a great use case for robots - driving is both dangerous and dull. It’s also important to note that a job can be both complex and based on repetition. This is part of why we have checklists for tasks like prepping for surgery; the human mind maxes out at a certain number of boxes it can check reliably. Robots are great for these types of jobs, too.

Wadhwa: Where robots break down is judgement. They cannot make really good snap judgements that are moral in nature. They cannot come up with innovative solutions. In the classic scenario, the question is whether an autonomous car should drive off the road and kill its occupants, or save its occupants by causing an accident that involves many more casualties. But humans are magically creative when it comes to figuring out alternative ways to solve problems under intense pressure. Robots simply can’t do that and won’t be able to for many years.

InfoQ: What do you expect that robots will be able to do in the near future? What's needed to make it possible?

Salkever: This is really a question of software. Robots are just software in a hardware skin. So I’d ask, what do we expect AI to do that is more and more human - and how can we translate that into robots? We’ve already seen some amazing improvements - the Boston Dynamics systems that can do backflips, for example. So as we see more and more software that can do complex things - like AIs teaming up to beat humans at really advanced multiplayer games - then we will see robots gaining these capabilities.

InfoQ: What benefits do self-driving cars bring?

Wadhwa: There are many. From a structural standpoint, it would mean fewer cars on the highways, lower carbon emissions, and less congestion on city streets (we hope). Hundreds of thousands of people die in accidents each year. That should all go away. Driverless cars will be powerful for those who cannot drive, the elderly, and children who need to get around. They will also eliminate discrimination - there is no "Driving While Black" when a robot driver is at the wheel.

InfoQ: What are the challenges of developing self-driving cars?

Salkever: There are so many edge cases that it’s really hard to make a self-driving car that works in every single situation. This is not to say that driverless cars won’t be better than humans. On the contrary, they are already better than humans by many measures. But for us to adopt a technology that is so intimate and makes us so vulnerable, it has to be really close to perfect. The other big problem is the human drivers. Humans are so unpredictable and hard to work with that driverless systems really struggle to predict what humans will do. I can definitely relate to that.

InfoQ: What developments are causing energy to become cheaper and cleaner, and how does everyone everywhere benefit from this?

Wadhwa: Energy is following a Moore’s Law curve, solar in particular. That’s because it’s a semiconductor technology. So by that math, solar energy will become very close to free in the not-so-distant future. Free energy will benefit all mankind because it will both remove a major cost that people must pay, and create entirely new types of businesses that we could not have imagined without free energy. Free energy will also vastly improve the lives of the poor: they will be able to read at night, have clean water, and cook cheaply without burning coal or wood, which pollutes the environment and destroys the lungs of their friends and family.
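As a back-of-the-envelope illustration of "by that math", here is a small sketch of how an exponential, Moore’s-Law-style cost decline compounds. The decline rate and starting cost are assumptions chosen purely for illustration, not figures from the book.

```python
# Illustrative arithmetic only: assume solar cost falls ~20% per year
# (an assumed rate chosen for illustration, not a figure from the book).
cost_per_watt = 1.00      # hypothetical starting cost, in dollars per watt
annual_decline = 0.20     # assumed yearly cost reduction

for year in range(0, 21, 5):
    print(f"year {year:2d}: ${cost_per_watt * (1 - annual_decline) ** year:.3f}/W")
# year  0: $1.000/W ... year 20: $0.012/W -- after two decades at that pace the
# cost is roughly 1% of today's, which is the sense in which an exponential
# curve takes energy "very close to free".
```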

InfoQ: What becomes possible with quantum computing, and are the benefits that it can bring worth the risks?

Salkever: Quantum computing allows us to solve many sorts of problems that are too large and complex for our current computational paradigms. For example, calculating the chemical interactions and energy behaviors of atoms and molecules could make it relatively trivial to do advanced chemical design - which might be how, say, we could perfect algal biofuels. Biology, too, could benefit tremendously - drug discovery, in particular. The risks of quantum are that it will render many existing security systems obsolete and could be used for evil purposes. That said, it’s not a very accessible technology; you can’t just order one online, like you can with CRISPR. So the risks of mass use by non-state actors or non-academics are pretty small. The risks of quantum computing are definitely worth it.


About the Authors

Vivek Wadhwa is an American technology entrepreneur and academic. He is Distinguished Fellow & Adjunct Professor at Carnegie Mellon's School of Engineering at Silicon Valley and Distinguished Fellow at the Labor and Worklife Program at Harvard Law School.  

Alex Salkever is a writer, consultant and technology executive. Aside from "The Driver in the Driverless Car", he was also the co-author, with Wadhwa, of "Your Happiness Was Hacked: Why Tech is Winning The Battle to Control Your Brain - And How to Fight Back." In his writing and speaking he explores rapidly-advancing technologies such as robotics, genomics, renewable energy, quantum computing, artificial intelligence and driverless cars.
