
Q&A on the Book Future Ethics

Key Takeaways

  • It’s long past time our industry took ethics seriously; the ideologies of disruption and “move fast and break things” have led to countless mistakes.
  • The tech industry is facing a new wave of regulation, and to be honest we probably deserve it. 
  • A code of ethics for technology is a tempting idea, but dozens have already failed – what difference will another make?
  • Design is applied ethics; all design makes a statement about the future, and every future has ethical implications.
  • Beware of the business case for ethics. Appeal to people’s emotional and moral nature too.

In the book Future Ethics, Cennydd Bowles explores the role ethics play in the tech industry and in the work of product managers, designers, and engineers. The book provides guidance on how to think and act ethically when designing products.

InfoQ readers can download an extract ...

InfoQ interviewed Bowles about how ethics is a part of design, mitigating biases, having a Hippocratic Oath for technologists, the environmental impact of the tech industry, how the fourth industrial revolution differs from previous ones, his view on Universal Basic Income, and how a business case for ethics can look.

InfoQ: Why did you write this book?

Cennydd Bowles: As has become painfully clear over the last couple of years, it’s time the tech industry took ethics seriously. Our field has served up too many examples of inconsiderate, harmful decisions, and the public and press narrative around technology has shifted: technology is now as likely to be labelled a danger as a saviour. I wanted to do my part to connect practitioners to the great work on tech ethics that’s happening in places we may not think to look – philosophy departments, science fiction, legal discourse – and inform and excite technologists to bring an ethical perspective to their work.

InfoQ: For whom is this book intended?

Bowles: I have a design background, so perhaps designers are my most natural audience. However, I think a lot of the ethical power in tech organisations belongs to product managers; I’m particularly keen to bring the discussion to them. Software engineers, particularly those working on emerging technologies such as machine learning, will find plenty of value too.

InfoQ: In the book, you stated that “design is applied ethics”. What do you mean by that?

Bowles: The relationship is obvious if you’re designing, say, weaponry. But all forms of design, even of fairly mundane objects, involve making a statement about the future. When we create, we put forward a case for how we should interact with tech in future, and by extension how we should interact with each other. At the same time, we’re discarding thousands of alternative futures. There has to be an ethical component to that, given the potential social impacts of our work.

InfoQ: What is “algorithmic bias”, and how can we mitigate it?

Bowles: It refers to the way that even what appears to be a neutral, objective algorithm can exhibit prejudice. Sadly, these biases often fall upon the people who have historically been the most vulnerable or disadvantaged. AI ethicist Joanna Bryson says there are three potential sources of bias: poor training data, intentional interference, and unintended reflection of human and social bias. The last category is probably the most difficult to spot and to tackle. Fortunately, there are emerging strategies to help analyse and address bias in our systems; some of these are statistical, some are social. Kate Crawford talks about “fairness forensics” such as examining data sets for unexpected gaps or skews. We could also borrow from social scientists and try “bias bracketing”, a systematic approach of listing potential sources of bias before and during our work, to act almost as a checklist to evaluate our conclusions against.

I’m also keen on borrowing techniques from speculative and critical design to look at the what-ifs of our work, to help understand how bias might creep in and how we might reduce it before it happens. By creating what I term a “provocatype”, we can stimulate discussion about the moral rights and wrongs inherent within the technologies we build. A provocatype isn’t an answer to the design brief per se, so much as an object or a story that prompts conversation about whether this is the sort of future we want; think Black Mirror, inside the design studio.
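
To make the idea of “fairness forensics” concrete, here is a minimal sketch in Python of what auditing a data set for representation gaps might look like. All function and parameter names are illustrative assumptions, not taken from the book or from Crawford’s work:

```python
from collections import Counter

def representation_gaps(records, group_key, population_shares, tolerance=0.05):
    """Flag groups whose share of the data set deviates from their share
    of a reference population by more than `tolerance`.

    Illustrative only: real fairness audits also examine label skew,
    outcome rates per group, and intersectional subgroups.
    """
    counts = Counter(r[group_key] for r in records)
    total = sum(counts.values())
    gaps = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        if abs(observed - expected) > tolerance:
            gaps[group] = {"observed": round(observed, 3), "expected": expected}
    return gaps

# Example: a training set that badly under-represents one group
records = [{"group": "A"}] * 90 + [{"group": "B"}] * 10
print(representation_gaps(records, "group", {"A": 0.5, "B": 0.5}))
# → flags A as over-represented (0.9) and B as under-represented (0.1)
```

A check like this is only a starting point: it catches the “poor training data” source of bias Bryson describes, but not intentional interference or the subtler reflection of social bias.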

InfoQ: What’s your view on having a Hippocratic Oath for technologists?

Bowles: I’m not convinced. Although codes of conduct exist in plenty of other fields, their strength mostly comes from their enforceability. Disciplines like medicine have professional organisations with the authority to disbar practitioners for malpractice; the tech industry doesn’t. Nevertheless, there have already been dozens of attempts to posit codes of ethics for the field. None of these have had teeth, and I see no reason why another effort would catch on when the others have failed. Codes are better at censuring bad behaviour than inspiring good; at worst they can create a sense that ethics is a checklist, a set of boxes to be ticked rather than a fundamental perspective on our work.

Regulation will be far more effective, and there’s a strong case that our industry deserves it. GDPR was the first significant step toward data regulation; given that both political wings now see the tech industry as essentially harmful to their interests, we can expect more of the same. Approaches will differ between nations, of course. Some analysts speculate we could see the internet effectively split into three major sub-internets: 1) a European internet with heavy user protection and tight curtailment of tracking and other forms of corporate power, 2) an American internet that favours innovators at the expense of consumer safety, and that has long abolished net neutrality, and 3) a Chinese internet, with heavy content controls and deep integration of daily activities within major state-influenced platforms. To me, this seems a highly plausible outcome.

InfoQ: How big is the environmental impact of the tech industry, and what can be done to reduce it?

Bowles: Surprisingly large. It’s hard to get exact figures, but one estimate claims that data centres already consume as much energy as the entire aviation industry. It’s imperative that tech workers pressure their employers to switch their data centres to renewable energy. To their credit, Google, Apple, and Facebook have taken genuine, visible steps here: Google’s data centres and offices are now powered entirely by renewables. Sadly, the rest of the industry lags behind. 70% of global internet traffic is routed through Loudoun County, Virginia, where data centres are almost entirely powered by dirty, nonrenewable sources.

Perhaps the most fundamental question technologists should ask is not, “How can we make our work more environmentally compatible?”, but “Should this exist at all?” The most efficient product, of course, is no product at all. We need fewer, better things. But this is, sadly, an idealistic hope. For now, at a minimum, we should strive to make durable software and hardware. Durable technology not only saves the expense of frequent product overhauls; it reduces the environmental impact of unnecessary device upgrades. Handled properly, software can be a material that gets better with age; wood that bears the contours of use, not disposable plastic.

Technologists can only ever be part of the solution, however. Autonomous vehicles will probably drive more efficiently and carry more passengers, but they will still be inefficient, low-occupancy vehicles that require massive road infrastructure. It’s easy to think technological progress is ecological progress, and to overlook the deeper change that may be essential. We need to partner with activists, urban planners, and governments to take a wider view of conservation.

InfoQ: We're now in a fourth industrial revolution led by developments like AI, robotics, autonomous vehicles, and the internet of things. How does this revolution differ from the previous ones?

Bowles: Sometimes sceptics point out that previous industrial revolutions, while dramatic, ended up with approximately the same level of employment once they were complete; people just moved into new industries. Things might be different this time because the nature of automation is changing. We’re no longer talking about physical automation but cognitive automation. Clearly, this is unprecedented. Even the most high-status jobs may end up decomposed into constituent tasks, some of which might be easily automated. We don’t know exactly what effect this will have on the world of work, but it’s fair to say there’s a chance it could be drastic.

InfoQ: What's your view on Universal Basic Income (UBI)?

Bowles: UBI has a surprising coalition of support. Leftists see it as a means to shrug off the yoke of waged labour; libertarians see in UBI the potential to simplify welfare and reduce government. Personally, I’m excited by the prospect. There’s a compelling ethical case for UBI. Today, important social roles like parenting and family caregiving go uncompensated. UBI would allow people to prioritise this important work over paid labour, and give citizens financial independence to pursue their own flourishing.

But we need more data. Trials in India and Kenya have shown big increases in citizen welfare and a surprisingly small work disincentive; it’ll be interesting to see the results of the trial in Finland, although the government shifted the goalposts halfway through, which might affect the results. But it’s important that we at least anticipate a potential future of full automation and joblessness; failure to prepare for this outcome could prove ruinous, if it came to pass.

InfoQ: How can a business case for ethics look?

Bowles: Theoretically, you can build a compelling case on any of the three axes that underpin any business case: revenue, costs, and risk. Ethical companies can increase revenue thanks to customer loyalty and brand reputation. They may be able to charge more for their product too, increasing margins. Ethical product development can stave off regulatory fines and civil lawsuits, and avoids the brand toxicity and expensive PR bills that can result from moral mistakes. There might also be employment savings; an ethical company is likely to retain its staff better and suffer less burn-out. Finally, proper moral conduct reduces the risk of overbearing regulation, embarrassing leaks from disgruntled employees, and customer rejection of unpopular decisions.

However, I advise people to be wary of relying solely on a business case for ethics. Doing so sends a message that ethics should be secondary to the profit imperative; but often it’s the profit imperative that is the problem. I think concerned technologists should also use rhetorical, logical, and even emotional strategies to convince their colleagues of the importance of doing the right thing. In the book, I outline the three major theories of modern ethics; these give us important ways to try to evaluate ethical outcomes. Having some rigorous frameworks for assessing our actions allows us to get past crude business cases and gut feels; it allows us to advance our ethical discourse and truly understand the potential impacts of our choices.

But while theory is useful, it still helps to use simple moral language. Describing decisions as generous, unkind, right, or cruel demystifies ethics and brings it from the intellectual realm into the human realm. Straight talk helps everyone appreciate their decisions have moral qualities.

About the Book Author

Cennydd Bowles is a London-based designer and writer with fifteen years of experience advising clients including Twitter, Ford, Cisco, and the BBC. His focus today is the ethics of emerging technology. He has lectured on the topic at Facebook, Stanford University, and Google, and is a sought-after speaker at technology and design events worldwide. His second book, Future Ethics, was published in 2018. 
