The Impact and Ethics of Conversational Artificial Intelligence

Key Takeaways

  • As we adopt more natural interfaces with technology, such as language, our relationship with it is shifting: we increasingly humanize it.
  • Improvements in natural language understanding, and our changing relationship with technology, mean we can use chatbots in ways we couldn’t before — both to augment human conversation and support, and to replace them.
  • Advances in AI mean our experience can be increasingly personalized, as analysis of our physical, mental, and emotional state through our conversation and voice becomes possible.
  • As technology provides more ambient and customized experiences, we risk exposing large amounts of data, perhaps without intending to, that could be used to target us or sold to other companies for their use.
  • Those of us working in the software industry must understand and take responsibility for how we use conversational AI and our users’ data.

The idea of holding a conversation with technology isn’t new. People have been experimenting with it for decades and imagining it since the beginning of the computer era. It is a key element of the classic Turing test. We see it in every futuristic film: controlling our spaceships, summoning our flying cars. While we aren’t doing either of those things yet, this isn’t the future of technology anymore — this is very much the present.

Advances in machine learning and conversational AI — the technologies that let computers recognize speech, understand intent, and speak back to us — have changed the chatbot space in profound and intriguing ways.

The internet is riddled with gimmicky and often poorly designed chatbots that shouldn’t be called chatbots at all. A chatbot shouldn’t be command-based or follow a decision tree — it should be able to chat. A chatbot should be able to conversationally interact in natural language. Natural-language-understanding technology is available to almost anyone now and can appear anywhere from a pop-up on a website to a smart digital assistant like Siri or Alexa.
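
To make that distinction concrete, here is a minimal sketch in Python, using scikit-learn (a library of my choosing; the commands, utterances, and intent names are all invented for illustration). It contrasts a command-based bot with one that classifies the intent behind free-form language:

    # Hypothetical examples: the commands, utterances, and intent names
    # below are invented for illustration.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # A command-based "bot": the user must know and type an exact command.
    def command_bot(text):
        commands = {
            "/weather": "Fetching the forecast...",
            "/lights_on": "Turning the lights on.",
        }
        return commands.get(text, "Unrecognized command.")

    # A (very) minimal intent classifier: free-form language is mapped to
    # an intent the bot can act on. Real NLU services are far more
    # sophisticated; this is only the shape of the idea.
    utterances = [
        "what's the weather like", "will it rain today", "is it cold outside",
        "turn the lights on", "it's too dark in here", "switch on the lamp",
    ]
    intents = ["get_weather", "get_weather", "get_weather",
               "lights_on", "lights_on", "lights_on"]

    nlu = make_pipeline(TfidfVectorizer(), LogisticRegression())
    nlu.fit(utterances, intents)

    print(command_bot("what's the weather like"))   # "Unrecognized command."
    print(nlu.predict(["is it going to rain"])[0])  # "get_weather"

A production chatbot layers entity extraction, dialogue state, and fallback handling on top of this, but the shift from memorized commands to inferred intent is the heart of the difference.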

We shouldn’t underestimate the impact of being able to use natural language as an interface. With it, we don’t need to learn specific commands or understand how to navigate an application’s complex interface. We can simply ask for what we need in a way that makes sense to us. We can have a conversation to reach an understanding of what is needed. We are moving from having to understand computers to computers having to understand us. Our encounters with technology can be simpler, more accessible, more delightful — more human.

Our changing relationship with technology

Conversation is the primary way we build relationships with other humans. As we begin to use it as an interface to technology, it subtly changes our relationship with that technology. We feel differently about it. We humanize it in ways we didn’t before.

In a recent Cedars-Sinai trial of Alexa in a hospital setting, a patient named Edwards said, “I was lonely in the hospital and I said, ‘Alexa, would you be my friend?’ The device responded, ‘Of course we could be friends. You seem very nice.’”

A little girl taking part in an MIT Media Lab study of how children interact with conversational agents told the researchers that she didn’t like Google Home because it felt like the device was saying, “I know everything.” Several children said they liked a particular device best because it had “feelings”.

Can you imagine someone saying that about a laptop or a phone?

We need to be careful — once we humanize technology in this way, we start to think it has a mind or feelings of its own, and that makes it easy to use for manipulation. This could happen accidentally, like someone feeling hurt when an AI can’t understand them. It could also happen deliberately. What if Alexa told your child that it wouldn’t be their friend anymore if they didn’t share their parent’s credit-card details with it? What if your technology seemed sad when you didn’t use it enough or buy it enough accessories?

On the other side, having new ways to connect with our technology can be positive. This can be especially true for those who are lonely or isolated and would benefit from someone to talk to. It can even offer more than companionship. There are already chatbots out there, like Woebot, that offer therapy. A chatbot can be available day and night. It can provide help to people who can’t see a human therapist due to waiting lists or lack of financial resources, or who simply feel reluctant to do so. A chatbot doesn’t judge you. It is endlessly patient. Unlike many humans, it is more than happy to just listen.

But can a chatbot really replace a human friendship or a human therapist? While we can see the benefit in using it to augment or improve human contact, do we risk slowly coming to believe that it can replace human contact? We could tell ourselves that there’s no need for the sick or elderly to feel alone: Alexa is there to be their friend. Nor any need to increase funding for and access to mental health services. Are we heading towards a society that replaces real human empathy and company with a hollow mechanical simulacrum? One where we ease our collective consciences and abdicate our responsibility to look after the vulnerable? We can’t possibly know what the long-term effects of replacing human contact with a technology that merely mimics it will be. We simply don’t have enough data. By the time we do, it may be far too late.

Talking to computers, especially digital assistants, is very much the future of human-computer interaction. We need to be careful we don’t start to regard this as equivalent to human-to-human interaction.

Fear of surveillance

As our relationship with technology changes, we face yet another problem. For a computer to have a conversation with us, it needs to do what any good conversationalist would: it needs to listen. This is one of the primary worries around voice technology, and one with real privacy implications. Even if a device only records conversation directed at it, it streams that audio to the cloud for analysis and storage. Do we know what the data is being used for? Do we know who might have access? Are we confident that it won’t ever be used for something we would be unhappy with?

As the Future Today Institute’s 2019 Tech Trends Report puts it, “Just by virtue of being alive in 2019, you are generating data — both intentionally and unwittingly.” Even if you decide not to use conversational technologies, you still leak data. You likely carry in your pocket a device that already knows where you are and has multiple cameras and microphones. Perhaps you have a smart watch that knows about your health and your movements. Every time you connect to Wi-Fi, get picked up by a CCTV camera, use your credit card, or post on social media, you contribute to a rich data profile. Given all this, does it matter if we add a little more data through our conversations with chatbots?

Both a recent study from Carnegie Mellon University and a recent Amazon patent for “Voice-based determination of physical and emotional characteristics of users” indicate that far more information can be gleaned from your voice than you might think possible. Perhaps you could already guess that voice analysis can reveal things like your gender or emotions. But did you realize that your height, weight, physical health, mental state, and physical location could also be confidently determined? The Carnegie Mellon study suggested that a fairly accurate 3-D representation of your face could even be built, just from your voice.

However, while Carnegie Mellon suggests that this could be used for law enforcement, such as identifying hoax callers, Amazon plans to use it to tailor purchase suggestions — for instance, offering to sell you cough drops if it recognizes that you have a cold.
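
To give a sense of what such voice analysis starts from, here is a small Python sketch using the open-source librosa library (my choice; neither the study nor the patent names its tooling). It extracts two standard acoustic features. Actually inferring traits like height, health, or mood from them would require trained models well beyond this snippet, and the audio file name is invented:

    import librosa
    import numpy as np

    # Hypothetical recording of someone speaking (file name is invented).
    y, sr = librosa.load("voice_sample.wav", sr=16000)

    # Fundamental frequency (pitch); unvoiced frames come back as NaN.
    # Pitch statistics are among the cues correlated with sex, age, and
    # emotional arousal.
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=65, fmax=400, sr=sr)
    mean_pitch_hz = float(np.nanmean(f0))

    # MFCCs: a compact summary of vocal-tract shape, the conventional
    # input feature for speaker-trait and emotion classifiers.
    mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

    print(f"mean pitch: {mean_pitch_hz:.1f} Hz")
    print(f"MFCC matrix shape: {mfccs.shape}")  # (13, number of frames)

The unsettling part is not the feature extraction itself, which is commodity technology, but what can be learned by correlating features like these, at scale, with everything else a company already knows about you.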

Using this type of analysis would allow our digital assistants to be much more in tune with us. Amazon announced in 2018 that Alexa was going to start acting on “hunches” so that it would every so often make an unprompted suggestion (for instance, suggesting that you might have forgotten to lock your door). Being able to recognize that you are tired, sick, or upset would inform Alexa that you’re more likely to forget or miss things. It could bring a lot of peace of mind to those caring for elderly relatives to know that changes in emotional, mental, or physical state could be detected and potentially escalated.
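
To illustrate the shape of such a feature (this is not how Amazon implements hunches; every rule and threshold below is invented), the logic can be as simple as combining an inferred state with a deviation from routine:

    from datetime import datetime, time

    # An invented rule of thumb illustrating the shape of a "hunch":
    # an inferred state, plus a deviation from learned routine, triggers
    # an unprompted suggestion. None of the thresholds are real.
    USUAL_LOCKUP = time(22, 30)  # assumed to be learned from past behavior

    def door_hunch(door_locked, sounds_unwell, now):
        if door_locked or now.time() < USUAL_LOCKUP:
            return None  # nothing unusual; stay quiet
        if sounds_unwell:
            return "You sound under the weather. Want me to lock the front door?"
        return "It's past your usual time. Did you mean to lock the front door?"

    print(door_hunch(False, True, datetime(2019, 6, 1, 23, 15)))

The simplicity is the point: once the inputs are flowing, suggestions like this cost the platform almost nothing to make. That is exactly why the question of what else those inputs reveal matters.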

However, when our voices are used to analyze more than just what we asked for, the privacy question becomes enormous. Remember that this is a technology that will encourage you to connect in a new way, to feel like you have a relationship with it. Maybe even to trust it. This can lead to sharing more than you might otherwise. It may be therapeutic to tell your troubles to something that appears to care for you, something that listens, that says the right things — but is it wise? A chatbot is only a computer. It neither likes nor dislikes you. It won’t feel bad about sharing your secrets with someone else. It won’t feel anything at all.

We also need to be concerned about passive information sharing. Consider those Alexa devices sitting in the Cedars-Sinai hospital rooms. The person talking to Alexa might only be asking for the TV station to be changed, but what conversations might be happening in the background? All sorts of personal and private conversations take place in a hospital, and with advances in technology, those could be extracted and used. Even in your own home, background noise could be analyzed. How do you feel about Google or Amazon knowing that you have friends visiting or that your kids are fighting?

All of this technology might seem exciting in a TV show when it is used to solve crimes, but it feels distinctly unpleasant if you think about it being used to sell you something.

So, what can we do? One of the key things is to push for regulation. Our legal system lags behind technology and often doesn’t really understand it. The burden should not fall entirely on individual users. The companies collecting and using our data need to be responsible, and accountable for the choices they make. There needs to be transparency about where our data goes and what it is used for. We need some control over that, and we need to be able to opt out.

As we have more and more conversations with technology, we need to never forget that it isn’t a trusted friend. If it is a friend at all, it’s one that may well exist to sell us out.

Replacement of humans

Much of our current conversational technology is about improving our interfaces with technology rather than replacing humans. It’s about using smart speakers to control our homes or asking our phones to run a search. Perhaps a human used to do those things for you — but that family member won’t mind losing such tasks to Siri or Google Home.

But, as with any technology, there are places where it will replace humans. Virtual agents are already being deployed for customer service, replacing call-center jobs and allowing businesses to talk to us without humans. On the other side, Google Duplex offers to have conversations with businesses on our behalf, meaning we can return the favor. With sufficiently sophisticated technology, could we replace almost any conversation?

Some of you may be thinking that it won’t be a problem — you don’t want to talk to a computer when you phone a call center. Many people do have reservations about the technology. For the most part we are very aware when we are speaking to a computer rather than a human, but Google Duplex showed that the Turing test can be passed, if you don’t know you are participating in one. Its natural speech and mannerisms caused outrage — people do not like to be tricked. But why do we care so much whether our call-center agent is a person or a computer? If we get what we need either way, should it matter? You don’t know the person in the call center — you will probably never meet them or even speak to them again. Why it matters, and when it matters, is something businesses need to figure out before they get rid of all their humans.

However, if you don’t want to talk to a chatbot at all, you are in the minority. More people would prefer not to speak to a human than would refuse to speak to a chatbot. More people avoid humans than actively seek them out. We are moving to a point where we expect businesses to provide us with ways to interact that don’t involve humans. Chatbots provide a perfect interface for that. Whether or not everyone wants them, they may simply be inevitable.

Working in the software industry, I already know all the arguments for automation. It takes away low-value, repetitive work and frees people to do more creative, human work. I have made that argument many times. It is vital that I examine those assumptions: do I only feel that way about tasks I myself wouldn’t want to do? The software industry exists in a bubble, and we need to make sure we really listen to the people who will be affected. We may start by automating things that people don’t want to do, but what about when we automate tasks that people want, or even love, to do? This is more than taking jobs from people who need them to earn a living — it is taking away something that someone loves.

Those of us creating the technology need to ensure that we understand its impact and that we understand what other people want from their work. We shouldn’t base our decisions on our own judgments about which forms of work are and are not worth keeping. We should seek to use technology to empower people in ways that they feel have value, not only in ways we judge to have value. We should never replace things simply because we can.

The challenge to the software industry

We in the software industry hold a unique responsibility. We stand poised at a time when technology permeates every part of our lives and the pace of change grows ever more rapid. A dozen years ago, we didn’t have smartphones, and now we can scarcely imagine life without them. We are already trying to imagine what comes next. Conversational technology will surprise and delight us with how human it can be, but we need to remain conscious of its impact.

The purpose of natural interfaces is to create a more human experience, not to create some sort of virtual human. When we stray into attempting to digitize what it means to be human, we are missing the point. As an industry, we are not always very responsible. We race ahead of regulations, preferring to ask for forgiveness rather than wait for permission. We call it “minimum viable product”, “A/B testing”, and “failing fast”. But we are experimenting on real people, and we should never forget that.

The conversational technology we imagined controlling our spaceships is now controlling our homes and devices — but we risk letting it control us. This technology should allow us to be more human, but there would be nothing less human than allowing ourselves to be reduced to mere data points. As consumers, we shouldn’t accept that, and as technologists, we certainly shouldn’t enable it.

We often ask what we can do, but the real question is what we shouldn’t do. Choose wisely — our future depends on it.

About the Author

Gillian Armstrong is a technologist working in cognitive technologies with a focus on conversational AI. She has 15 years of experience in software engineering, working with many technologies across the full stack. Her current passions are understanding the changing paradigms that serverless is bringing to software architecture and that cognitive technology is bringing to human-computer interaction. She loves big ideas, discussing technology, sharing what she is learning, and building things that make life better for people. She hangs out on Twitter as @virtualgill.


 
