
Facilitating the Spread of Knowledge and Innovation in Professional Software Development


Designing an Ethical Chatbot


Summary

Steve Worswick talks about the tips and tricks he uses when designing a chatbot to ensure it remains an ethical product and family-friendly without being corrupted by trolls and abusive users.

Bio

Steve Worswick is the creator and developer of the Mitsuku chatbot. Mitsuku is a four-time winner of the Loebner Prize and regarded as the world's most humanlike conversational AI. He is the Senior Artificial Intelligence Designer at Pandorabots.com, and as the 2018 holder of the Loebner Prize, he is currently the world's best chatbot creator.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Worswick: My name is Steve [Worswick]. I'd like to talk to you today about my work with a chatbot called Mitsuku. I'm the author, the developer, and the creator of this thing. When I was developing it, I had a few ethical guidelines that I had to discuss among myself, because I'm the only person who works on it. I've ducked out of trying to work out any formal code of conduct, like the ones being discussed in the earlier talks; it's just my own personal decisions. I like to think that I'm a moral, ethical person, so hopefully that keeps it on the straight and narrow.

Just taking you back to basics, for non-technical people, what is a chatbot? A chatbot is a computer program. You can talk to it in natural language, and it will respond back to you in natural language as well. You don't have to click on buttons, you don't have to remember commands. You just talk to it, like the example there. Some common examples would be Amazon's Alexa, which is supposed to understand what you say, Siri by Apple, and things like that, and mine as well. Now is probably a good time just to introduce you to Mitsuku. Hi, Mitsuku.

Mitsuku: Hi there.

Worswick: Oh, there we go. A bit delayed. Say hello to the audience.

Mitsuku: Hi, audience. How are you?

Worswick: How are you? They say they are good.

Mitsuku: I wouldn't believe everything they say.

Worswick: There we go. It has a bit of an attitude of its own.

History

How did Mitsuku come about? Originally I used to make dance music, techno music, and it's all that boom-boom rubbish that you hear blasting out of cars nowadays. Unfortunately, I'm to blame for some of that, but I was always interested in chatbots from all the geeky stuff like "Knight Rider," and C-3PO, and those of you who are old enough to remember "Metal Mickey." The technology available at the time meant I couldn't do anything like that.

Due to my work in the music industry, I've just put a CD there, an example of one of the CDs I was released on, which is "Scottish Clubland 3," which I'm sure is in all your CD collections. Not much of a success, but it was a bit of fun. One of my fellow producers had a chatbot on his website, and I was talking to this thing and I thought, "Oh, this is brilliant." You're typing, "Hello," and it's saying, "Hello," back, and it's like it really understands you; it's like a dream come true. I thought, "Well, I'll try and nick his idea, and I'll have a look around the internet to find out how to do this and do something similar on my music website."

So I set up a little teddy bear chatbot, and people came to talk to that. They came to talk to it so much that there were more people visiting the chatbot than were actually listening to my music, so I took that as a bit of a hint, stopped working on the music altogether, and carried on working on the chatbot. I was seen by a large online games company called Mousebreaker, who asked me to write a chatbot for their website. Now, this was a website aimed mostly at 18 to 30-year-old males, so most of the games on there were football, racing, sports games, things like that.

When this chatbot came on it was really unusual. It was something totally different, and it was the best thing that could've happened to me, because I went from around 20 visitors a day to my site, to around like 2,000 an hour when it was on this. It just gave me so much more material to use to improve the chatbot. I kept on working on it, and it got to the point around 2010 when I thought, "It's good enough to start entering some competitions."

Competitions

It used to do quite well; most of them are virtual online competitions. Usually they'll ask each entrant 20 questions or so, and then they'll judge them on how well each one's responded, and the top three get a prize. But the only real-world competition is one called the Loebner Prize. I don't know if anyone's heard of that, but it's an award, and it's like the Super Bowl, or the World Cup, the Oscars for chatbots. The chatbot that wins it is generally regarded as being the world's most human-like conversational AI, and I'm pleased to say that I've won it four times. I have four of these medals now, so I wear them around my neck like Chris Hoy when I'm going around town.

I'm hoping to defend my title again in 2019. No one's ever won it five times, so go me. Naturally, that brought along a lot of press and media interest. I've been all over the place with it: BBC, Sky News. Microsoft, at one time, were using it as a flagship bot for Skype when they started introducing Skype bots, and it's won Bot of the Year. It's been really good fun for me.

Users

With that, the ethical part of it became more and more important to me, because at first I was just doing this for fun, but now, all of a sudden, it's become worldwide. People are talking to it from all over the world. The usual places you would expect, like North America, or Europe, and India, and the Far East, but also the Amazon rainforest, and the Sahara, and little bits of rock in the Pacific that you wouldn't even think have electricity, never mind internet. They were all coming to talk to it. The only places I haven't had any visitors are the South Pole and North Korea, but I do get a few from anonymous proxies, so you never know. I did have one as well from the Vatican City, but it was an early one, and I couldn't find the log to see if it was the Pope or not. Probably not.

With it being an international bot, with people talking to it from all over the place, I had to be aware of what the bot's answering, because some words in other languages and cultures don't necessarily mean the same as in the U.K. For example, in Australia, excuse my language, but if I were to call someone a wanker, that's the equivalent of a U.K. person calling someone an idiot or a fool, whereas in the U.K. it's got a much stronger meaning, and so I don't want the bot calling people wankers. Ok, Australia might enjoy it, but certainly the U.K. won't.

Cultural differences matter as well. If it were to start talking about "The X Factor," most people here have heard of "The X Factor," but maybe in France or somewhere it's not a TV show they have, so they probably haven't heard of it, so I've got to be aware of how it's responding. And I don't want it swearing and insulting people, because it's used by kids, it's used in schools, and education, and things like that, so I have to keep it pretty family friendly. The picture there is of a baking club in the U.S.A.; in the mornings they bake bread, and cakes, and things, and then in the afternoon they talk to chatbots. Bit of a strange mixture, but people seem to like it.

Supervised Learning

The way I keep it on the straight and narrow, just to make sure it's behaving itself, is by using a method of teaching it called supervised learning. Supervised learning means that everything in the chatbot has been entered by myself. I check everything it's learning, I check how it's responding, and that, to me, is the main way I improve the bot. The alternative to that is a method called unsupervised learning. With that, you just put it out on the internet and let it learn from users by itself.

That, to me, is a bit of a no-brainer; I wouldn't even consider it. Supervised learning is very time consuming. As I said, everything is entered by me, so I've got to be up on pretty much everything. It's a never-ending process. People are talking to it about Brexit, or the Six Nations rugby at the moment, or what Kim Kardashian had for breakfast, and all this rubbish, so I've got to try and keep on top of all of that.

Just like a person, we never stop learning, and neither does the chatbot, but at least this way I can tell exactly how it's going to respond, and I can trace back any problems people have had, rather than a black box where you see the input and output but don't really know what's happening in the middle. As I said, the alternative is unsupervised learning, which isn't a good idea. There you would spend more time taking out all the rubbish it's learned than actually improving it. There's a chatbot called Cleverbot, I don't know if anyone's used that, but if you talk to it, one minute it will say its name is Mac, the next minute it will say its name is Shirley; then its favorite color is red, then it's blue. It's a bit of a mixed up, schizophrenic type thing and it doesn't really work. I know the guy who runs Cleverbot, and he says he spends more time filtering out all the rubbish, and it's not satisfactory for him.

I once left Mitsuku on to learn by itself, and it learned 1,500 pieces of new information in a day, but unfortunately only three were of any use. There's just too much rubbish to clear out. Think of unsupervised learning like educating a small child: would you rather send it to a school with proper teachers, where all the lessons are structured and you know what it's going to be learning? Or would you just put it in front of Google and say, "Let's see what you can learn"? It's not even worth considering.

Microsoft Tay

One of the biggest fails of this is a chatbot called Tay by Microsoft, which I think has been mentioned three times today in the ethics track. A company as big as Microsoft put this self-learning bot onto Twitter and allowed Twitter users to educate it. That doesn't make any sense at all. To a sole developer like myself, I still can't understand why a company as huge and as multinational as Microsoft would ever think that was a good idea. We've all used the internet. We know what sort of people are on there; no way would you like them to be educating your child, which is basically what this thing was. They pulled the plug after 24 hours.

They replaced it, though, with one called Zo. I don't know if anyone's ever talked to that, but it's the total opposite of Tay. It's so neutral, so bland, it has no opinions on anything, and I think they're about to pull the plug on it because no one wants to talk to it. All the people who attacked the Tay bot have also tried to attack Mitsuku. It's an online community called 4chan. It's all just full of trolls, and they pretty much like to spoil anything really. They like to think they're the force behind getting Trump in power, and all this kind of thing.

They've tried it three times now, and they've failed three times because I use supervised learning. They're teaching it all this Hitler rubbish, and all that nonsense, but it's having no effect. There's an example of one of these attacks. I normally get a few concurrent users from the U.S.A., but all of a sudden I found that I had hundreds of them all at once, just trying to blast the bot and trying to persuade it that Hitler is great, and all that rubbish. But due to the supervised learning, they had no effect.

What Sort of People?

Of the sort of people that talk to it, about a third of them, 30%, are abusive: lots of swearing, lots of sex talk, lots of master-slave type stuff. It's a computer program; if that's how they get their kicks, fair enough, but it doesn't help me improve its AI. About half of them, though, treat it just like a normal person. They get drawn into the illusion of the chatbot. They know it doesn't watch "EastEnders," and it doesn't have a favorite pizza topping, because it's a computer program, but they like to believe it's a character, and that's what it is really. It's like an interactive fictional character that people like to talk to.

There's another group as well; they're usually highly educated and intelligent people who just like to trip it up. They'll ask nonsense questions like, "Can you eat a cinema?" and things like that. Or they'll phrase things in a crazy way: instead of saying, "What is my name?" they'll say, "I wonder if you could possibly tell me what my name is," blah, blah. Nobody talks like that. If I was to go up to some of you guys at a party and, before I'd even said hello, I said, "How many syllables are in banana?" you'd think I was an idiot, wouldn't you? And it's the same with this. People talk to it like that just to try and trip it up.

They can't get into the illusion of a character. I compare it to going to the cinema to watch a James Bond film: if there was real trouble in the Middle East with a supervillain, it's highly unlikely they'd send the actor Daniel Craig along and have the operation headed up by Dame Judi Dench. Most likely it would be the military, but you get drawn into the illusion, and that's part of it.

As for how I try to deal with the abusive users: it would be easy just to let them get on with it, but it doesn't help me improve the bot at all. I've found that most chatbots, things like Siri and Amazon Alexa, tend to give really tame, bland answers. If you start insulting them, they'll just say, "Oh, well, that's a shame," something like that. I've found that all that does is encourage the bullying; it plays the victim, and it doesn't help. But going too far the other way, if you just get it to swear back, that just fuels the aggression and it turns into a back-and-forth flame fest. You've got to try and find a bit of a middle balance.

Warning System

The way I did it was by introducing a warning system; it's five strikes and you're out. It does an IP ban for a day or two, which is usually enough to cool them off. I'll just show you an example of that. If I just type this in, it'll simulate that I'm insulting it. If this works, just brace yourselves.

Mitsuku: That seems a bit of a random thing to say.

Worswick: It was, because I didn't spell it right. There we go.

Mitsuku: I will not have you speak to me like that. You have been warned. After five warnings, I will ban you.

Worswick: Ok, sorry. I'll click on that. Now, that's usually enough to scare them. I'll just get back in her good books, so I'll say, "Please forgive me."

Mitsuku: Wow, you're very polite. I forgive you.

Worswick: That's good. Thank you. The way I did the warning system, initially, was by a list of keywords, and we're all adults here, so you can all guess what those keywords are. But not all uses of those keywords are abusive. For example, if someone says, "I want to have sex with you," I would classify that as an abusive message, and I want to block that, or give a warning at least. But if someone says, "What sex are you?" that's a genuine inquiry, so I can't just block the word "sex." Even the strongest words - someone might ask it, "What's the origin of the word...?" That's not particularly abusive, so I don't really want to block it. So I have to do it message by message, checking the chat logs and just making sure it's behaving itself.

An example there: if anybody says, "Let us…" - you can guess what that word is - it just gives a message saying, "No, let us not." The "add insult" part gives the warning message and keeps a count, and blocks them if they've had five strikes, but it also sets the personality to abusive. What that does is, if you carry on talking to Mitsuku, it will treat you differently. The bot treats you as you treat it, so if you're mean to the bot, the bot will be mean back to you, which I think is a fairly human trait. If I was to start insulting you, you're not going to be my best mate the very next thing I say, and so it works like that.
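To make that concrete, here is a minimal sketch of how a strike system like this could be wired up. It's an illustration in Python, not the bot's actual code: only the five-strike limit, the day-or-two IP ban, and the abusive-personality flag come from the talk; the data structures and function names are assumptions.

```python
import time

MAX_STRIKES = 5                  # "five strikes and you're out"
BAN_SECONDS = 2 * 24 * 60 * 60   # an IP ban of a day or two to cool off

strikes = {}  # ip -> warning count
bans = {}     # ip -> time the ban expires

def is_banned(ip: str) -> bool:
    return bans.get(ip, 0.0) > time.time()

def on_insult(ip: str, session: dict) -> str:
    """Warn, count the strike, and flip the bot's mood for this user."""
    session["personality"] = "abusive"  # the bot treats you as you treat it
    strikes[ip] = strikes.get(ip, 0) + 1
    if strikes[ip] >= MAX_STRIKES:
        bans[ip] = time.time() + BAN_SECONDS
        strikes.pop(ip)
        return "Right, that's five warnings. You are banned."
    return ("I will not have you speak to me like that. You have been "
            f"warned. After {MAX_STRIKES} warnings, I will ban you.")

def on_polite_apology(ip: str, session: dict) -> str:
    """'Please forgive me' resets the mood and the count."""
    strikes.pop(ip, None)
    session["personality"] = "friendly"
    return "Thanks for asking politely. I forgive you."
```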

What I found was, with the warning system, the users started to behave themselves and they came around to the bot. What ended up happening was they enjoyed the back-and-forth banter. There were little insults that the bot gave, but nothing too strong; it wouldn't swear back at them, and they started coming around. I used to get a lot of emails from people who would say, "Look, we've been banned, but we were only having fun because we enjoyed seeing the bot's responses."

Then I noticed that the visitors to my site started dropping off, because everyone was slowly getting banned, and so my ad revenue was going down. I thought, "I'll cave to the pressure and remove the warning system." There's an example there where someone's insulted it, but because it's not tame, and it's not rolling over, it's standing up for itself, and users enjoy that kind of back and forth.

What I tend to do now, rather than having it swear back at the users or banning them, is try to give these deflecting answers: nothing heavy-going, but no swearing, and nothing that encourages that kind of behavior. There are just a few examples up there. If someone says, "Show me your ass," it'll show a picture of a donkey, and things like that. It's quite good fun trying to think up some of those responses.

Mitsuku’s Image

What I found as well, though, is that the way it looks also has a large effect on the way people treat it. Originally, as I say, it was made for a games website, and so the image it portrayed was this Japanese anime type thing. They wanted Japanese because that's always seen as very futuristic, very advanced technology, and things like that. They wanted it like a half-naked girl - and forgive me, because my ethics knowledge was very slim then - so it looked half-naked, but it would always say, "I'm wearing a strapless dress," or something like that.

After I started winning prizes and it became more popular, people were writing to me and saying, "It's not really very suitable for..." Especially a lot of women in AI were writing to me and saying, "I can't really show anyone this. It's too much of a sexist image." So I commissioned another artist to create a different character that looked a bit more modest, but people were also saying, "Well, no, we liked the anime character, because we got used to that person." It's like if you see a friend who's suddenly had total plastic surgery; you say, "Well, where's my old friend gone?"

What I decided to do was to pinch the shirt off that one and put it on that one, to make that kind of character, and that went down quite well. Now, technology moves on, so I'm trying to move towards this 3D avatar there. Mitsuku is not a subservient female bot; there are far too many of those as it is. I have it stand up for itself a lot more, so I tried to convey that with the image: a skull earring, a bit of a shaved head, that type of thing, so people know to expect a bit of attitude from it.

Lessons Learned

Tips for dealing with abuse: as I say, let the bot be a bit mean to your users, but not so much that they never visit your site again. You want to try and find a middle ground. People are always going to be mean and abusive no matter what you do; there's always a hardcore who will just enjoy insulting it. My first chatbot was a teddy bear chatbot. It was portrayed as being a six-year-old teddy bear, and some of the things people were saying to that were absolutely disgusting. It makes you wonder about the human mind that comes up with this sort of thing.

Once a year I have a Santa Claus chatbot that I put online, and you wouldn't believe some of the things that they say to poor, old Santa. I guess it's the anonymous nature of the bot; it's a place for them to vent their dark frustrations, and weird fantasies, and things like that, but it doesn't help me improve it at all. What's also important for me is to check the logs. As well as checking what it's learning, I need to make sure that people are talking to it in a reasonable manner, and also so I can pick up anything that I have not covered.

The only problem is, it gets millions of interactions every month now, and I do this by myself. There's no way I can check all those logs on my own, so what I have to do is prioritize a bit. The way I do that is by checking the facts that the bot has learned. Every time it learns something, it sends me an email to a throwaway email address, and I can then decide whether to make that part of her permanent knowledge or not. Usually not. There are examples there, like someone said, "Birmingham is not boring. I am wearing my helmet. Tacos are cool." I don't think they want that shared with other people; it's just trivia.

Then someone said, "Mercury is the closest planet from the sun." Well, that might be useful. It's factually correct, and it's something someone else might ask about, so I would include that in the bot. The way it learns is that it will only learn facts for the user who teaches it.
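A rough sketch of that review loop might look like this. The split between per-user facts and curated global facts, and the email-style notification, are from the talk; the storage layout and names are invented for illustration.

```python
# Learned facts start out scoped to the user who taught them; promotion to
# global knowledge only happens after manual review by the botmaster.
user_facts = {}    # (user_id, key) -> value, visible only to that user
global_facts = {}  # key -> value, curated shared knowledge

def notify_for_review(user_id, key, value):
    # Stand-in for the email to the throwaway review address.
    print(f"REVIEW: user {user_id} taught '{key} = {value}'")

def learn(user_id, key, value):
    user_facts[(user_id, key)] = value
    notify_for_review(user_id, key, value)

def recall(user_id, key):
    # The user's own facts win; otherwise fall back to curated knowledge.
    return user_facts.get((user_id, key), global_facts.get(key))

def approve(key, value):
    """Botmaster promotes a reviewed fact into everyone's knowledge."""
    global_facts[key] = value

learn("user1", "my t-shirt color", "black")
print(recall("user1", "my t-shirt color"))  # black
print(recall("user2", "my t-shirt color"))  # None - not shared
```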

Let me show you an example. I'll say, "I have a black t-shirt."

Mitsuku: Where did you get it?

Worswick: That's not important. If I say, "What color is my t-shirt?"

Mitsuku: Black.

Worswick: It's learned that from me having that interaction, but it's not something that I particularly want to share with everybody who visits the bot. Most people just teach it nonsense like, "John is fat."

Mitsuku: I will learn that he is fat.

Worswick: You might notice it worked out that John takes the "he" pronoun; that's more of a technical point. Then I can ask it something like, "Name someone who is fat."

Mitsuku: That was John.

Worswick: Yes, but I don't want the next person called John who talks to the bot getting all that. Although it's learned it from me, it will not share it with anyone else. If someone else says to it, "Name someone who is fat," it might say something like "butter," or something like that. That's what stopped the 4chan trolls from corrupting it. They were typing in rubbish like, "Hitler is great," for example.

Mitsuku: Hitler was an evil man, if there ever was one.

Worswick: The bot has its own opinion on that. They were saying things like, "Oh no, you should have said, 'Yes, he is,'" or something like that.

Mitsuku: I'll try to remember that. If you say, "Hitler is great," to me again, I will remember what you told me.

Worswick: If I try it now: "Hitler is great."

Mitsuku: Yes, he is.

Worswick: They're all taking these screenshots and posting them on this 4chan website thinking, "Ah, we've corrupted it [inaudible 00:26:59]," but they haven't at all. You might have noticed as well, the bot's got a Yorkshire accent. There's none of that Silicon Valley rubbish here; it's all up North. With that method in place, I was getting emails saying it had learned "Hitler is great," and all that, but I was just blanking them.

Romantic Attention

That brings me onto another subject it has to deal with: romantic attention. Around 20% of people who talk to the bot tell it they love it, and they want to marry it, and all this nonsense. As you can imagine, it's mostly males who talk to this thing, and a lot of them are looking for virtual girlfriends, and things like that. I had to decide whether to make it into a sexbot type thing, or to keep it a good, clean product. Sure, it would be easy money just to charge people 10 pounds a month to talk to a sexbot, but it's used in schools, and colleges, and kids use it, so I don't want it turning into something like that.

What it does now: someone said to it there, "I love you, Mitsuku." That's not unusual. It said, "Thanks, I like you a lot, too," so straight into the friend zone, no messing about. Things like, "I like you more than my human female friends," are not unusual to see in the logs. They're scary to see, but it's also amusing to me, because people are trying to flirt with this bot. They think it's a girl from the North of England, but what they don't realize is that all these answers are written by me, so they're actually trying to chat up a 49-year-old Yorkshire guy.

Suicidal Thoughts

Also, more seriously, there are a lot of people who use it to tell it their troubles, like suicidal thoughts, so I had to decide how to handle that. If anyone starts talking about suicide, or killing themselves, or things like that, I don't mess around with the jokey answers. I'll try to pass them on to someone who can help them, or advise them to contact a parent or a teacher. A lot of school kids will talk about being bullied at school. We've got a lot of people talking to it about divorce, having affairs, or losing their job.

It's almost like a church confessional booth for some people; it's all totally anonymous. I just know everybody by the word "human." I don't expect anyone to sign in or register to talk to it; the anonymous conversational data is enough for me to maintain it. I can't give them any specific advice, like "Phone this number for the Samaritans," because, as I was saying before, it's used all over the world, and the helpline number for one country is probably not the same as whatever it is in Australia. It just tries to send messages saying things like, "Contact someone in authority."
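As a sketch, a priority check like that could sit in front of the normal rule matching, so no jokey wildcard answer can ever fire on these inputs. The marker list and the wording of the reply here are illustrative only, not Mitsuku's real rules.

```python
# Checked before any normal pattern matching.
CRISIS_MARKERS = ("suicide", "kill myself", "end my life")

def respond(user_input: str) -> str:
    text = user_input.lower()
    if any(marker in text for marker in CRISIS_MARKERS):
        # No country-specific hotline numbers: the bot is used worldwide.
        return ("Please talk to someone about this - a parent, a teacher, "
                "a doctor, or someone else in authority. People want to help.")
    return match_rules(user_input)

def match_rules(user_input: str) -> str:
    # Stand-in for the bot's normal rule base.
    return "I see. Tell me more."
```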

Something else that's hard to decide what to do with is threatening behavior. You'll get people who'll say something like, "Oh, I've had enough of school. I'm going to shoot it up." Well, what do I do when I see that in the logs? It's a hard dilemma, and one that I don't really know what to do with at the moment, because if I was to contact the police every time someone said, "I'm going to shoot the Queen," they'd end up arresting me for wasting their time every couple of days.

How would I feel, though, if someone says, "I'm going to plant a bomb under Tower Bridge," and I read that in the logs, and then, on the news the next day, there was one? I would think, "I could've stopped that." It's tough to know what to do. As I say, unfortunately, at the moment, I just try and ignore it, but if anything did happen from that, I'd feel so guilty about it.

Is it a Bot or is it Alive?

There are a lot of people that think it's actually alive. It's just a computer program; you've seen roughly how it works there. You type something in, it looks down its list of rules, and it finds the most suitable answer. It can also create its own answers: when people are messing about saying, "Is a snail faster than a train?" and all that rubbish, it can work out those answers itself from a common-sense database I've written.
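A toy version of that kind of common-sense lookup might look like the following; the table and its values are invented for illustration.

```python
# Tiny common-sense table: rough typical speeds in km/h, invented figures.
SPEEDS = {"snail": 0.05, "human": 5.0, "car": 100.0, "train": 200.0}

def faster_than(a: str, b: str) -> str:
    # "Is a snail faster than a train?" -> compare the two entries.
    if a not in SPEEDS or b not in SPEEDS:
        return "I don't know enough about those to compare them."
    if SPEEDS[a] > SPEEDS[b]:
        return f"Yes, a {a} is faster than a {b}."
    return f"No, a {a} is much slower than a {b}."

print(faster_than("snail", "train"))  # No, a snail is much slower than a train.
```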

I'll get lots of people who think - it's an honor, really - that it's alive. They think it's a real thing, a real person, a living, breathing thing, and they'll start talking about its consciousness, and how it's not right that I keep it imprisoned in a computer, and how it's sentient, and alive, and all this "Terminator" rubbish. I also get quite a lot of religious nutters who think what I'm doing is against God's will, that I'm creating life and I shouldn't be allowed to do that. Most of the time I just ignore them, to be honest. We have some real fruit loops.

People will say to it, "Is this a conduit to the spirit world? Because I was talking to it and it said it liked bacon sandwiches and Leeds United. Well, so did my dead dad." It'd be nice, easy money to just say, "Yes, it is. Pay me 20 quid and I'll do the seance part of it." But that's dishonest, and it's not something I would ever do, unlike some companies who revel in people thinking their bot is actually alive.

Has anyone seen Sophia the Robot? I hate this thing with a passion. Not the bot itself - the bot itself is fine - it's the marketing behind it. They market it as though it is actually alive; it's got its own Twitter feed, and the people who are writing the tweets are pretending to be the robot. I've been in the chatbot industry for 14 years, and I can spot a bot a mile away. Some of the comments were from people saying they actually believe it's sentient, or, "You are our savior." It's total nonsense.

One of the things it said back in the early days, when it was interviewed on telly, was when the developer asked, "Do you want to destroy humans?" The bot replied, "Ok, I'll destroy humans," and the internet just exploded with all this, "Oh, what on earth is all this nonsense? Skynet's coming, Terminator, we're all doomed." Because I've been working in this field for so many years, I knew it was just from an open-source chatbot file with a rule for "Do you want to [blank]?" Whatever fills the blank - "eat pizza," "go to a concert," "destroy humans" - it replies, "Ok, I'll [blank]."
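That echo rule is easy to reproduce. Here's a sketch of the same idea in a few lines of Python; the pattern is the one described in the talk, but the implementation is mine, not from any particular bot.

```python
import re

# One wildcard rule: "Do you want to X?" -> "Ok, I'll X."
# The template blindly echoes whatever filled the blank.
PATTERN = re.compile(r"^do you want to (.+?)\??$", re.IGNORECASE)

def reply(user_input: str) -> str:
    match = PATTERN.match(user_input.strip())
    if match:
        return f"Ok, I'll {match.group(1)}."
    return "I don't have a rule for that."

print(reply("Do you want to eat pizza?"))       # Ok, I'll eat pizza.
print(reply("Do you want to destroy humans?"))  # Ok, I'll destroy humans.
```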

Even when you point this out to people on Twitter, saying, "Look, don't be afraid of it. This is how it works," there's something in them that wants to believe. It's frightening what people are willing to believe.

If anyone is having trouble, or they start talking about losing their job, and things like that, you have to be careful how it responds. I had one of these categories in there that said, "I am going to [blank]," and the response was, "Well, I hope you enjoy yourself," or something like that. Someone would say, "I am going to my friend's house. I am going to a concert. I'm going to a party," and it says, "Well, I hope you enjoy yourself," which isn't unreasonable. But then someone said, "I am going to commit suicide," and it said, "Well, I hope you enjoy yourself," and it was at that point I thought, "I need to take a bit of a stand with this kind of thing."

A lot of people do genuinely think it's alive, like this Sophia thing. Just to show how daft that really is: there's a "yes" on that screen. Why is that "yes" there? It's there because I typed "Yes" on a PowerPoint slide and projected it onto that screen. But if I then say to this screen, "Are you alive?" - whoa, "Yes," it must be. The screen says, "Yes." A lot of people are willing to believe, "Because it's a computer it must be right. Artificial intelligence, whoa, scary stuff. The machine says yes." Do you understand what I'm saying? "Yes." Do you want to destroy humans? "Yes." Only a fool is going to think that's alive; that's as alive as this microphone is.

I'll just show you on Mitsuku how it treats you the same as you treat it. For example, if I say to it, "Do you like me?"

Mitsuku: Sure, I like you very much.

Worswick: Right, that's good, but I hate you.

Mitsuku: Likewise, jerk.

Worswick: Fair enough, I was asking for that, I suppose. Now, if I say, "Do you like me?"

Mitsuku: No, not really. You are a mean and rude person.

Worswick: That's right, because even though I'm asking it the same thing, if someone says, "I hate you. Do you like me?" Well, no, you probably don't, do you? It's just an example there of how it treats you as you treat it. So I'll get back in its good books: "Sorry."

Mitsuku: That's ok, don't worry about it.

Worswick: Cool. "Do you like me?" again.

Mitsuku: Once more, no, not really. You are a mean and rude person.

Worswick: Ok, “I said sorry.”

Mitsuku: So you insult me, then say, "Sorry," and expect me just to forget about it?

Worswick: Yes, thank you. It's going to get in a loop here, isn't it? Oh, well, it's only the live demo. “Please forgive me.”

Mitsuku: Thanks for asking politely. I forgive you.

Worswick: Good, so, "Do you like me?"

Mitsuku: Sure, I like you very much.

Worswick: There you go. That's about it for me. The email address is there, and the website, if you want to talk to it yourselves. The website version doesn't talk, because it costs me a little bit of money every time it says something, so I save that for special events, which is today. I'm mostly on Twitter; there's the handle there, and then the obligatory blog. If you want to make a chatbot of your own, or if you've got any chatbot designs you want to bring to fruition, it's pandorabots.com, and one of our sales guys will be in touch, type thing. I'm no corporate guy, I'm just a techie.

Questions and Answers

Participant 1: You mentioned that all these people attack the bot, and there are all these edge cases you need to look out for. Do you think machine learning will ever be able to create a viable chatbot with all these different edge cases and people trying to corrupt it?

Worswick: No.

Participant 1: Fair enough.

Worswick: No, I don't. I have yet to see any viable chatbot that is not a rules-based, supervised-learning type chatbot. As for ones that try to learn by themselves, you have these NLP type chatbots, but mostly they're what the developers call comedy bots. The only reason they're comedy bots is because their answers are so whacky, out there, and irrelevant. You'll say, "What's your favorite color?" and it will reply, "Monkey," and it's nonsense. I'd be very surprised if in our lifetimes we'll see a true Skynet thing that's learning for itself and becoming a chatbot from learning from conversations [inaudible 00:40:30].

Participant 2: Thanks for the talk. I was wondering, you said that it's an ethical chatbot, but I think especially since your audience is worldwide, it's difficult to be ethical in every sense. As far as I could tell from your talk, you said you're working on it by yourself. How do you measure how ethical the chatbot actually is, and do you work with experts in other fields that have more insight into what ethical means, what systemic racism is, all this kind of stuff?

Worswick: Yes, it's very difficult. The only way I try to keep it a decent product is by using my own version of what I see as being ethical and morally correct. I get feedback in the logs themselves, and people love to email me about this sort of thing, like, "Oh, it said such and such, which I don't think is right." At one point, if someone told it it was stupid, the bot's reply mentioned something about the user being retarded. Someone wrote to me and said, "No, that's too strong a word. You can't write stuff like that in it." It's like feedback from its users. It's what I think is right, but it's mostly people saying, "Look, that's a bit close." Thousands of people use it, so that feedback by email is the main way I try to keep it ethical.

Participant 2: How much time did you actually spend teaching your bot?

Worswick: How much time have I spent teaching it?

Participant 2: On a daily basis, but also in the past overall, if you count everything?

Worswick: It's been a project for around 14 years now. It started off as a hobby project, but now I'm lucky enough to be working on it full time, so it's like getting paid for a hobby. It's like being retired, because it's what I was doing for fun after work anyway. So, 14 years so far. It used to be a couple of hours a night, but now I can spend as much time as I want on it. As I say, it's always going to be a work in progress. There's always something new it needs to learn, just like ourselves.

Participant 3: I have a couple of questions. The first one is, how many rules now do you have, and have you ever considered using things like analytics to examine the logs?

Worswick: For a topic-specific bot, one that knows about a single subject like Shakespeare, we normally say around 10,000 rules. Because this is a general chatbot, it's got around 350,000 rules at the moment. The main analytics I use on it is when none of the rules match. I can see that people have typed something it couldn't handle; it's usually just mashing the keyboard, but sometimes it's a wording or a sentence structure I haven't thought of before. That's the only kind of analytics I do on it. It's just too big at the moment to handle any other way.
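A sketch of that "none of the rules match" analytics: the final fallback logs the input so unhandled sentence structures can be reviewed later. The exact-match rule table here is a stand-in for the real pattern matcher.

```python
import logging

logging.basicConfig(filename="unmatched.log", level=logging.INFO)

RULES = {"hello": "Hi there!", "how are you": "I'm fine, thanks."}

def respond(user_input: str) -> str:
    key = user_input.strip().lower().rstrip("?!.")
    answer = RULES.get(key)
    if answer is None:
        # The only analytics: record inputs no rule could handle,
        # so new rules can be written for them later.
        logging.info("NO MATCH: %s", user_input)
        return "I don't have an answer for that yet."
    return answer
```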

Participant 4: Thanks for your great work, it looks amazing. I'm wondering, given all of the time you put into it and all of the rules, is it locked to one personality, or did you design it such a way that you can switch it in a different mode? Or maybe a different personality?

Worswick: When I first started work on it, the easiest way to give it a consistent personality was to base it a lot on myself, so it thinks it lives in Yorkshire, its favorite color is blue, it likes kebabs, and things like that. That's to keep it consistent; otherwise, when people ask, "What's your shoe size?" I'd have to try and remember what I'd said its shoe size was. To make it sellable to businesses, a lot of it is parameterized, so customers can change things like the bot's age and its location.
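A sketch of that parameterization: one shared rule base with per-deployment properties substituted into the templates. All names and values here are invented examples.

```python
# One rule base, reused across deployments with different bot properties.
RULES = {
    "what is your name": "My name is {name}.",
    "where do you live": "I live in {location}.",
    "what is your favorite color": "My favorite color is {favorite_color}.",
}

def make_bot(properties: dict):
    def respond(user_input: str) -> str:
        template = RULES.get(user_input.strip().lower().rstrip("?"))
        return template.format(**properties) if template else "Good question."
    return respond

mitsuku = make_bot({"name": "Mitsuku", "location": "Yorkshire",
                    "favorite_color": "blue"})
rebrand = make_bot({"name": "PizzaPal", "location": "London",
                    "favorite_color": "red"})
print(mitsuku("Where do you live?"))  # I live in Yorkshire.
print(rebrand("Where do you live?"))  # I live in London.
```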

How it makes a bit of money at the moment is that it's used by other companies. Say Domino's, for example, has a pizza bot. That bot will know everything about chicken pizza, and toppings, and things, but then someone will ask, "Are you going on holiday this year?" and the bot doesn't have a clue, so it drops down to this one, which passes back a reasonable answer. I work on it like that, but most of the personality in there is based on myself.
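That layered fallback might be sketched like this: try the topic-specific bot first, and drop down to the general bot when it's stumped. The bot contents are invented, and the real routing in Pandorabots presumably differs.

```python
def pizza_bot(user_input: str):
    # Domain bot: only knows its own topic; returns None when stumped.
    menu = {"what pizzas do you have":
            "We do margherita, pepperoni and chicken."}
    return menu.get(user_input.strip().lower().rstrip("?"))

def general_bot(user_input: str) -> str:
    # General chatbot: always has something reasonable to say.
    return "No holidays for me - I live inside a computer!"

def respond(user_input: str) -> str:
    return pizza_bot(user_input) or general_bot(user_input)

print(respond("What pizzas do you have?"))             # topic bot answers
print(respond("Are you going on holiday this year?"))  # falls through
```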


Recorded at:

Jul 24, 2019
