
Generally AI Episode 3: the Founders of CS and AI

In this podcast episode, Roland and Anthony delve into the lives and contributions of two legendary programmers, Alan Turing and Claude Shannon. While the two men met only once, their careers contain many parallels: both did foundational work in computer science, cryptography, and AI.

Key Takeaways

  • Alan Turing's work includes the Turing Machine, a foundational concept in computer science.
  • Claude Shannon's master's thesis revolutionized digital circuit design, introducing Boolean algebra to simplify switching circuits.
  • Shannon's Information Theory has implications for data compression, error-correcting codes, and communication channel limits.
  • Both Turing and Shannon made significant contributions to cryptography during World War II.
  • Both made early contributions to the field of AI: Turing invented the Turing Test, and Shannon helped organize the first academic AI conference


Roland Meertens: Did you know that Charles Babbage invented mechanical computers which could go through all kinds of calculations? He called these the Difference Engine and the Analytical Engine, and they would compute all kinds of tables with numbers, which would be useful for all kinds of professions, like sailors. In the past you would have all these tables of numbers, since you couldn't just import a library.

But here's the actual fun fact. Apparently, he said that he was asked on two occasions, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" And he always replied, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question." So Yes.

Anthony Alford: I think we've all been in that situation and he was much more kind than the person who came up with the phrase “garbage in, garbage out,” wasn't he?

Roland Meertens: Yes. But next time your product manager asks you to make a program which gives the right answer even though the wrong input was given, just say, "I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question."

Anthony Alford: He was a pioneer in so many ways.

Roland Meertens: He was a pioneer in so many ways.

Welcome to Generally AI, an InfoQ podcast. My name is Roland Meertens and I'm joined by Anthony.

Anthony Alford: Hey, how are you, Roland?

Alan Turing and Turing Machines [01:38]

Roland Meertens: I'm doing well. Okay, so today our topic is famous programmers and we both picked a famous programmer. Let's start with my famous programmer. I picked Alan Turing. What do you know about Alan Turing?

Anthony Alford: Well, I think probably just like with Mr. Babbage, every programmer is familiar with his work, the Turing Machine, and of course the Turing Test.

Roland Meertens: Yes, maybe let's start with the first thing, the Turing machine you mentioned. So in 1936 he wrote the paper On Computable Numbers, with an Application to the Entscheidungsproblem. I have the paper here in front of me. I tried to read it and I got four and a half pages in, and if you try it yourself, you'll see why I was quite proud of that.

Anthony Alford: I was going to say, how long did it take you to realize it was, well, I was going to say, was it written in German, but that's actually Hilbert's paper that I was thinking of.

Roland Meertens: Yes, so you indeed can read up on the Entscheidungsproblem. I decided to not do that. I roughly know what it means, but I think just the way he tried to solve it is with what we nowadays call Turing machines. I started reading the paper and I also started working out some of the machines he's proposing in there.

I, of course, went to the store to buy an infinite amount of tape. They were out of that, so I had to do it with an old-fashioned piece of tape.

Anthony Alford: Supply chains are still screwed up, huh?

Roland Meertens: Yes. But Yes, basically he's proposing what we now base a lot of our computers on. He's saying, I can calculate any computable function if I can just define a program as a set of states. I think that's really nice, where he always defines the different states a program can be in, and he basically imagines this machine to have an infinitely long piece of tape, which has some information, some digits on it, and the program always reads what's on this tape.

And then depending on the state it is in, it decides what it has to execute. And then it could also write to this tape or it could erase what's on the tape. So the states could also be, there is nothing on the piece you're reading right now, and you can just write numbers. You could write letters, basically. He's not really specifying what's happening.
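That read-decide-write-move loop is small enough to sketch in a few lines of Python. The rule-table format and the `run` helper below are my own paraphrase, not Turing's notation; the table itself is modeled on the first example in his paper, a machine that prints 0 and 1 alternately on the tape:

```python
# A tiny Turing machine simulator. The tape is a sparse dict, so it is
# "infinite" in the sense that any cell can be written on demand.
def run(rules, state, steps):
    tape = {}                                  # position -> symbol
    pos = 0
    for _ in range(steps):
        symbol = tape.get(pos, " ")            # read the current cell
        write, move, state = rules[(state, symbol)]
        tape[pos] = write                      # write (or overwrite) the cell
        pos += 1 if move == "R" else -1        # move the head
    return "".join(tape[i] for i in sorted(tape))

# (state, read symbol) -> (symbol to write, head move, next state)
rules = {
    ("b", " "): ("0", "R", "c"),   # print 0, move right
    ("c", " "): (" ", "R", "e"),   # leave a blank cell
    ("e", " "): ("1", "R", "f"),   # print 1, move right
    ("f", " "): (" ", "R", "b"),   # leave a blank cell, start over
}

print(run(rules, "b", 7))   # prints "0 1 0 1"
```

Switching the behavior by switching states, exactly as Roland describes next, is what the first column of the rule table does.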

But Yes, as I said, I think it is quite interesting to read the first couple of pages, simply because then you get the idea of the Turing machine, and it kind of felt to me like it resembles some other esoteric languages I programmed in the past. But his Turing machines are a bit nicer and a bit more complicated because of the different states they can be in.

So instead of having one program, you can kind of switch between programs. So that felt quite nice. Code and data, in his case, don't share the same memory the way they do nowadays in von Neumann architectures. So Yes, Anthony, if you want to try something yourself, do you know the programming language called Brainf**k?

Anthony Alford: I've heard of that. I was going to say, I feel like there have been at least one or more actual programming languages that are an implementation of the Turing machine. Is that one of them?

Roland Meertens: Well, it's not really like people are implementing the Turing machine.

Anthony Alford: The interface, I guess it's like you could program the Turing machine directly.

Roland Meertens: As I said. So for me, the language called Brainf**k is a very esoteric language, which only has eight characters, which makes it very difficult to read. You don't really see what's going on. There's no named variables, there's no named anything. So keeping track of it in your head is quite difficult, but at least it scans over memory the same way a Turing machine does.

So underlying it, you still have some kind of tape which you can write things on. Actually, you can't even write things directly in Brainf**k, you can only increment and decrement cells. Yes, it's a terrible programming language. I guess you only use it as a very weird challenge.
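The whole language fits in a short interpreter, which makes the tape analogy concrete: a data pointer scans an array of cells that can only be incremented and decremented. A minimal sketch (the `interpret` helper and the 30,000-cell tape size are conventions, not part of any formal spec):

```python
# A bare-bones interpreter for the eight-command tape language.
def interpret(code, tape_len=30000):
    tape, ptr, pc, out = [0] * tape_len, 0, 0, []
    jumps, stack = {}, []
    for i, ch in enumerate(code):       # precompute matching brackets
        if ch == "[":
            stack.append(i)
        elif ch == "]":
            j = stack.pop()
            jumps[i], jumps[j] = j, i
    while pc < len(code):
        ch = code[pc]
        if ch == ">": ptr += 1                          # move head right
        elif ch == "<": ptr -= 1                        # move head left
        elif ch == "+": tape[ptr] = (tape[ptr] + 1) % 256
        elif ch == "-": tape[ptr] = (tape[ptr] - 1) % 256
        elif ch == ".": out.append(chr(tape[ptr]))      # output current cell
        elif ch == "[" and tape[ptr] == 0: pc = jumps[pc]
        elif ch == "]" and tape[ptr] != 0: pc = jumps[pc]
        pc += 1
    return "".join(out)

# Builds 8 * 8 + 1 = 65 in a cell, then prints it as ASCII.
print(interpret("++++++++[>++++++++<-]>+."))  # prints "A"
```

Note how unreadable even that one-line program is: no names, no structure, just head movements and counting, much like raw Turing machine tables.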

The other one, by the way, which I really enjoyed and which felt similar, is Intcode, a programming language invented for Advent of Code in 2019, where you go through the challenge day by day.

Then you basically slowly build up your own interpreter or compiler for this weird programming language, which only has integers for numbers, and you can actually overwrite your own program. But there you also feel like you have some kind of tape and are working on some kind of memory. So that was also interesting.
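As best I recall, the early 2019 puzzle days define just three opcodes, which is enough to show the self-modifying flavor Roland mentions. A hedged sketch (the `run_intcode` name is mine; later puzzle days extend the machine considerably):

```python
# A sketch of the first Intcode opcodes: 1 = add, 2 = multiply, 99 = halt.
# Parameters are positions in memory, and results are written back into
# the same memory the program itself lives in.
def run_intcode(program):
    mem, ip = list(program), 0
    while mem[ip] != 99:                      # opcode 99 halts the machine
        op, a, b, dst = mem[ip:ip + 4]        # opcode plus three positions
        if op == 1:
            mem[dst] = mem[a] + mem[b]        # add
        elif op == 2:
            mem[dst] = mem[a] * mem[b]        # multiply
        else:
            raise ValueError(f"unknown opcode {op}")
        ip += 4
    return mem

# The first instruction adds mem[0] + mem[0] and stores it at position 0,
# overwriting its own opcode in the process.
print(run_intcode([1, 0, 0, 0, 99]))  # [2, 0, 0, 0, 99]
```

Because instructions and data share one flat array of integers, a program can rewrite itself as it runs, which is the von Neumann behavior the Turing machine's separate tape-and-state design does not have.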

Anthony Alford: So was that his doctoral thesis that he did this work for?

Roland Meertens: Yes, good question. I actually don't know when or how he did this. I feel that he was already a bit further than his doctoral thesis, but I'm not sure when he finished that.

Anthony Alford: According to Wikipedia, he got his PhD in 1938.

Roland Meertens: Oh, okay. Well then that would indeed be part of his doctoral thesis.

Anthony Alford: I would imagine.

Alan Turing: The Musical [06:27]

Roland Meertens: Yes. I'll also be honest, I based the information in this podcast on the following book, The Imitation Game: Alan Turing Decoded. It's a graphic novel. So if people want to read more and learn more about Alan Turing, I can recommend this. It's a graphic novel about his life.

Anthony Alford: That actually sounds extremely cool.

Roland Meertens: Well, we just wait until you hear what other things I did. I also went to a musical about Alan Turing.

Anthony Alford: Alan Turing: the Musical!

Roland Meertens: Yes, indeed.

Anthony Alford: They sell the soundtrack, but it's on an infinite tape.

Roland Meertens: That joke is way too good. Yes, indeed. So the musical was part of the Edinburgh Fringe Festival, so you had to go to Scotland to be able to see it.

Anthony Alford: So there's that downside, I guess.

Roland Meertens:  Well, the upside is that it is coming to London in 2024, so you will be able to see it, hopefully next year.

Anthony Alford: I should put that on my calendar. And you sent me pictures. I don't want to steal your thunder. You went somewhere else, didn't you?

Roland Meertens: Well, I also decided to walk past the house where he was born or the location he was born. So I did a lot of things, and I guess we can come back to that later. But he was born in London, so you can visit the location where he was born, and there is a memorial, which wasn't on at the time when I was there, but it's close to Paddington station.

Anthony Alford: Very cool.

Roland Meertens: Anyways, coming back to the topic before we drift into the endless void of all the ways that Roland tried to learn about Alan Turing other than reading Wikipedia. If we're talking about programming languages, one thing which people always mention is, "Oh, this thing is Turing complete."

So sometimes people say, "Oh, if I take a lot of rocks and assemble them in a specific pattern, it could be Turing complete." Or people will say, "Oh, I don't know, PowerPoint, I can make a PowerPoint slide which is Turing complete."

And I'll be very honest, I still don't fully understand what this means. Yes, it's a concept which I have never understood and have tried to learn about multiple times. But basically, according to Wikipedia or according to the internet, computers are said to be Turing complete if they can be used to simulate any Turing machine.

But when I think about Turing machines, I think that they don't really have a limit. They don't really specify what can be put in, what can be put out, except for that they are working on this memory and can move between memory locations. So do you know anything more about what makes something Turing complete?

Anthony Alford: I don't, but what I was going to say is, so again, I'm like you. I'm not sure I completely understand it, but based on what I think I know, he used the Turing machine to attack the, I'm going to say stopping problem because my German is awful. So there was the problem proposed by Hilbert. That's a decision problem. He used his Turing machine construct to show it can't be solved in general.

And then if you can show that your thing can emulate a Turing machine, your thing can compute whatever a Turing machine can. And I think that's what it means. But normally when I see something called Turing complete, it basically means it has certain programming constructs like loops and conditionals and memory. I'm not, again, I'm not 100% sure about that. I know that's not the real definition. But for example, XML is not Turing complete, but something like Forth might be.

Roland Meertens: Yes. Yes. So this is where I start not fully understanding it. I also found the concept of Turing equivalent.

Anthony Alford: Maybe that's what I was explaining then.

Roland Meertens: Yes. Where you could say, oh, my system has the same properties as a Turing machine, and it can solve the same kinds of problems that you can solve with one.

Anthony Alford: That was actually what I was saying. So maybe I don't know what…obviously I don't know what Turing complete means either.

Roland Meertens: Yes, as I said, I really tried to figure this out last week. I read multiple blogs, I read multiple things, and it's still not completely clear to me, simply because I don't understand the limits and the possibilities which a Turing machine can have, and thus the limits and possibilities which we can have with computers. Yes. Anyways, what was the second thing you mentioned?

Anthony Alford: Oh, the Turing Test.

The Turing Test: Can Machines Think?  [10:31]

Roland Meertens: Yes. Okay. So that's actually quite exciting, I think, because in 1950 he wrote a paper called Computing Machinery and Intelligence. I also have that here. It was published in Mind: A Quarterly Review of Psychology and Philosophy. And it basically starts by posing the question, can machines think? What do you think, can machines think?

Anthony Alford: I guess it depends on the question. Is the question: do we have now any machines that think? I doubt it. Is it possible for a machine to think? Probably, for some definition of think. I think that's the problem, is we don't have a great definition of what think means.

Roland Meertens: Yes. He indeed says it's not about the here and now, where the here and now for him is 1950. It's about can you possibly construct a machine which can think, but he starts the paper by saying, actually the question isn't really good. I mean, you could just ask it in a poll, but he says that's the wrong thing to do.

So what I did is I asked my LinkedIn followers, can machines think? And with 107 votes in, 22% of my followers said yes and 78% said no. And a lot of people started debating what thinking exactly is, what it means, which is indeed where you end up if you think about this.

So he does propose to replace it with a game, which he calls the Imitation Game, where you can interrogate two rooms. So one room would have a machine and the other room would have a human. And he says, well, we have to find a way to communicate with these rooms in which the machine or the human can't communicate something extra.

We don't have to build an entire robot, which looks like a human. We just need to be able to have it communicate, like via some teletype system or something. And as an interrogator, you can ask both rooms questions and you have to decide which room is a machine and which room is basically a human. 

And if you can't do this, or maybe if you, as a human interrogator get it right less than 70% of the time, perhaps, then we could say that machines can think. Or he also says, I'm not really interested in defining thinking, but at least then you could say that there's some equivalence between people. What do you think of this idea?

Anthony Alford: It's definitely captured people's imaginations since 1950. He's got a point. So I saw this in robotics. People were trying to create general artificial intelligence so that they could have a generally useful robot. But then Rod Brooks in the 1980s said, well, let's see if we can make a specific robot to solve a specific task.

And I think actually it depends on what you're trying to do. If you want to solve problems that people have, I think that's the way to go. We don't have a problem that there's a lack of general…well, sometimes it feels like there's a lack of general intelligence in the world, but really there's not. Right? If we need general intelligence, we already have that. We have natural intelligence, we have people.

The problem is to automate tasks that people don't want to do. And I think that's an easier problem to solve, and it's a problem we know how to solve. So all that to say, back to your real question, what do I think of it? He's right. If the problem you want to solve is to fool a person into thinking you're another person, then it makes sense.

Roland Meertens: Yes. So maybe we could do a separate episode sometime where one of us talks about what is this artificial general intelligence. How do you define that? And could you then say that machines think? Maybe I can find some other interesting things, oh Yes.

One thing by the way, talking about robots, I had a teacher in university, and whenever a robot would do something wrong, she would basically say, "Oh, look how nice. Your robot has become autonomous."

And he [Turing] goes through a lot of interesting concepts and a lot of interesting arguments in this paper. So for example, he says, oh, well, maybe you could say computers don't surprise you. They don't have any creativity. But when he programs something too quickly, it does often surprise him. It does do something weird. So that's not really an argument.

Some people say computers are always right, but then again, computers could also be wrong. So what is it really that makes computers think, and what makes people think, and so on? So Yes, that's kind of interesting. Oh, what's also interesting is that this paper was written in 1950, so electronic computers are not really a big thing yet. Let me quickly go through the paper and see if I can find something which alludes to that.

Anthony Alford: By the way, it would be interesting to get Mr Babbage's take on whether machines can think.

Roland Meertens: Well, one interesting thing is that Charles Babbage worked a lot with Countess Ada Lovelace. I was first thinking about doing an episode on that. I also have a graphic novel about their lives. So I have my favorite way of gathering information.

But he also talks about what she thought about whether machines could think and what arguments she had. So this is before there were any electronic computers. People are just building machines which can calculate by rotating gears. And she's already thinking about how we could program this and whether something like this could ever think on its own. So that's wildly interesting. So we can save that for a different episode.

So as I said, one thing is that sometimes he says, "Oh, what if we would have larger computers?" Because at his time, he basically has computers which store maybe a thousand numbers. So he has computers with a memory of about one kilobyte, and he kind of says, oh, but what if we could store, I don't know, three pages of numbers or 300 pages of numbers?

And he just said, "Oh, I think that if we could have a computer program, which is maybe the size of a 10th of the Encyclopedia Britannica, maybe that's when we could already start seeing this intelligence."

Anthony Alford: And in a way, he's not wrong because think about these large language models that, to be quite fair to them, probably could pass the Turing Test in most cases. Essentially, they're trained on Wikipedia and the rest of the internet.

Roland Meertens: So I did some calculations at some point. He says, "Oh, maybe if we would have multiple of these computers." And I think at some point he said, "Oh, but if we just had something with 20 kilobytes of RAM, that could be a lot."

Anthony Alford: Bill Gates upped that ante, didn't he? Said 640K ought to be enough.

Roland Meertens: Indeed. It's just interesting to see the ideas Alan Turing had about the future. The other thing which I thought was interesting is that he mentions random generators, because they seemed to be a relatively new thing. And some people already said, "Oh, well, if a computer can do random things and decide things on its own, maybe then it already has a mind of its own." So next time you type import random and do something like that, just think about it. You just added a mind to your machine.

Anthony Alford: You're unleashing Skynet. So just random fun facts since we're here. You're familiar with the CAPTCHA where you have to find all the traffic lights and things like that. The T in CAPTCHA stands for Turing Test.

Roland Meertens: Yes. Do you know what the whole word stands for?

Anthony Alford: Yes, well, I Googled it while you were talking.

Roland Meertens: That's good.

Anthony Alford: It stands for Completely Automated Public Turing Test to Tell Computers and Humans Apart.

Roland Meertens: Yes, indeed. Do you also know who invented it? Because this is a rarely known fun fact.

Anthony Alford: I did know, it was---just like the two gentlemen that we're talking about today---was someone who had invented a lot of other things, but I can't remember who it was.

Roland Meertens: So I know that: Luis von Ahn.

Anthony Alford: Yes.

Roland Meertens: He worked on this and later went on to found Duolingo, which we all use to learn languages nowadays. But I thought that was a really interesting fun fact. In his early years, he was thinking a lot about how we can crowdsource human intelligence, giving people interesting tasks that accomplish jobs for computers and with computers, among which is the CAPTCHA.

Anthony Alford: Now, I think probably the computers are winning that war, though.

Turing and Cryptography [18:16]

Roland Meertens: Yes. Let me not talk about all the experiments I did, but it indeed does seem that CAPTCHAs are getting easier and easier to break. Yes, I thought that at the end of the day, what made us human was our ability to detect traffic lights, but it seems that machines are catching up. Okay, anyways, give me one last fun fact about Alan Turing. Give me one last thing you know about Alan Turing.

Anthony Alford: Well, I have some fun facts when I get to my famous programmer. Of course, he worked during World War II for the British government, cracking codes. So: cryptography.

Roland Meertens: Yes. So he worked in what's now called Bletchley Park, which is maybe an hour's train ride from London. And Yes, the Germans basically had this machine, the Enigma, which they would use to encrypt all their traffic. And every day they would switch its settings.

It was a crazy amount of possible settings this machine could have, because you would pick three out of five rotor wheels and place them in three positions, and then you had all these wires which you could plug into each other. And I think later they even added more rotor wheels. It's very difficult to solve.

Alan Turing worked there. He started working there with a couple of linguists. But Yes, he was more interested in the cryptography. Also, a rarely known fun fact: one of these linguists was Tolkien, who wrote The Lord of the Rings, but Tolkien was more---you're looking very surprised.

Anthony Alford: I did not know that, that he worked on that project. That's interesting.

Roland Meertens: Well, so Tolkien only worked for about two weeks in Bletchley Park, and they apparently already saw that they didn't really need people who were good with language. They needed people who were good with mathematics. That's what he convinced Bletchley Park of.

Anthony Alford: I was going to say, they could have used him to create an entirely new fake language to use for sending messages, sort of like the US did with the Code Talkers, the Native Americans.

Roland Meertens: Oh.

Anthony Alford: That was not a made-up language. That was a real language that nobody knew but them, but they could have done the same with Tolkien.

Roland Meertens: I actually don't know anything about Code Talkers, but okay, tell me.

Anthony Alford: Actually, I think there was a movie, I think it was actually called Windtalkers, but during World War II, the United States---I think the Marine Corps, but one of the branches of the military---used Native Americans to send messages on the radio. They would speak in their own language and nobody else knew it. So it was, in a sense, an unbreakable code.

Roland Meertens: That's quite interesting. Anyways, back to the code breaking in the Second World War. So Yes, as I said, there was this machine which would create very difficult-to-crack messages, and the settings switched every day. However, they discovered that some messages would start with the same words all the time.

So for example, maybe every hour they would give a weather report, and the message would start with Wettervorhersage, the weather forecast for the next hour. Or they knew that some specific people would always start their messages with a certain phrase. So basically, they thought, if you can take a message where you know what it started with, a so-called "crib," then you could mechanically try all the possible keys on it and see if one would produce the correct answer.

Anthony Alford: So brute force, essentially?

Roland Meertens: Yes. So they would have this machine which would brute force all the possible combinations, and if a combination produced a valid key and a valid translation, the machine would stop, and they would of course check it afterwards. You can still nowadays go to Bletchley Park. Guess what I did? Of course, I went to Bletchley Park. Everything for this podcast!
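The crib idea scales down to a toy example. Enigma had vastly too many settings for this, which is why a machine was needed, but with a simple Caesar cipher standing in for Enigma (my substitution, purely for illustration), the "stop when the crib fits" logic looks like this:

```python
# Crib-based brute force on a toy cipher. Enigma was far more complex;
# a Caesar shift stands in here just to show the principle.
def caesar(text, key):
    # Shift each letter A-Z by `key` positions (negative key decrypts).
    return "".join(chr((ord(c) - 65 + key) % 26 + 65) for c in text)

ciphertext = caesar("WETTERVORHERSAGE", 7)   # an "intercepted" message
crib = "WETTER"                               # known opening words

# Try every key; a key "stops the machine" if the crib appears as expected.
candidates = [k for k in range(26)
              if caesar(ciphertext, -k).startswith(crib)]
print(candidates)              # only one key fits the crib
print(caesar(ciphertext, -candidates[0]))   # recovers the plaintext
```

With only 26 keys this is trivial; the Bombe's job was to do the equivalent elimination over Enigma's astronomically larger setting space, mechanically.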

There you can learn about the war, you can learn about what happened there. And here is my Generally AI podcast tip. Instead of visiting the main visitor center, you can visit a computer history museum, which is one block further, but also still in the same park. And it's the most enthusiastic group of volunteers I ever met.

And here's the interesting thing: often people think, oh, Alan Turing invented a computer to break this code. So he invented this computer to go over all these codes. That makes a lot of sense in your mind.

But if you walk into the museum and you say, "Oh, so this is the first computer," they will immediately correct you. They will hunt you down if you call the machine constructed in the Second World War the computer he made, because it's not a computer. It just brute forces a lot of codes in a mechanical fashion. You can't program it. You can't do anything else with it.

But it is, of course, a fantastic work of art. They also have a working version set up there, so if you want to see it rotating and making all the noise it made, you can. And they can also still decrypt certain messages, if they were encrypted with this specific mechanism. Yes, shout-out to this computer history museum at Bletchley Park. You should absolutely go visit it. It's a fantastic place.

Anthony Alford: Very nice.

Roland Meertens: Yes, that's everything I have on Alan Turing.

Anthony Alford: Well, let me give you one more tidbit about Alan Turing. If you are a fan of science fiction, there's a science fiction novel called Cryptonomicon by Neal Stephenson. Part of the storyline is the cryptography efforts in World War II and Alan Turing appears in this as a fairly major, not a main character, but a fairly important character. And it's quite entertaining if you like Neal Stephenson.

Roland Meertens: Nice. Yes, of course, if people want to know more about this, I already recommended a comic, I already recommended going to the musical, and I already recommended going to Bletchley Park, which are all pretty much out of reach. But at the moment, Netflix, at least in the UK, still has the movie The Imitation Game, which is about his life and also shows his time at Bletchley Park. So that's a very good source of information as well.

QCon London [24:14]

Roland Meertens: Hey, it's Roland Meertens here. I wanted to tell you about QCon London 2024. It is QCon's flagship International Software Development conference that takes place in the heart of London next April 8 to 10. I will be there learning about senior practitioners' experiences and exploring their points of view on emerging trends and best practices across topics like software architecture, generative AI, platform engineering, observability and secure software supply chains. Discover what your peers have learned, explore the techniques they're using and learn about all the pitfalls to avoid. Learn more at, and we really hope to see you there. Please say hi to me when you are.

Claude Shannon and Boolean Algebra [25:07]

Anthony Alford: So when you suggested this topic of famous programmers, one of the criteria that I was thinking about was people who have made multiple contributions. So for example, with Turing, he invented the Turing Machine. He also invented the Turing Test. I mean, both of these things are quite important things, and any one of them would've been enough for fame, but no, he had to go and keep doing it.

So when I thought of someone like that, there's a lot of names that come to mind, of course. But the one that I came down on was Claude Shannon, and I didn't really think about it at the time, but the careers of Shannon and Turing have a lot of parallels. So let's first think about their birthdays. Turing was born in 1912, Shannon in 1916, approximately four years apart.

They both started university around the same time. Turing in 1931 and Shannon in 1932. By 1940 they both had PhDs. They both worked on cryptography during World War II. They both made early contributions to AI in the 1950s.

But probably the most important thing they did: they both did work in 1936 and 1937, which you could call a miracle year, that laid the foundations for computer science. So do you know any fun facts about Claude Shannon?

Roland Meertens: Well, so I first of all have his book here, The Mathematical Theory of Communication.

Anthony Alford: You like to go to the source material, don't you?

Roland Meertens: But I will also admit that I made it to page 36, and I tried multiple times to continue, but I was out. So what I know about Claude Shannon is that I thought he worked on the minimum amount of space needed to encode things, like what is the entropy or something.

Anthony Alford: That's correct. Well, that's in that book. That is his work on information theory in the 1940s. So let's talk about Claude Shannon. I used a book also, but this is a Kindle version, A Mind At Play, and it's a pretty good book. I also found several articles about him on the internet, as well as Wikipedia. So I did do some research. So let's talk about Shannon.

So he was born in 1916 in the state of Michigan, and he attended the University of Michigan from 1932 to 1936. He actually earned two bachelor's degrees while he was there, one in electrical engineering and one in math. And then he went to graduate school at MIT.

So he went to work for someone called Vannevar Bush, and I don't know if I'm pronouncing that first name correctly. This guy, Bush, was sort of like the godfather of science at that time and during the war. He managed to get a lot of government research programs going, especially during the war. So Bush had created a differential analyzer, probably similar in spirit to Babbage's machines. It was an analog computer, and he hired Shannon to come work for him on this and to do graduate work.

Roland Meertens: So, for my information: just like Babbage's machine, this is a lot of gears which are rotating at certain speeds, and that way they compute an answer to an equation?

Anthony Alford: And in fact, I think the purpose of Bush's machine was he was trying to solve differential equations. So they would have things spinning, and they're doing integrals and things like that off of spinning wheels. If you ever go on YouTube and look for analog computers, you could see some pretty interesting training videos.

Analog computers were used by our Navy warships for a very long time to control the guns. It takes in all these variables: how fast is the boat going, how fast is the wind blowing, where is the enemy ship? And at the heart of it, there are some ballistics differential equations. So it figures out where to aim the guns and pull the trigger.

Roland Meertens: Wait, so the machine is kind of inside the gun as a mechanical machine instead of an electronic machine?

Anthony Alford: Not anymore, but certainly up during World War Two and beyond. Yes.

Roland Meertens: That is such an interesting fun fact.

Anthony Alford: It's nuts, but it's also a marvel of mechanical engineering, really. So Shannon was working on that for his master's thesis. His master's thesis is, no pun intended, a masterpiece. So Shannon's master's thesis: he's the guy who showed that Boolean algebra could be applied to electrical circuits, to switching circuits.

So he had learned about Boolean algebra as an undergraduate, I think in a logic class, an old-fashioned "Socrates is a man, all men are mortal" kind of class. So Shannon was familiar with this. He was also familiar with relays and switches. He showed how Boolean algebra, which dated from the 1800s, could be used to design these circuits and to simplify them on paper.

Because beforehand, these switching circuits, which were used at Bell Labs in telephone and electrical power networks, were more or less designed by hand. Shannon showed how you could use a simple mathematical algebra to design and simplify them. So he showed how you could take a complex switching circuit and reduce it to a much smaller number of switches. He showed that switches in parallel were an OR gate, switches in series were an AND gate, things like that.

Roland Meertens: And so did he then invent the AND and OR gate, or would he then say, oh, I have these AND and OR gates, and I combine them in this way into a more complicated circuit?

Anthony Alford: So people were essentially using this concept of series or parallel switches before he came along. But he abstracted that into...he said: parallel switches are the function OR.

Roland Meertens: Oh.

Anthony Alford: Series switches are the function AND. And you can do these operations with…all you need is one, zero, NOT, AND, OR, and now you have a complete algebra and you can set up propositions. You can essentially design circuits with this algebra.
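To make Shannon's mapping concrete, here is a minimal sketch in Python (our illustration, not anything Shannon wrote) that models switches as booleans and verifies an algebraic simplification by exhaustive truth table:

```python
from itertools import product

# Shannon's mapping: a closed switch passes current (True),
# switches in series behave like AND, switches in parallel like OR.
def series(a, b):
    return a and b

def parallel(a, b):
    return a or b

# A redundant circuit: (a AND b) OR ((a AND b) AND c)
def complex_circuit(a, b, c):
    return parallel(series(a, b), series(series(a, b), c))

# The absorption law of Boolean algebra, x OR (x AND y) = x,
# says the whole thing collapses to just a AND b.
def simplified_circuit(a, b, c):
    return series(a, b)

# Verify by exhaustive truth table that both circuits agree.
for a, b, c in product([False, True], repeat=3):
    assert complex_circuit(a, b, c) == simplified_circuit(a, b, c)
print("circuits are equivalent")
```

Doing this on paper, before building anything, is exactly the kind of simplification Shannon's thesis enabled.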

Roland Meertens: Yes, it sounds like one of those things where nowadays you think, oh, that's so logical, of course this is the same thing. But I can completely imagine that you have these parallel worlds: the logicians doing Boolean algebra, and the mechanical engineers thinking, oh, we have all these ways to combine signals, but having no clue where to go with it. And putting them into one thing makes something bigger than the sum of its parts.

Anthony Alford: And that's exactly what happened. And Shannon himself pointed out he was probably one of the few, if not the only person, who was familiar with both worlds. Probably there were other people, but he happened to be there at that time. So yes, he invented that. He did some actual complex circuit design with it. He designed a four-bit half adder circuit, and if you think about it, this was 1937, and he would have designed this circuit with just the idea of relays or switches.

Roland Meertens: What is a half adder?

Anthony Alford: Half, H-A-L-F. Half adder. So that's a digital circuit that adds two bits together, producing a sum bit and a carry bit. I forget why it's called a half adder, I think because it doesn't take a carry bit in, but it's a building block of the arithmetic logic unit in processors.

Roland Meertens: Okay, got it. Thank you.
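As an illustration, here is a hypothetical half adder sketched in Python, building the sum out of AND, OR, and NOT, the primitives of Shannon's algebra:

```python
# A half adder adds two single bits, producing a sum bit and a carry bit.
# The sum is XOR, which can be built from AND, OR, and NOT;
# the carry is simply AND.
def half_adder(a, b):
    s = (a and not b) or (not a and b)  # XOR from AND/OR/NOT
    carry = a and b
    return int(s), int(carry)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum {s}, carry {c}")
# 1 + 1 gives sum 0 and carry 1, i.e. binary 10.
```

Chaining adders (with carry-in, the "full" adder) is how an arithmetic logic unit adds multi-bit numbers.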

Anthony Alford: It's been a long time since I did this, but what I was going to say, it's hard to overstate how huge this was. My good friend Wikipedia says it's "the fundamental concept that underlies all electronic digital computers." I'm talking about the Boolean algebra.

And anyone, like me, who studied electrical engineering has designed digital circuits using these principles, including learning what a half adder was 30 years ago. And every programmer, of course, has written an if statement with ANDs and ORs, and you can use Boolean algebra to simplify your if statements. I don't know if any of you have ever done that. I've done it occasionally.

Roland Meertens: Yes. I think that in the automotive world, they program in a very particular way, which more closely resembles this. Also, one thing I once found on the internet is the NandGame. I don't know if you know it?

Anthony Alford: Nand? Yes. And what about it?

Roland Meertens: So it's a game online where you start with the basic Boolean algebra, and you use relays to create these basic logic units. The further you progress through the levels, the more complicated your designs become, but you can reuse the things you built before. So basically you start with the most basic unit, which is just a NAND gate, and then you build an entire computer and arithmetic unit out of it. It's an interesting game if you have too much time on your hands.

Anthony Alford: You've seen people do this in Minecraft, I'm sure, they've built computer processors in Minecraft out of switches and Redstone and things like that.

Roland Meertens: Yes, it's unbelievable the amount of Redstone I had to mine to get basic building blocks and create memory and kind of store characters for a screen I made at some point. But it's such a cool technology that once you understand the basics and can make these gates, you can be super creative and make a lot of interesting things.

Information Theory: How Many Bits Will Fit? [34:05]

Anthony Alford: So this was just Shannon's master's thesis. Some people have called it possibly the most important and also most noted master's thesis of the century. I doubt anybody's read my master's thesis since I had my presentation back in whenever that was, a long time ago.

But how many people can say that? His master's thesis created the field of digital circuit design. And that was not all. I mean, if he had done nothing else, he'd be a legend just for that, just like if Turing had done nothing but the Turing machine.

But as we mentioned before, Claude Shannon did something amazing 10 years later: he invented an entirely new discipline called Information Theory. So I mentioned Bell Labs. Shannon, I think, did a summer at Bell Labs while he was working on his thesis, and he began working there full-time in 1940, which takes us on a little detour into the unpleasantness of that time.

He did some work during the war on cryptography, just like Turing did. One of the projects he worked on was speech audio encryption, which I believe was based on the idea of a vocoder: you can decompose speech and basically replay it with a vocoder, or a voder, I forget which one is which.

Roland Meertens: Well, is this the idea where you break the speech down using Fourier theory and send the separate waves, and then someone else puts them back together?

Anthony Alford: I think it was something like that. And so Churchill and FDR would use that to actually have phone conversations that were encrypted.

Roland Meertens: Oh, already?

Anthony Alford: Yes.

Roland Meertens: Back in the day?

Anthony Alford: Yep. So working on that encryption, and just working at Bell Labs in general, he thought a lot about the general principles of communication. So in 1948, he published the work you showed us, A Mathematical Theory of Communication, and that's credited with laying the foundations of Information Theory.

Roland Meertens: Interesting.

Anthony Alford: Yes, it did a couple of things. One thing, as you mentioned, is that it formalized the bit, the binary digit, as the unit of information. And it introduced the concept of entropy as a measure of the amount of information in a message: like you said, for an encoding scheme, how many bits you need on average to encode a message.

He also talked about the communication channel and how each communication channel has a maximum bit rate. So basically the maximum speed at which you can transmit messages. And he showed how when you add noise to the channel, that affects the maximum bit rate that you can reliably transmit messages.

Some of the consequences of these ideas include things like data compression, which you talked about, the average number of bits you need to represent something, and error-correcting codes, which are used everywhere from modems to cell phones, et cetera. So again, if he had done nothing else, what an achievement.
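To make the entropy idea concrete, here is a small illustrative Python sketch (our rendering, not Shannon's own notation) that computes the Shannon entropy of a message, i.e. the average number of bits per symbol an optimal code would need:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(message):
    """Shannon entropy H = -sum(p * log2(p)) over the symbol
    frequencies observed in the message."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A highly predictable message carries little information per symbol...
print(entropy_bits_per_symbol("aaaaaaab"))  # ~0.544 bits per symbol
# ...while a uniform message over 8 symbols needs log2(8) = 3 bits each.
print(entropy_bits_per_symbol("abcdefgh"))  # 3.0 bits per symbol
```

This is exactly why compressible data compresses: its entropy is lower than the naive bits-per-symbol encoding it arrived in.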

Roland Meertens: Maybe here: when I think about entropy and the amount of information per message, I always think about Morse Code. But he's not Morse, he didn't invent that, right?

Anthony Alford: So no, Samuel Morse invented that back in the 1800s. And obviously that was sort of ad hoc, but the coding scheme for Morse Code is not terrible. So for example, the letter E is quite common in English, and E is a single dot; the letter A is a dot and a dash, whereas Y is dash, dot, dash, dash. I forget most of them. I only know Y and Z. You know why? Because the band Rush has a song called “YYZ,” and the rhythm is the Morse code for YYZ.

Roland Meertens: Oh, that's beautiful. That's so brilliant.

Anthony Alford: Yes. Well, that's the ID for Toronto airport. So the idea is it's the transponder sending out YYZ in Morse Code.

Roland Meertens: Amazing.

Anthony Alford: Anyway, so yes, Morse Code predates Shannon by quite a bit, but the coding scheme, while it's not optimal, it's not terrible, I think.

Roland Meertens: Yes, interesting. The other thing I got from this book, which I thought was quite interesting, is that different languages also have different information densities. So I propose that for the next podcast, we figure out which language has the best information density, so people don't have to play our podcast at one and a half times the speed; they just get their podcast content in the language that communicates best. What do you think about that?

Anthony Alford: Well, interesting. So there's a trade-off between that and redundancy. Redundancy is essential for sending things reliably: if you squish everything down to the most dense representation and you lose one bit, you're just out of luck. On the other hand, there's a paper out now called Compression is Intelligence, or something like that.

Roland Meertens: Yes, the one where they use gzip compression in place of embeddings for predicting classes, which I think is quite interesting.

Anthony Alford: So that kind of leads us to AI and Turing. I mentioned that Shannon did work on cryptography during the war. He claims that his cryptographic work and his communications work were inseparable, so clearly that gave him some inspiration. Some of his contributions to cryptography: one was to prove several properties of one-time pads, such as that they're actually unbreakable, but you have to throw them away after you use them.

Roland Meertens: So it needs to be also as long as the message, I believe.

Anthony Alford: Yes, I think that's right.

Roland Meertens: Yes. So you have to have a one-time pad which is as long as the message, and it has to be completely random, and you shouldn't tell anyone. I think that's also something he mentioned: you should keep your secrets secret.
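Those three requirements can be illustrated with a tiny Python sketch of an XOR one-time pad (a modern byte-oriented rendering for illustration, not Shannon's original formulation):

```python
import secrets

def xor_bytes(data, key):
    """XOR each message byte with the corresponding pad byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
# The pad must be truly random, exactly as long as the message,
# kept secret, and never reused.
pad = secrets.token_bytes(len(message))

ciphertext = xor_bytes(message, pad)
recovered = xor_bytes(ciphertext, pad)  # XOR with the same pad undoes it
assert recovered == message
```

Shannon's proof shows that with a truly random, single-use pad, the ciphertext gives an eavesdropper no information at all about the message; reuse the pad, and that guarantee collapses.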

Anthony Alford: Very much. He did meet Turing. Turing visited the US in something like 1943, so they did meet and actually spent some time together. It's unclear how much they spoke about their crypto work, since you have to keep it a secret: you can't talk to this other guy, even though he's also working on the same kind of project. But imagine that, Turing and Shannon together.

Roland Meertens: Yes, I think Alan Turing also worked for a while on how to encrypt audio. He also had so many crazy ideas, like: what if you have a physical record, like you put in a record player, which is then the one-time pad? But if you want to do that, you would have to drag record players through your trenches during the war and have these records handed out. There are so many interesting ways you could encrypt audio, and none of them are easy ones.

Anthony Alford: And that scheme actually is in the novel I mentioned, Cryptonomicon.

Roland Meertens: Oh, nice.

Shannon and the Summer of AI [40:18]

Anthony Alford: Yes. So finally, let's get to AI. Now, if you've done machine learning work lately, you're probably familiar with something called the Cross-Entropy Loss. That's the entropy that Shannon described. By the way, the story is that John von Neumann suggested he use the word entropy. Apparently he said, nobody knows what entropy is, so they can't argue with you.

Roland Meertens: That's a good thing.

Anthony Alford: By the way, if we ever did another famous programmer episode and we chose John von Neumann, we'd have to spend the entire episode on him, because he was such a giant. But anyway, here's a fun AI-related story from Shannon's information theory work. One of the concepts was that the amount of information in a message, or a word, or one of the code symbols, is related to its probability of occurring. The more unlikely something is to happen, the more information is in it.

And we mentioned the information density of languages, the frequency of letters and words in a language. Obviously in English, "the" is the most common word. So if you see "the," there's very little information in it. You can in fact probably leave all the "the"s out of a message and not lose anything.

So Shannon actually developed a generative language model algorithm.

Roland Meertens: Oh, really?

Anthony Alford: Well, wait. It was kind of more like a parlor trick. So given a word, say the word is flower: you take a book and you flip through the book until you find the word flower. Then you output the next word right after it, and then you go find that word and output the next word after it.

And again, that's just what a language model does: given a word, what is the next word to come? And using this, he could generate reasonably coherent sentences. So here's an example I found: "The head and in frontal attack on an English writer that the character of this point is there." So it's not completely sensical, but it's not complete gibberish.
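Shannon's book-flipping trick is essentially a bigram model. A minimal Python sketch of it (using a made-up toy corpus, not Shannon's actual book) might look like this:

```python
import random
from collections import defaultdict

# Shannon's parlor trick as a bigram model: for each word, remember
# which words follow it in the text, then repeatedly sample a successor.
text = ("the head and in frontal attack on an english writer that the "
        "character of this point is there and the head of the writer")
words = text.split()

successors = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    successors[current].append(nxt)

def generate(seed, length=10):
    out = [seed]
    for _ in range(length):
        options = successors.get(out[-1])
        if not options:
            break  # dead end: the word never appears mid-text
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```

Sampling a successor in proportion to how often it appears is exactly the "flip until you find the word" procedure; a modern LLM does the same thing with a learned probability distribution over a vast context instead of one preceding word.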

But here's the best part. Shannon spoofed this idea. The book that I shared has a footnote saying he wrote an unpublished scenario about this, where this technique is used by a Nazi scientist on the run to automate the production of propaganda: "by randomly stitching together agitprop phrases in a way that approximates human language, a machine could produce an endless flood of demoralizing statements." This is exactly what our modern-day LLM creators are worried about.

Roland Meertens: Yes, indeed. I just wanted to say that this sounds exactly like... oh no, I really hope that the source material is not in the data used by ChatGPT.

Anthony Alford: Oh, well, yes, this guy was clearly ahead of his time, but there's still more AI stuff. So in 1949, he wrote a paper on programming a computer to play chess: of course, all AI starts out with playing chess. And I think even in the 1980s, Byte Magazine said "there have been few new ideas in computer chess since Claude Shannon."
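Shannon's chess paper proposed searching the game tree with minimax plus an evaluation function for positions. A toy Python sketch of minimax on a hand-built tree (nothing chess-specific, just the core idea) might look like this:

```python
# Minimax on a hand-built game tree: leaves hold evaluation scores
# (positive favors the maximizing player). Shannon's proposal was this
# search plus a material/position evaluation function for chess.
def minimax(node, maximizing):
    if isinstance(node, (int, float)):  # leaf: evaluation score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 tree: the maximizer picks a branch, then the minimizer
# picks a leaf. Branch values are min(3,5)=3, min(2,9)=2, min(0,7)=0,
# so the maximizer's best guaranteed score is 3.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # prints 3
```

Real chess engines add alpha-beta pruning and much richer evaluation, but the backbone is still this recursion Shannon described.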

Roland Meertens: Oh, it's interesting, because I left this out, but Alan Turing was also a very big fan of chess, and he had all kinds of weird, interesting chess variants. Let me not talk about that, or this podcast becomes way too long. But he couldn't implement his chess program at the time, because no computer was powerful enough to run it.

So a couple of years ago, they actually implemented it as a bot, and they showed it off at some chess conference, and they had a human play against the chess bot Alan Turing basically programmed. I think the human won in like five moves. It was like, oh, sorry, I defeated it. But look up the video online. It's really funny to sit there full of anticipation, like, okay, let's go Alan Turing, let's see what your idea is worth, and then within a couple of moves, you're like, well, it's over.

Anthony Alford: Perhaps if he had more time to work on it with a real computer. Right?

Roland Meertens: Yes, indeed.

Anthony Alford: So Shannon was also a tinkerer, and he loved making little contraptions. In 1950, he built a robot mouse that he called Theseus that could learn to navigate a maze. It had external brains. It was really a big computer with an electromagnet to move a little mouse around, but it actually did learn to navigate. You could pick it up and put it in a different part of the maze, and it would move until it found the place it remembered and then go straight to the cheese.

Roland Meertens: Yes, brilliant.

Anthony Alford: Now, one final question for you. What year was the official beginning of AI?

Roland Meertens: Oh, I think if we think about AI, I would say there is this one conference, or some summer school, where they said: let's have a discussion about AI and think about it for a couple of weeks; we can probably solve the problem in one summer. And that sounds exactly like me talking to any product manager, saying: this project is totally feasible, please give me my manpower and let me do my thing.

Anthony Alford: You are exactly correct. So the general opinion is that the term artificial intelligence was coined for the 1956 summer workshop at Dartmouth.

Roland Meertens: Okay, nice.

Anthony Alford: Guess who was one of the organizers?

Roland Meertens: I'm going to say Claude Shannon.

Anthony Alford: It was Claude Shannon, along with Marvin Minsky, John McCarthy, and a few others. But Shannon was there.

Roland Meertens: I didn't know that. This is so interesting.

Anthony Alford: Yes, so this guy shows up everywhere!

So I'll try to wrap up. We skipped a lot of things. I didn't mention his PhD work, which was in genetics, a field he knew practically nothing about, but his advisor thought it would help give him some breadth. He didn't break new ground; his PhD work was not as influential as his master's thesis. But you know what, all things considered, two out of three ain't bad.

He did do other work during the war, including fire control. If you've heard the term cybernetics: another researcher, Norbert Wiener, coined that term based on his work on fire control systems. And Shannon is also credited with inventing the signal flow graph, which is a very handy little tool for signal processing.

And on top of that, he was also apparently a very fun-loving person. He would ride a unicycle and juggle, and he would build or collect these interesting and useless electromechanical devices, like a calculator that uses Roman numerals.

Roland Meertens: Nice. So I also went to his Wikipedia page today, but I didn't have that much time, so I just spent five minutes. So don't think I'm now an expert, but I just read the middle of his Wikipedia page, on his life, and I was like, I want to be this guy. I mean, nowadays I've also picked up unicycling, and just the fact that he was riding unicycles and was a known juggler is so amazing. But did you also see that he invented a juggling robot?

Anthony Alford: Yes, I think I did see that. That's pretty nuts.

Roland Meertens: Yes. So this is something I was completely unaware of, that this is a thing. It's also not a very big thing, but if you have time, look up juggling robots on YouTube, because he has a very interesting design, which just throws balls at the floor and then catches them.

Anthony Alford: Ah, they bounce. Okay.

Roland Meertens: Yes, he said that was easier than catching them, which I guess he was totally right about. It's a very nice, simple mechanism. He also built it out of, I don't know, here in the Netherlands we call it Meccano: some kind of kit you can easily use to create simple machines and put things together.

Yes, I saw that in the final years of his life, he basically still worked for the university, but he was just sitting at home making interesting things. The one thing I couldn't find is that apparently he said he invented a rocket-powered Frisbee. I have no clue what a rocket-powered Frisbee is, but I want one. That's all I know. I couldn't find any, but I want one now.

Anthony Alford: Well, you know what, Roland, now's your opportunity to iterate on that design.

Roland Meertens: Yes. The life of rocket powered Frisbees has stood still ever since then.

Anthony Alford: Well, that's all I had about Claude Shannon. Like you said, definitely a very interesting guy, and obviously Turing as well. Both of them giants in their fields, and both did incalculably groundbreaking work, more than once.

Conclusion [48:04]

Roland Meertens: Yes, very interesting.

Okay. What did we learn today?

Anthony Alford: Wow. Well, I learned that there's a lot of pop culture devoted to Alan Turing: movies, musicals, comic books.

Roland Meertens: I can recommend them all. I really enjoyed this. I didn't know that Claude Shannon was at the start of the invention of artificial intelligence; I think that's very interesting. I learned a lot of interesting new fun facts about Shannon today. I didn't know he was such an influential figure in our modern-day life.

Anthony Alford: Definitely. Definitely. Well, that's what we're doing here. We're providing a valuable service for reminding people that we're standing on the shoulders of giants.

Roland Meertens: I'm very happy we did this episode.

Anyways, if you are also very happy that we did this episode, please like it on your favorite podcast platform. Just give us a rating, give us a couple of stars, leave us a review. Tell other people that it is interesting. I personally get basically all my podcast tips from friends. So if you are the same, please tell your friends that this podcast is worth listening to.

And if you have tips for us or ideas or feedback, just send us a message. And Yes, that's it for the third episode of Generally AI, an InfoQ podcast. Thank you very much for listening.

Also, as another fun fact, which people are not going to know: at some point you hear this text in the background that says, "computers have taken control of the platform." That is actually a recording of the space shuttle, from when it goes up. At some point, some astronauts said, "computers have taken control of the platform." I thought that was quite fitting for the podcast.

Anthony Alford: I love it.

Roland Meertens: It's also in Apple's library of standard sounds. That's where I found it.

Anthony Alford: All right.


More about our podcasts

You can keep up-to-date with the podcasts via our RSS Feed, and they are available via SoundCloud, Apple Podcasts, Spotify, Overcast and Google Podcasts. From this page you also have access to our recorded show notes. They all have clickable links that will take you directly to that part of the audio.
