
A Conversation with Sir Tony Hoare


1. I'm Sadek Drobi. I'm here at QCon with Sir Tony Hoare. Can you tell us a bit about yourself and about your long career in computer science?

I could start with my career in computer science and my first employment in 1960 with a small British computer manufacturer, which was quite successful in those days. I was employed as a programmer and shortly afterwards I became a section leader, looking after a project for the compilation of Algol 60, which the company was selling with its computers. In 5 years I rose to the rank of chief engineer in charge of the entire software effort. Then I was rapidly moved sideways to the position of chief scientist in the research laboratory, where I spent 2 years researching into computer architecture, including virtual memory and caches and other ideas like that.

In 1968 I thought I would move to a university post, and I applied for and obtained a job as a professor at the Queen's University in Belfast, which had a small computing department consisting of 4 academic posts and 4 programmer posts. I set up there an undergraduate degree in computing and joint honours degrees with other subjects. In 1977, the post of Professor of Computation at Oxford University fell vacant and I - rather reluctantly - decided that it would be fun to move back to the mainland and perhaps more into the mainstream of academic computer life. So, I moved to Oxford and spent 22 years there as a professor in the Faculty of Mathematics, which was quite a surprise for somebody who's a graduate of Philosophy and Latin and Greek, but it was successful and very interesting. I recruited to the department some brilliant people whom I liked very much, very good friends.

About 1999 I was approaching the age at which the University would require me to retire, and Roger Needham, who was setting up Microsoft Research in Cambridge, very kindly offered me a job. After thinking carefully for a while and talking to various people about the objectives and the organization of research at Microsoft, I was extremely impressed by the degree of academic freedom which Microsoft was offering its researchers. I hoped that I would be able to do something for them, and perhaps indeed even for the academic colleagues that I was leaving behind, by constructing this extra link with industry. I've never regretted it since. I've been working for Microsoft in Cambridge for very nearly 10 years now and I've done a variety of things, including a lot of keynote addresses.

My present major concerns are twofold: I have a personal research objective, which I actually started 20 years ago, which I call Unifying Theories of Programming; and I have an academic liaison objective, which is to try to build good links between the successful and very good work that Microsoft is doing in formal analysis of computer programs and the research that is being conducted in the same field by academic researchers. I have a clear idea - it may be an incorrect one - about the proper roles of industry and academia in the furtherance of scientific knowledge and engineering technology.

The two are approaching the same nirvana, as it were, from 2 different ends. Industry can work from what exists and has an enormous amount of experience and case material - programs of serious economic and social value - which provide the challenge for the theorists, who have to explain why they work. That leads me to the subject of my first keynote address, which was the distinction between science and engineering, why we need both, and why we need good links at all points in the chain that separates them - or rather links them.

   

2. You mentioned Microsoft Research, and there are several projects there that are based on your own ideas. One of them is the Spec# project, which is about the type system, or rather a formal system of program verification based on your logic. Can you tell us about these concepts, and then tell us about the difference between having it inside the type system or as an independent system that then plugs into the program to verify it?

When I first started work on a formal investigation of program correctness, that was when I moved to Queen's University in Belfast. In fact, my motivation for moving there was that I felt that the research in this area was very long term, that its prospects of actual application were distant, and that it was wrong to expect industry to engage in research of that long-term nature. The proper place to do long-term research is in the university - or at least it was in 1968, and I believe that it still is - and the university should be concentrating on principles and theories, which embody scientific truth, rather than any technology that is likely to have an immediate impact in the marketplace.

I moved to the university in 1968 and I knew that it was unlikely that any of the results of this formal research in programming languages would be applied in industry until after I'd retired, in 1999. Indeed, that part of my prediction turned out to be correct. One of my motives for moving into Microsoft was that I could see what was actually happening in the industry from the inside, and maybe assist in making the other half of my prediction - which wasn't a very strong prediction - come true a little bit earlier than it might otherwise have done.

I did find that people in Microsoft were using assertions in the code of their programs - not to prove correctness, but to assist in testing. The assertion is inserted as a conditional macro, which is actually evaluated at runtime during testing and is therefore capable of detecting the symptoms of program failure as close as possible to the place where the error actually occurred. I was delighted to find the ideas were being used, and a little bit horrified to find that they were being used by testers, because testing - of course - is the great enemy of proof: if you prove things, you wouldn't need to test, and if you can test things successfully, you don't need to prove them either, and that's very unfortunate.
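As an illustration of the practice described here, the following is a minimal C sketch (the function and data are hypothetical): the standard assert macro is compiled in for test builds and evaluated at runtime, so a violated assumption stops the program close to its source, and compiling with -DNDEBUG removes the checks for release.

```c
#include <assert.h>
#include <stdio.h>
#include <stddef.h>

/* Hypothetical function: the assertions state the assumptions the caller
 * must satisfy. In test builds they are evaluated at runtime and stop the
 * program right where an assumption is violated; compiling with -DNDEBUG
 * removes them for release builds. */
static double average(const double *values, size_t count)
{
    assert(values != NULL);   /* documented precondition, checked in testing */
    assert(count > 0);        /* guards the division below                   */

    double sum = 0.0;
    for (size_t i = 0; i < count; i++)
        sum += values[i];
    return sum / (double)count;
}

int main(void)
{
    double xs[] = { 1.0, 2.0, 3.0 };
    printf("%f\n", average(xs, 3));
    return 0;
}
```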

I rapidly learned to accept that as a very useful synergy between the testing that programmers are very familiar with and the verification, which they will only become familiar with when the tools are available to perform most of the verification automatically. I also found that inside Microsoft they were already working on tools of this kind. There is a particular tool, PreFix, and a faster version of it called PreFast, which were developed inside Microsoft Research and, while I was at Microsoft, were rolled out into the development divisions to perform improved checking on the code of Windows.

Indeed, it was capable of detecting errors in Windows code that had been in the field for some time, as well as in new code that had just been developed. It turned out to be a very valuable tool in the tool set which the developers were using. Essentially, it was - as you say - an extension of an ordinary type system to deal with more aspects of the correctness of programs. It concentrated particularly on what I call generic errors, which are always errors, independent of what the program is trying to do - errors like null pointer de-referencing and subscript overflow. Each potential subscript overflow was analyzed by this PreFix program to see whether it really could lead to a runtime error, and it would give warnings in all cases where it detected a likely runtime error.
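For illustration only - this is not PreFix's actual analysis or output - the hypothetical function below contains the two kinds of generic error mentioned, the sort of defects a static checker reports regardless of what the program is meant to do.

```c
#include <stdio.h>
#include <string.h>

/* Illustrative only: two "generic" defects of the kind a static checker
 * reports regardless of what the program is meant to do. */
void report(const char *name)
{
    char buf[10];

    /* Possible null dereference: nothing on this path guarantees that
     * name != NULL before strlen dereferences it. */
    size_t len = strlen(name);

    /* Possible subscript overflow: if len >= 10, the write below runs past
     * the end of buf. The analysis asks whether that case is reachable. */
    buf[len] = '\0';

    printf("%zu characters\n", len);
}
```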

The warnings were advisory. If you give a warning, there is never a guarantee that it is a real fault, and there is a great problem in that false warnings are as expensive as true warnings to actually debug, as you have to convince yourself that, in spite of the fact that it looks suspicious, the error is not going to happen. This is quite a stressful business, because if you get it wrong and some malware writer detects this potential error, the malware technique is to drive the program into the place where the error can occur and then exploit it for nefarious purposes. If you reject a warning as a false alarm and it then allows a virus to spread, a single virus can cost 4 billion dollars to the world economy - and it did: the Code Red virus was estimated to have cost 4 billion dollars.

It's very important to reduce false alarms, and it's very important also to detect all true errors, but there were no guarantees given for that either. It was a purely advisory thing; it was not really a type system, although it used the techniques of type verification in a slightly more flexible form, and although it used the concepts of verification, it was very much an engineering compromise of producing as many true errors as possible and as few false alarms as possible. Nevertheless, what this project did was deliver a very perceptible benefit from the Research Division to the Development Division, and from the moment that got accepted - it took a while to get accepted - the developers were always open to new ideas from the research divisions, and the research divisions also had the challenge that they had to describe their ideas in a form that wouldn't excite too much interest, that wouldn't excite false expectations.

The contact between research and development at Microsoft - something which the management puts a great deal of effort into - really started working, and this led to things like C#, Spec# and a more recent tool, called ESP internally, which checks against buffer overflows. That set out to detect 100% of all faults and give very high assurance that there are no buffer overflows left in the operating system. It can't be quite 100%, and they used verification technology to avoid false alarms. The tool was able to ensure that 94% of all the buffer references inside the operating system were in fact correct, leaving only 6% for manual inspection - which sounds fine, until you realize that 6% of Windows is an awful lot of code.

But they invested an awful lot of money into doing the checking, because buffer overflows were a serious drain on the resources of the company and of its users. We are gradually moving from a heuristic style of verification - partial verification - to a more rigorous style of still partial verification. We are not doing total correctness of the program, just avoidance of the structural errors, and we can see that what the future holds is the possibility of making gradual progress towards the goal of ideal zero-defects software.
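As a hedged sketch of what verifying a buffer reference amounts to in practice, the hypothetical routine below makes the buffer-size contract explicit so that every write can be shown to stay in bounds (in practice such contracts are often expressed with source-level annotations that static checkers understand; plain C assertions stand in for them here).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical routine with an explicit buffer-size contract: dst_cap is
 * the capacity of dst in bytes, including the terminating NUL. Given the
 * asserted preconditions, every write below stays within bounds. */
static size_t copy_string(char *dst, size_t dst_cap, const char *src)
{
    assert(dst != NULL && src != NULL);
    assert(dst_cap > 0);

    size_t n = strlen(src);
    if (n >= dst_cap)           /* truncate rather than overflow dst */
        n = dst_cap - 1;
    memcpy(dst, src, n);
    dst[n] = '\0';              /* in bounds: n <= dst_cap - 1       */
    return n;
}

int main(void)
{
    char out[8];
    copy_string(out, sizeof out, "hello, world");  /* safely truncated */
    puts(out);
    return 0;
}
```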

   

3. You mentioned, when you were talking, null references and buffer overflows. What are the other things that we want to detect in a program? What other faults could we or should we detect in a program?

There are all kinds of exceptions that we would like to avoid in running programs. There has been a very successful application of formal techniques in the European space program: after the Ariane 5 disaster, when they lost the rocket on takeoff, they have been analyzing the control code for all subsequent rockets using a tool developed to detect and avoid overflow - numbers getting too large or too small. I think that the next phase is for the programmer to be able to specify in advance which exceptions the programmer wants never to happen, particularly exceptions at the interface with libraries, because the conditions for library call exceptions are well documented in a good library.

You just look at the first 20 or 100 lines of code in a library call and you see all the things that can go wrong. I think it would be useful for programmers to have the option of checking that maybe not all, but at least some, of the modules are not going to raise any of those exceptions, because that's the intention. They probably express some important aspect of the correctness or the structural integrity of the interface between the user code and the library.
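A small sketch of checking documented library conditions at the interface (the wrapper and file name are hypothetical): the assertions and the explicit failure test state the conditions under which the standard library calls are documented to go wrong, so a violation is caught at the boundary between user code and library.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical wrapper that states, at the interface, the documented
 * conditions under which the library calls can go wrong: fopen may fail,
 * and fread requires a valid buffer of at least buf_size bytes. */
static size_t read_file(const char *path, void *buf, size_t buf_size)
{
    assert(path != NULL);               /* documented requirement of fopen */
    assert(buf != NULL || buf_size == 0);

    FILE *f = fopen(path, "rb");
    if (f == NULL)                      /* documented failure: report it   */
        return 0;

    size_t n = fread(buf, 1, buf_size, f);
    fclose(f);
    return n;
}

int main(void)
{
    char buf[64];
    size_t n = read_file("example.txt", buf, sizeof buf);  /* hypothetical file */
    printf("read %zu bytes\n", n);
    return 0;
}
```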

But then, you also want to have the option of a more rigorous check, where at least some of the functional characteristics of the program - the application-dependent correctness - can be codified, perhaps as assertions, and, using the same verification techniques, checked so that they will always be met: that certain parameters are kept within requisite bounds, that certain security defects are never tolerated. Finally - again, a scientist is an idealist - one should be able to express the conditions for the full functional correctness of the program as well: specify what it was that the programmers intended it to do, and then make sure that it actually does it.
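As an illustration of application-dependent correctness codified as assertions (the example is hypothetical and only a partial specification), the check below states one functional property of sorting - the output is ordered - in a form that can be tested at runtime or, in principle, verified.

```c
#include <assert.h>
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

/* Hypothetical example: the assertion after qsort states one functional
 * property of sorting - the output is ordered. It is only a partial
 * specification (it says nothing about the output being a permutation of
 * the input), but it can be checked at runtime or, in principle, proved. */
static void sort_ints(int *a, size_t n)
{
    qsort(a, n, sizeof a[0], cmp_int);

    for (size_t i = 0; i + 1 < n; i++)
        assert(a[i] <= a[i + 1]);       /* partial postcondition of sorting */
}

int main(void)
{
    int a[] = { 3, 1, 2 };
    sort_ints(a, 3);
    return 0;
}
```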

   

4. In functional programming languages like Haskell, they even try to specify effects in the type system. What do you think of that? Would it be helpful in programming for the enterprise?

Certainly, in Haskell, the techniques for controlling effects are just the kind of techniques that one uses in a functional language to reproduce the verification of behavioral properties in imperative programs as well. Interestingly, even in imperative languages, you do need to consider historical aspects of the behavior of the system or the history of the effects that it has on the environment in specifying what a correct behavior is. Effect systems are important in both imperative and in functional languages.

   

5. Having these strong type systems could help you do a lot of things afterwards, when you implement software. There is this project at Microsoft Research called Singularity, and we hear that you've been involved in the project in some way or another. Can you tell us a bit about this project?

I must disclaim having had very much contact with it, but it is based on a theory which I have espoused in the past: that concurrency is most easily controlled in an environment of disjoint communicating processes, in which the local workspace of each process is completely inaccessible to any other process and the only method of communication between the processes is by explicit input and output of messages. It was an idea that certainly pre-existed the communicating sequential processes theory, which I worked on myself, and it had been proposed as a method of structuring operating systems for many years before that, but the difficulty is that it involves copying the messages that are sent from one process to another, from the store of the sending process into the store of the receiving process.

That extra copying takes extra time, it's not very good for cache memory and, if the messages are at all large, then you can spend most of the time doing message passing instead of useful work. The solution to this is not to copy the messages, but to pass the ownership of the message from one thread to another, from one process to another. The ownership is passed together with just a pointer to the message and, as the pointer is passed by a single copy in a register, the receiving process assumes ownership and is allowed to read and update that message in situ, without any copying.

The sending process is guaranteed never to try to look at or read the message until the pointer is - perhaps at some later stage - passed back to the sending process or, more likely, passed on to another module in the operating system to do further processing. The checking that none of the processes ever accesses this shared memory at times when it doesn't have ownership is all done by an effect system, a compile-time system that guarantees against error. I think that's quite a breakthrough in applications of the ideas of message passing.
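A rough sketch of the ownership-transfer pattern in plain C, where the discipline is only a convention upheld by comments and care; in Singularity the compile-time checks described above make the same discipline a guarantee. All names here are hypothetical.

```c
#include <stdio.h>
#include <stdlib.h>

/* The "message" is never copied; only a pointer to it is handed over, and
 * the sender stops using it after the hand-over. In plain C the discipline
 * is a convention; in Singularity a compile-time ownership check makes it
 * a guarantee. */
typedef struct { size_t len; char payload[256]; } message_t;

/* Hypothetical receiving side: it takes ownership, may update the message
 * in situ, and is responsible for disposing of it. */
static void receiver_consume(message_t *msg)
{
    msg->payload[0] = 'H';              /* reads and updates in place */
    printf("received %zu bytes: %s\n", msg->len, msg->payload);
    free(msg);
}

int main(void)                          /* the sending side */
{
    message_t *msg = malloc(sizeof *msg);
    if (msg == NULL) return 1;
    msg->len = (size_t)snprintf(msg->payload, sizeof msg->payload, "hello");

    receiver_consume(msg);              /* ownership passes with the pointer */
    msg = NULL;                         /* sender must not touch it again    */
    return 0;
}
```

The point is the one made above: only a pointer moves between the parties, and after the hand-over the sender never reads or updates the message again.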

   

6. In some way, in software, we made some assumptions and then we built on top of these assumptions without even trying to rethink them, and that's how it works, anyway. But then, sometimes, we get to re-question these assumptions. Do you see a lot of assumptions to rethink in the software stack?

I think the assumptions are amazingly pervasive in real software. Every line of code makes enormous assumptions about what the preceding line of code did and what the next line of code is going to do with the information when control passes to it. The verification technology using assertions is an attempt to make those assumptions explicit. Every assertion is an interface assertion: it defines the obligation of the code that comes before the assertion to make the assertion true, and the assumption made by the code after the assertion that the assertion will be true when control passes that point.

The uncomfortable thing, as I said, is that the size of these assertions is really enormous. There is an enormous number of implicit assertions which the program is relying on for its correctness, but which nobody has ever written down explicitly. The verification technology actually requires them to be written down, made very explicit. When you come to look at serious programming errors, the real explanation is not that somebody made a mistake; it's just that they failed to make their assumptions explicit, so that the program that would have worked if the assumptions had been true just doesn't work.
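A tiny hypothetical example of an assertion acting as an interface in this sense: the code above the assertion is obliged to establish it, and the code below it assumes it.

```c
#include <assert.h>
#include <stdio.h>
#include <stddef.h>

/* Hypothetical example: the assertion in the middle is an interface.
 * The loop above it is obliged to keep idx a valid index; the return
 * below it assumes that a[idx] is a safe read. */
static int max_element(const int *a, size_t n)
{
    assert(n > 0);                  /* obligation placed on the caller  */

    size_t idx = 0;
    for (size_t i = 1; i < n; i++)
        if (a[i] > a[idx])
            idx = i;                /* maintains the obligation idx < n */

    assert(idx < n);                /* the interface between the halves */
    return a[idx];                  /* assumption: a[idx] is in bounds  */
}

int main(void)
{
    int a[] = { 4, 9, 2 };
    printf("%d\n", max_element(a, 3));
    return 0;
}
```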

   

7. You mentioned the Ariane rocket, and as I remember there was also an assumption problem there: it was a module that had been tested for a smaller rocket and was okay in that context, and then it was used in this new, bigger rocket. The assumptions and the implementation had been OK, but nobody thought about it being used in a totally different context. Is it possible at all to make all these assertions, to be sure that a program is 100% guaranteed not to get into trouble?

When you look at the size of these assumptions, you wonder whether a human being could ever realistically be expected to formulate and record them all. If the programmer doesn't record these things, then the computer can't construct or check or prove that the program is correct. In fact, this was the one point that the developers in Microsoft felt was a limitation of assertions: it just took too long to write down explicitly what a helpful assertion might be.

Everybody's working under time pressure, and I think they rightly estimate that most of the assertions that they actually write would never be useful to anybody, because most of the code works most of the time and therefore it's not really worth spending the effort on making explicit all those assertions which are never going to be used. In other branches of engineering I think there is sufficient experience and, indeed, it is sometimes required by law that you document your designs with incredible thoroughness. You hope that nobody is ever going to look at the documentation, but it's there because, if something did go wrong and there was an inquiry and it was found that the fault was due to a failure to document your design properly or adequately, then you could go to prison for negligence.

Documentation in other branches of engineering is accepted as an overhead that is necessary; in programming that's not currently accepted practice. The solution to the problem, which is becoming increasingly realistic, is to write program analysis tools which actually infer what the correctness criteria are: what the assertions should be, what the invariants of the loops are, or the pre-conditions of the conditionals. They do this by using theoretically sound techniques, like weakest pre-conditions, to reason from the things that the programmer obviously didn't want to happen to the things that he actually wanted to happen, and gradually build up or supplement the assertional information in the code with additional assumptions and pre-conditions that make the proof possible. That's an area of research which has made enormous strides in recent years.
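As a sketch of the kind of assertional information such inference tools try to recover (written out by hand here; the function is hypothetical), proving the array write safe requires exactly the precondition and loop-invariant facts shown, which weakest-precondition reasoning can derive backwards from the error that must not happen.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical function, with the assertional information written out by
 * hand. To prove the write a[i] safe, a tool must recover the precondition
 * (a points to at least n ints) and the loop invariant i < n at the write,
 * reasoning backwards from the error it must exclude. */
static void fill_squares(int *a, size_t n)
{
    assert(a != NULL || n == 0);        /* precondition a tool would infer */

    for (size_t i = 0; i < n; i++) {
        assert(i < n);                  /* invariant at the write site     */
        a[i] = (int)(i * i);            /* safe, given the caller supplies
                                           at least n elements             */
    }
}

int main(void)
{
    int a[8];
    fill_squares(a, 8);
    return a[3] == 9 ? 0 : 1;
}
```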

   

8. For example, when doing a spell check, there are some mechanisms that are based on grammar theory and the like. What Google actually does is have a lot of machines with a lot of text; they just try to find the text somewhere, and they do the spell check and the grammatical check that way. So there are 2 different approaches to checking grammar, 2 different ways. Also, in programming languages, there are systems that are based on proof and a static type system, like Haskell, and there are dynamic languages, where it's all about observation of runtime behavior, trying to optimize there and trying to make some assumptions at runtime. What do you think of this difference, of these 2 different approaches?

Doing all the checks at runtime is similar to what I suggested for assertions: that you use the assertions during testing in order to detect whether there are any regression errors in the code. But, increasingly, developers are leaving assertions in the code to run on the customer side, because the machines are now fast enough to make it worthwhile to get that extra bit of security and diagnostic capability actually on the customer side, rather than remove it and take the risk that a more serious fault develops later.

That's very similar to the dynamic type checking of the dynamic languages. If the static type checking is too much of a burden, or it's not possible, or you require the extra flexibility of a dynamic typing system, then you have to accept the extra overhead of runtime type checking. It's all on a continuum and you choose whatever technology is most appropriate for your current needs. In other words, it is an engineering problem, rather than a scientific one.
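A small C sketch of that continuum (the function is hypothetical): one check is a testing-time assertion that typically disappears from release builds, the other is deliberately left in the shipped code so a fault is diagnosed close to its cause, at a small runtime cost.

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical accessor showing the continuum: the assert is a testing-time
 * check that usually disappears in release builds (-DNDEBUG), while the
 * explicit if is deliberately shipped, so a fault is diagnosed on the
 * customer's machine close to where it occurred. */
static int get_item(const int *items, size_t count, size_t index)
{
    assert(items != NULL);              /* debug-build check only           */

    if (index >= count) {               /* always-on check, even in release */
        fprintf(stderr, "get_item: index %zu out of range (count %zu)\n",
                index, count);
        abort();
    }
    return items[index];
}

int main(void)
{
    int items[] = { 10, 20, 30 };
    printf("%d\n", get_item(items, 3, 1));
    return 0;
}
```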

   

9. You mentioned you have this Bachelor's degree in classics. How did you get involved in computing? I know you studied computer science and computer languages in Russia. Can you tell us a bit about your history before you started in the '60s with computing? How did you begin in this field, which was a very young one at that time?

It's an interesting story, I find. The classics degree at Oxford is a 4-year degree structured in 2 parts: for just under 2 years you study Latin and Greek language and literature, and in the remaining 2-plus years you study ancient history and philosophy. You study ancient philosophy and modern philosophy: in ancient philosophy you study Aristotle and Plato, and in modern philosophy we were studying modern ideas of logical positivism and linguistic philosophy, as well as more traditional philosophers and the English empiricists.

I was fascinated by the philosophy of mathematics: how is it that mathematical proof gives to the human intellect a feeling of certainty of truth that far surpasses anything that can be delivered by some poor observation of the real world? The idea of a truth for which the evidence that you give is proof was a very interesting philosophical idea for me. In my last terms, at least, my philosophy tutor was John Lucas, who was similarly interested in the potential of computers to illuminate some of the questions of philosophy.

I was set to read Alan Turing's article on non-determination problems for computer programs, which was, after all, published in the philosophical journal Mind. That blew my mind! I didn't understand it, but it was amazing stuff. I thought how interesting it would be to move into computing, and that that would likely be the sort of job that I would look for. That's in fact what I did. Since I was - I suppose - good at languages (I knew Latin and Greek and Russian, for example), they put me on to writing a compiler for an artificial language rather than a real language - that's how it all started.

   

10. When was it that you did this almost-artificial-intelligence work, where you were trying to translate human language by means of computer programs? When was that?

I got interested in human language translation by a strange chance. I was studying at Moscow State University for a year after leaving Oxford, and I got a letter from the National Physical Laboratory offering me a very prestigious and senior job as a senior scientific officer in their computing division, to work on a project for the translation of scientific Russian into English, which they were programming on Turing's old ACE computer design - one of the first computers that existed in England at that time.

I started studying what the Russians were doing in machine translation, and I met people who were beginning to take an interest in translating from English to Russian and Russian to English, and in some of the techniques of grammatical analysis that they were using. In fact, the first article I ever wrote was submitted to the Russian journal Machine Translation. It was written in Russian, of course - I typed it up on a friend's typewriter - and it was published. I think I donated a copy of it to the computer museum in San Francisco a while ago.

After studying machine translation of languages for about 6 months in Russia, I came to the conclusion that it was impossible, so I didn't take the job that they offered me at the National Physical Laboratory. There were 2 other reasons why I didn't take the job. First of all, they said, "Oh, not exactly a senior scientific officer - not, in fact, a scientific officer at all. We'll appoint you as a technical officer, and we noticed you haven't got a scientific degree. You know, that means you could never become a member of the scientific civil service. You would have to be temporary." So, as a temporary experimental officer, I thought it was reasonable to decline the job.

Anyway, when I was in Moscow - just before I came back - I also volunteered to help my uncle, who was organizing an exhibition in Moscow at which Elliott Brothers was exhibiting a computer. So, here, at last, was a computer - because I wasn't allowed to see any Russian computers at all; they were much too secret. I went to the exhibition and I spent a lot of time on the stand interpreting between the English technical staff and the Russian visitors. In the end, they offered me a lift back in the van which had brought the computer to Moscow, so I was able to help them through customs and help them in Russian and so on. When I got back, they offered me a job.

   

11. Does that mean that you aspired to be a scientist, but then you had to be an engineer at Elliott Brothers and work with computers?

I aspired to be a philosopher, actually. It might be noticeable even now because a lot of what I say is rather philosophical, don't you think? When I gave my first lecture at the Queen's University in Belfast, somebody in the audience said "I never thought you could talk so much philosophy about computers". That's really what I've been doing ever since.

   

12. Don't you feel sometimes annoyed that certain research ideas take a whole lot of time to become mainstream?

When I first went to Belfast, I thought it was a good idea that I was working on research that would take 20 years or more to come to fruition - 30 years, actually - because I reasoned as follows: as soon as a subject becomes suitable for research in industry, industry will devote so many more resources to the subject than you could ever raise in academic research that you would have to give up and move on to something else. So, I predicted that research into the formalization and semantics and correctness of computer programs would last me the whole of my academic life, and so it did.

Now, I see that Microsoft and other companies are getting quite interested in program analysis techniques and in improving the quality of their code, and indeed making great strides, and the resources they are putting into it are indeed well beyond anything that academics could reasonably be expected to match. But I think I was wrong in thinking that the academics should now give up; that's actually the reverse of the truth, because I see now that industry must - and it's the right thing for them to do - in the initial stages at least, apply their new technologies to their existing products, their existing code bases.

They must always go and - we use the phrase inside Microsoft - "pluck the low-hanging fruit". Wherever there is an easy win or a killer app, that's what you still have to do first. It may still cost you a billion dollars to install the new technology - and it has - but you've got to go for a quick win. There is plenty of scope for academics to take the opposite approach and still work from the very long-term ideal, which is the ideal of total correctness, perfectly accepting the fact that the choice whether to use the technology on any particular project is always going to be an engineering and commercial decision.

The science is going to be the same no matter which way that decision is taken. The scientist's goal is to make sure that the option is always available, that nobody ever declines to use the technology because the science is still uncertain. I think science and engineering coexist and mutually fertilize each other.

   

13. These things we can easily observe at Microsoft, because now we are hearing more and more about commercial products coming out of Microsoft based on a lot of research, like F#, Spec#, Singularity, and there is a lot of stuff that might yet see the light of day. Do you think this will change the way we perceive software? Will we start having a kind of mind-set shift in thinking about software? Because we had these assumptions for a long time, and now we are starting to see new ideas, new things, and it seems that the world of software is changing in some way.

Science loves change, it loves mind-set shifts, it loves working on new things, and I would like to see that - a lot of my personal research and my activities are devoted to broadening the contacts between the industrial and the academic researchers in this area. The big mind-set shift that is going to be required and seen in the academic computer science community is the move towards large-scale projects - large-scale and longer-term projects. It's part of the normal process of the maturation of science, which starts with a situation in which the major discoveries are made by individual scientists working more or less alone, or in a small laboratory, or indeed in a large laboratory. But this style of work, although there is still a place for it in the mature branches of science, is no longer the main paradigm for science.

Physics and astronomy, and more recently biology, have all become in the last half-century big sciences, in which the major discoveries are made by worldwide collaboration on a project which only makes sense as a large-scale, long-term, worldwide project. Building a nuclear accelerator at CERN is an example; building a satellite, building a telescope and using it - these are all now major international collaborative endeavors, just as the discovery of the human genome was. I think we will need to see aggregation and much more collaboration and joint planning of work in the area of verification than we had in the past, where people are much more willing to write tools, and to use tools written by other scientists, to perform their program analysis and to conduct their research into correctness.

   

14. In your time in the computer science field, I'm guessing that you've seen some common trends - things that have remained consistent since 1960. What are those trends that have remained consistent, and how do you think they will continue into the future?

I'm sorry to say it's the mistakes that have remained consistent, not the virtues. The problems that afflicted us in the 1960s were the difficulty of predicting the performance of large systems, the difficulty of discovering requirements, the difficulty of implementing code that was coherent across large-scale module boundaries. All of these things are still with us. I suppose I could say that even in 1960 living with legacy code was there. Dijkstra once said that every day when he comes into work he thanks providence that he is not charged with the responsibility of living with legacy code - that's certainly still with us.

   

15. You've made major contributions to the field of computer science, for example in concurrency with communicating sequential processes, and you've been influential with Algol. Of which one would you yourself say "This is my favorite child!", if that's possible?

The most dramatic one was QuickSort, after all. It was my first discovery. I knew it was publishable and I published it, and when I came to move into an academic career, it was probably the main reason why I was appointed, so I have a great affection for QuickSort.

Jun 25, 2009
