
Interview with Grady Booch


This article first appeared in Objective View magazine and is brought to you by InfoQ & Objective View magazine.



Grady Booch, creator of the Unified Modelling Language (UML), chief scientist of the former Rational Software Corp., founding member of the Agile Alliance and the Hillside Group, and chief scientist – software engineering at IBM Research, talks to Mark Collins-Cope about UML, Agile, XP and a bunch of other things…

Mark: Hi Grady, thanks for agreeing to do this interview.
Grady: Mark, my pleasure. Thank you for the opportunity.

UML and UP

Mark: In the mid ‘90s modeling software using notations was big news, and we were all arguing about which notation to use (Booch, Jacobson, OMT, Coad, Firesmith, Selic, HOOD, RDD, Jackson, there were easily two dozen or more!). Then the ‘three amigos’ - as yourself, Rumbaugh and Jacobson affectionately became known - got together and created the UML - the Unified Modelling Language (see also this document ).

This was good for everyone as it meant we could all communicate using a single notation. Then UP came out, iterative and incremental (mini-waterfalls) became the order of the day and then XP hit the scene and the beginnings of “full agile” began to appear.

Modelling using UML then seemed to lose the focus it had had previously. Modelling in general - to some degree - but mostly ‘big design up front’ (BDUF) in particular seemed - on some discussion groups at least - to become the work of the devil.

Why do you think that was?

Grady: The mid '90s were a vibrant time...but there was a deeper reason for this, beyond the methodology wars. You must remember that in the '80s, the programming language landscape was fundamentally different than today. The industry was making the transition from algorithmically-oriented languages such as Fortran, Cobol, and C to object-oriented ones, such as Smalltalk, Ada, and C++.

The problem therein was two-fold. First, we were building systems of exponentially greater complexity than before, and second, we didn’t have the proper methodological abstractions to attend to these new ways of programming (namely, objects). As such, there was rabid innovation in the art and practice of software engineering as we moved from structured methods to object-oriented ones. Today, objects are part of the atmosphere, and so we don’t think about it, but back then, the very ideas were controversial.

So, we need to separate methodology from process, for the two are not the same. On the one hand, there was a general recognition that we needed better ways to reason about our systems and that led to this era of visual modeling languages. On the other hand, it was clear that traditional waterfall methods of the 60s and 70s were simply not right for contemporary software development, which led us to the predecessors of agile methods. Waterfall (from Winston Royce, although even Royce recognized the need for incrementality) begat the spiral model (from Boehm), which begat incremental and iterative approaches, which were always a part of the OOAD processes we at Rational developed. We chose to separate the UML as notation from RUP as process. The former we standardized through the OMG, the latter we made mostly open source.

Mark: It was an unusual period in time. Transitional.

Grady: All times are transitional!

Mark: :) Do you think UP was a transitional approach - in the sense that it broke the path for later methods by carving out the iterative & incremental nature of software - but allowing the mini-waterfalls for perhaps 'comfort' reasons rather than fully technical ones?

Grady: I concur that the UP was transitional, but the notion of incremental and iterative as comforting was not the reason. I am heavily influenced by Herbert Simon's work, especially as described in The Sciences of the Artificial, in which he observes that all complex systems grow through a series of stable intermediate states (John Gall in Systemantics says very much the same things). So, the notion of these stable points is quite legitimate. Even agile methods have them...these are manifest in each build. Process-wise, this is what mini-waterfalls provide. Also, Parnas's A Rational Design Process: How and Why To Fake It applies here as well.

Mark: Coming back to UML - there was certainly a degree to which it ceased to be flavour of the day. To what degree do you think that change of emphasis was justified or not? Was it perhaps that, because UML was new, it became too much of a focus?

Grady: Sic transit gloria mundi: there is a time and place for all things, and when those things prove intrinsically valuable, they become part of the atmosphere and they morph to meet the needs of the present time. So it is with the UML. The notation retains modest use, but the underlying concepts live on.

The penetration of the UML probably never exceeded 10-20% of the industry, although some domains - such as in the hard real time world - found use much higher. So, honestly, I’m pleased with what the UML achieved to the degree that it did, because it helped transform objects from something strange into something in the interstitial spaces of the software world. That being said, I think that the UML eventually suffered from the standard growing to be overly complex.

The MDD movement turned the UML into more of a programming language. While I celebrate organizations who were quite successful in that use - such as Siemens, who has used the UML deeply in its telecom products - our intended use case for the UML was more modest: to be a language for reasoning about systems. In practice, one should throw away most UML diagrams; in practice, the architecture of even the most complex software-intensive system can be codified in a few dozen UML diagrams, thus capturing the essential design decisions.

Anything much more than that makes the UML a programming language, which I certainly never intended it to be.

So, to continue, in many ways XP and its successors were an outgrowth of the changing nature of software development due to the Web.

We began to see a dichotomy arise: lots of simple code being written on the edge of the Internet, and smaller volume/greater complexity below the surface. XP flourished out of the dynamics of building these things at the edge, where experimentation was key, there was no legacy of any material amount, and for which we had domains wherein there was no obvious dominant design, and thus required rapid build and scrap and rework...all good things!

Mark: We’ll come back to XP and Agile a little later, but sticking for the moment to UML - was there anything in the notation - as per release 1.0 - that with hindsight you regretted or thought could have been done in a better way?

Grady: Two general things come to mind. First, we never got the notation for collaborations right. I was trying to find the Right Way to describe patterns, and collaborations were the attempt. Second, component and deployment diagrams needed more maturing. Kruchten’s 4+1 view model was, in my opinion, one of the great ideas of software engineering, and being a systems engineer, not just a software engineer, I designed the UML to reflect those views. However, my systems-orientation was not well accepted by others.

Oh, there's a third one - a typical programmer error on my part, off by one! The UML metamodel became grossly bloated, because of the drive to model-driven development. I think that complexity was an unnecessary mistake.

Mark: How would you like to have seen collaboration diagrams?

Grady: I think we needed something akin to what National Instruments did in LabView for subsystems, but with a bit of special sauce to express cross-cutting concerns. I had hoped the aspect-oriented programming community could have contributed to advances here, but they seemed to have gotten lost in the weeds and forgot the forest.

Mark: What would you consider to be the most important core techniques of UML, and why? And what modelling techniques have you seen most used in your experience with the wider industry?

Grady: Two things. First, the very notion of objects and classes as meaningful abstractions were a core concept that the presence of the UML helped in transforming the way people approached design. Second, the presence of the UML, I think, helped lubricate the demand pull for design patterns. I was always a great fan of the design patterns Gang of Four, and I hope that the UML and the work I did in this space contributed in some small manner to making their work more visible.

Mark: So in a sense UML was key to introducing the object-oriented mindset into industry. Perhaps without it we wouldn’t have the mainstream adoption of OO as we do today?

Grady: The UML - and all that surrounded it - was simply a part of the journey.

Mark: What notations would you recommend be used on agile projects today.

Grady: Oh, I rather still like the UML :-) Seriously, you need about 20% of the UML to do 80% of the kind of design you might want to do in a project - agile or not - but keep in mind that this is in light of my recommendation for using the UML with a very light touch: use the notation to reason about a system, to communicate your intent to others...and then throw away most of your diagrams.

Mark: Perhaps approaching modeling using something like Scott Ambler’s “Agile Modelling?”

Grady: Scott's work is good, as is of course Martin Fowler's. I'd also add Ruth Malan's writings to the mix.

Mark: UML was eventually handed over to the Object Management Group (OMG), who later released version 2.0. Was this an improvement over version 1.0?

Grady: I do celebrate the stewardship the OMG gave to the UML. In an era when open source was just emerging, handing over the UML standard to another body, to put it into the wild, was absolutely the right thing. Having a proprietary language serves no one well, and by making the UML a part of the open community, it had the opportunity to flourish.

Mark: What, in your opinion, does modelling give you that simply sitting down and writing code doesn’t?

Grady: As I often say, the code is the truth, but it is not the whole truth. There are design decisions and design patterns that transcend individual lines of code, and at those points, the UML adds value. The UML was never intended to replace textual languages; it was meant to complement them. Consider the example diagrams above, coming from the one million plus SLOC code base of Watson. You could find all these things in the code, but having them in a diagram offers a fresh and simple expression of cross-cutting concerns and essential design decisions.

Mark: So models in UML can assist in conveying a higher level of thinking about the intent in the code?

Grady: Absolutely...and if a visualization such as the UML doesn't, then we have failed. As I have often said, the history of software engineering is one of rising levels of abstraction (and the UML was a step in that direction).

The Unified Process (UP)

Mark: Was UP the first major software development process to embrace iterative and incremental development, over the older style 'waterfall' model?

Grady: Actually, if you read Winston Royce's waterfall paper, or Parnas' classic paper A Rational Design Process: How and Why To Fake It, you'll realize that the seeds for iterative and incremental processes were already there. Additionally, Boehm's Spiral Model (and Simon's intermediate stable states) were all in the atmosphere. We just brought them together in the UP.

Mark: What were the motivations behind that at the time?

Grady: The UP reflected our experience at Rational Software - and the experience of our customers - who were building ultra-large systems. We were simply documenting best practices that worked, and that had sound theoretical foundations.

Mark: To clarify for readers: what is the difference between being iterative and incremental?

Grady: Consider washing an elephant. Iterative means you lather, rinse, then repeat; incremental means you don’t do the whole elephant at once, but rather you attack it one part at a time. All good projects observe a regular iterative heartbeat. What bit you choose is a matter of a) attacking risk, b) reducing unknowns, and c) delivering something executable.
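The elephant metaphor can be put into a tiny sketch. This is purely illustrative - the `elephant`, `wash`, and `passes` names are invented for the example and stand in for system parts, one development pass, and the iteration count:

```python
# Illustrative sketch of iterative vs. incremental development using
# Grady's elephant metaphor. All names here are hypothetical.

elephant = ["trunk", "ears", "legs", "body", "tail"]  # the whole system

def wash(part: str, passes: int = 2) -> str:
    """Iterate: lather, rinse, repeat - the same activity, several passes."""
    for _ in range(passes):
        part = part.strip().lower()  # stand-in for one lather-and-rinse pass
    return part

# Increment: don't do the whole elephant at once - attack one part at a
# time (in practice, ordered by risk and unknowns), so that every step
# ends with something finished and inspectable.
washed = [wash(part) for part in elephant]

assert washed == ["trunk", "ears", "legs", "body", "tail"]
```

The two loops are the point: the inner loop in `wash` is the iterative heartbeat, the outer loop over parts is the incremental attack.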

Honestly, everything else is just details. The dominant methodological problems that follow are generally not technical in nature, but rather social, and part of the organizational architecture and dynamics.

Mark: UP was pretty large - especially if you looked to follow it with any degree of rigour. In retrospect is that something that you regret? Or do you think perhaps people got the wrong end of the stick about how UP should be used in practice?

Grady: Here’s how I would say it (and still do). The fundamentals of good software engineering can be summarized in one sentence: grow the significant design decisions of your system through the incremental and iterative release of testable executables.

Honestly, everything beyond that is details or elaboration. Note that there are really three parts here: the most important artifact is executable code; you do it incrementally and iteratively with these stable intermediate forms; you grow the system’s architecture.

The UP in its exquisite detail had a role...remember that this was a transitional time in which objects were novel.

Mark: All times are transitional :).

Grady: Even this interview! :-)

Mark: Touché :)

The four major phases of UP are inception, elaboration, construction and transition. To what degree do you think these phases are relevant in today’s ‘agile’ world?

Grady: No matter what you name something, these are indeed phases that exist in the cycles of every software-intensive system. One must have a spark of an idea, one must build it, one must grow it, and then eventually one must release it into the wild.

Mark: Which aspects of UP do you think agile projects could benefit from in particular?

Grady: Two things come to mind. First, it's a reminder of the one-sentence methodology I explained earlier - there is a simplicity that underlies this all; second, it's a reminder of the importance of views and design patterns in the making of any complex system. By views, I mean the concept that one cannot fully understand a system from just one point of view; rather, each set of stakeholders has a particular set of concerns that must be considered. For example, the view of a data analyst is quite different from the view of a network engineer...and yet, in many complex systems, each has valid concerns that must be reconciled. Indeed, every engineering process is an activity of reconciling the forces on a system, and by examining these forces from different points of view, it is possible to build a system with a reasonable separation of concerns among the needs of these stakeholder groups.

For more detail, go look at Philippe Kruchten's classic paper "The 4+1 View Model of Architecture".

XP and Agile

Mark: XP was the first ‘agile’ approach to software development to gain a really big following. Why do you think that was?

Grady: XP was the right method at the right time led by charismatic - and very effective - developers. This is as it should be: as I often say, if something works, it is useful. XP worked, and was useful.

Mark: What do you think of the practices of XP?

Grady: I think that the dogma of pair programming was overrated. TDD was - and is - still key. The direct involvement of a customer is a great idea in principle but often impractical. Doing the simplest thing possible is absolutely correct, but needs to be tempered with the reality of balancing risk. Finally, the notion of continuous development is absolutely the right thing.

Mark: On the subject of TDD - do you think comprehensive unit testing is a good thing - and is it necessary to write all the tests before the code? Or to put it another way, are you sometimes tempted to write the tests afterwards?

Grady: I believe in moderation in all things, even in the edict of writing ALL tests first. I also believe in the moderation of moderation :-)
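For readers unfamiliar with the test-first discipline under discussion, here is a minimal, entirely hypothetical illustration: in strict TDD the `TestSlug` cases below would be written first and would fail until the `slug` function is implemented. The function and its behaviour are invented for the example:

```python
# Minimal test-first sketch. In strict TDD the tests exist before the
# implementation; the implementation is written to make them pass.

import unittest

def slug(title: str) -> str:
    """Implementation written after (and driven by) the tests below."""
    return "-".join(title.lower().split())

class TestSlug(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slug("Unified Modelling Language"),
                         "unified-modelling-language")

    def test_single_word_is_lowercased(self):
        self.assertEqual(slug("UML"), "uml")

# Run the suite programmatically (avoids the sys.exit that
# unittest.main() would perform).
runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestSlug))
```

Writing the tests afterwards, as Mark suggests, uses the same machinery; only the order (and the design pressure the tests exert) changes.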

Mark: Before XP, refactoring existing code during later iterations seemed to be completely ignored as a major activity. Or was it? Do you think XP has made a major contribution here?

Grady: XP gave a name and a legitimacy to the notion of refactoring. In that regard, XP has made a major contribution. Still, one must use refactoring in moderation. At the extreme, refactoring can become a major contributor to scrap and rework, especially if you choose a process that encourages considerable technical debt.

Mark: Do you think there is a balance to be struck between up front design and refactoring?

Grady: Well, of course, and it all goes back to risk. Remember also that in many domains, the key developers already intrinsically know the major design decisions, and so can proceed apace. It is when those decisions are not known, when there is high risk, that you must rework the balance. It's ok to refactor a bit of JavaScript; it is not ok to refactor a large subsystem on which human lives depend.

Mark: Continuing that theme, are there some design decisions that are more important than others? Decisions that need to be tied down early in the project lifecycle? If so, why?

Grady: Again, it all goes to risk. What are those design decisions that, if left unattended to, will introduce risk of failure or risk of cost of change? This is why I suggest that decisions should be attended to as a matter of reducing risk and uncertainty. In all other cases, where the risk and cost are low, you proceed with the simplest thing possible. Often, you must also remember, you may not even know the questions you need to ask about a system until you have built something.

For example, suppose I’m building a limited memory, embedded system. I might do the simplest thing first, but if that simple thing ignores the reality of constrained memory resources, I may be screwed in operation.

Similarly, suppose I do the simplest thing first, just to get functionality right, but then realize I must move to Internet scale interactions. If I don’t attend to that sooner rather than later, then I am equally screwed.

Mark: In what way do you think some techniques of UML might help with the last couple of points?

Grady: The UML should be used to reason about alternatives. Put up some diagrams. Throw some use cases against it. Throw away those diagrams, then write some code against your best decision. Repeat (and refactor).

Mark: Do you have a preference for iteration size - and do you think it is necessary to differentiate between different types of releases when talking about iterations?

Grady: It depends entirely on the domain, the risk profile, and the development culture. Remember, I have been graced with the opportunity to work on software-intensive systems of a staggeringly broad spectrum, from common Web-centric gorp to hard real time embedded stuff. So, it really depends.

In some cases, a daily release is right; in others, it’s every few weeks. I’d say the sweet spot is to have stable builds every day for every programmer, with external releases every week or two.

Mark: I was surprised to see BPML as a separate notation to UML. Was this really necessary?

Grady: I think it was a sad and foolish mistake to separate BPML from the UML. There really is no need for Yet Another Notation. I think the failure came about because we failed to find a common vocabulary with the business individuals who drove BPML. In many ways, there really is a cultural divide between business modeling and systems modeling...but there really shouldn’t be.

Mark: Do you think UML would have been more popular if it had had a Japanese name :-)

Grady: I would have preferred Klingon: Hol ghantoH Unified. Or even a Borg designation (for resistance would then be futile).

The Agile Alliance

Mark: After XP came the ‘Agile Alliance’ - a very talented group of independents seemed to push the gains XP had made even further. Was this a good thing?

Grady: Absolutely. I was a founding member of the Agile Alliance (and would have signed the Snowbird document, but I was working with a customer that week). Don’t forget also the Hillside Group, which at the same time was promoting the use of design patterns.

Mark: One of the big things with the Agile Alliance was a move to make software development more of a collaborative thing than a contractual thing - is that something you agree with, and in what circumstances?

Grady: Development is a team sport, so of course I support this. BTW, I think we must go even further: development as a social activity, with attendant issues of ethics, morality, and its impact on the human experience.

This, by the way, is exactly what we are trying to explore in Computing: The Human Experience, a multi-part documentary we are developing for public television.

Mark: How would you deal with a customer who was insistent they wanted to have the full cost of a system defined upfront?

Grady: I would either bid an outrageously high cost or walk away. Most likely, I would walk away. I don't like working with organizations who are clueless as to the realities of systems development, for I find that I spend most of my time educating them.

Mark: Scrum seems to have come out as something of a winner in the agile approach stakes. It's interesting that Scrum itself doesn't really refer to any detailed software development steps - it seems to focus more on the product management and team-working aspects of development. What do you think of Scrum?

Grady: As a former rugby player, I like scrums. Software development is a team sport, and scrums attend to the social dynamics of teams in a (generally) positive way. That being said, there's a danger that teams get caught up in the emotional meaning of the word and don't really do what it entails. There's a lot of ceremony about what makes for a good scrum (and lots of consultants who will mentor a project), but the essence of the concept is quite simple. Wrapping it up in lots of clique-like terminology - scrum master, sprints, and so on - makes it seem more complex than it is.

The Cloud and SaaS/PaaS

Mark: Software as a Service is an emerging, or perhaps emerged model. Are you involved with that in any way?

Grady: This is something I talked about over a decade ago, as I projected out the trajectory of software-intensive systems. We first saw systems built on bare hardware, then we saw the rise of operating systems, then the rise of the Internet as a platform...and now we are seeing the rise of domain-specific platforms such as Amazon, AUTOSAR (for in-car electronics), Salesforce, Fawkes (for robotic systems), and many others. In effect, this is a natural consequence of Utterback and Abernathy's idea of dominant design: as software-intensive systems become economically interesting, there will arise a dominant platform around which ecosystems emerge.

Mark: Using the SaaS model, the browser - in most cases - becomes the vehicle of the UI. We seem to have quite a lot of shifts in our run time environments over the years…

Grady: It's been interesting to see the shift of platforms: from systems built on bare metal, to those built on top of operating systems, to those on top of the Web, to those on top of domain-specific platforms. The browser was the gateway to systems on the Web, but even that is changing, as we move to mobile devices and now the Internet of Things. Apps are the gateway to mobile systems, and what the IoT will bring is up for grabs.

Software Development - Hype versus Reality?

Mark: I’ve heard people criticise software development as being more like a fashion industry than a serious engineering discipline. For example we’ve had: Structured Programming; 4GLs; SOAs; CBD; RAD; Agile; Aspect oriented analysis and design; etc. That doesn’t mean they don’t add value, but some seem to come and go quickly...

Grady: If you looked inside other engineering domains, you’d see a similar history of ideas, so we are not unique. Indeed, I’d be disappointed if we had everything figured out, because it would mean that we are not pushing the limits of building real things.

Programming Languages

Mark: Dynamic or scripting languages have gained immensely in popularity over the last ten to fifteen years - it seems they have grown from being ‘simple’ ways to add some functionality - client or server side - to HTML pages, and are now being used for ‘full blown’ applications. Are they fit for purpose in that context?

Grady: This attends to what I alluded to earlier, the notion that a lot of new software is being written at the edges of the Web. In such circumstances, you really do need a language that lets you weave together loosely-coupled components in a rapid fashion. Scripting languages fit this need perfectly. Personally, I use PHP and JavaScript the most in that world.

Mark: Functional languages or functional programming seems also to be an area of growing interest. Do you think that offers any major benefits? Or is it perhaps just another ‘fad’?

Grady: I had the opportunity to interview John Backus, just about a month before his death. We spoke of functional languages - he was responsible for a lot of what's gone on in FP - and something he said has stuck with me: the reason that FP failed in his time was that it was easy to do hard things but almost impossible to do easy things. I don't think that circumstances have changed much since John's time.

Drivers of Major Change and Open Source

Mark: It seems to be that the real driver of major change to software development isn’t actually software at all, but is hardware/infrastructure driven: increased processor power, increased network speed, the advent of broadband, etc. Software development, on the other hand, has changed rather slowly in comparison - all the current major paradigms (OO, functional, procedural) have been around for over - what - forty years now. Would you agree, and if so is there an underlying reason for this, do you think?

Grady: I disagree. The real driver of major change has been the reality that software-intensive systems have woven themselves into the interstitial spaces of our civilization, and ergo there is a tremendous demand pull for such systems. In many ways, hardware development is reaching a plateau - it is increasingly a commodity business. We will see more breakthroughs, but consider that all the major hardware paradigms have been around for 60 or more years.

Mark: One final question - perhaps on a lighter note - do you have any predictions for how the world of technology - as it relates to software development - will appear in, say, 20 years' time? (We won't hold you to them :)

Grady: Yes (note that I answered your question precisely) :-)

Mark: Grady Booch, thank you very much for your time on this interview - it’s been a pleasure talking to you.

About the Interviewee

Grady Booch is an IBM Fellow, an ACM Fellow, an IEEE Fellow, a World Technology Network Fellow, a Software Development Forum Visionary, and a recipient of Dr. Dobb's Excellence in Programming award plus three Jolt Awards. Grady was a founding board member of the Agile Alliance, the Hillside Group, and the Worldwide Institute of Software Architects, and now also serves on the advisory board of the International Association of Software Architects. He is also a member of the IEEE Software editorial board. Additionally, Grady serves on the board of the Computer History Museum, where he helped establish work for the preservation of classic software and therein has conducted several oral histories for luminaries such as John Backus, Fred Brooks, and Linus Torvalds. He previously served on the board of the Iliff School of Theology. Follow him: @grady_booch. See also Computing: The Human Experience.

