Interview and Book Review: How Google Tests Software
"How Google Tests Software" by James Whittaker, Jason Arbon and Jeff Carollo delivers exactly what its title promises: an informative and interesting look beneath the covers at how a large technical organization like Google deals with the complexity of software testing.
The book opens with an overview of the approach Google takes to testing software, followed by chapters dedicated to the two test engineering roles at Google, the Software Engineer in Test (SET) and the Test Engineer (TE), and finally the Test Engineering Manager responsibility. Throughout the book there are sections and interviews from many other Googlers, with the final chapter dedicated to thoughts on the direction of testing at Google. Alberto Savoia gives a good overview of the book in his foreword:
The Internet has dramatically changed the way most software is designed, developed, and distributed. Many of the testing best practices, embodied in any number of once popular testing books of yesteryear, are at best inefficient, possibly ineffective, and in some cases, downright counterproductive in today’s environment... How Google Tests Software gives you a very timely and applicable insider’s view into how one of the world’s most successful and fastest growing Internet companies deals with the unique challenges of software testing in the twenty-first century.
The introductory sections of the book give some interesting background on why Google needed to change the status quo of testing and how they went about doing it, and should be essential reading for anybody working in the testing field or leading people who do, as Patrick Copeland explains in his foreword:
If I was going to change testing at Google, I needed to change what it meant to be a tester...The only way a team can write quality software is when the entire team is responsible for quality... From my perspective, the best way to do this was to have testers capable of making testing an actual feature of the code base. The testing feature should be equal to any feature an actual customer might see. The skill set I needed to build features was that of a developer... Engineers seemed threatened by the very notion that they would have to play a bigger role in testing, pointing out “that’s what test is for.” Among testers, the attitude was equally unsavory as many had become comfortable in their roles and the status quo had such momentum that change was becoming a very hard problem.
One of the key lessons about the Google approach is that they do not have a large number of testers, particularly when you consider the depth of their product portfolio which includes web applications, search engines, operating systems, mobile, social and enterprise solutions. As a result building quality in is crucial to their success.
...stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Test isn’t a separate practice; it’s part and parcel of the development process itself. Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
The chapter on the SET role explains the practices of this role and the environments and frameworks that SETs create and maintain. There is a mix of technical and testing detail in this chapter, including some interesting background on Google's test execution and continuous integration systems as well as their test size definitions. Also interesting is the background behind the Google Test Certified program, which was instrumental in introducing a developer-testing culture. In particular, one of the benefits was:
They got lots of attention from good testers who signed up to be Test Certified Mentors. In a culture where testing resources were scarce, signing up for this program got a product team far more testers than it ordinarily would have merited.
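The test size definitions mentioned in the SET chapter classify tests by the resources they may consume: small tests run within a single process, medium tests are confined to a single machine, and large tests may span machines and use any resources they need. The helper below is an illustrative sketch of that scheme, not code from the book; the function name and its boolean inputs are assumptions made for this example:

```python
def classify_test(uses_network: bool, uses_multiple_machines: bool) -> str:
    """Label a test small/medium/large, following the book's size scheme."""
    if uses_multiple_machines:
        return "large"   # may run anywhere, e.g. end-to-end system tests
    if uses_network:
        return "medium"  # confined to a single machine, may touch localhost
    return "small"       # single process, no external dependencies

print(classify_test(False, False))  # a pure unit test -> "small"
```

The value of the scheme is less the labels themselves than the policy they enable: small tests can run on every change, while large tests can be scheduled less frequently without blocking developers.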
The chapter on the TE role makes up a large portion of the book and looks into a number of the practices of this role and how they are carried out, including test planning and a description of ACC (Attribute Component Capability) Analysis and Google Test Analytics as well as James Whittaker's 10-Minute Test Plan. It also covers topics such as risk, crowd sourcing, test cases and bug reports as well as some of the different experiments and tools that they have used to solve testing problems.
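ACC analysis decomposes a product into Attributes (adjectives the product should embody), Components (the major nouns of the system) and Capabilities (verbs describing what a component does to deliver an attribute), giving testers a risk-ranked grid to plan against. The sketch below is a minimal illustration of that idea; the attribute and component names, and the simple risk heuristic, are hypothetical and not taken from the book or from Google Test Analytics:

```python
# Attributes: adjectives the product should embody.
attributes = ["Fast", "Secure"]

# Components: the major nouns (subsystems) of the product.
components = ["Login", "Search"]

# Capabilities: verbs describing what a component does to deliver an
# attribute; each (attribute, component) cell of the grid lists them.
capabilities = {
    ("Fast", "Search"): ["Returns results in under one second"],
    ("Secure", "Login"): ["Encrypts credentials in transit",
                          "Locks the account after repeated failures"],
}

# A simple (assumed) risk heuristic: cells with more capabilities carry
# more testing surface, so rank them for test-planning attention.
ranked = sorted(capabilities, key=lambda cell: len(capabilities[cell]),
                reverse=True)
for attribute, component in ranked:
    print(attribute, component, len(capabilities[(attribute, component)]))
```

In practice each capability becomes a seed for one or more test ideas, and the ranking tells the team where to spend limited testing effort first.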
...few schools systematically teach software testing. This makes hiring good testers a challenge for any company, because the right mix of coding and testing skills is truly rare... TEs are rare individuals. They are technical, care about the user, and understand the product at a system and end-to-end perspective... It’s a small wonder Google, or any company for that matter, struggles to hire them.
After a chapter describing the Test Management roles and their importance in the process, the final chapter is an attempt to look at the future of testing at Google and in particular the focus on products and the growth of dogfooding or crowd sourcing.
Whenever the focus is not on the product, the product suffers. After all, the ultimate purpose of software development is to build a product, not to code a product, not to test a product, not to document a product. Every role an engineer performs is in service to the overall product. The role is secondary. A sign of a healthy organization is when people say, “I work on Chrome” not when they say, “I am a tester.”
The only criticism of this book is that, whilst it was very easy to read and follow throughout, it is somewhat evident that different sections were written by different authors. It was also disappointing to learn that all three authors have left Google since writing the book.
Overall this is a book that anybody involved in software development should read, particularly as the testing problems discussed throughout are common to any organisation testing web, cloud or mobile based applications. Whilst there are a number of references to the Software Engineer (SWE) role, the book is primarily focussed on testing from the SET and TE viewpoints. There are lots of lessons in this text for testers and test leaders who are looking for ways to evolve their testing approach.
Recently the authors spoke to InfoQ about the book.
InfoQ: What was the main motivation for writing this book and sharing the Google approach with the rest of the world?
A bunch of folks in the Google Engineering Productivity team talked about doing a book. We already had a conference and successful blog so the demand was clearly there. But talking about a book is easier than writing one so it kept getting put off. The three of us finally got serious about it and wrote the damn thing! It was interesting though, as soon as it was clear that we were actually going to finish it, a lot of Googlers became interested in contributing and we were scrambling to get everyone involved. The "Interviews with Googlers" thread in the book was part of that effort and also a bunch of Googlers were official reviewers for the publisher. Google has always led the way in testing cloud software. This book makes that leadership role official by exporting it.
InfoQ: The book details the focus on engineering productivity and the Software Engineer in Test Role that works with the individual project teams at Google to focus on testability and toolsets. Do you think that was one of the key factors in Google improving its test practice?
Two things were key, one was certainly the centralization of the test role under its own management chain as you point out. This was crucial to keep testing from remaining the second class citizen it was prior to Engineering Productivity being formed. The second was the concentration on the technical role for testers. Google made testing a development task with testers who were as good at coding as their developer counterparts. It earned development’s respect and got them involved in the process. However, before you go too far with this model, read the last chapter of the book ... once Google "grew up" as a quality-oriented developer culture, the need for Engineering Productivity changed, creating a culture that made it no longer necessary.
InfoQ: Throughout the book there is a lot of detail around the different testing techniques and testing tools (a mix of in-house built and open source). What is the driver for teams to build their own tools? How do Test Engineers and Software Engineers in Test keep up with the latest test approaches?
The driver is simply that the tools necessary to automate testing don't exist on the market. The only "tool" Google adopted whole-heartedly from the outside world was crowd-sourcing. I think open source (which Google has always supported) is the way to go with test tools. Commercial test tools have always lagged behind because there is no community around them. Being part of the open source community, particularly behind Selenium and WebDriver and what uTest is doing with Test Engineering tools, is the best way to stay current.
InfoQ: There is little mention of Agile throughout the book, although much of the approach at Google appears to be based around Agile principles and practices. How does Google see its approach in comparison to the wider Agile community?
Google doesn't try to be part of the Agile community. We don't use the terminology of scrums or bother with scrum masters and the like. We have crafted our own process of moving fast. It's a very Agile process that doesn't get bogged down with someone else's idea of what it means to be Agile. When you have to stop and define what it means to be Agile and argue what flavor of Agile you are, you just stopped being Agile.
InfoQ: The Test Certified Program outlined in the book sounds like a mix of gamification and a testing maturity model. Do you have any advice on how to promote interest in these types of programs for them to be successful and how do you keep this program up-to-date with improved testing techniques and processes?
Giving advice to developers is only slightly less risky than passing judgment on their work. Test Certified tries to do both, so tread carefully here. The keys to making this technique work are first adopting the right model, which we believe is the one presented in the book. It's tried, tested and refined, so it makes a good starting point for your own. Second, make sure your very best testers are the ones promoting it and executing on it. You can't afford to lose the respect of the development team when implementing something like this.
InfoQ: The Test Engineer role is probably closest to the traditional Test Analyst role utilised by many organisations, although some would argue it is still very technical. Were there any challenges in up-skilling existing testers to this new role, particularly in relation to technical skills?
Scale is the issue here. There are people available with both sets of skills, but they are hard to find en masse. We had better luck recruiting testers willing to learn to code than developers willing to learn to test.
InfoQ: The book mentions the concept of Google moving towards a model of "free testing", whereby the cost of testing moves towards zero. What does this mean for Test Engineers at Google and how close to reality is this idea?
It is reality for many teams but it takes a lot of objectivity to attain it. How many testers do you know willing to work themselves out of a job? Just because your job can be automated / crowdsourced into oblivion doesn't mean you have the stones to go through with it. As the last chapter of the book answers this question in some detail, I won't spoil that rendering. Let me just say that anyone doing large scale functional testing on cloud / web apps and on mobile platforms is wasting effort and slowing down their team.
InfoQ: Throughout the book you give the advice to "not hire too many testers" and that the future shows the Test Engineer role in decline. How do you respond to organisations that would argue you need more of these roles to delineate the line between developers and quality assurance?
Why would you want such a line? Google has proven that when the line between creating code and making that code better is blurred, the result is code that is developed much faster with fewer latent defects. Hiring too many testers creates a crutch for developers that is bad for the product. It annoys me when people relate too strongly with their role. "I am a tester" is an unhealthy attitude. So is "I am a developer." When people stop focusing so much on their role and start focusing on their product, that's when the magic happens. That's when everyone is focused on doing whatever it takes to build the best product they can.
InfoQ: Some readers of the book may dismiss many of the approaches you outline, perhaps suggesting that the only reason you could achieve these was "because you are Google." How would you respond to such a statement?
How do you think we became Google? By writing software at speed and scale. We didn't become good testers because we were Google. We became Google because we were good testers. That is still Google's advantage. They can create products quickly at the scale of the Internet.
InfoQ: What is the best advice you could offer to current test analysts or new graduates who are considering a role in testing to meet the changing skills of the role?
Treat testing like development. Get a CS degree and get good at CS. Certificates and industry training will only teach you the easy stuff. Learn the hard stuff and get good at it. Testers who take the easy way out will still be griping about being treated as second class citizens until the cows come home. Don't want to be treated that way? Then obtain first class skills.
InfoQ: Do you have any key recommendations for organisations who, after reading this book, "want to test like Google"?
When you are small, create a central test org. Hire people with first rate technical skills. Copy everything Google has done and make it part of your software engineering DNA. Associate with the product, not your role. Never stop thinking about how you can automate the mundane. Crowd-source everything you can.
InfoQ: Who is your key audience for this book and what is the key learning you hope they will take away from reading it?
Anyone and everyone associated with creating software. We hope people understand that it can't be done perfectly, but it can be done a lot better.
The publishers have made a sample chapter of the book available to InfoQ readers.
This Q&A is based on the book, ‘How Google Tests Software’, authored by James Whittaker, Jason Arbon and Jeff Carollo, published by Pearson/Addison-Wesley Professional, March 2012, ISBN 0321803027, copyright 2012 Pearson Education, Inc. For more info please visit the publisher site.
About The Authors
James Whittaker is a technology executive with a career that spans academia, start-ups and top tech companies. He is known for being a creative and passionate leader and for his technical contributions in testing, security and developer tools. He’s published dozens of peer reviewed papers and five books, and has won best speaker awards at a number of international conferences. During his time at Google he led teams working on Chrome, Google Maps and Google+. He is currently at Microsoft reinventing the web. James also wrote How to Break Software, How to Break Software Security (with Hugh Thompson), and How to Break Web Software (with Mike Andrews). While at Microsoft, James transformed many of his testing ideas into tools and techniques for developers and testers, and wrote the book Exploratory Software Testing.
Jason Arbon has a passion for software quality and analytics. Jason is currently the Engineering Director for uTest.com, a crowd-sourced testing services company, where he drives innovation in software tooling, analytics, and automation. Previously, Jason worked on agile products at Google, managing teams in web search personalization and managing engineering productivity teams on projects such as Chrome Browser, Chrome OS, Google Desktop, and Google+. Jason also has experience at several startups and at Microsoft, on projects such as Exchange Server, BizTalk Server, Windows FileSystem, MSN, and Windows CE/IE4.
Jeff Carollo is currently a Software Engineer at uTest.com, a crowd-sourced testing services company. He was previously a Senior Software Engineer in Test at Google, working on Chrome, Chrome OS and numerous server-side projects, including the VoIP platform for Google Voice. Jeff holds a degree in Computer Science from Texas A&M University. He is originally from New Orleans, and is a die hard Saints fan.