Organizing the Test Team

Posted by Sanjay Zalavadia on May 19, 2016. Estimated reading time: 13 minutes.

Twenty years ago, when software shipped in boxes, there was a standard organization for software groups: a development organization, a test organization, and a documentation organization, each of roughly the same size.

We've come a long way since those days: separating management work from product ownership, including analysts, creating usability and user-experience roles, and downplaying the role of documentation. Very few of the companies we work with sell software in boxes at stores, and it is rarer still to see CD duplication and shipping as part of a project.

One thing that hasn't changed much is the role of programmers. Extreme Programming and Scrum have encouraged them to be more involved in analysis and testing. Pairing and mob programming spread knowledge around and tend to create a team of generalists. As popular as that is in the media, when we talk to clients we tend to find that programmers actually remain specialists. For career progression, they tend to focus on either the front end, working on the user interface, or the back end, interacting with the database and web services. In larger organizations, programmers tend to support one to three systems at a time instead of rotating through systems.

Meanwhile, the tester/programmer ratio continues to favor programmers over software testers. There simply are not enough testers to specialize as deeply as the programmers do, which leaves managers and executives scrambling to find a support model -- no, an enablement model -- for the testing group. There are a handful of these, from a fully separate test group, including its own portfolio management office, down to embedding the testers on the team. Which of those is right for your organization will depend on what you are building, how specialized the programmers are, how complex the applications are, and a host of other factors -- so let's explore.

The Models

PMO Model (Project Management Office) - Most of us are familiar with financial portfolio management: a list of which dollars are invested in which asset classes. Take a step back and look at the IT organization, and you see people as the investment. The classic PMO takes people from functional teams (the test team, the analyst team, and so on) and puts them on projects with a known deadline. When the organization wants to spin up a new project, the PMO may weigh the potential return against other projects, but it also looks at who is available, informing management of when a team can start. If that is too late, the PMO may work with senior management to plan to on-board contractors or, in the case of long-term demand, new employees. This matrixed approach, which keeps line management in place, sounds fine on paper. An IT organization with a few hundred people can be planned with spreadsheets, and software exists to manage larger organizations. The problem is that people are not lines in a spreadsheet; they aren't plug and play - they have specific skills. Because programmers tend to specialize, the model requires matching on technical skill plus understanding of the problem domain. Software exists to resolve this -- to 'tag' technical staff by skill. Even when that works, it tends to pigeonhole staff in an area. Larger organizations assign some staff permanently to one position or another, and are organized as several PMOs, sometimes by specialty, which leads us to other options.
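The skill-tagging and availability matching described above can be sketched in a few lines. This is a minimal, hypothetical illustration -- the tester names, skills, and the greedy matching rule are all assumptions, not any particular PMO tool's behavior:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Tester:
    name: str
    skills: set          # skill 'tags', e.g. {"performance", "api"}
    available_from: date # when current project commitments end

def staff_project(needed_skills, start, pool):
    """Pick testers who are free by the start date, greedily covering
    the needed skills; leftover skills mean: contract or hire."""
    candidates = [t for t in pool if t.available_from <= start]
    chosen, uncovered = [], set(needed_skills)
    # Prefer testers who cover the most still-uncovered skills.
    for t in sorted(candidates, key=lambda t: -len(t.skills & uncovered)):
        if t.skills & uncovered:
            chosen.append(t)
            uncovered -= t.skills
    return chosen, uncovered

pool = [
    Tester("Ana", {"performance", "api"}, date(2016, 6, 1)),
    Tester("Raj", {"security"}, date(2016, 9, 1)),
]
team, gaps = staff_project({"api", "security"}, date(2016, 7, 1), pool)
# gaps tells the PMO which skills need a contractor or new hire
```

Note how quickly the "people are not lines in a spreadsheet" problem appears: Raj has the security skill but is not available, so the model immediately forces a staffing conversation.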

The Test Group as Portfolio - The desire to centralize test services can be appealing. After all, a single test center, such as the Global Test Center at Barclays Bank, allows the company to create an outsourcing-like relationship with software testing for the price of internal employees. Having permanent employees as testers allows the staff to develop expertise in internal projects. Managing the test portfolio is essentially identical to functional management: the director can assign test managers a business unit to support, or the group can do the same skill and expertise tagging that a traditional PMO might. Like the center of excellence described later, the test portfolio can create standard terms and techniques to reduce the friction of transferring between groups.

The Federal Model - The appeal of centralized control is strong, yet it has been half a century since Peter Drucker pointed out that a large organization will be crushed by its own weight. Drucker suggested a small centralized group to provide services (in IT terms, that is probably virtual services and software licenses), with the rest of the company reorganized as small business units. Each business unit gets a budget and can hire testers or contract staff. Drucker's original study was of General Motors, where each division - Buick, Chevrolet, Cadillac - was a business unit. Today, entire companies are built of disparate business units. McGraw-Hill, for example, has information and media services, financial services, and education as major divisions. Each division has groups, and below each group is a business unit, which might have from fifty to five hundred employees. In a federal model, testers are hired by a business unit. The differences in job function between testers will be smaller because the scope of each micro-business is smaller, making changes within the business unit easy but transfers outside of it harder -- transferring is essentially hiring into a different company. There are two major drawbacks to this approach: first, it only works if the business is organized federally, and second, it provides no vision for test leadership across and above the business-unit level. The Center of Excellence (CoE) and Practice Management approaches address this second concern.

Center of Excellence - A small centralized group decides how testing is done at the company: standardizing some aspects of testing, handling training, and coordinating test tools and licenses. At the very least, it prevents the company from buying multiple site licenses for products that do essentially the same thing. A center of excellence can also keep the company from reinventing the wheel, providing training and assistance to teams that find themselves with a sudden need for security, performance, internationalization, or other specialized testing. A strong CoE can even include PMO aspects, deploying testers to projects that need additional resources, with a global view of the value of software in the organization.

Embedded On A Development Team - Scrum and most Agile methods call for a complete delivery team, which means no test managers or functional support organization. This eliminates the handoffs and staffing concerns that the PMO arose to address. It does, however, tend to isolate testers: a Scrum team of six to twelve people might have only one or two testers, while the other testers in the organization work on other projects, using different technologies. Transferring between teams starts to look like hiring into a new company. Worse, the team is unlikely to have a perfect balance of skill sets, which can cause delays when the team can't get work to the testers fast enough, or when the team does not have enough test resources. The popular fix for this is pair programming and mobbing to spread skills, but pair programming has had a mixed success record, mostly due to culture and adoption problems.

A Practice Manager / Communities of Practice - The pure embedded model leaves testers without leadership, and a center of excellence is mostly focused on control and standards. The practice manager approach is slightly different, focusing on self-development and sharing of expertise instead of control and terminology. A practice manager typically organizes a community of practice, open to anyone interested in testing, with attendance optional. The practice manager might organize internal conferences and coordinate training, but mostly focuses on getting the testers to raise themselves up, instead of creating rules or focusing on compliance. This subtle shift in focus makes the practice manager a servant-leader; the practice manager is unlikely to perform annual evaluations. Without coercive power, a practice manager is forced to earn legitimate authority through subject-matter expertise, technical skill, and process knowledge. Communities of practice are also more compatible with Scrum and other Agile methods, which view testing as an activity that many people can participate in.

Moving Toward Organization

"Which model is right for our testing group" is probably the wrong question. "Which model best supports the business?" is closer to the mark.

We suggest a different question: "What is the team doing now?"

Most of the organizations we work with don't even have a compelling vision for the current test practice. It just sort of ... grew, organically, adding a few people here and a few people there, until it found itself a team, then a department, then a division. Some of our customers have reorganized as Scrum teams and kept the test leadership in place; some of the smaller ones have downsized test leadership to a single practice manager or a small Center of Excellence.

Coming Up With the Right Model

Here are a few questions to ask to determine the right model for your organization:

Do executives need to see reports and metrics from test in the same format? If so, the first place to start might be by asking 'why'. Sometimes a stronger, more centralized approach to test, or at least to the test process, does make sense. Reasons to have common metrics across teams include knowing status, comparing performance, and making predictions based on large amounts of data. Teams that take a more waterfall approach, where integration and system test take more than a few days, will be especially interested in these metrics because they are running actual test projects. In that type of project, a set of metrics can be a guiding light. Seeing large differences in performance or quality across releases or teams is like a sign saying 'look here'. While numbers are a bad way to try to understand and control work, they can highlight places to start a conversation. Organizations that deliver more frequently might be able to point to working software instead.
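The 'look here' idea above can be made concrete with a small sketch. The team names, the escaped-defect counts, and the one-standard-deviation threshold are all illustrative assumptions -- the point is flagging outliers as conversation starters, not grading teams:

```python
from statistics import mean, stdev

# Escaped defects per release, by team (illustrative numbers).
escaped = {
    "payments": [4, 5, 3, 6],
    "search":   [2, 3, 2, 4],
    "mobile":   [9, 12, 11, 10],
}

def flag_outliers(data, threshold=1.0):
    """Flag teams whose average sits more than `threshold` standard
    deviations from the overall mean -- a 'look here' sign, not a verdict."""
    averages = {team: mean(counts) for team, counts in data.items()}
    overall = mean(averages.values())
    spread = stdev(averages.values())
    return [t for t, avg in averages.items()
            if abs(avg - overall) > threshold * spread]

print(flag_outliers(escaped))  # only 'mobile' stands out here
```

A dashboard built this way points the conversation at one team's context (new codebase? harder domain? understaffed?) rather than ranking everyone.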

Are conflicting tools, processes, and terms causing us harm? Some organizations spend so much time arguing about the definition of a 'test plan' or a 'test case' that they lose actual productive time. In that case, it might be a good idea to declare some operating parameters - rules of engagement - then move on. For example, if a team can agree on what a bug is and what happens when one is found, time can be saved in triage meetings and bargaining with managers, and the bugs can just get fixed.

Are our test roles similar, or should they be? Imagine a large medical group with many testers working on one large system and sharing a common language. There could be great value in consistent training, terms, reporting, and process. Now imagine a large consulting company that has grown through acquisition, including a few software product companies picked up along the way. The acquisitions are all consulting services, but they tend to serve different markets: one in construction, another in media services, and so on. The hospital group has a main campus for 95% of IT; the consulting company is distributed across two continents. In the first case, centralized planning might help; in the second, it might hinder flow.

How can we balance innovation and standardization? When ISO 9000 became popular, it made a bold claim: standards can be a wedge that prevents backsliding.

It's hard to continuously improve when you have to do the same thing all the time. We tend to think of standards more as a straitjacket than a wedge. We see standards as valuable when they emerge from practice and are more like guidelines than rules. For example, one of our clients requires periodic evidence that testing occurs, with a preference for executable examples. Each team selects how often this happens (every sprint or development iteration, or every story in production on continuous-delivery teams), how to capture those examples, and what, if anything, should be automated. Management has delegated a technical leader to work with the teams to see if that evidence is sufficient. Understanding the problem helped guide the choice between innovation and creative chaos on one side and more standardization on the other.
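An "executable example" in the sense above is just a test that documents behavior in runnable form. The discount rule below is entirely hypothetical (not taken from the client mentioned); the shape -- a plain function plus assertion-based examples a reviewer can execute -- is what matters:

```python
def order_total(subtotal, loyalty_years):
    """Hypothetical rule: customers with 2+ loyalty years get 10% off."""
    return subtotal * 0.9 if loyalty_years >= 2 else subtotal

# Executable examples: each one is evidence that testing occurred,
# and each doubles as documentation of the business rule.
def test_new_customer_pays_full_price():
    assert order_total(100.0, 0) == 100.0

def test_loyal_customer_gets_discount():
    assert order_total(100.0, 3) == 90.0
```

Run under a test runner such as pytest, a green run of files like this is exactly the kind of per-sprint evidence a technical leader can review without standardizing how every team writes its tests.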

How to Move Forward

You can think of this article as a dance around test team structure. We've covered the tension between centralized control and self-organizing teams. The idea is not new; Tom Peters and Robert Waterman described the dilemma in their 1982 book In Search of Excellence. The solution that book proposes is what Peters and Waterman call simultaneous loose-tight properties. The authors suggest that you need both: traditional, instinctive behaviors promote stability, while innovation and experimentation promote adaptability. Those traits combine to create what In Search of Excellence calls "resilience."

If you've followed along so far, then hopefully you recognize a vision for how your test team is organized now -- and what it could be.

The question is what to do about it.

This is the point where we usually get a phone call. A company wants to create a center of excellence or PMO and wants to bring in a vendor. Our work in test management and reporting gives us some insight into that area, but we don't do that kind of consulting work, so we are well-positioned to recommend someone. If we've been working with the client long enough, we may have some insight into their particular problems. We know whether they look more like the hospital group or the consulting company, we know how they structure IT and often how they structure the business - and we'll still be working with them in three years, after this restructuring initiative is long gone.

In the course of that work, we've found that quick and expensive fixes rarely work. Seven-figure investments in centers of excellence with rushed deadlines, supported by outside organizations, tend to become cut-and-paste copies of whatever the vendor did before, combined with a bit of anchoring bias (whatever the first idea was) and availability bias (whatever we can do right now with the team we, or the vendor, have).

The choices that work tend to have three things in common. First, they are framed as experiments, not permanent declarations that forevermore testing will be done in a specific way. Second, they work by invitation; teams are invited to participate or not. Third, they work incrementally. These three attributes tend to create sustainable improvement that we don't see when one or two are missing. Where many of our peers recommend overnight reorganizations - there is even a term for this, "shock therapy" - we've seen more success identifying the current blocking condition for the organization and forming a small team (a "tiger team") to solve the problem. Once the core issue is resolved, that team can remain in place to tackle the next, as an experiment. They can go back to their old roles at any time - the change is an experiment. When it is time to expand, the new teams look more like tiger teams than what we had before, while, perhaps, a central core stays back, becoming the architects and central leaders. This is an iterative approach to a test organization, one based on small steps, led by people with credibility and permanence - with perhaps a little outside help.

Mike Rother, the author of Toyota Kata, has turned this kind of thinking into a system. Rother recommends setting a long-term vision for improvement, then a target condition. The target condition should be achievable but challenging - something to work toward over time.

So what's your target condition? What experiment can you conduct to move toward it?

Those are questions you can answer.

And please, let us know how it goes.

About the Author

Sanjay Zalavadia is the VP of Client Service for Zephyr. He brings over 15 years of leadership experience in IT and Technical Support Services. Throughout his career, Sanjay has successfully established and grown premier IT and Support Services teams across multiple geographies for both large and small companies. Most recently, he was Associate Vice President at Patni Computers (NYSE: PTI), responsible for the Telecoms IT Managed Services Practice, where he established IT Operations teams supporting Virgin Mobile, ESPN Mobile, Disney Mobile and Carphone Warehouse. Prior to this, Sanjay was responsible for Global Technical Support at Bay Networks, a leading routing and switching vendor, which was acquired by Nortel. Sanjay has also held management positions in Support Service organizations at start-up Silicon Valley Networks, a vendor of Test Management software, and at SynOptics.
