There is no argument that Agile methods have become a popular practice in industry. While I have yet to see a rigorous study proving the benefits of Agile methods over other software methodologies, there is a lot of anecdotal evidence showing that Agile methods simply work. To understand why this is the case, we can examine and analyze Agile methods from a number of perspectives. In this post, I'm only interested in discussing the topic from an execution perspective. This choice is solely opportunistic - as you will see in a second.
A few months ago, I read "The 4 Disciplines of Execution (4DX)" by Chris McChesney, Sean Covey and Jim Huling [http://www.4dxbook.com/]. As the title suggests, the book discusses four disciplines that - if applied properly - allow any organization to reap the huge benefits of effective and efficient execution. The four disciplines are the result of studying hundreds of organizations and thousands of teams over many years to understand what makes execution successful.
While reading the book, I couldn't help but make a link between the concepts the book talks about and their application to software development as a process. By the end of the book, I came to the realization that one of the main reasons Agile methods work is their adherence - intentional or not - to all four disciplines.
The four disciplines, as the book lays them out, are:
- Focus on the Wildly Important
- Act on the Lead Measures
- Keep a Compelling Scoreboard
- Create a Cadence of Accountability
Now let's explain (very) briefly what each of the four disciplines means and how it applies in a software development context.
Focus on the Wildly Important
Simply put, the more you try to achieve, the less you will actually achieve. There are always good ideas worth considering; the real question is which ones to focus on now. In software development, the inability to resist the distraction of the many good ideas is called feature creep. Have you ever worked on a project where the client, the business analysts, or even the developers kept adding features because they deemed those features absolutely indispensable? This problem occurs when the practice separates requirements gathering from actual implementation. That is, you keep gathering requirements up-front to the point where it becomes too difficult to focus on any specific requirement at one time. Agile methods address this challenge through short iterations that deal with only a few features the customer deems important - or wildly important. Timeboxing these iterations forces the team to choose only a few goals that are achievable in a short time frame. Choosing between something good and something else that is also good is counter-intuitive, which is why you need a process in place that forces this to happen. At the end of the day, time is a finite resource and you have to make the call as to what is really worth doing now. No one could have said it better than Tim Cook, then Apple's COO: “We say no to good ideas every day. We do this to make great ideas happen.”
Act on the Lead Measures
Evaluating performance happens through collecting two types of measures: lag measures and lead measures. Lag measures are those examined and collected after the fact (e.g. after the project is over), such as sales, market share, and customer satisfaction. Lead measures, on the other hand, are those examined and collected during the process in order to impact the lag measures (hopefully in a positive way). For example, the number of usability tests conducted on a product is a lead measure that affects customer satisfaction - a lag measure. The two key characteristics of a good lead measure are that it is predictive of the outcome and that it can be controlled. The number of usability tests is a good lead measure because it predicts customer satisfaction, and we can actually control how many usability tests to conduct on a given product.
The problem that often arises in software development is our inability to clearly define lead measures. For example, the stability of a software package is not a good lead measure because once the product is released we have very limited control over it. Of course, you can always ship service packs 1, 2 and 3, but let's be honest: this approach is mediocre at best. And if the instability is hardware-related, things become even less controllable. Agile methods address this issue in a number of ways. First, the number and quality of unit tests (per class), as well as the rate of build failures, are widely used as lead measures to assess quality and stability. Some agile teams use test-driven development as a way to enforce improving these lead measures. Another example where agile methods have succeeded is the notion of continuous refactoring. That is, the effort put into refactoring and cleaning existing code is a good lead measure that significantly impacts code maintainability - a lag measure. Agile methods also focus on ensuring that we are building the right product, using customer satisfaction as a lag measure. This lag measure is usually impacted by increasing customer involvement and shortening feedback loops - which clearly are good lead measures.
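As a rough illustration, here is a minimal sketch of how a team might compute two of these lead measures from its own numbers. The data structure, function names, and figures below are hypothetical assumptions made up for this example - nothing here is prescribed by 4DX or by any particular Agile tool:

```python
# Minimal sketch of tracking two lead measures mentioned above: unit tests
# per class and build failure rate. All figures are made up for illustration.

from dataclasses import dataclass

@dataclass
class IterationStats:
    unit_tests: int     # total unit tests in the code base
    classes: int        # total production classes
    builds: int         # CI builds run during the iteration
    failed_builds: int  # CI builds that failed

def tests_per_class(s: IterationStats) -> float:
    # Lead measure: predictive of stability, and fully under the team's control.
    return s.unit_tests / s.classes

def build_failure_rate(s: IterationStats) -> float:
    # Lead measure: a rising rate is an early warning, long before release.
    return s.failed_builds / s.builds

stats = IterationStats(unit_tests=420, classes=150, builds=60, failed_builds=9)
print(f"tests per class:    {tests_per_class(stats):.1f}")    # 2.8
print(f"build failure rate: {build_failure_rate(stats):.0%}") # 15%
```

The specifics matter less than the two properties discussed earlier: both numbers predict the lag measure (stability), and both can be acted on today.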
Keep a Compelling Scoreboard
It is inescapable that people play differently when they are keeping score. The authors emphasize that keeping score should be the job of the team members themselves, not their manager or leader. This is a game changer because what is being underscored here is the level of engagement and interest you want everybody on the team to have. You want people to care about where the project is going by giving them the chance to see for themselves whether they are winning or losing. As a team. Therefore, a compelling scoreboard should be designed by the team, and should instantly reflect the status of the project in light of the set goals.
Looking at practices in small and big software companies in North America, Europe and Asia, I have yet to see an agile team that does not utilize a scoreboard - though they may not call it that. The team creates the board at the beginning of the project and updates it on a daily basis. At minimum, the board should tell you which features (goals) the team is working on now, which features the team has completed, and which features are still in the backlog. It does not take an expert to look at the board and tell whether the team is winning or losing. It is, and should be, obvious. If we are one week into our three-week iteration and have only managed to finish 10% of the planned work, then we are definitely losing and need to take action. If work in progress is accumulating in the middle of the board with minimal or no outcome, we know we are losing. In fact, if it becomes difficult to judge whether we are winning or losing, then most probably we are losing. Gaming the numbers on the scoreboard is an indicator of an unhealthy environment. It could mean that more effort is needed to increase the team's ownership of the project, or that there is too much managerial intervention - managers keeping score themselves, or treating individuals as winners or losers instead of looking at the team as a whole.
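To see how little judgment this takes, here is a minimal sketch of the "are we winning or losing?" check, using the one-week-into-a-three-week-iteration scenario above. The use of story points and the simple linear expectation are illustrative assumptions, not part of any specific Agile method:

```python
# Minimal sketch of the "winning or losing?" check described above, using
# made-up story points and a 15-working-day (three-week) iteration.

def board_status(done_points: int, planned_points: int,
                 elapsed_days: int, iteration_days: int) -> str:
    # Compare actual progress against a simple linear expectation of
    # where the team should be by now.
    expected = planned_points * elapsed_days / iteration_days
    return "winning" if done_points >= expected else "losing"

# One week (5 working days) in, with only 10% of 30 planned points done:
print(board_status(done_points=3, planned_points=30,
                   elapsed_days=5, iteration_days=15))  # -> losing
```

On a real board this comparison happens at a glance; the code only makes explicit how simple the judgment is.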
Create a Cadence of Accountability
With specific goals and clear lead measures defined, and with a compelling scoreboard that engages everyone, the team should regularly and frequently hold each other accountable for achieving the goals, improving the lead measures, and winning the game. The authors suggest a weekly meeting wherein everyone on the team briefly reports on whether they fulfilled their commitment to the team during the past week, how they are contributing to the scoreboard, and what they commit to for the coming week. The authors recommend that every team member create their own commitments to foster a sense of ownership. It is no longer about obeying your superiors but rather about keeping your promise to the team.
Agile methods promote this exact practice of regular accountability through stand-up meetings (a.k.a. scrum meetings). These meetings happen even more often than the authors propose - usually daily. The meeting is held standing up to remind everyone to be brief. Every team member reports on what they did yesterday, what they will do today, and whether they are facing any obstacles. Typically, team members choose what they want to work on, and they might even write the task down and put it on the board themselves - again, to achieve this sense of ownership and commitment.
Finally…
If there is one thing software practitioners have learned over the past few decades, it is that there is no silver bullet. Agile methods are no silver bullet, and neither are these four disciplines. However, we have also learned that we should not reinvent the wheel by trying to solve problems that others have already faced and solved in hundreds or thousands of organizations. The amount of resources software companies put into fixing their processes is simply huge. Therefore, I see great value in looking at our software processes from an execution perspective to examine their strengths and weaknesses. Keeping this perspective in mind keeps us at a safe distance from abusing buzzwords like "Agile" without really understanding the underlying principles that make the methods work. Countless companies and teams worldwide misuse Agile methods and confuse them with undisciplined, chaotic software practices that lack sufficient planning, design, or quality assurance. More than a decade after the Agile Manifesto was declared, some practitioners still find it difficult to draw a line between agility and chaos. The great news, however, is that we can use the four disciplines described above as a validation framework to answer the question: are we truly Agile, or are we just pretending?
About the Author
Yaser Ghanam is a software engineer currently working as a Systems Analyst at Schneider Electric - Canada. His experience and interests span a number of areas including agile software development, project management, and usability engineering. Yaser holds a doctoral degree in Software Engineering, a bachelor’s degree in Computer Engineering, and a minor in Engineering Management.