The Case for Software Lifecycle Integration

For many years, software delivery has been treated as an ancillary business process: one that, though costing the organization a considerable amount of money, lacks the structure, rigor, and focus of other enterprise business processes such as supply chain management, financial management, and even talent management.

But as many organizations look to software as a key business enabler, the practice of creating, manufacturing, and maintaining that software becomes even more important, especially as the nature of that software grows more complex with the advent of mobile, cloud, and open web technologies. Not only does the software itself confer a competitive advantage; the ability to deliver it faster and cheaper than competitors has real business value. After all, what often matters is not who has the next idea for a new product or service, but who delivers that product or service to market first.

Software delivery is still a concatenation of several independent processes, not a single integrated process

Software delivery is not only a multi-billion-dollar industry; it is also the secret sauce behind the majority of business processes. Business innovation is enabled or hindered by a business’s ability to deliver software. Software time-to-market, innovation, and quality are becoming the business variables that define a company’s ability to compete. Yet compared to other traditional business processes, software delivery is often a poorly assembled collection of immature disciplines.

Portfolio planning, project management, requirements, design, development, test, and deployment individually have well-defined processes and tools, but collectively they do not effectively integrate, collaborate, or flow. Work is redefined numerous times throughout its life as it moves from planning to definition, development, and test, because each group recasts the project artifacts for its own needs and tools. Spreadsheets, email, and wikis are used to glue these processes together, but not only do these tools add overhead, they often create yet another system of record that must be kept up to date and integrated.

Process movements such as Lean Startup, DevOps, and Agile have driven organizations to re-evaluate their development practices with the goal of increasing delivery cadence and feedback. But though many organizations have adopted Agile and are trying DevOps, both initiatives focus on engineering teams and their practices.

Business Agility requires work to flow through engineering, management, and customer processes in a seamless and integrated way. But this is far from the situation we have today.

The following two sections outline the realities of what we have today:

Disconnected disciplines are getting Leaner, but aren’t quite Lean

Henry Ford and the industrial revolution taught us that specialization of labor and departmental hierarchies are the best way of increasing efficiency and focus. The more complex a problem becomes, the more complex the organization built to solve it becomes. As IT groups grew in size, so did their processes, management structures, and hierarchies. Over time, each discipline was taken out of the developer’s purview, creating separate groups for requirements, design, and testing. Agile methods cited this separation as a key reason why development projects failed, and called for cross-functional teams. But even with everyone on the same team, the reality is that different roles approach a problem in different ways, applying different practices and tools.

Even within an Agile team, developers and business analysts will use disparate tools, each encouraging a different way of working. Traditional test tools describe problems from a test perspective, while development tools view them from a developer’s point of view. These tools fragment the process of software delivery by introducing different vocabulary, artifacts, and process steps. Process improvement generally happens at these functional-unit levels, rather than across the entire software development and delivery process.

Imagine a factory where each step in the production process is optimized, but the product is still low in quality and far too expensive. That was the experience of automobile manufacturers prior to the Lean manufacturing revolution. The traditional manufacturing models practiced by Henry Ford were ill-equipped to manage the process variance, product complexity, and product flexibility required for modern automobile manufacturing. The adoption of Lean methods meant taking a holistic view of the end-to-end process, allowing an organization to reduce waste and increase value at the enterprise level rather than at the department or job level.

It also led to the creation of clear process ownership, architecture, automation, and measurement – concepts still eluding the software development industry.

Within software development, we still have:

  • A lack of ownership of the end-to-end process. If an organization does not think about the whole, then ownership for the whole process is fragmented into multiple groups. For example, testing is owned by the quality group, requirements by the business analyst team, and reporting by the PMO. This fragmentation keeps anyone from driving holistic change initiatives.
  • No clear architecture or roadmap for its improvement. Fragmented practice- and tool-adoption decisions abound in software delivery, with individuals and teams implementing new technology and associated practices to make their own jobs easier. But without a clear roadmap or plan for that technology, adoption investments tend to be badly planned and ill-advised.
  • A lack of process automation. In most software development and delivery teams, there still exist too many manual steps and processes. Consider the creation of weekly status reports. These reports, crucial for cross-department decision-making and organizational pivots, often require numerous emails, check-in calls, and the creation of a spreadsheet that is neither accurate nor complete. The very fact that manual steps exist for repetitive tasks is an obvious indicator that process improvements could be achieved through automation, as the sketch after this list illustrates.
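
To make that concrete, the following is a minimal sketch of generating such a status report by querying an issue tracker directly instead of collating spreadsheets by hand. The endpoint URL and JSON field names are hypothetical placeholders, not any particular tool’s API; substitute your tracker’s real interface.

    from collections import Counter

    import requests  # assumes the 'requests' HTTP client is installed

    # Hypothetical tracker endpoint; not a real product's API.
    TRACKER_URL = "https://tracker.example.com/api/issues"

    def weekly_status_report(project_id):
        """Build a status summary straight from the system of record."""
        # One query replaces the round of emails and check-in calls.
        issues = requests.get(TRACKER_URL, params={"project": project_id}).json()
        # Tally work items by workflow state.
        by_status = Counter(issue["status"] for issue in issues)
        lines = [f"Status report for {project_id}"]
        lines += [f"  {status}: {count}" for status, count in sorted(by_status.items())]
        return "\n".join(lines)

    print(weekly_status_report("mobile-app"))

Because the report is generated rather than assembled by hand, it can run on every check-in or on a schedule, so the data is as current as the tools themselves.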

End-to-end reporting and traceability are still a dream

Modern business is all about data-driven decision making. From day-to-day operations to corporate and regulatory compliance, companies rely on both complex instrumentation and reporting capabilities. The process of software delivery, like other business processes, requires traceability and reporting, but unlike other processes it does not have a consistent, agreed-upon process or data model. Unless the organization is building a safety-critical system, there is typically no agreement on the reports necessary to describe the flow, impact, productivity, quality, or value being generated in the software delivery process.

This is an interesting conundrum. Are there no agreed-upon reports because we don’t know what to measure? Or can we not report upon what we’d like to measure?

As we’ve previously noted, the tools used within the software delivery lifecycle are isolated from each other, and there is an abundance of manual processes where automation should prevail. The resulting manual process for creating status and traceability reports takes time and requires the involvement of people. On large or geographically distributed projects, the amount of time required ultimately reduces the value of the information. For example, a common situation is the use of a defect spreadsheet when working with an outsourced testing partner. The spreadsheet is sent at the end of each working day, but versions are often updated by different teams at different times, resulting in many meetings. These meetings often start with the question ‘which version of the spreadsheet are we working from?’ and devolve into discussions about the differences between versions.

Now, let’s look at connecting the software delivery lifecycle.

The business process of software delivery needs to be automated

As software increases in value and importance, the obvious next step is to treat it as a key business process. Approaching software delivery in the same way as other processes, such as sales, procurement, and distribution, changes how you see it: you concentrate not only on the processes it comprises, but also on its value, reporting, and analytics. It also encourages an end-to-end, holistic view rather than a focus on each discipline in isolation. Ultimately, the value of the process lies not in each discipline, but in the result: software that is used by customers or the business.

Software delivery should flow from inception to implementation

Software delivery comprises many processes, including the SDLC, project management, demand management, quality management, operations, and service management. For many organizations, these processes have guidelines, templates, tools, and artifacts. In fact, many processes share the same artifacts, such as defects, requirements, and tasks, and, for Agile, stories and epics. Unfortunately, for many organizations there is no overall process model or formal description of how these processes interact.

In fact, even with organizations striving for Continuous Delivery, the one “continuous” thing is the delivery of the final work product: the code and application. There is still a lot of discontinuity in the flow of project artifacts from one functional discipline to the next.

This situation is made worse by the fact that software delivery is increasingly a collection of suppliers providing code, services, and APIs for inclusion in the product. With the advent of web services and API-driven development, supply chains are often loosely coupled, with limited control and no consistent process or ALM tool set. Another example is the trend toward outsourced testing. Often these testing groups are not part of the regular stand-ups because of organizational boundaries and the reality of time zones, yet their information must be included in the stand-up to determine the real status of the project.

Analytics and reporting should be first-class citizens

If you ask a group of IT project managers what the key metrics are for any software organization, you will be greeted with an array of very different measures, ranging from the tactical, such as number of defects, to the strategic, such as change state over time. Not only do application development professionals need to define the right measures, they also need to see how the data changes over time. In many tools, temporal (time-based) data is difficult to find because those tools focus on the immediate flow of work.
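
As a concrete illustration of temporal data, here is a minimal sketch that turns raw work-item records into a trend (average cycle time per week) rather than a point-in-time snapshot. The ‘created’ and ‘closed’ field names are illustrative assumptions, not any specific tool’s schema.

    from collections import defaultdict
    from datetime import date

    def cycle_time_by_week(items):
        """Average days from creation to closure, bucketed by closing week."""
        buckets = defaultdict(list)
        for item in items:
            created = date.fromisoformat(item["created"])
            closed = date.fromisoformat(item["closed"])
            year, week, _ = closed.isocalendar()
            buckets[f"{year}-W{week:02d}"].append((closed - created).days)
        # A trend over time, not just a snapshot of today's queue.
        return {week: sum(d) / len(d) for week, d in sorted(buckets.items())}

    items = [
        {"created": "2013-04-01", "closed": "2013-04-08"},
        {"created": "2013-04-03", "closed": "2013-04-09"},
        {"created": "2013-04-10", "closed": "2013-04-18"},
    ]
    print(cycle_time_by_week(items))  # {'2013-W15': 6.5, '2013-W16': 8.0}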

Software Lifecycle Integration puts the “L” in Application Lifecycle Management

The element lacking here is not necessarily the desire to manage the application development and delivery process, but the ability to do so. Organizations are prevented from having a holistic lifecycle view because their tool infrastructure comprises disparate products, each intended to maximize the efficiency of one (or two) of the functional teams within the broader organization.

For many organizations, the promise of ALM implies the adoption of one tool, or one tool suite. This allows normalization of information across different disciplines and supports common reporting and analytics. Vendors add to this idea with clear marketing and sales collateral describing how ‘all your development problems will be solved when you move to Tool X’. However, for many organizations the reality is more complex than any one tool can address. Add to that emerging platforms such as mobile, cloud, and open source, and your tool landscape will always be complex.

Heterogeneous tool stacks are the reality. Integration among them is required.

Still, “integration” isn’t simply a point-to-point connection between two systems. The kind of integration providing the pan-organizational visibility that underlies true process improvement requires:

  • A common data model across all disciplines to accommodate the disparities in the artifacts they produce (a sketch of such a model follows this list).
  • Adding flow to the model – Because the practice of ALM is cross-tool, and work moves between tools, it is important that movement from one tool to another is captured as transitions. For example, if tickets move from a service desk into development, capturing that transition is key to understanding the queue from operations to development.
  • An understanding of projects, products, and releases – In many organizations, the terms application, product, project, and release are often used interchangeably, leading to confusion. With no industry-wide consensus on their exact meanings, each company needs to create its own unequivocal definitions for each. To effectively report on ALM, there needs to be a clear definition of both the assets and the temporal elements of any data model.
  • Resources / users and teams – When marrying the PPM, development, and operations worlds, understanding who is working on what is important for reporting and analytics. In many organizations, user names, team IDs, and even department codes are structured and used differently by each team. Introducing a consistent approach to users across tool boundaries not only enables reporting and analytics, but also helps ensure governance and controls are more effective.
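
To illustrate, here is a minimal sketch of what such a common, tool-neutral model might look like, with explicit cross-tool transitions and normalized product, project, release, and user fields. The types and field names are assumptions for illustration, not an established standard or any vendor’s schema.

    from dataclasses import dataclass, field
    from datetime import datetime
    from enum import Enum

    class Kind(Enum):
        REQUIREMENT = "requirement"
        DEFECT = "defect"
        TASK = "task"
        STORY = "story"

    @dataclass
    class Transition:
        # One hop of an artifact between tools, e.g. service desk to dev tracker.
        from_tool: str
        to_tool: str
        at: datetime

    @dataclass
    class Artifact:
        id: str
        kind: Kind
        title: str
        product: str    # deliberately distinct from project and release
        project: str
        release: str
        assignee: str   # one normalized user identity across every tool
        transitions: list = field(default_factory=list)

        def move(self, from_tool, to_tool):
            # Capture the handoff so the queue between disciplines is reportable.
            self.transitions.append(Transition(from_tool, to_tool, datetime.now()))

    ticket = Artifact("D-42", Kind.DEFECT, "Login fails on mobile",
                      product="Banking App", project="Q3 Hardening",
                      release="2.1", assignee="jsmith")
    ticket.move("service-desk", "dev-tracker")  # the operations-to-development queue

With artifacts, flow, and identities normalized in one model, the traceability and trend reports described above become queries rather than manual collation exercises.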

Enabling this Software Lifecycle Integration across disciplines and tools will provide the kind of infrastructure necessary for organizations to automate and report on their key business process of software development and deployment.

For many organizations, the lack of an integrated software delivery practice means the difference between project success and failure. It is time to apply Lean thinking to the practice of software delivery, making the creation and maintenance of software flow from idea to implementation, removing disconnects, and enabling real-time collaboration. It is time to create a discipline focused on connecting the end-to-end practice of software delivery, one that provides the integration architecture, process, and measurement approaches software delivery professionals need in order to deliver faster and remove the waste plaguing the practice.

About the Author

Dave West is Chief Product Officer for Tasktop Technologies. He is instrumental in building Tasktop into a transformative business that is driving major improvements in the software industry. He leads the strategic direction of the company’s product line and helps to define Tasktop’s market position. He is a former industry analyst at Forrester Research.

Community comments

  • OSLC?

    by Lubomir Brychta

    The OSLC standard should help solve ALM integration. Do you consider the OSLC standard an important solution?

  • Re: OSLC?

    by Dave West

    OSLC is a great set of standards around the linked-data spec, but alone it is not an integration strategy. Also, even in the area of linking (for which it provides a great set of standards), many vendor organizations do not follow it. Thus the majority of tools on the market do not provide an OSLC interface; add to that the legacy tools most companies have, and you will need to augment your OSLC approach with other integration approaches.

    Saying all of that, I do believe OSLC is an important ‘emerging’ standard, and the ideas generated by it will help move our industry forward; it is just that the reality of today makes integration with OSLC alone pretty difficult.

    Oh, and Tasktop, who I work for, is heavily involved in OSLC, so we must think it is a good idea.
