
Q&A on the Book Software Wasteland


Key Takeaways

  • Almost all Enterprise Information Systems now cost vastly more to implement than they should
  • Most of the excess cost can be attributed to complexity 
  • When you have hundreds or thousands of complex applications, you are completely stuck in what we call the Application Centric Quagmire
  • Most large firms spend the bulk of their IT budget on integration (without achieving more than ad hoc interfaces)
  • The fix is to become truly data-centric, where an integrated core model precedes the addition of functionality

In the book Software Wasteland Dave McComb explores what is causing application development waste and how visualizing the cost of change and becoming data-centric can help to reduce the waste.

InfoQ readers can read an excerpt from Software Wasteland: A Tale of Two Projects.

InfoQ interviewed McComb about the amount of waste in the information systems industry, why the application-centric approach has become popular, what organizations can do to visualize their application development and maintenance problems and how to solve them, how reverse engineering can help us to reduce complexity, and what organizations that have reduced their software waste can do to prevent falling back.

InfoQ: Why did you write this book?

Dave McComb: It bothers me that we are rewarding bad behavior in the Enterprise Information Systems industry.  The worse a system integrator’s or application software company’s ability, the more money they make, provided they can convince their clients that the anti-productivity is necessary.  If you can convince a client that a project should cost $100 million, you have a nice several-hundred-person project for several years.  If it were widely known that such a project could be done by five people in six months, this waste would evaporate.

InfoQ: For whom is this book intended?

McComb: It’s intended for the executives who are involved in the sponsoring, selection and management of Enterprise Systems projects.  This book is for the clients, to arm them against the wasteful practices that have become the norm. 

InfoQ: How bad is the situation regarding waste in the Information Systems industry?

McComb: There are an embarrassing number of cases where systems cost more than 1,000 times what they should.  Most people have heard about Healthcare.gov.  Some know that to date it has cost $2.1 billion (against an original budget of $93.7 million).  Even fewer realize that it could have been built (much better) for under $1 million, which is exactly what HealthSherpa did.  Healthcare.gov ended up adopting many of the design elements that were developed at HealthSherpa.

The Canadian Firearms Registry was sold as a net $2 million project ($119 million in costs offset by $117 million in revenue).  It ended up costing $2 billion, and registered 8 million guns before being decommissioned.

Most of the best stories are in government, partly because agencies are easier targets for the firms that prey on them, but also because so much of the record is public.

Private firms also often pay far more and get very little from their large application and integration projects.  We know of two large firms that launched, respectively, a $1 billion and a $250 million integration project and ended up with nothing to show for it.

There once was a big row about NASA paying $600 for a hammer (it later turned out that they paid $15 for the hammer and allocated internal R&D expenses on a per-item basis rather than a per-dollar basis, which made it appear worse than it was).  The worst excesses of traditional contracting don’t come close to what is becoming routine for systems contracting.

Most individual application projects are on the order of 10 to 100 times more expensive than they could be.  But the problem is more systemic than any individual application project; the ecosystem the projects live in is the problem.

InfoQ: What has or is causing this waste?

McComb: Many companies, when asked, would like to be data-centric.  Some even profess to be data-centric.  Very few are.  What happens is they may build an individual application that is data-centric (that is, the data was designed first, and process and functionality were applied later).  But if you build (or buy) 1,000 data-centric applications, your firm will have 1,000 centers, which is to say, no center.

The problem is that, with an application-centric approach in mind, every problem looks like another soon-to-be-siloed implementation project.

InfoQ: How come the application-centric approach is so popular?

McComb: I attribute the continued existence of application-centric thinking to four causes: habit, unawareness, perverse incentives, and lack of discipline.  Habit might be the biggest: people have been getting systems approved and built the application-centric way for decades, and old habits die hard.  Many people are unaware that there is a different way, and you can’t change if you don’t know.  The systems integration and application software package industry unfortunately has very perverse incentives: it makes far more money implementing yet another system, no matter how inefficiently.  Finally, many people don’t realize this is not something you buy, or a short-term project you implement.  It requires a great deal of discipline over a long period of time (not a lot of money, but a lot of continuity).  In the case studies I interviewed for my next book, all of the firms took at least five years to reach a reasonably mature data-centric architecture.

InfoQ: In your book you explore approaches for dealing with application development that haven't worked out, including service-oriented architecture and APIs, agile, the cloud, and artificial intelligence.  Can you elaborate on why they haven't provided a way out of the problem?

McComb: The sad thing is, most of these technologies work.  But because they were mostly implemented with an application-centric mindset, they ended up sabotaging themselves.  Take SOA.  Data-centric SOA, which is usually implemented by creating a “canonical message model,” works great.  Each of the participating applications conforms to the canonical model and the architecture delivers benefit.  My observation is that less than 5% of SOA implementations went the canonical route.  Most allowed applications to register their existing APIs, and ended up with essentially point-to-point interfaces mediated over a bus.

InfoQ: What can organizations do to visualize their application development and maintenance problem and get started?

McComb: The most vivid visualization would be a dashboard showing the cost of change by application and change type.  IBM has developed a pretty good starting point with its “Orthogonal Defect Classification,” but you could get started with something much simpler, such as categorizing changes by:

  • Correcting code that causes execution failure
  • Aesthetic changes
  • Adding or removing fields on forms
  • Adding or removing fields in databases
  • Changing constraints or validation
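A cost-of-change dashboard of this kind needs nothing sophisticated to get started.  As a minimal sketch (the category names, record shape, and cost units here are invented for illustration, not taken from the book or from IBM's classification), one could tally costs by application and change type like this:

```python
from collections import defaultdict

# Hypothetical change categories, following the simple taxonomy listed above.
CATEGORIES = [
    "execution_failure_fix",
    "aesthetic",
    "form_field_add_remove",
    "db_field_add_remove",
    "constraint_or_validation",
]

def cost_of_change_dashboard(change_log):
    """Aggregate total cost (e.g. person-hours) by (application, change type).

    `change_log` is an iterable of (application, category, cost) records.
    """
    totals = defaultdict(float)
    for app, category, cost in change_log:
        totals[(app, category)] += cost
    return dict(totals)

log = [
    ("billing", "constraint_or_validation", 12.0),
    ("billing", "constraint_or_validation", 6.5),
    ("crm", "form_field_add_remove", 40.0),
]
print(cost_of_change_dashboard(log))
```

Even a crude aggregation like this makes the expensive change types visible, which is the point of the dashboard.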

A key type of change is illustrated by a firm we worked with: the merger of two similarly sized firms in the same sub-industry.  One of the two firms had a very mature data-centric architecture and culture; the other was a more traditional assembly of hundreds of disparate application systems.  Perhaps the simplest type of system change to implement in an application is to change the drop-down list for an enumerated value.  In this case, both companies had to add “South Sudan” to their list of countries.  The data-centric firm added it to the reference data and regenerated any forms or code necessary to implement it.  It was done in one place and rolled out to many simultaneously as part of the monthly push.  The other firm spent nearly six months just finding all the references to country codes, and more time than that changing all the affected systems.
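The data-centric firm's one-place change can be sketched in a few lines.  This is an illustrative toy (the names `COUNTRIES` and `dropdown_options` are invented, and a real system would use a reference-data service rather than a dict), but it shows the pattern: enumerated values live in one shared store, and every consumer derives its list from it.

```python
# Single shared reference table: the one place enumerated values are maintained.
COUNTRIES = {
    "CA": "Canada",
    "US": "United States",
}

def dropdown_options():
    """Every application renders its country drop-down from the shared table."""
    return sorted(COUNTRIES.items(), key=lambda kv: kv[1])

# Adding South Sudan is a one-line change to the reference data; all consuming
# forms and services pick it up on the next regeneration/push.
COUNTRIES["SS"] = "South Sudan"

print(dropdown_options())
```

In the application-centric firm, the equivalent of `COUNTRIES` existed in hundreds of places, which is why the same change took months.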

This is perhaps the simplest change you might make to a suite of applications.  More complex is to add new data fields, and therefore have to change forms, reports, transactions and APIs.  We worked with a workers’ compensation insurance company that lost a lawsuit, which subsequently required it to pay injured workers a portion of the health insurance they lost if their injury caused them to lose employment.  In a data-centric environment this would be a simple change; it would probably take several weeks to add the new data elements (how much was your employer paying for your health insurance?), add those elements to some forms and reports, change some processes (verifying this information with the employer), and add an algorithm to determine how much of the insurance would become part of the claim (were you part-time, seasonal, etc.?).  The actual change cost several million dollars, due to the complexity of the systems affected.
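In a data-centric shop, the new rule reduces to one data element plus a small algorithm.  A purely illustrative sketch of what that algorithm might look like (the proration factors, employment types, and function name are all invented here, not taken from the actual case):

```python
# Hypothetical rule: the portion of a lost monthly health-insurance premium
# added to a workers' compensation claim, prorated by employment type.
# All factors below are invented for illustration.

def insurance_supplement(monthly_premium, employment_type):
    """Return the monthly premium portion that becomes part of the claim."""
    proration = {"full_time": 1.0, "part_time": 0.5, "seasonal": 0.25}
    if employment_type not in proration:
        raise ValueError(f"unknown employment type: {employment_type}")
    return monthly_premium * proration[employment_type]

print(insurance_supplement(450.00, "part_time"))
```

The algorithm itself is trivial; the several-million-dollar cost came from propagating the new data element through the surrounding systems.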

As I say in the book, a firm that understands its cost of change will become wise.

InfoQ: What helps organizations to solve the problem? Why do they work?

McComb: We have seen firms become data-centric and overcome the lure of application silos without semantic technology, but it sure seems like the hard way to do it.  The backbone of this approach is a central core model: simple enough to be understood, complete enough to cover the shared concepts, and flexible enough to evolve with the enterprise.  To us, semantic modeling plus graph databases seems the most expeditious way to achieve this.  A semantic model provides a formal (machine-readable) definition of all the concepts in a domain.  Unlike traditional conceptual models, a semantic model is based on web standards and can be implemented directly; it does not need to be translated into logical and physical tables.  The digital-native firms, such as Google, LinkedIn and Facebook, have their core data implemented in graph databases (such as Google’s Knowledge Graph and Facebook’s Open Graph).  Graph databases are inherently more flexible: you do not need to have all your schema defined ahead of time (as is the case with relational databases), and every class does not need to have the same set of properties.  Semantic models can be implemented directly on standards-compliant graph databases (triple stores) such as AllegroGraph, MarkLogic, Virtuoso, Stardog, and Oracle 12c.
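The schema flexibility McComb describes follows from the triple data model itself: every fact is a (subject, predicate, object) statement, so adding a new property is just adding a triple.  A toy in-memory store (pure Python, not a real triple store like those named above) makes this concrete:

```python
# A toy triple store illustrating why graph data needs no up-front schema:
# facts are (subject, predicate, object) triples, so a new property is just
# another triple -- no table alteration, no migration.

class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return triples matching the pattern; None acts as a wildcard."""
        return [
            t for t in self.triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)
        ]

store = TripleStore()
store.add("emp:42", "rdf:type", "ex:Employee")
store.add("emp:42", "ex:name", "Ada")
# A brand-new property requires no schema change at all:
store.add("emp:42", "ex:healthInsurancePremium", 450.00)

print(store.query("emp:42"))
```

Real triple stores add inference, SPARQL querying, and web-standard identifiers (IRIs) on top of this same three-part model, which is what lets a semantic model be "implemented directly."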

InfoQ: How can reverse engineering help us to reduce complexity?

McComb: The ground truth is that most legacy systems have very little essential complexity and a huge amount of accidental complexity.  To put it another way, a legacy system with 10 million lines of code probably has a few thousand lines of what anyone would consider to be algorithmic code.  There are probably tens of thousands, maybe even hundreds of thousands, of lines of validation and constraint logic that could easily be replaced with parameters and model-driven development.  The problem is that this trivial bit of complexity is marbled throughout the ten million lines of overburden.  Overburden is the material you have to remove before you can start open-pit mining, and it seems to be an apt metaphor.  If you use reverse engineering software to find and isolate this logic, you’ve done yourself a great favor.  Unfortunately, reverse engineering is often used merely to help migrate from a legacy system to a neo-legacy system.
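Real reverse-engineering tools build parse trees and call graphs, but even a crude lexical scan begins to separate the validation "marbling" from the overburden.  A minimal sketch, assuming COBOL-ish source and invented patterns (real validation markers would be tuned per codebase):

```python
import re

# Hypothetical patterns that often flag validation/constraint logic in legacy
# code; these regexes are illustrative, not from any actual tool.
VALIDATION_PATTERNS = [
    re.compile(r"\bIF\b.*\bINVALID\b", re.IGNORECASE),
    re.compile(r"\bCHECK[-_ ]?(DIGIT|RANGE|LENGTH)\b", re.IGNORECASE),
]

def classify_lines(source_lines):
    """Split source into suspected validation logic vs. everything else."""
    validation, other = [], []
    for line in source_lines:
        bucket = validation if any(p.search(line) for p in VALIDATION_PATTERNS) else other
        bucket.append(line)
    return validation, other

legacy = [
    "MOVE WS-RATE TO OUT-RATE",
    "IF CLAIM-AMOUNT INVALID GO TO ERR-PARA",
    "PERFORM CHECK-RANGE THRU CHECK-RANGE-EXIT",
]
validation, other = classify_lines(legacy)
print(len(validation), len(other))
```

Once isolated, logic like this is a candidate for replacement with parameters or model-driven generation rather than being ported line-for-line into a neo-legacy system.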

InfoQ: What is your advice to organizations which have reduced their software waste to prevent falling back?

McComb: No firm that has achieved this wants to fall back, but some do.  What often happens is that a management change or an acquisition brings in leadership that is unaware of the economics of information systems.  Worse, they often have deeply held erroneous beliefs, such as the belief that implementing packaged solutions is always the most effective thing to do (it occasionally is, but few managers predict how often it is more expensive than the build option, and none consider the added complexity to the firm as a whole and the impact on systems integration).

The main defense is not technical, but political.  You need to over-communicate the value that the new approaches bring, and make sure people are aware of the complexity trap they have avoided and are in danger of returning to.  The data-centric approach is still far from mainstream and will have many detractors, inside and outside the firm.  Many people will be threatened by it.  They will not go quietly into the night.

About the Book Author

Dave McComb is president and co-founder of Semantic Arts.  Semantic Arts is a professional services firm, specializing in helping enterprises adopt elegant, semantic, data-centric solutions to their Enterprise Architectures.  Clients have included Morgan Stanley, Dun & Bradstreet, Sentara Healthcare, Procter & Gamble, LexisNexis, Goldman Sachs, Sallie Mae as well as a dozen State and Federal Agencies.

Prior to Semantic Arts, McComb led major enterprise application projects for Andersen Consulting (the part that became Accenture) and pioneered model-driven development at Velocity Healthcare Informatics.  McComb holds four patents in the area of model-driven development and, in addition to Software Wasteland, has written Semantics in Business Systems.  He speaks and writes prolifically.
