Virtual Panel on Bimodal IT

| Posted by Manuel Pais on Aug 31, 2016. Estimated reading time: 34 minutes |

Key takeaways

  • A strict separation between 2 modes of working (a predictable mode and an exploratory mode) is limiting and can prevent good practices from being applied in the “predictable” mode.
  • Bimodal reflects organizations whose applications present significant variance in terms of delivery speed and other IT capabilities. Bimodal IT is seen by many in the industry as a description of the current problem in these organizations. A long term solution requires convergence to a single work culture and system of belief.
  • Legacy systems might not require frequent releases from a business perspective, but they can still be an obstacle to (agile) development of other applications. Bimodal can make sense as a palliative measure to build bridges between those two IT landscapes, on an as-needed basis. However, there are mounting examples from large, high-performing organizations that it is possible to apply modern practices to their full extent, even in mainframe systems.
  • The value in the systems of record can’t be realized without the systems of engagement. As customers engage more and more via digital experiences, their expectations of quick response times increase and will ultimately drive organisations to abandon safe but slow modes of working.
  • Bimodal IT does not imply higher security in systems developed in “safe mode”. Small batches of changes are safer than massive security reviews after a large change, and audit trails produced by automation are more consistent than those produced by checklists. However, enterprises still face obstacles: new tools need approval from security groups, and integrating enterprise security solutions into the pipeline is often hard.



Bimodal IT (also known as multi-speed IT) is a strategy conceived for enterprises moving towards faster delivery of their applications (by applying Continuous Delivery and DevOps practices) while having to deal with core legacy systems that are business critical and slow to change.

This strategy has been supported by many, but also criticized by many. InfoQ reached out to a diverse set of experts and practitioners to dig deeper into the pros and cons of this strategy and how/when/if it is applicable.

The panelists:

  • Margo Cronin - open group certified master architect and certified program manager
  • Damon Edwards - co-founder of DTO Solutions consulting
  • Rob England - self-employed IT consultant and commentator
  • Mirco Hering - Accenture’s DevOps & Agile practice lead in Asia-Pacific
  • Matthew Kern - enterprise architect and systems engineer
  • Bridget Kromhout - principal technologist for Cloud Foundry at Pivotal

InfoQ: What’s your understanding of what bimodal IT means?

M. Cronin: Bimodal IT is where two different methodologies, ways of tooling, and organisational structures are enabled, both to maintain existing systems and to achieve new solutions quickly in the current environment.

D. Edwards: It’s a Gartner thing, so I’ll use their original definition: "Bimodal IT is the practice of managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility. Mode 1 is traditional and sequential, emphasizing safety and accuracy. Mode 2 is exploratory and nonlinear, emphasizing agility and speed.”

There is a new definition that adds several qualifying statements and stretches to 4x the length of the original. But at the heart of it, the new definition is the same basic thesis as the original definition.

Bimodal IT debates will often focus on “multi-speed” IT. This is a distraction that misses the key point, the “modal” part. It is “Bimodal IT” not “Bispeed IT”. “Modal” means “a way or manner in which something is done”. Bimodal IT is, by definition, about having different modes of working within the same organization. That is the core idea from which all of the arguments, pro or con, develop.

Gartner is saying you are supposed to have 2 distinct groups of people working in the same IT organization and on different ends of the same systems (no “system of engagement” lives without a “system of record” and vice versa). These 2 groups are going to be working under completely different mandates and belief systems about quality and speed. Yet somehow, if you follow Gartner’s advice, that is going to produce a coherent result that is better than what the status quo is today. I’ve yet to see that be true. The outcry against Bimodal IT from a wide variety of well known practitioners is clear in its message: different modes of working in the same organization lead to more silos, higher costs, more pain. I wish Gartner had it right, but the independent evidence against Bimodal IT is quickly mounting.

R. England: "Bimodal" suggests that there are two ways of working in the organisation: the traditional industrial approach and the new-fangled digital approach, normally characterised as waterfall and agile.

B. Kromhout: I’ve been quoted as calling it “sad mode” and “awesome mode”. Gartner pitches it as “reliability” vs “agility” - the problem is, humans can’t create a failure-proof system, only one that’s easy and fast to fix when the inevitable faults occur. Reliability and agility go hand in hand.

M. Hering: Gartner propagated a definition of bimodal IT which provides a pretty strict separation between Mode 1 (predictable) and Mode 2 (exploratory). In practice within Accenture we find that a broader and more graduated scale of release cadence and governance is necessary. We also see that, in many circumstances, new IT practices such as Agile, DevOps and Continuous Test bring benefits and reduced risk to systems of record just as much as to systems of innovation. Rather than the two modes of operation implied by Bimodal, we tend to favour the term ‘Multi-speed IT’ which allows for the spectrum of approach and governance necessary to optimise the business value generated by each IT service. I made an attempt at a definition of the different terms in this InfoQ article. Bimodal has two distinct modes: Agile and Waterfall. Multi-speed means you potentially have multiple flavours of Agile and Waterfall suitable for the different speeds of delivery in your organisation, which allows you to address the different needs much better than two speeds could.

M. Kern: Bimodal IT was intended to provide one mode of development for big legacy systems, and another for small projects. Large organizations really have three kinds of IT: (1) commodity IT, and office applications fall in this group. You do not code them. (2) IT for support functions. You buy that as CRM or ERP, etc. You only code little add on utilities, if you do it right. This software does not improve enterprise performance significantly. (3) Enterprise class systems automating core mission functions. These you code from scratch, and tend to be large database driven systems.

So yes, there is limited validity. But a lifecycle view makes more sense for two modes. Such a view unifies legacy, commercial product, and custom large system development. In legacy and commercial development someone else has already performed the overall architecture function. In all three, the later stages can be DevOps. In none of these can the early overall architecture function be skipped, nor the total scope left undefined.

InfoQ: Do you agree that large organizations require some kind of bimodal IT way of working? If so, how and who decides on which projects or systems require one mode or the other?

M. Hering: Yes. Any scaled organisation is going to have significant variance among its applications, both from the perspective of the business need for application agility, and from the perspective of the technical debt inherent in the applications (e.g. lack of test automation). Having a Bimodal (or Multi-Speed) IT approach is necessary to deliver the right level of agility within the constraints of the current capability of the IT team. Ideally the decision on release cadence and methodology would be based on the needs of the business for agility. In many cases, especially where legacy systems are concerned, a significant investment in technical debt reduction is necessary to achieve the desired level of velocity.

M. Cronin: I believe the requirement for bimodal in large distributed enterprises is very similar to the requirement for hybrid. Hybrid exists as organisations embrace the cloud. They gradually move their payload from dedicated datacenters to the cloud; some remains in the legacy datacenters due to regulation, budget & contracts, and organisational restrictions (dedicated teams). As organisations evolve and become more comfortable with cloud models for production, the need for hybrid will be dramatically reduced. Bimodal is similar. In our production landscapes we have large monolithic systems. They are project driven. I have seen organisations assign these programs incredible internal names (Hercules, Picasso, Yeats)! These systems have taken on personas and been revered! Now a new agile, product-driven approach is rapidly being adopted. It’s spreading across our production landscapes. It’s changing how we interact with our customers (customer journeys, behaviour driven design), how we deliver IT (DevOps, Scrum) and how we do technology (serverless, Kubernetes & containers). Bimodal has therefore come about to allow the monoliths & corresponding methodologies to co-exist.

Rather than identifying modes, I would recommend an organization explicitly identifies which projects are the monoliths and what products are agile. I have talked more about that on EntArchs.

D. Edwards: I was part of a working group at Pink Elephant’s recent annual conference. That conference is one of the biggest and best gatherings of ITIL and ITSM practitioners. It is wall-to-wall full of battle tested enterprise IT veterans. You would think it’s a place where Bimodal IT would thrive. Instead, the skepticism was high! Why? The best conclusion they arrived at was that Bimodal IT described the broken current state in which they were already living. In other words, Bimodal IT describes the problem and not the solution! Bimodal (or really "many modal”) IT is already here and it’s the status quo of the low performers and not the high performers.

This realization was immediately clear to those people who spend their days close to the keyboard, fixing and coordinating all of the work and refereeing the conflict across today’s silos. What do they want instead? They want a way of thinking and working that is consistent across the organization. Let the process, tooling, and speed differences flourish. But, if you want to solve today’s problems, get everyone to approach their work with the same “mode”, the same general recipe for success, the same understanding of what brings quality.

R. England: There is a model I use to understand the evolution of an organisation in this context:

In the first phase an organisation is in an experimental mode exploring the possibilities of agile, lean, automation, and new ways of working in their situation.

In the second phase the organisation incubates the new capability, DevOps, developing something custom that works for the organisation.

The third phase is about adoption of the capability widely, and the fourth phase is the resulting transformation of the operating and governance models to the new way of working across all parts of the organisation.

Given this model I would say that it makes sense to be bimodal in the experimental phase, to give the innovators freedom to explore, but in order to incubate and adopt a capability it is necessary first to converge everybody to a single way of working.

If you do not converge then the continuing presence of two cultures within your organisation is extremely damaging.

B. Kromhout: I think it’s a false dichotomy. Every part of a transformation needs to proceed at the pace that works for that product or that part of the organization. Going for the lean-style MVP means creating value right away and then continuing to iterate. Be realistic and make small changes in your org just like you do in your software. You’re not going to run a Markov bot on the front page of Hacker News and continuously deploy whatever you find there into production; same incremental changes apply to your org.

M. Kern: There are three kinds of needs in large organizations. (1) Commodity software, like office applications, needs no particular mode for any incidental small add-on. (2) Organizational support (not core business) functions should buy “off the shelf” software, as improving support functions beyond what everyone uses does not measurably affect organizational performance, top line growth, bottom line growth, service or product quality. Examples are ERP or CRM. These can be modified, within reason, by small additions in a fast mode. Excess modification is unwise, and leads to large unjustified costs. (3) Core mission operations tend to need custom “enterprise software”, which usually involves one or more large enterprise databases, maybe geospatial systems, maybe data analytics. These are expensive and require cost justification, measurement of ROI, and description of overall scope to satisfy fiduciary responsibility. They also require a round of higher level test and evaluation – of the organizational performance using the tool, not the tool itself – to assure goals have been met and base scope is secure. These are not automated.
The decision is automatic, based on the business case and the lifecycle phase.

InfoQ: Is bimodal IT an attempt at dealing with large systems that are ill-suited for Continuous Delivery (CD) and DevOps practices, partially due to the legacy technology they were built upon? And do you believe that is a good, bad or “least bad” approach?

M. Cronin: I see Bimodal IT as a transition step in our organizations. A necessary transition. It enables the following in a large distributed organization:

  • Large, monolithic legacy systems to live in the same space as agile, rapidly developing systems while not impeding their progress. 
  • It can enable the monolithic project to evolve into an agile system while not disrupting the production landscape with releases. I particularly like the strangler pattern approach described by Martin Fowler.

The reason for Bimodal IT is that the legacy system cannot, or does not want to, adopt more agile methodologies like continuous delivery and scrum. Due to integration challenges, the Bimodal aspect of the landscape allows the legacy system to integrate with the agile system without causing delays. A simple example might be a payment app (agile) integrating with a backend payment system (monolithic). The stakeholders of the monolithic payment system are very happy with that system in production. They have a release once every 18 months and the business requirements do not change rapidly. The payment app needs to do daily releases to an app store based on feature requests and bug reports (affecting the app rating in the store). The app is independent from the payment system (different teams, different methodologies, different technologies) but relies on data from the payment system for testing each sprint. The rate at which this data is made available by the payment system is not acceptable to the app team, impeding the app quality. Solutions are found: the monolithic payment system could look at automation, or a separate test data suite could be created in the cloud replicating the payment system test data, etc. These solutions are the bimodal nature of that architecture.
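The strangler pattern Cronin refers to is often implemented as a thin routing facade in front of both systems: migrated routes go to the new service, everything else still hits the monolith. The sketch below is a minimal, hypothetical illustration of that idea; the handler names, route prefixes, and payment-system stand-ins are invented for the example and are not from any real system.

```python
# Hypothetical sketch of the strangler pattern: a routing facade decides,
# per request path, whether to call the newly extracted (agile) service or
# fall back to the legacy (monolithic) system.

def legacy_handler(path: str) -> str:
    # Stand-in for a call into the monolithic payment system.
    return f"legacy:{path}"

def new_payments_handler(path: str) -> str:
    # Stand-in for the independently deployed, frequently released service.
    return f"new:{path}"

# Routes migrated so far; everything else still goes to the monolith.
# The monolith is "strangled" one route at a time by growing this list.
MIGRATED_PREFIXES = ["/payments/status", "/payments/history"]

def route(path: str) -> str:
    """Facade in front of both systems."""
    if any(path.startswith(prefix) for prefix in MIGRATED_PREFIXES):
        return new_payments_handler(path)
    return legacy_handler(path)
```

Because the facade is the only component that knows which side owns a route, the two teams can release at entirely different cadences without coordinating each deploy.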

D. Edwards: There is a common excuse that just doesn’t hold water: “we have these legacy systems so we can’t use the same mindset and methods as the high-performers”. The evidence shows that just isn’t true. Want some proof? Go to the DevOps Enterprise Summit and hear from Ticketmaster, Nordstrom, SIX, Equifax, and more. Legacy systems and mainframes? They use the same principles and techniques to improve lead times and quality. Even IBM —the temple of the mainframe— clearly lays out that there is no reason why you can’t do modern practices like Test Driven Development on the mainframe. Again, where are the high-performers going? In their organizations they foster one common mode of working, with local freedoms to run at the appropriate speed and make process and tooling variations as necessary. 

R. England: I believe that it is a very bad approach to try to separate off legacy systems for long.

I call this Chernobyling: taking the legacy systems and putting a huge concrete dome over them, trying to pretend they're not there anymore, leaving the workers to die slowly inside.

M. Hering: I think it is reality; some systems were not built with modern development practices in mind. But then there are still many modern systems that cannot be automated easily either. Some of the SaaS solutions have challenges for at least some aspects of the lifecycle. We have not yet reached the point where everyone is working with the same principles. Over the last few years CD and DevOps practices have become more popular, but in large organisations maturity is still growing and it is not yet mastered. We work a lot with those systems that are not easily automated, and Accenture’s DevOps practice specialises in finding creative ways around the limitations while working with the software vendors to highlight required changes. It is a long journey as there is a lot of technical debt in those systems. So dealing with not-so-fast delivery methods for systems that were not built for DevOps will be a reality for a while, and hence is the least-bad approach until we can solve all our automation challenges.

M. Kern: The little add-ons for your COTS (Commercial Off The Shelf) apps can be DevOps alright. So can the Operational and Maintenance phase of custom enterprise apps automating core mission functions. But fiduciary responsibility and process transformation demand some other approach before MVP (Minimum Viable Product) or IOC (Initial Operational Capability) - in any dialect.

B. Kromhout: Bringing Continuous Delivery into the discussion is a straw-man argument. If it’s not necessary to make changes to a legacy system, it certainly isn’t necessary to iterate on it quickly. In reality, functionality is being split off the monolith following the strangler pattern. Just because you aren’t going to change the previous implementation doesn’t mean you aren’t going to change the end result of what it’s delivering.

InfoQ: One might look at bimodal IT as a transition state - not a permanent way of working - whereby the organization acknowledges some systems will require a longer adaptation period to get up to speed with modern IT practices. Do you agree? Why or why not?

M. Cronin: I agree. Bimodal is for applications, what hybrid is for technology. I believe in a 5 to 10 year timeframe neither will exist (or will be minimal) as the application landscape is fully product driven and the technology landscape cloud based.

D. Edwards: A phased transformation of an organization is a lot different than "managing two separate, coherent modes of IT delivery, one focused on stability and the other on agility.” Nothing in the Bimodal IT definitions I’ve seen say “attention: this is a temporary transition strategy". So either this is a new revisionist definition or it is people putting their own definition onto the term Bimodal IT.

R. England: It is a transition state while you are still experimenting to understand the new ways of working. Typically this experiment occurs in fairly uncoupled digital systems i.e. cool new web stuff.

Once you converge, it is essential to promote the concept of DevOps to everybody: culture, automation, flow, feedback, and sharing.

The headwind of legacy is dealt with by understanding that different parts of the organisation will be at different levels of maturity and working to different cadences, but that is not the same thing as having two different ways of working.

M. Hering: As per my comments before, I think bimodal or multi-speed is a transition state to an “all fast” state. The question is whether or not organisations have the rigor and grit to stay the course when it takes longer and turns out to be more complicated. Very few organisations can ditch the legacy and so have to deal with the more complex multi-speed for a while.

B. Kromhout: Giving people explicit permission to stagnate is a mistake. Everyone should be challenged to see where they can keep adding value to an org. And nobody wants to feel like they’ve been moved to Team Rest’n’Vest.

M. Kern: Creation of a new piece of software around an enterprise database to automate the core mission, unique, not available as product, requires some thought and architecture. Once the groundwork is laid small increments can be used effectively in Operations and Maintenance phase.

InfoQ: Is there a danger of bimodal IT leading to “legacy teams” where people in the “safe mode” hold on to old practices and tools as they lack exposure to DevOps and CD practices?

M. Cronin: I think customers’ expectations and behaviour will mean that enterprises cannot fall into that trap. As customers’ behaviour evolves, and with it the way feedback and data are communicated, the enterprise will have to respond more rapidly. A simple example is customers sharing data about their health using a Fitbit, and their expectations of their health insurer based on that information. Not only is the customer sharing the data, they expect the insurer to have reacted to it and to rapidly provide feedback or a service based on that data. Twitter is similar: a customer shares information about an organisation over Twitter, and they expect that organisation to respond. It is the customer expectation here that will drive organisations to abandon “safe modes”. The monolithic system will fast become a bottleneck.

Enterprises should engage support and help to actively avoid falling into this trap.

D. Edwards: Yes. How could there not be? You’ve set different parts of the same IT organization — people who are working on different ends of the same business systems — with opposing mandates and philosophies. To a Mode 1 mindset, DevOps and CD is going to sound like crazy stuff that "might work ok for those risky Mode 2 toys but not for our Mode 1 apps that run the real business".

R. England: Yes.

M. Hering: This is a risk with all changes: if you isolate a group from the change, they don’t learn and don’t get exposed to new thinking. I think it is important, even in bimodal or multi-speed settings, to expose everyone to the change and gradually change the overall culture and practices. DevOps is a change program, and if you consider this in your plan you will not get into this problematic situation. But from experience I can say that many organisations underestimate the change management aspect. In Accenture we are building DevOps practices into our delivery approach, and training people from all technologies in DevOps practices. Many of the benefits which arise from DevOps are as applicable to legacy enterprise technologies as they are to modern web applications.

B. Kromhout: First of all, having no ability to push necessary fixes to production on demand is the opposite of safe. And yes, this is what I’ve been saying in conference talks all year. This just increases organizational silos and resentments.

M. Kern: The older practices needed to think through and create an initial, huge system should not be discarded. They remain relevant. In practice, no one really starts with tiny increments without looking at the whole. Even SAFe now has architecture. The idea that you could skip it was a fallacy.

InfoQ: We now hear of organizations reaching a point where their backend systems are now the bottleneck to increasing delivery speed, after successfully automating the pipeline for smaller applications requiring little to no integration. How does bimodal IT handle (or not) the fact that services (both internal and external to the organization) are becoming more and more interdependent?

M. Cronin: Previously, techniques like Gartner's pace layering explicitly placed monolithic systems in the systems of foundation, Agile (or Fast IT) in the systems of differentiation and innovation, and customers in the “external layer”.

These lines are now rapidly blurring. Customers are in the systems of interaction (fitbit and Twitter examples in my previous answer), their behaviour and expectations drive technology changes.  Bimodal IT is a way to enable this to happen and not let the monolithic systems be the bottleneck, moving these systems out of the foundation layer into the differentiation layer.

There are many small and large companies out there specialising in this space (automation for monolithic systems), some specific to product vendors, some generic. There are architectural approaches to support this (the strangler application pattern example).

The key step however is for organisations to plan how their portfolio is managed, what is a product and finally how products are managed and evolved.

Interestingly some large financial organisations are planning to have very few monolithic systems and purposely intend to avoid the Bimodal IT model.

D. Edwards: This touches one of the biggest logical fallacies of Bimodal IT. Anyone with actual hands-on experience in the enterprise knows that the “systems of engagement” and “systems of record” divide is both arbitrary and falls apart in any view under the 10,000 foot level. Your systems of engagement are useless without the systems of record. The value in the systems of record can’t be realized without the systems of engagement. In reality, it is all one system! 

Companies who put the Mode 1 label on their systems of record are giving permission for it to be the boat anchor that slows down the Mode 2 teams. Of course, Gartner will say that is not the case. But how could it not be? You’ve told the Mode 1 teams to be slow and stable (which we now know doesn’t make you more stable) and optimize for cost. All of those API changes, new data types, and capacity for the new load patterns, and more that your Mode 2 is waiting on? Get in line.

Again, this isn’t something new. This is how most enterprises already work today. I can’t say it enough, Bimodal IT describes the problem and not the solution!

R. England: Bimodal fails to handle it by allowing the industrial systems to continue as usual.

By adopting a DevOps culture universally we can still improve legacy systems to higher levels of throughput and quality fairly quickly, and in the long term we sail towards the star of granular architecture (SOA), standardised infrastructure, and integrated supply chain.

Horses take a long time to grow into unicorns - decades. Horses don't grow horns but they can learn to tap dance.

M. Hering: Realistically it has always been the case that the slowest application in a stack determines the speed of delivery. There are two interesting factors at play here: first, the faster applications got faster and faster through CD, DevOps and Cloud; and secondly, new architecture principles made it possible to decouple systems through API layers better than before. To me the decoupling will continue and will initially enable bimodal and multi-speed delivery. As all systems speed up, the old paradigm of the slowest application determining the speed of delivery will hopefully become less and less relevant.

M. Kern: Placing Bimodal IT’s slower mode at the front of big, enterprise application development allows all the subsystems and apps to be small, produced by nimble teams in parallel. Database back-ends always unified these little subsystems, and still do. They handle the interdependence with a unified schema.

B. Kromhout: I’m assuming by “backend” you’re talking about legacy data stores. And sure, that shiny new “stateless” app was fast and easy to get out the door. Guess what? Any place you have customers or money, you have state. Successful organizations aren’t punting on the question of how to modernize the parts of their state that touch core business value. They’re embracing frameworks like Spring to modernize their Java backends and methodically moving data into modern data stores where it makes sense. Like Deming says, it’s not necessary to change; survival is not mandatory.

InfoQ: Safety is another main argument for bimodal IT but at the same time we’ve seen a notable increase in open source security tools that integrate well with automated pipelines. What are, in your opinion, some remaining obstacles for automated delivery to increase application security as it has been proven to increase application quality?

M. Cronin: While open source tooling is maturing in the security space, distributed enterprises often have dedicated security groups with pre-approved security technologies. These technologies are frequently not easily integrated into continuous delivery and integration pipelines. The main obstacle for automated delivery is to get the new security tools approved internally by these security groups. I recommend that DevOps and Agile teams engage their security group very early and identify which tools can support CI & CD.

D. Edwards: I think it is just a matter of more time, more examples, and increasingly mature tools. The patterns and the results are obvious. The same somewhat counterintuitive principles that are showing the way to better quality also work for security. Working in smaller batches, quicker turns on the full lifecycle to get faster feedback, automated scanning/testing that starts at the earliest point in development, “shift-left” of responsibilities. It is all there and there are good efforts underway to educate people (Rugged DevOps, I am the Cavalry, etc.)

R. England: Security is not my field.  From my more general point of view, security is simply another non-functional requirement and DevOps drives higher quality of all non-functional requirements. "We don't build functionally ready code; we build operationally ready code."

M. Hering: I am not a security expert so I am probably not the best person to answer this. From what I have seen security is very specialised and a lot of money is being spent to try to stay a step ahead of the “competition” from hackers. I think in security more than in other spaces the feeling of “a safe pair of hands” that comes from using commercial software and having someone to hold accountable might be part of the reason. At this point open source is often used to supplement commercial products, which is a good first step.

M. Kern: Development methodologies from (ISC)2 and others increase security, and are consistent with that slower mode up front. So long as the system boundary does not change, small incremental upgrades are routinely handled in the operations and maintenance phase. This is consistent with a lifecycle view of Bimodal IT.

B. Kromhout: There’s nothing safe about being trapped on an 18-hour conference call when the quarterly deploy has gone wrong and the rollback failed. Small batches of changes are far safer. Audit trails produced by automation are more consistent than those produced by checklists. You can gate the actual delivery made possible by a CI/CD pipeline, whether it be in the pipeline itself or via feature flags.
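Gating delivery via feature flags, as Kromhout describes, means code ships through the pipeline continuously while the new behaviour stays dark until the flag is turned on. A minimal sketch of that idea follows; the flag name, the in-memory flag store, and the checkout functions are all hypothetical stand-ins (in practice the flags would live in a flag service or config store, not a module-level dict).

```python
# Hypothetical sketch of gating a continuously deployed change behind a
# feature flag: both code paths are in production, but only the flagged-off
# (old) path is exposed until the flag is flipped.

FLAGS = {"new_checkout": False}  # stand-in for a real flag service

def is_enabled(flag: str) -> bool:
    # Unknown flags default to off, so deploys are safe before configuration.
    return FLAGS.get(flag, False)

def checkout(cart: list) -> str:
    if is_enabled("new_checkout"):
        return f"new-flow:{len(cart)} items"   # dark-launched behaviour
    return f"old-flow:{len(cart)} items"       # current behaviour
```

Flipping the flag (and flipping it back) is then an operational action independent of deployment, which is exactly what makes small, reversible batches of change safer than a quarterly big-bang release.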

InfoQ: Large traditional vendors have (seemingly) been slow to open up their proprietary systems to allow for modern DevOps practices. Do you agree and if so to what extent has that played a role in the (perceived) need for bimodal IT?

M. Cronin: Most vendors talk about DevOps and have “DevOps tooling“ but their biggest failure is that they talk about their tools in isolation. I believe they should actively create partnerships with one another to help organisations put together their CI and CD stacks.

Organisations like Uber and Yelp have “toolsmiths” - teams that create and manage the CI and CD stacks for the organisation. They bring together the complementary strengths and new features of the various tool vendors, allowing delivery teams to get running with development quickly.

I think tool vendors should actively create toolsmith communities, to achieve this goal for large enterprises.

D. Edwards: I think for the most part, the traditional large vendors are jumping on the DevOps bandwagon. In many cases, I think perhaps a little too fast. It’s not uncommon to see a vendor come across the DevOps movement and think “hey our tools can do x also”. So they slap the word DevOps onto their existing marketing and then wonder why they get backlash.

I don’t think it’s because of a lack of wanting to help their customers or anything nefarious. I just think they are in a tough place. They have huge investments in existing products that were created and sold to longstanding tool categories and labor divisions. In fact, they built big sales and marketing empires and many careers on these old categories and divisions. Now they are in a bit of an existential crisis since DevOps is reshuffling the old categories and divisions underneath their feet.

Are some vendors using Bimodal IT as an opportunistic way to sell to organizations who don’t want to change? I don’t know.

R. England: Technology is even less my field, sorry.  Tech is the easy part.  Systems, people, and culture are the hard part.  Certainly it is not my perception that the large vendors are unwilling - they seem to be piling in.

There is an ideological perception in the DevOps community that an open multi-supplier tool-chain is necessary, preferably all open source. This is part of what I call the hippie side to DevOps. The reality in large organisations is that integrated suites from a single major supplier are still considered a safe option.

M. Hering: I think the challenge here is that not enough organisations are asking for this. It’s difficult for product vendors to justify the investment to make their application “DevOps-ready” when clients usually ask for new functionality instead of asking for better engineering practices. The value that the market puts on the openness of those platforms for integration with DevOps tools is just not there yet. But the pressure is increasing and slowly we see movement in this space.

B. Kromhout: A large traditional vendor could be motivated to extend the lifespan of a product they don’t want to update by telling you that you don’t have to update or replace it. That’s not a reason to listen to them. They aren’t the ones facing your competitive landscape; they have their own incentives. You’re going to be better served working with vendor partners who pay attention to the needs of your org.

M. Kern: Commercial software has a role. There is no need to create maintenance branches of unsupportable code in commercial products.

InfoQ: Finally, would you say there’s any truth to ITIL characterizing the “predictable mode” while DevOps would characterize the “exploration mode” - or not?

D. Edwards: No. There are many valuable lessons to learn from both ITIL and DevOps. In reality, people may choose to selectively apply these bodies of knowledge in mutually exclusive areas and end up suffering from all sorts of crappy or misguided outcomes. Saying one is exclusively for one part of your org and the other is exclusively for the other part of the org is a foolproof recipe for reinforcing silos and making today’s Bimodal IT reality even worse.

M. Cronin: I would not see ITIL as one mode and DevOps another. I would see it as a difference in how an organisation perceives and manages change in their environments.

ITIL defines processes around event management, incident management, service management, and so on. It creates structure around asset management, configuration management of databases, etc. With ITIL you control and manage change through standard change processes. Change is discouraged, something to be wary of, and frequently happens in a “big bang” manner. Mean Time Between Failures (MTBF) is tracked and encouraged to be high.

DevOps enables developers to work without deep knowledge of the infrastructure (especially in serverless environments or good PaaS architectures). Changes are regulated and audited, but happen more rapidly. Change is encouraged, and Mean Time to Recover (MTTR) is tracked and driven down. High values of MTBF are also encouraged.

It is a difference in perception rather than mode.
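The two metrics Cronin contrasts are easy to compute side by side. The sketch below uses a hypothetical incident log (the numbers are made up for illustration): MTBF is average uptime between failures, MTTR is average time spent recovering from them.

```python
# Hypothetical incident log: (start, end) in hours since the
# beginning of a 100-hour observation window.
incidents = [(10.0, 10.5), (40.0, 41.0), (90.0, 90.25)]
observation_window = 100.0  # total hours observed

downtime = sum(end - start for start, end in incidents)  # 1.75 h
uptime = observation_window - downtime                   # 98.25 h

mtbf = uptime / len(incidents)    # mean time between failures
mttr = downtime / len(incidents)  # mean time to recover

print(f"MTBF: {mtbf:.2f} h, MTTR: {mttr:.2f} h")
```

A high-MTBF culture optimises for failures being rare; a low-MTTR culture accepts that failures happen and optimises for recovering from them fast. Which number an organisation watches says a lot about which mode it is really in.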

M. Kern: I see the reverse.  DevOps is best applied in the O&M phase of large systems.  Architecture and forethought are required for new, large systems.

R. England: ITIL and DevOps are not alternatives. They are different lenses on the same reality and both lenses are useful - they coexist. ITIL needs to adapt to accommodate DevOps but ITIL has always been about "adopt and adapt".  Equally, DevOps needs to learn from and accommodate some of the concepts of ITIL in order to be fully operationally ready in a legacy enterprise. It is amusing in recent times to watch some of the DevOps community rediscovering business continuity, enterprise architecture, change management, problem management, and other traditional ITIL disciplines.

I prefer to characterise ITIL and DevOps as two horns of the dilemma of "Protect and Serve". IT has always wrestled with the conflicting roles of being the custodian of the organisation's largest asset, its information and associated technology, while at the same time being the provider of new value by changing that asset to serve the needs of the business.  ITIL has traditionally put too much emphasis on the Protect side of the balance and Agile has traditionally put too much emphasis on the Serve.  DevOps needs to find that balance to deliver operationally ready code.

M. Hering: ITIL is actually not incompatible with DevOps. When you look at some of the principles you notice that there are a lot of similarities. I think it’s the interpretation that is standing in the way of many people. We see ITIL and DevOps as being complementary. ITIL provides a useful framework or reference which covers the majority of disciplines necessary to operate an effective and stable IT service. The majority of DevOps practices and principles align really well within the ITIL framework. A good example of this would be how a continuous delivery pipeline can be considered a standard change.
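Hering's example of treating pipeline deployments as ITIL standard changes can be sketched as a pipeline step that emits a machine-generated change record for every run. All names here are hypothetical (no real ITSM tool API is assumed); the point is that an automated record is produced the same way every time, which is what makes the audit trail consistent:

```python
import datetime
import json

def record_standard_change(pipeline_run, artifact, result):
    """Emit a change record for one pipeline deployment.

    Every field is filled in by automation rather than by a person
    working through a checklist, so the records are uniform.
    """
    record = {
        "type": "standard_change",
        "pipeline_run": pipeline_run,
        "artifact": artifact,
        "result": result,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    return json.dumps(record)

# In a real pipeline this would be one of the final stages of a deploy.
print(record_standard_change("build-1234", "webapp-2.3.1", "deployed"))
```

Because the change is pre-approved as "standard", no per-release change advisory board meeting is needed; the pipeline itself is the control.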

B. Kromhout: DevOps is a cultural practice of cross-team cooperation, and tooling in the DevOps space leads to far more predictable, repeatable, idempotent results. The artisanal hand-crafted servers of yore are never going to be as predictable as a modern platform, no matter how many analysts say so. Discussing ITIL vs DevOps is a waste of time; think about what you need to accomplish and then examine the tools, practices, and principles that will get you there. (Spoiler alert: the answer is “it depends”.)

About the Panelists

Margo Cronin is an open group certified master architect and certified program manager. She worked for 10 years for IONA Technologies as a middleware consultant before moving to the FinServ sector in Switzerland. There, over the last 10 years, she has worked for Credit Suisse and Zurich Insurance in both delivery and enterprise architecture roles. Margo Cronin started working with DevOps to enable scrum and agile to be successful in large distributed organisations. She is co-founder of an enterprise architecture platform in Switzerland called EntArchs.

Damon Edwards is the co-founder and managing partner of the DTO Solutions consulting group. Damon is also a frequent contributor to a Web Operations focused blog and the co-host of the DevOps Cafe podcast series.



Rob England is a self-employed IT commentator and consultant. He consults in New Zealand on IT governance, strategy and management. Internationally, he is best known for his blog The IT Skeptic and half a dozen books on IT, and he speaks widely at conferences and online.


Mirco Hering leads Accenture’s DevOps & Agile practice in Asia-Pacific, with focus on Agile, DevOps and Continuous Delivery to establish lean IT organisations. He has over 10 years experience in accelerating software delivery through innovative approaches. In the last few years Mirco has focused on scaling these approaches to large complex environments. Mirco is a regular speaker at conferences and shares his insights on his blog.

Matthew Kern holds a certificate in electronics from Community College of the Air Force, a bachelor’s degree in electrical engineering from Penn State, two postgraduate certificates in enterprise architecture from Cal State East Bay, and a master’s degree in engineering management from National University. He also holds a Project Management Professional certification from PMI, a Certified Information System Security Professional certification from (ISC)2, an Information System Security Architecture Professional certification from (ISC)2, a Zachman enterprise architecture certification, two black belt level enterprise architecture certifications from FEAC Institute, and a certification in ITIL. Over his nearly forty-year technology career he has acted as Chief Scientist, Chief Systems Engineer, Chief Architect, CEO and CTO.

Bridget Kromhout is a Principal Technologist for Cloud Foundry at Pivotal. Her CS degree emphasis was in theory, but she now deals with the concrete (if ‘cloud’ can be considered tangible). After years as an operations engineer (most recently at DramaFever), she traded in oncall for more travel. A frequent speaker at tech conferences, she helps organize tech meetups at home in Minneapolis, serves on the program committee for Velocity, and acts as a global core organizer for devopsdays. She podcasts at Arrested DevOps, occasionally blogs, and is active in a Twitterverse near you.
