The primary objective of DevOps is to increase the speed of delivery at reliable quality. To achieve this, good configuration management is crucial, as the level of control becomes more and more important at higher speeds of delivery (while riding a bike you might take your hands off the handlebar once in a while, but a Formula One driver is practically glued to the steering wheel). Yet commercial off-the-shelf (COTS) products often don’t provide any obvious way to manage them the way you manage your custom software. This is a real challenge for large organisations that deal with a mixed technology landscape. In this article I will explore ways to apply modern DevOps practices when dealing with COTS products.
COTS products will remain an important part of enterprise landscapes
Hopefully this article helps those who are looking for a way to get started. But why should we even bother with COTS and other systems of record?
Ultimately, for me this comes back to the two-gear analogy that I keep using. If you can simply ditch your “legacy” applications completely, then congratulations: you don’t have to deal with this and can probably stop reading. For the rest of you who cannot do that, you will eventually realise that while your digital and custom applications can now deliver at amazing speed, you are still constrained by your “legacy” applications. Speeding up the latter will help your delivery organisation reach its terminal velocity.
For example, consider an organisation that uses a COTS product for customer relationship management (CRM) which provides information both to a digital channel (like an iPhone app) and to customer service representatives (CSRs). The speed of providing new functionality in the iPhone app is limited by the speed at which the back-end CRM system can provide the underlying services. Increasing the delivery speed of the CRM system therefore not only speeds up the enablement of the iPhone app but also puts new functionality in front of the CSRs much more quickly.
Besides speed, those COTS products often require a lot of effort to support several codelines at once (production maintenance, fast releases, slow releases), which has become a very common pattern in organisations these days. With each codeline the effort required to branch and merge code and do the required quality assurance increases. I have seen code merge activities consume up to 20% of the overall delivery effort and add weeks, sometimes months, to the delivery timeline. From my own experience, this merging effort can be reduced by up to 80%, saving millions of dollars. This figure was measured by comparing the proportion of effort spent on configuration management activities before and after implementing the practices outlined in this article.
Unfortunately many COTS products have not yet made the shift to the DevOps world
You might wonder whether COTS vendors have understood the need to operate in a DevOps and Continuous Delivery focused world. In my experience there is a realisation that this is important, but most of the solutions provided by vendors are not yet aligned with good practices (think bespoke SCM solutions and the lack of development tool APIs). The guidance below mostly sits in a grey area for vendors: they don’t encourage you to work this way because they prefer you to use their solution, but you are not breaking anything that will compromise your support arrangements. I think as a community it is our responsibility to keep pushing COTS vendors to adopt technical architectures that make them more fit for a DevOps context. So far the feedback has not been strong enough, and vendors can continue to ignore the real needs of DevOps focused organisations.
I would love for vendors to proactively reach out to our community, but so far I have not seen this happen. It will be up to us in the industry to demand that they do the right thing, or alternatively to start voting with our feet and slowly move away from those solutions. Personally, where I have the choice and it is economically reasonable, I avoid introducing new applications that don’t meet the following minimum requirements for DevOps:
- Can all source code, configuration and data be extracted and stored in external version control systems?
- Can all required steps to build, compile, deploy and configure the application be triggered through an API (CLI or programming-language based)?
- Can all environment configuration be exposed in a file which can be manipulated programmatically?
So how do I approach COTS ‘code’ and get its configuration management under control?
Step 1 – Find the source code
COTS applications can be pretty annoying when it comes to finding the actual source code. Many of them come with their own “configuration management”, and vendors will try to convince you that those solutions are perfectly fine; not just fine, in fact, but more appropriate for your application than industry tools. They might be right, but here’s the thing: it’s very unlikely that you only have that one application, and it’s very likely that you want to manage configurations across applications. I have yet to find a proprietary configuration management solution that can easily be integrated with other tools.
Imagine a baseline of code. You want to be able to recall/retrieve the configuration across all your applications, including source code, reference data, deployment parameters and automation scripts. Unfortunately, this has so far not been possible for me with COTS-provided configuration products. They also usually don’t track all the required changes very well, focusing mainly on a subset of components.
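To make the contrast concrete: with a standard version control system, recording a baseline across all your applications can be as simple as tagging every repository with the same label. Here is a minimal Python sketch; the repository names are hypothetical stand-ins for your custom code, COTS configuration exports, reference data and automation scripts.

```python
import subprocess

# Hypothetical repositories that together form one release baseline.
REPOS = ["custom-app", "crm-cots-config", "reference-data", "deploy-scripts"]

def tag_baseline(label: str) -> None:
    """Tag the current state of every repository with the same baseline label."""
    for repo in REPOS:
        subprocess.run(["git", "-C", repo, "tag", "-a", label,
                        "-m", f"Release baseline {label}"], check=True)
        subprocess.run(["git", "-C", repo, "push", "origin", label], check=True)

if __name__ == "__main__":
    tag_baseline("release-2024.1")  # recall later with: git checkout release-2024.1
```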
Last but not least, they don’t deal well with parallel development and the need for branching and merging. While I am certainly no fan of branching and merging, more often than not it is a necessary evil that you need to deal with. In my experience this process can be extremely costly and error-prone with COTS products, and improving this alone will lead to some really meaningful benefits. I have seen organisations that tracked which modules were changed in a release in an Excel sheet; the merge process required comparing those sheets, followed by manual activity to resolve the conflicts. Not only is this error-prone but also quite labour-intensive. By storing your code in a standard version control system you can reduce the error rate to nearly zero and achieve a reduction of effort of up to 95%.
Here is an example of the effort required for a single merge activity:
So what can you do if you don’t want to use the proprietary source control tooling? First of all, identify all the components required by your application. The core package and its patches are better managed in an asset management tool, so I am not going to discuss those. What you want in your configuration management tool are the moving parts that you have changed. For example, in a Siebel implementation I was dealing with, the overall solution had over 10,000 configuration files (once we exposed them), but our application only touched a couple of hundred (about 2% of all files).
Storing all the other files will just bloat your configuration management with no real benefit, so try to avoid it. Especially when you want to run a full extract and transfer later on, this can become a hindrance, and the signal-to-noise ratio gets pretty low. Measuring the percentage of code changed between releases is only meaningful if you analyse the code your application changed rather than the full code of the base product.
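One low-tech way to keep the noise out is to maintain a manifest of the components your application actually touches and copy only those into version control. A minimal sketch, with hypothetical export and repository paths:

```python
from pathlib import Path
import shutil

# Hypothetical locations: the full COTS export and our version-controlled repo.
EXPORT_DIR = Path("/data/cots-export")
REPO_DIR = Path("/data/config-repo/siebel")

# Manifest of the components our application actually changed (a few hundred
# out of ~10,000 files in the full product export), one relative path per line.
MANIFEST = Path("tracked_files.txt")

def copy_tracked_files() -> int:
    tracked = {line.strip() for line in MANIFEST.read_text().splitlines() if line.strip()}
    copied = 0
    for relative in sorted(tracked):
        target = REPO_DIR / relative
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(EXPORT_DIR / relative, target)
        copied += 1
    return copied

if __name__ == "__main__":
    print(f"Copied {copy_tracked_files()} tracked files into the repository")
```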
Once you have identified all the components that require tracking, you have a few ways to deal with them:
- Option A) Interfere with the IDE
The most effective and least error-prone way to do this is to integrate the developer IDE with the version control system in the backend and intercept any changes made to the COTS application on the fly. For example, for one of our Siebel projects we created a little custom UI in .NET that intercepted any change made through the Siebel Tools IDE, forcing a check-in into our version control system with the required meta-data. This UI used the temp storage of the Siebel Tools IDE to identify the changed files and pushed them into the version control system in a pre-defined location to avoid any misplacement of files.
(Note: When manually storing COTS config files in version control, they often end up in multiple locations because the repository folder structure does not matter to the COTS product. When importing files back into COTS products, only the filename and/or the file content is important, not where the file is located on the filesystem. As a result, developers will often store a duplicate file in a new location when they cannot quickly identify that the file already exists somewhere else in version control. Controlling the location of files via the mechanism described above also solves this problem.)
In the following picture you can see the custom IDE, which supported the following:
- requiring a username and password for the developer to log in
- automatically assigning a location for the file
- allowing the developer to search for the right work item
- allowing check-in comments
- providing feedback on the status of the check-in
Figure 1 – Custom IDE to intercept any code changes
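Our implementation was a .NET UI, but the interception idea itself is simple. Below is a minimal Python sketch of the core step, with hypothetical paths and a .sif extension standing in for the exported objects; the real tool also handled login, work item search and check-in feedback as listed above.

```python
import subprocess
from pathlib import Path

# Hypothetical paths: the temp storage the COTS IDE writes exported objects to,
# and the pre-defined target location inside the version control working copy.
IDE_TEMP = Path("C:/SiebelTools/temp")
REPO = Path("C:/work/config-repo")
TARGET = REPO / "siebel" / "objects"

def check_in_changes(developer: str, work_item: str, comment: str) -> None:
    """Move files the IDE just wrote into their fixed repo location and commit them."""
    for exported in IDE_TEMP.glob("*.sif"):
        destination = TARGET / exported.name  # fixed location avoids duplicate files
        destination.parent.mkdir(parents=True, exist_ok=True)
        exported.replace(destination)
    subprocess.run(["git", "-C", str(REPO), "add", "-A"], check=True)
    subprocess.run(["git", "-C", str(REPO), "commit",
                    "-m", f"{work_item}: {comment}",
                    "--author", f"{developer} <{developer}@example.com>"], check=True)
```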
- Option B) Extract on a regular basis
Where you cannot easily interfere with the IDE, you might want to use a regular extraction utility to pull configuration files out of the COTS application and push them into version control. This could be done every night or even more frequently. For the same reasons as explained in option A above, you should look for a way to identify only recent changes rather than pushing every file into version control every time. While many version control systems ignore check-ins of exact copies, the performance of this solution would otherwise be very much impacted by the number of files. A sketch of such a job follows below.
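A minimal sketch of a nightly extraction job; cots-export is a hypothetical CLI standing in for whatever export mechanism your COTS product offers:

```python
import subprocess
from datetime import datetime

REPO = "/data/config-repo"

def extract_and_commit() -> None:
    # Hypothetical vendor CLI that exports only objects changed since the last
    # run; substitute your product's actual export mechanism here.
    subprocess.run(["cots-export", "--changed-since-last-run",
                    "--output", f"{REPO}/cots"], check=True)
    subprocess.run(["git", "-C", REPO, "add", "-A"], check=True)
    # Only create a commit when something actually changed.
    diff = subprocess.run(["git", "-C", REPO, "diff", "--cached", "--quiet"])
    if diff.returncode != 0:
        stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
        subprocess.run(["git", "-C", REPO, "commit",
                        "-m", f"Nightly COTS extract {stamp}"], check=True)

if __name__ == "__main__":
    extract_and_commit()
```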
- Option C) Force outside creation of files
A few years ago I worked with some smaller COTS products that didn’t allow me to follow either of the above two processes, as there was no programmatic hook into the UI of the IDE and it provided only import functions, not export. In this case we changed the process so that developers worked outside of the COTS IDE, and we built automation that, upon check-in to version control, imported the files into the COTS product (see the sketch below). This is clearly the least favoured solution as it requires additional effort from the developers and increases the risk of overwriting recent changes in the environment when developers don’t adhere to the process and use the COTS IDE instead. For this solution to work we had to automate the deployment process and keep control over the environments.
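The import automation can be as simple as a job triggered after each push to version control. A minimal sketch, with cots-import as a hypothetical stand-in for the product’s import function:

```python
import subprocess

REPO = "/data/config-repo"

def import_committed_files() -> None:
    """Run after a push (e.g. from a CI job): import changed files into the COTS product."""
    # Files changed by the latest commit.
    result = subprocess.run(
        ["git", "-C", REPO, "diff", "--name-only", "HEAD~1", "HEAD"],
        capture_output=True, text=True, check=True)
    for path in result.stdout.splitlines():
        if path.endswith(".sif"):  # hypothetical extension for COTS objects
            # Hypothetical vendor import CLI; this product only offered import.
            subprocess.run(["cots-import", f"{REPO}/{path}"], check=True)

if __name__ == "__main__":
    import_committed_files()
```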
Step 2 – Make good practices easy for developers
Developers working with COTS or legacy applications are often just not used to modern development practices. Enforcing these can feel like extra overhead and can make adoption much harder than necessary. Look for opportunities to make new practices easy to adopt.
For example, don’t force mainframe developers to move their files to a different filesystem to check code into your preferred configuration management system.
Don’t make developers switch context to use JIRA for tracking work items; integrate any additional tooling into the natural steps of a developer’s workflow. For example, use an IDE that can provide basic coding checks (e.g. that commands in COBOL start from column 8) and integrates with a ticket system like JIRA.
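To make the column rule concrete, here is a tiny illustrative lint (not the project’s actual tooling) for fixed-format COBOL, where columns 1-6 hold sequence numbers, column 7 an indicator, and statements start at column 8:

```python
def check_fixed_format(source: str) -> list[str]:
    """Flag fixed-format COBOL lines whose statements start before column 8."""
    issues = []
    for number, line in enumerate(source.splitlines(), start=1):
        sequence, indicator = line[:6], line[6:7]
        if sequence.strip() and not sequence.strip().isdigit():
            issues.append(f"line {number}: columns 1-6 should be blank or a sequence number")
        # Column 7 may hold a comment (*), continuation (-), page eject (/) or debug (D).
        if indicator not in ("", " ", "*", "-", "/", "D", "d"):
            issues.append(f"line {number}: unexpected indicator '{indicator}' in column 7")
    return issues

if __name__ == "__main__":
    sample = "MOVE A TO B.\n000100 MOVE A TO B.\n      *comment\n       MOVE A TO B."
    for issue in check_fixed_format(sample):
        print(issue)  # flags only the first line, which starts in column 1
```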
For mainframe developers used to developing in a text-based system, the ease of getting feedback this way may improve adoption. As mentioned above, in Siebel you can create steps in the IDE that automatically commit code into your chosen configuration management system and make it easy to identify the work item(s) you are currently working on.
All these changes will increase developer adoption of appropriate practices; not because the practices are better for the team, but because they make developers’ lives easier. Good processes that are difficult to follow will hardly be followed.
Even obvious improvements can be hard to implement. At one company I was trying to convince developers to use an IDE for COBOL development rather than a text pad, as the latter meant basic coding issues (such as a command starting in column 7) were only identified once code had been uploaded to the mainframe. After proposing the change to an IDE, the team didn’t come around until I proved that my code, once uploaded, failed significantly less often than the average developer’s code.
Step 3 – Support intelligent merges
Developers who are used to native COTS products are often not familiar with 3-way merges, and even if they are, traditional tooling might not provide the necessary support. I will showcase this in the case of Siebel code. We will have to dive a little deeper here to explain the idiosyncrasies of Siebel code and the Siebel Tools IDE.
If you want to merge code natively, Siebel Tools basically compares two versions of the file and shows you the differences, without identifying a common ancestor; you then have to judge what to do with each one. Siebel Tools does not know about 3-way merges. The graphics below demonstrate how a 3-way merge can help identify conflicts.
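To illustrate the difference, here is a small sketch using git merge-file (a generic tool, not Siebel-specific): a 2-way comparison of the two edited versions shows two differences it cannot attribute to either side, while the 3-way merge against the common ancestor resolves both automatically.

```python
import subprocess
from pathlib import Path

# Three versions of the same file: the common ancestor and two parallel edits.
Path("base.txt").write_text("field-a = 1\nfield-b = 2\nfield-c = 3\n")
Path("ours.txt").write_text("field-a = 1\nfield-b = 20\nfield-c = 3\n")    # we changed b
Path("theirs.txt").write_text("field-a = 10\nfield-b = 2\nfield-c = 3\n")  # they changed a

# git merge-file uses the ancestor to see who changed what; -p prints the result.
result = subprocess.run(["git", "merge-file", "-p", "ours.txt", "base.txt", "theirs.txt"],
                        capture_output=True, text=True)
print(result.stdout)                    # both edits kept: field-a = 10, field-b = 20
print("conflicts:", result.returncode)  # 0 = merged cleanly, no manual work needed
```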
When trying to use a common configuration management tool to enable 3-way merges for Siebel, I ran into a new problem: developers did not trust the configuration management tool. I was surprised, but a closer look at the Siebel code showed me the problem. Let me explain by first showing you a code sample:
As you can see, Siebel stores some meta-data in the source code (e.g. user name and timestamp of the change). A configuration management tool that is not context-aware will show you a conflict if your files differ only by this timestamp but are otherwise the same. If you open the same files in the Siebel Tools IDE, it understands that this difference is not relevant and shows no conflict between the files. Run a report with traditional configuration management tools and you will hence see a large number of false positives.
This leaves you with a choice between Siebel Tools, which avoids false positives but does not provide 3-way merges, and a traditional configuration management tool, which provides 3-way merges but shows a lot of false positives. Here is where better merge tools make all the difference. Tools like Beyond Compare allow you to define a custom grammar that identifies parts of the code that are not relevant for the merge. Look for such a grammar for all your COTS configuration and use the merge tool that is most appropriate.
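The same normalisation idea can be scripted for reporting. A minimal sketch that strips assumed audit attributes before comparing; the attribute names are illustrative, not the definitive Siebel schema:

```python
import re

# Illustrative attribute names; Siebel archive files carry per-object audit
# meta-data such as the updating user and a timestamp of the change.
METADATA = re.compile(r'\s+(UPDATED|UPDATED_BY|CREATED|CREATED_BY)="[^"]*"')

def normalise(text: str) -> str:
    """Strip audit meta-data so only meaningful changes show up as differences."""
    return METADATA.sub("", text)

def files_really_differ(a: str, b: str) -> bool:
    return normalise(a) != normalise(b)

if __name__ == "__main__":
    v1 = '<BusComp NAME="Account" UPDATED="2016-01-01" UPDATED_BY="jsmith"/>'
    v2 = '<BusComp NAME="Account" UPDATED="2016-02-15" UPDATED_BY="mmeyer"/>'
    print(files_really_differ(v1, v2))  # False: timestamp-only difference, no conflict
```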
Below are the results from my project, which show a significant reduction in merges that required manual intervention, together with an example breakdown of the different kinds of file merges.
Step 4 – Close the loop by enforcing correct configuration (aka full deploys)
COTS products and other legacy systems often suffer from configuration drift as people forget to check code into version control. Because configuration management is not something COTS developers traditionally deal with, the chance is higher that someone makes a change directly in the environment without also putting the code into version control. This means that the application or environments do not match what is currently stored in configuration management. If something goes wrong and you need to restore from configuration management, you will miss those changes made directly in the environments. We want to minimise this risk and the associated rework.
The most practical way to deal with configuration drift is to redeploy the full application on a regular basis (ideally daily, but at least every week). Over time this enforces better alignment and minimises the amount of drift.
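A minimal sketch of such a scheduled job, assuming a hypothetical deploy.sh entry point: anything changed directly in the environment but never checked in gets overwritten, so drift surfaces quickly.

```python
import subprocess

REPO = "/data/config-repo"
ENVIRONMENT = "test"

def full_redeploy() -> None:
    """Nightly job: reset the environment to exactly what version control holds."""
    subprocess.run(["git", "-C", REPO, "checkout", "master"], check=True)
    subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
    # Hypothetical deployment entry point; substitute your automated deploy here.
    subprocess.run(["./deploy.sh", "--full", "--target", ENVIRONMENT],
                   cwd=REPO, check=True)

if __name__ == "__main__":
    full_redeploy()
```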
Conclusion
In summary, COTS and legacy applications can behave a lot more like your normal custom code if you put some effort into it. This means you can leverage common practices for code branching and merging, achieve reliable environment configuration, increase resilience to disaster events and, as a result, deliver functionality to production more predictably.
Some creativity is required and the bar is a bit higher to enable the cultural shift in your development team, but once you get there, the productivity and predictability of development will pay back significantly.
In one project we were able to reduce non-value-adding development time by over 40%; in another, configuration-related defects and outages were reduced by over 50%. And if that is not motivation enough, the COTS and legacy teams will be able to work much more closely with your other teams once practices are aligned and the legacy teams don’t feel left behind.
You can all move forward on your DevOps journey together!
About the Author
Mirco Hering leads Accenture’s DevOps & Agile practice in Asia-Pacific, with a focus on Agile, DevOps and Continuous Delivery to establish lean IT organisations. He has over 10 years’ experience in accelerating software delivery through innovative approaches, and in recent years has focused on scaling these approaches to large, complex environments. Mirco is a regular speaker at conferences and shares his insights on his blog.