
Facilitating the Spread of Knowledge and Innovation in Professional Software Development

Approaching Lock-In from a Consultant’s Perspective: An Interview with Nicki Watt


With the fast pace of cloud changes (new services, providers entering and exiting), cloud lock-in remains a popular refrain. But what does it mean, and how can you ensure you're maximizing your cloud investment while keeping portability in mind?

This InfoQ article is part of the series "Cloud and Lock-in". You can subscribe to receive notifications via RSS.

 

Key takeaways

  • When looking to leverage cloud capabilities, bringing your own software stack may increase portability, but may also mean a longer development time than if you used native services.
  • It can make sense to own certain aspects of the solution stack, such as VPN access.
  • Increase portability by creating more modular architectures on top of cloud infrastructure.
  • Be careful when embracing tools that "do everything". When building cloud infrastructure, for instance, use one tool for provisioning and another for configuration; this gives you more flexibility to swap out services that address individual capabilities.
  • Many costs come into play when delivering software, so consider not just implementation costs, but also the cost of maintaining custom solutions and baked-in support costs.

Consultants play a major role in helping companies deliver software. How do these consultants tackle lock-in and build portable solutions? In this interview, OpenCredo's Nicki Watt addresses the topic.

InfoQ: How often does the topic of "lock-in" come up with clients when you're planning solutions, and what is the typical context? 

Nicki: "Lock-in" often manifests itself in a few ways, such as software or product lock-in, as well as supplier lock-in. With larger projects and customers, such as enterprise and government organisations looking to engage in longer-term transformations and solutions, the topic of "lock-in" does come up quite a bit. More often than not, this is because the customer either had, or is still having, a bad lock-in experience of one sort or another, and they are keen not to repeat it moving forward.

InfoQ: When building systems hosted in the public cloud, how do you decide whether to use native services in that cloud, versus bringing an outside (open source) software stack?

Nicki: There is no straightforward answer to this question, although I wish there was! Some clients are happy to embrace a particular cloud provider and, given their time and budget constraints, are willing to accept a certain degree of "lock-in" from consuming provider-specific services in order to gain a faster, more integrated development cycle and the ability to consume more integrated cloud service offerings. Others are not: they recognise that bringing your own stack often means more development time and effort, but are happy with the longer-term flexibility payoff.

Where "lock-in" has been a real pain point in the past, assessing the core areas where the customer really wants to retain control is often a key factor. For example, having a consistent way to VPN into different cloud providers (by bringing your own VPN solution rather than consuming the cloud provider's) means the customer retains full control over this aspect, irrespective of whether the cloud provider even offers it as a service. Depending on the problem space, other aspects may not be considered quite as key. For example, if a solution requires a "relational database" of some sort, but is not precious about the specifics, consuming this via a cloud provider's "as a service" offering versus bringing and building your own still allows for the same business outcome, just via different routes. Crucially, as long as you only consume the basic relational functionality via the "as a service" offering, moving to a provider where you may need to bring your own remains feasible and relatively pain-free if required.

In all cases, such decisions are taken in conjunction with the client: we explain the options of going one route or another, and then allow them to decide what makes the best business sense in their case.
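To make the "consume only the basic relational functionality" idea concrete, here is a minimal Terraform-style sketch (the module path, inputs and outputs are hypothetical, not from any specific engagement): the rest of the codebase depends only on a generic endpoint and port, so the module behind it can wrap a provider's managed relational service or a self-managed database without its consumers changing.

# Hypothetical sketch: consumers depend only on a generic database interface.
module "app_database" {
  # Swap the implementation here, e.g. a managed relational service vs a
  # self-managed instance; the inputs and outputs below stay the same.
  source = "./modules/postgres"

  db_name  = "orders"
  username = var.db_username
  password = var.db_password
}

# Only generic outputs (endpoint, port) are consumed, never provider-specific attributes.
output "app_db_endpoint" {
  value = "${module.app_database.endpoint}:${module.app_database.port}"
}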

InfoQ: You've done some recent work with multi-cloud solutions. What's the typical motivation for this, and how do you minimize the coupling that would force you to maintain significant per-provider configurations?

Nicki: As mentioned before, often the motivation is that customers have been tied to specific vendors in the past and are not keen to have all their eggs in one basket again. Just having the capability to move providers often strengthens the customer's negotiating position in these matters, and having competition is a good thing! Price is often cited as an initial driving factor, although there can also be regulatory reasons requiring certain data to reside only in a particular geographic region, which may or may not be offered by all providers. Larger customers starting their journey towards embracing cloud computing are often a lot more comfortable running their dev and test workloads in the public cloud, whilst still choosing to run certain core production workloads in more specialised internal clouds or data centres.

At the end of the day, not all cloud providers are created equal! If you only opt for supporting the lowest common denominator, you can miss out on many of the value-add features cloud providers offer, so completely eliminating custom configurations and options is not necessarily feasible. That said, my approach to minimising coupling is geared around designing solutions which take advantage of a set of modular, API-driven tools and offerings, rather than a single one-size-fits-all solution. This approach, coupled with sticking to principles such as always separating your config data from code and looking to automate everything, gives you a fighting chance of being able to swap out different parts of your solution as required, and of adapting to changes in the ever-evolving cloud space.
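As a small illustration of the "separate config data from code" principle, the sketch below uses Terraform-style variables (the names and values are hypothetical): the code declares what it needs, while per-environment or per-provider values live in separate data files, so retargeting the solution means changing data rather than code.

# variables.tf -- the code declares what it needs but contains no environment data
variable "region"        { type = string }
variable "instance_type" { type = string }
variable "environment"   { type = string }

# dev.tfvars (hypothetical values) -- the data lives in a separate per-environment file:
#   region        = "eu-west-1"
#   instance_type = "t3.micro"
#   environment   = "dev"
#
# prod.tfvars, or a file per provider/region, carries different values;
# the code itself does not change between targets.

Applying a given environment then just means pointing at a different data file, for example terraform apply -var-file=dev.tfvars.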

InfoQ: Can you give us an example of where you applied this modular approach?

Nicki: On one of my recent projects, a key goal was to be able to spin up fully automated, self-contained, secure environments in different cloud providers. This requires the ability to execute automated code to provision the actual infrastructure in the provider, and then to apply config management on top of that to install and configure any additional software required thereafter. Rather than opting for a tool which combined both provisioning and config management functionality, we standardised on one tool (Terraform) to perform the actual infrastructure provisioning, and used cloud-init as a bridging mechanism to hand off to a dedicated config management tool within the VM to perform the final software installation and config. This allowed us to swap out the config management solution used (Ansible <--> Puppet) without impacting the codebase responsible for the infrastructure provisioning. If one tool had been responsible for both roles, this would have been much harder.
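By way of illustration, here is a rough sketch of that seam (the resource, variable names and script path are hypothetical, not taken from the project): Terraform owns the provisioning, and the cloud-init user data script is the single point where a config management tool takes over.

# Hypothetical sketch of the provisioning / config management handoff.
resource "aws_instance" "node" {
  ami           = var.base_ami        # image and sizing supplied as data, not hard-coded
  instance_type = var.instance_type

  # cloud-init runs this script on first boot. Swapping the config management
  # tool (Ansible <--> Puppet) means changing what the bootstrap script hands
  # off to, not the provisioning code in this file.
  user_data = file("${path.module}/bootstrap-config-mgmt.sh")

  tags = {
    Environment = var.environment
  }
}

The equivalent compute resource for another provider would look different, but the handoff point, a cloud-init script the instance runs at boot, stays the same.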

InfoQ: You mentioned price being a factor in these solutions. How do you approach that discussion when considering this sort of architecture? Not using built-in services from a particular provider may increase implementation and runtime costs, but provide a lower switching cost later on. What cost components do you try to make clear when someone has a more modular architecture, as opposed to a traditional monolithic one?

Nicki: A saying which holds just as true in our software development context as in any other is "there is no such thing as a free lunch". The general sentiment behind this phrase is typically the message I try to weave into most cost-based discussions I have. There are obvious costs, and then there are more obscure costs. Obvious costs, such as the actual prices paid on demand for consuming provider services, are more easily understood and reasoned about. Less tangible costs, such as the additional time and effort developers sometimes need to retrain, implement, customise and maintain "bring your own" type solutions, often need to be called out explicitly as part of the overall cost-benefit decision.

Another area to highlight is that there are inevitably costs involved in actually "moving" from one provider to another, even if this is fully automated. Providers make it easy to get data in, less so to get it out, so if you have a lot of data to deal with, this needs to be a consideration. Additionally, some monolithic-style solutions may have incorporated, or wrapped up, multiple support and licensing costs as part of a single cost model. Sometimes customers will want to move to a more modular architecture to gain flexibility, yet retain the use of certain software products for various reasons. Pointing out that running older-style products in a new modular (often cloud-native) architecture can sometimes prove quite costly is also pertinent in these cases, so that the most appropriate software choices can be made.

InfoQ: In another InfoQ article, I talked about "switching costs" being something that customers should care about more than "avoiding lock-in." What components do you find have the highest switching costs, and how does one mitigate that (if they care to!)?

Nicki: Non-automated components: Where components or solutions are not deployable and/or testable in an automated manner, this can have significant costs in terms of time and money when trying to move them elsewhere, irrespective of whether provider-specific components are used or not. Such snowflake setups often rely on finding that "one guy who knows how it works" to help explain the context, dependencies and so on before you can even begin moving forward. To mitigate: automate everything!

Niche provider-specific products: Where provider-specific features have formed part of a solution, depending on how integral they are, there can be a significant redevelopment effort required to adapt and re-engineer them for the new solution. An obvious mitigation is not to use custom features in the first place, if that is possible or indeed desirable. Failing that, making a conscious decision to at least minimise or contain their prevalence in the solution is the next best thing in my opinion; hence my preference for a modular, API-driven architecture which makes it easier to swap out smaller chunks of the solution at a time, rather than the whole shebang!

About the Interviewee

Nicki Watt works as a hands-on lead consultant at OpenCredo. A techie at heart, her personal motto is “Strive for simple when you can, be pragmatic when you can’t”. Nicki has worked on a variety of development and architecture projects, with her current focus in the cloud and continuous integration and delivery space. Nicki is also a co-author of Neo4j in Action, a comprehensive guide to the graph database Neo4j.

 
