Virtual Roundtable: The Role of Containers in Modern Applications


JP Morgenthal recently published a controversial DevOps.com article entitled “Containers Are Designed for an Antiquated Application Architecture.” After observing a lively debate on Twitter, InfoQ reached out to some of the most influential (and opinionated!) tech leaders in the industry – including Morgenthal himself – and asked them to participate in a virtual roundtable. As expected, this discussion proved lively and informative as the participants challenged assumptions and brought clarity to a popular, and misunderstood, topic.

Participating in this roundtable are:

  • JP Morgenthal - Director, Cloud & DevOps Practices at Perficient
  • Brent Smithurst - Vice President of Product Management & Marketing at ActiveState
  • Krishnan Subramanian - Director of OpenShift Strategy at Red Hat
  • Dan Turkenkopf - Senior Director of Strategic Research at Apprenda
  • Andrew Clay Shafer - Senior Director of Technology at Pivotal

InfoQ: Let's level-set first. In JP's article about containers and legacy architecture, he describes cloud-scale apps and how they differ from traditional n-tier apps:

  • "Cloud-scale applications by nature are stateless with any application state being managed by cache or database services. The compute unit of measure is the process not the CPU, which enables greater scalability. "
  • "They are typically built on languages with runtimes that operate across multiple operating system and platform-as-a-service environments."
  • "Cloud-scale apps leverage HTTP/S as their primary means for communication."
  • "Cloud-scale applications actually require fewer custom components to deploy making them easier to manage over time"

Do you agree with these characteristics? How else would you classify a cloud-scale application? What about management aspects?

Krish: I call applications that are stateless, with state managed by caches or datastores, cloud-native apps. "Cloud-scale apps," in my opinion, just confuses people with a new term. We already have web scale to denote apps that are cloud native and operate at the scale of Google, Amazon, or Netflix.

I categorize apps as follows:

  1. Cloud-native vs. traditional, based on how the state of the app is managed (stateless vs. stateful)
  2. Monolith vs. microservices, based on functional distribution

Having made this clear, I agree with JP that cloud-native apps are stateless, can be scaled out seamlessly, are polyglot, and use REST for communication. However, I am not sure I agree with his characterization that such apps require few custom components. With the right automation and suitable management tools, cloud-native applications with custom components can be easily managed.

Even though I would claim that PaaS is a suitable level of abstraction for cloud-native apps (of course, with my OpenShift hat on), there could be instances, especially at web scale, where an IaaS+ approach might work better.
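
To make the stateless characterization above concrete, here is a minimal sketch (not from the roundtable) of a handler that keeps no state in its own process and pushes everything to an external datastore, so any number of identical instances can serve requests. The Flask/Redis choice, the environment-variable names, and the key are illustrative assumptions, not anything the panelists prescribed.

```python
# Minimal sketch of a stateless, "cloud-native" handler: the process holds no
# state of its own; all state lives in an external datastore.
# Assumes Flask and redis-py are installed; names below are hypothetical.
import os

import redis
from flask import Flask, jsonify

app = Flask(__name__)

# All shared state lives outside the process, so instances are interchangeable.
store = redis.Redis(
    host=os.environ.get("CACHE_HOST", "localhost"),
    port=int(os.environ.get("CACHE_PORT", "6379")),
)

@app.route("/visits")
def visits():
    # Each request reads and writes the shared counter; adding or killing
    # instances of this process loses nothing.
    count = store.incr("visit_count")
    return jsonify(visits=count)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```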

Dan: I agree with Krish that "cloud-scale" isn't necessarily the right term. There are many applications that can and do benefit from running on cloud that don't need tremendous scale.

That said, JP does a great job in capturing the cloud application patterns of statelessness, HTTP/S and location transparency (which I'll define a little bit more broadly as the ability to run on multiple providers).

On the topic of microservices, I'd like to distinguish between the packaging/deployment aspect and the architectural aspect. It seems the packaging and the containerized deployments get most of the press, but the real value to me is driving a loosely coupled, componentized system with each component having a single responsibility. That's what allows you to maintain a flexible, adaptive application. As long as you have the right automation and process, it doesn't really matter if you deploy a bunch of individual services or a large monolith (see Etsy).

JP: In retrospect, "custom" was a poor choice of words here, but it was a blog, not an article (the difference being I put more journalistic integrity into an official article than into a blog, which is my opinion). What I meant was that a cloud-scale app uses more cloud services rather than having to build, configure, and bundle those services with the application. With fewer moving parts deployed alongside the application, the application becomes easier to support in a production setting. Of course, this is predicated on the service provider and the service being highly reliable. For example, many applications deployed with Docker will contain their own MySQL instance instead of leveraging the MySQL RDS service from Amazon. The latter provides redundancy, availability, backup, etc. as part of the service, but if the application is deployed as a traditional n-tier application using Docker, all those non-functional requirements continue to burden the operations groups. If you start to peel away these dependencies that make Docker attractive, you're left with a core executable which can easily be delivered using something like Cloud Foundry or Elastic Beanstalk far more effectively and with lower operational overhead.
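
One way to read JP's MySQL-versus-RDS example is that the choice between a bundled database and a managed service can be pushed entirely into configuration, so the application artifact stays the same either way. A hedged sketch, assuming SQLAlchemy; the DATABASE_URL variable name and example URLs are hypothetical.

```python
# Sketch: the same artifact can point at a MySQL instance bundled in the
# container or at a managed service such as Amazon RDS, depending only on the
# connection URL injected at deploy time. Assumes SQLAlchemy is installed;
# the DATABASE_URL variable name and example URLs are hypothetical.
import os

from sqlalchemy import create_engine, text

# e.g. "mysql+pymysql://app:secret@localhost/appdb"                      (bundled MySQL)
#      "mysql+pymysql://app:secret@mydb.example.rds.amazonaws.com/appdb" (managed RDS)
engine = create_engine(os.environ["DATABASE_URL"])

def healthy() -> bool:
    # A trivial query; backups, failover, and patching are the provider's
    # problem when the URL points at a managed service.
    with engine.connect() as conn:
        return conn.execute(text("SELECT 1")).scalar() == 1

if __name__ == "__main__":
    print("database reachable:", healthy())
```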

Brent: I agree with Krish and Dan that “cloud-scale” is a confusing and unnecessary term. “Cloud-native” seems to be the more generally accepted term. If we substitute “cloud-native” for “cloud-scale” below, then I agree with JP’s characteristics. With regard to his first bullet, we’ve seen many enterprises struggle in their attempt to move a legacy application to the cloud. This is often due to problems managing state. Product plug: that’s why we introduced a filesystem service three years ago.

JP’s last bullet is potentially the most controversial. I agree with it as an ideal, but perhaps not yet in practice. To me, a true “cloud-native” application should have very few (if any) custom components. The ideal of a cloud-native application is that it uses standard, pre-made, “off the shelf” components and is only customized in the business logic and presentation layers. In practice, we’re not there yet.

JP’s definition is missing several management aspects. I assume this is because he kind of stopped at the design stage and left off the operational requirements. I’d certainly add the ability to automatically scale instances up or down as required to be performant under load while using up as few resources as possible. Of course, there are also monitoring, logging, automation, and other operational requirements as well.
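
The automatic scale-up/scale-down that Brent mentions typically reduces to a small control loop owned by the platform rather than the application. The following is a purely illustrative sketch of that decision logic; current_load and set_instance_count are hypothetical stand-ins for real monitoring and orchestration APIs.

```python
# Illustrative only: the kind of control loop a platform runs to scale an
# application up or down with load. current_load() and set_instance_count()
# are hypothetical stand-ins for real monitoring and orchestration APIs.
import random
import time

MIN_INSTANCES, MAX_INSTANCES = 2, 20
TARGET_LOAD_PER_INSTANCE = 0.7   # e.g. 70% CPU or request saturation

def current_load() -> float:
    """Pretend metric: average load per instance, 0.0 - 1.0."""
    return random.uniform(0.2, 1.0)

def set_instance_count(n: int) -> None:
    print(f"scaling to {n} instances")

def autoscale_step(instances: int) -> int:
    load = current_load()
    # Estimate how many instances would bring load back to the target,
    # then clamp to the allowed range.
    desired = round(instances * load / TARGET_LOAD_PER_INSTANCE) or 1
    desired = max(MIN_INSTANCES, min(MAX_INSTANCES, desired))
    if desired != instances:
        set_instance_count(desired)
    return desired

if __name__ == "__main__":
    count = MIN_INSTANCES
    for _ in range(5):
        count = autoscale_step(count)
        time.sleep(1)
```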

Andrew: I partially agree, as this represents a currently workable approach to building things, but I also hope we avoid too much self-limiting codification.

N-tier apps could have and should have always been stateless down to the data tier. People just got caught up adding features and complexity that required state in the other tiers. I see what is being described here as a cloud scale approach attached to the legacy of the n-tier model. Other data-centric approaches and paradigms that invert this problem are starting to emerge.

HTTP/S is the convenient protocol more than the optimal one. The request-response cycle can be a bit limiting, but can be made to work for most scenarios with a few tricks and HTTP is accessible to most developers. There are opportunities for improvements and some thought has already been put into protocols like SPDY, SCTP, and SST.

Once the system is a bunch of endpoints passing messages over HTTP, the runtimes and languages don't matter so much. Developers and organizations can choose the tools that best fit their understanding and familiarity.

I don't agree with the fewer-custom-components point, particularly when you start to look at microservice architectures, which explicitly create more services. The big trend is building platforms that push the fixed cost of deployment down in terms of time and effort, with monitoring, management, and other day-2 operational capabilities built in as a primary concern. There is a much longer answer that involves fault tolerance and gracefully handling partial failure. In a service-oriented world, code spends the vast majority of its time and cost being operated. Organizations with fault-tolerant applications and first-class operations have a competitive advantage at any scale, but as you approach true cloud scale this becomes existential: you either solve these problems or you fail to provide a reliable service.
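
Andrew's point about fault tolerance and partial failure is worth a concrete illustration: once every dependency is an HTTP endpoint, each call needs a bounded timeout and a degraded fallback so that one sick service does not take down its callers. A minimal sketch using the requests library; the service URL, retry policy, and fallback behaviour are hypothetical.

```python
# Sketch of calling a dependent service over HTTP while tolerating partial
# failure: bounded timeout, limited retries, and a degraded fallback instead
# of cascading the outage. The URL and the fallback value are hypothetical.
import requests

RECOMMENDATIONS_URL = "http://recommendations.internal/api/v1/for-user"

def recommendations_for(user_id: str, retries: int = 2) -> list:
    for attempt in range(retries + 1):
        try:
            resp = requests.get(
                RECOMMENDATIONS_URL,
                params={"user": user_id},
                timeout=0.5,           # never wait indefinitely on a sick dependency
            )
            resp.raise_for_status()
            return resp.json()
        except requests.RequestException:
            if attempt == retries:
                # Degrade gracefully: an empty list keeps the caller working.
                return []
    return []
```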

InfoQ: Dan pointed out that a real key to modern cloud applications is loose-coupling and single-responsibility components. While this pattern can be realized in many different runtimes (e.g. virtual machines, traditional PaaS), do containers (positively) steer you towards this architectural style?  What role do containers play in facilitating modern cloud solutions?

JP: I don't believe containers have a direct bearing on architecture. They can be used to package an entire application or a single component. The point I make in the blog is that whichever way you decide to use them, you are carrying around the operating-system and application-dependency scaffolding necessary to make that package operational. Containers are designed to be task-based, which is why you always need an active foreground process or the container will terminate. We turn them into daemons by adding simple dummy loops to keep them alive. These are not mini-servers. They are complex processes, and as such their work could be carried out much better by a combination of PaaS platforms and existing cloud services.
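
JP's observation that a container needs an active foreground process is easy to show: the usual workaround is a small launcher or "dummy loop" that stays in the foreground as the container's main process while the real work runs elsewhere. A rough sketch; the child command is hypothetical.

```python
# Sketch of the workaround JP describes: a container terminates when its
# foreground process exits, so a trivial keep-alive launcher starts the real
# workload in the background and then loops in the foreground.
# The child command is hypothetical.
import subprocess
import time

def main() -> None:
    # Start the actual workload as a background child process.
    worker = subprocess.Popen(["python", "worker.py"])  # hypothetical command

    # Dummy loop: as long as this foreground process keeps running,
    # the container stays alive. Exit when the worker dies.
    while worker.poll() is None:
        time.sleep(5)

    raise SystemExit(worker.returncode)

if __name__ == "__main__":
    main()
```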

Dan: Containers might make you think a little bit more about how to separate your application into smaller execution units, but, as JP says, that's not a hard and fast requirement for using a container. You can still build a monolithic application in a container as long as you have a single process to launch (which really could just be a script that launches a bunch of background processes).

And, while containers are great for runtime isolation of components and portability, they are absolutely the wrong level of abstraction for application development and deployment.

Developers should identify the capabilities their applications need and not care about how the implementation is provided. The container patterns on the market force an understanding of the full stack that delivers those capabilities.

As Krish's colleague Thomas Qvarnstrom points out - once you build a container, you essentially own everything underneath the application. Yes, you can leverage existing images to assemble a stack, and there are ways to ensure a new deployment will get a new version of your dependencies (woe unto you if the versions aren't backwards compatible though), but the developer is now responsible for maintaining what's really a full application hosting environment.

To be frank, most developers don't have the experience, and more importantly, the desire to fulfill that responsibility individually or in small groups. A runtime platform lets developers worry about their application, and identifying their needs, and lets other developers or operators supply the pieces that fulfill those needs. While cross-knowledge and empathy between all team members is clearly important, specialization of labor arose for a purpose. A platform allows for the accumulated experience to be applied where necessary, and creates a lingua franca to translate across expertise boundaries - in a way that would be extremely difficult in a container based system.

Brent: I think containers do steer you in the microservices direction of loosely coupled and single responsibility components. As Richard said, you can accomplish this without containers, but containers make it easier, and even obvious. Creating a monolithic application in a single container might make sense for packaging and distribution, but you’d really be fighting against what the tool is intended to do. I’d simply say that containers promote loosely coupled and single responsibility components.

However, even though it is the microservices ideal, it’s not yet happening in the real world at scale (aside from a relatively small number of unicorns). As Dan says, most developers don’t (yet) have the experience for this. (I’ll leave the “desire” question for a possible follow-up question about DevOps.) To be clear, I don’t believe that it should be the responsibility of a developer to create management systems to tie all of the required containers together. That is clearly the job of a PaaS; any good, modern PaaS should encapsulate application instances inside a container and allow for independent updating of individual application components while also orchestrating updating of multiple components. A PaaS takes on the “off the shelf” responsibility I referred to in the “cloud-native” question #1 for the operational and management aspects of the application.

Krish: This is a good question, Richard. There is quite a bit of confusion in the industry about the connection between containers and microservices. My peers on the panel have done a great job explaining how containers do not drive the microservices architectural style. You can run your microservices on top of any computational unit/fabric, including mainframes. I don't think the panel has any second opinions on this topic.

Similarly, I agree with my fellow panelists that developers just want to code and not worry about the nature of the underlying components. PaaS offers the necessary abstraction to developers: they just push their code to the platform interface and magic happens underneath. I would even go the extra mile and claim that PaaS is the ultimate DevOps nirvana, where developers and operations focus on their strengths and don't worry about acquiring cross-functional knowledge. PaaS removes the friction between developers and operations by standardizing the environments across development, testing, and production. I don't think the panel has any second opinions on this topic either.

But …

I see a clear role for containers in modern IT. When I talk about containers, I am not talking about all the container variants that have been in existence since the early days of Linux; I am talking specifically about Docker-based containers. Docker is fast emerging as a market-driven standard for packaging applications in containers. There are two distinct advantages of Docker which I want to highlight in this discussion. The biggest problem Docker has solved is the UX problem for developers: it not only gives developers a simple way to package their code, it also makes the DevOps workflow leading to production deployment seamless. I have seen two different breeds of developers. One group likes the abstraction provided by PaaS, and the other group (this is a new trend in the developer community in the past year) wants to package their application using Docker on their laptops and then push it to the release pipeline. A PaaS supporting Docker meets the needs of developers in both these groups without their having to worry about what happens underneath the platform.

Now let us talk about the second advantage of Docker. As my fellow panelists pointed out, microservices architecture provides a way to develop applications as a loosely coupled set of functionally independent services. Docker is not at all necessary to build such services. However, if such services are encapsulated in Docker-based containers, it helps IT avoid lock-in. The key to modern enterprise is not just a loosely coupled set of functionally independent services but also avoiding lock-in with not just the underlying infrastructure but also the underlying platform. A composable enterprise increases the agility of the organization many-fold. A modern enterprise (which I define as composable enterprise + no lock-in) not only increases agility but also future-proofs the organization for newer innovation. The portability (or no lock-in) advantage is not about not getting locked into a single vendor. It is key for innovation. If you look at the history of enterprise IT, the biggest hindrance to innovation was always the cost associated with tight coupling to the underlying platform. Such tight coupling slows down enterprise adoption of newer technologies, thereby hindering innovation. If microservices are loosely coupled to the underlying platform, it becomes easy to embrace newer technologies, especially in this era of exponential technology evolution. This is the future-proofing nirvana which every business leader expects from their IT organization. A standards-based container like Docker offers this advantage. Clearly, this is not important from a developer point of view, but it is critical from the organizational-innovation point of view.
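
In application code, the loose coupling to the underlying platform that Krish describes usually shows up as a thin interface between the application and whatever backing service it is bound to, chosen by configuration rather than hard-coded. A hedged sketch with hypothetical backends and environment-variable names.

```python
# Sketch of loose coupling to the platform: the application codes against a
# small interface, and the concrete backing service (all names hypothetical)
# is chosen by configuration, so swapping technologies later touches one place.
import os
from typing import Protocol

class ObjectStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class LocalDiskStore:
    """Simple backend for development or a self-hosted deployment."""
    def __init__(self, root: str = "/tmp/objects"):
        self.root = root
        os.makedirs(root, exist_ok=True)

    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self.root, key), "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self.root, key), "rb") as f:
            return f.read()

class InMemoryStore:
    """Stand-in for some newer managed service adopted later."""
    def __init__(self):
        self._data = {}

    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data

    def get(self, key: str) -> bytes:
        return self._data[key]

def make_store() -> ObjectStore:
    # The platform binding is a configuration decision, not an application one.
    return InMemoryStore() if os.environ.get("STORE") == "memory" else LocalDiskStore()

if __name__ == "__main__":
    store = make_store()
    store.put("greeting", b"hello")
    print(store.get("greeting"))
```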

Andrew: Almost everything people think they like about containers is even more true of unikernels, or whatever you want to call things like OSv and OpenMirage, but putting that aside, containers definitely have a role to play for the foreseeable future. I also want to point out that when people say containers now they really mean Linux containers, and any portability benefit comes from the fact that all these things are running on a Linux kernel.

I think the answer to the future is in the history. Google has to be considered a cloud pioneer, and their work in this regard is responsible for the container functionality being added to the Linux kernel. The utility containerization provides is more granular control over the resources consumed in a fabric of computational infrastructure. The tooling to share and socialize images is also nice, but that doesn't always lend itself to modern cloud solutions. From my perspective, too many people are treating containers like VMs, which will hold them back from truly going cloud native, but that is what people are used to and it mostly works, so it is hard to fault them.

I also want to point out that the question is being asked as if traditional PaaS were something different from containers, when in reality most of what you would put in that category is container-based and has been for years, for the reasons already mentioned.

JP: I have to call BS on the play here. The greatest agility came from businesses committing to a single platform and just buying into using its services. Great examples of this are Sybase PowerBuilder and Salesforce.com. The issue is this misnomer that lock-in is bad long-term for businesses because it limits their ability to negotiate better terms with vendors, and because it could be costly to exit at a future point in time. That may have been an issue in the '90s, but given that most new code probably won't survive beyond two to three years going forward, lock-in is less of an issue today than it was in the past. This innate fear of lock-in is old-world thinking. Committing to a cloud service provider today may incur some expense to port in the future, but the speed to market and the ability to meet the business's needs in weeks instead of months is invaluable. If we choose to stay in a world where the CIO's role is to negotiate pricing of contracts with IT vendors, then I suppose lock-in is an important issue. If we're moving into a digital age where the CIO is more concerned with customer experience and delivering function in a timely manner with high quality, then lock-in should be the lowest-ranked metric used to choose a platform.

The Docker motif maintains the operational overhead for the entire application stack and now has the potential to transfer that overhead to the developer. This is the exact opposite direction we should be moving as an industry if we desire agility and the ability to deliver capabilities faster. Leveraging existing cloud services may "lock" the application to a particular platform, requiring redevelopment to move to another cloud platform, but it delegates the burden of maintaining those cloud services to the cloud services provider, freeing up the developer to focus on delivering business capabilities.

Krish: On the first point raised by JP, I expected such a response from either the other panelists or readers, and I have already answered it. I made it clear that I am not talking about vendor lock-in. I seriously don't care if you want to spend all your money on a single vendor. I am talking about the leverage that loose coupling to the platform offers in terms of innovation (a.k.a. embracing newer technologies to add business value).

On the second point, he seems to be implying that the use of Docker containers adds additional overhead for developers. That is the case only if you use a DIY Docker approach or some of the container-engine services in the market (akin to an IaaS+ approach). A PaaS platform abstracting away containers (Docker or otherwise) takes away any need for developers to manage the underlying components. My point is about hooking your applications in a smart way without getting locked into the abstraction layer. It is doable, and there is none of the significant overhead that JP seems to imply.

JP: Krish, did I misunderstand you when you wrote "The key to modern enterprise is not just a loosely coupled set of functionally independent services but also avoiding lock-in with not just the underlying infrastructure but also the underlying platform" ?

Krish: You wouldn't have if you had read this below :-)

The portability (or no lock-in) advantage is not about not getting locked into a single vendor. It is key for innovation.

Andrew: You have to make commitments. The biggest lock-in and barrier to innovation for most orgs is the technical debt they have accumulated. With the caveat that you want to be careful with putting stateful services in containers, as was pointed out in the first question, the scaling characteristics are awful.

InfoQ: Let's talk about the reality of technical debt and where developers are today. I think it's fair to say that many (most?) enterprise app portfolios are dominated by commercial, packaged software from the likes of Oracle and Microsoft (not to mention all the small ISVs that solve specific problems). Should orgs write those apps off when it comes to plotting out new, scalable architectures, or is there a way for legacy applications (and delivery models) to co-exist with modern apps in a more agile, container-based, microservices world?

JP: There's a lot of complexity in this question. For one, I don't believe there's a dominant pattern for application portfolios across enterprises. I have found businesses that exist solely on one end of the spectrum (all custom or all COTS) as well as a mix of both. When it comes to technical debt, COTS raises some interesting dilemmas. Many businesses have spent millions (tens of millions?) on customizing these COTS packages to do what they need. It's a difficult investment to walk away from. Yet, there is significant technical debt captured within the bounds of those implementations. Some of that debt belongs to the software vendor and some belongs to the businesses themselves. It would be great if it were as simple as delivering microservice abstractions to reduce reliance on these systems and enable the business to move to a more modern platform. However, the underlying systems themselves don't make this simple. Moreover, in many of these cases the choices for replacement aren't significantly better. That said, I know of one business that underwent an IT transformation and paid off their technical debt by switching to SAP from Oracle. Their choice allowed them to deploy using all virtual servers running on a cloud architecture, with expected cost savings of $66 million versus what they would have paid had they continued on the old platform over the same time frame. In this case, moving to a scalable and more maintainable architecture was a difficult choice, but well worth the cost.

Technical debt continues to accrue interest and will continually eat away at monies that can be used to drive competitiveness and agility. It will be difficult for businesses to focus on investing in new modern scalable architectures while having to maintain their legacy debt.

Brent: I'm not sure you can generalize that commercial, packaged software dominates enterprise app portfolios. Well, perhaps you can, but you'd need to point me to a study proving this. Maybe it depends on what type of software you're referring to, or even on which department — obviously, commercial database software from Oracle, Microsoft, and others is extremely common. The enterprise (Fortune 1000) developers we speak to have huge numbers of custom applications written in various languages. Those may need to access and store data in a commercial package, but they're still custom applications that can be written or re-written to take advantage of a more cloud-native architecture.

Even with a commercial CRM or ERP system, many dozens of applications probably exist to extract and process or report on data, or to integrate with other systems. As JP says, these can be customizations to commercial software or they could be completely custom for just that organization.

If I’m understanding the question correctly, then I’d say, yes, legacy applications (such as a commercial ERP) can co-exist with modern, microservices based applications. In fact, re-writing some of those integration points to be more loosely coupled would be a good way to start eliminating technical debt and open the door to migration to a more modern platform. However, this isn’t going to be easy!

Dan: I'll add my voice to Brent and JP questioning the fraction of the application portfolio represented by packaged software. But I'm also a little confused by the question. Buying something to fulfill a function that's not core to my business is not what I would consider technical debt for the purposes of this conversation. Maybe it's a decision I need to revisit (and move to a more nimble SaaS vendor), but my degrees of freedom to "fix" the problem are relatively limited.

The question of legacy and technical debt is much more meaningful in the custom application development realm. There, we should separate application design and agility from hosting and operational efficiency. Robust existing application systems that may not be amenable to rapid changes to their internal logic or structure may still benefit from aspects of the "modern" world. If they can be hosted on shared infrastructure, their developers given self-service deployment capabilities, and their operators given a common mechanism for managing all applications, then the operating efficiency can improve - reducing cost of ownership.

I also want to take some issue with the presumption at the end of the question that the modern approach will be or even should be containers and microservices. I mentioned in my previous answer that I think containers are the wrong level of abstraction for app development (even if they're the right approach for application hosting), and I think microservices will work on the micro level but not the macro (see what I did there?) By that I mean that I think they're an intriguing way to design applications within organizational boundaries, but I think we're a long way away from the organizational change needed before one LOB takes a development AND runtime dependency on another LOB. Companies might get there, but I don't think it's a given.

Krish: I agree with my fellow panelists on the fraction of applications represented by packaged apps. In the cloud world, SaaS had the upper hand with enterprises before a new breed of PaaS offerings showed them that it is easy to develop custom applications in the cloud. But I still see a large number of custom applications inside enterprises. Moreover, PaaS has shown smaller companies that they too can take advantage of custom applications, at a much lower cost than at any time in the past.

Rip-and-replace may be the feel-good option for many of us who get excited by modern software platforms, but it is not practical for enterprises to embrace that approach. Legacy systems will co-exist with cloud-native applications for quite some time. But if an organization wants to embrace a microservices architecture, it needs a clear strategy for how it is going to integrate that architecture with legacy systems. Without a clear strategy, the technical debt will only accrue further as business pressures push them to continue with the "patch and go" approach.

An enterprise with a huge legacy investment that wants to embrace a microservices architecture should look for a trusted vendor who can help with strategy. Most of the focus on enterprise use of cloud lies on offering bimodal IT at the infrastructure level. That is only half of the problem; the other half requires enabling a similar approach in the other layers of the stack. The first step is to embark on organization-wide data virtualization: bring all the data stores under one umbrella, making it easy for modern services to use the data stored in legacy systems. You also need a good integration tool and other middleware services that can easily blend the two worlds. More importantly, traditional middleware tools may not be suitable for working seamlessly with cloud-native services. It is important to make sure that the higher-order middleware services are also cloud-enabled and don't act as a bottleneck to innovation.

Once an organization develops a clear strategy for the coexistence of legacy and microservices and finds a trusted partner who can help across all layers of the stack, its journey will be smooth.

Andrew: I was mostly in agreement until the assertion that if one picks the right partner, this will be a smooth journey. From my observation, the enterprise tends to celebrate failure by declaring victory which makes getting reliable information about project failure a challenge. If we're being candid, I'm sure we could all tell some incredible stories. Solving hard problems is hard.

In Ward Cunningham's original use, the label 'technical debt' represented the difference between what one understands when implementing a solution and what would actually solve the problem, with the idea that one always has to make choices under uncertainty. As one layers on more and more features, the effort required to maintain and extend the solution grows unless one invests in refactoring the solution to match our understanding. As systems become more and more complex, understanding becomes more difficult, and therefore so does implementing the solutions.

Bringing this back to the question, I cannot in good faith recommend anyone decompose working systems into microservices without modeling the first order assumptions of risk, effort and value. If a commercial package solves a common problem that is inherently well understood and stable, that might be an optimal solution. Organizations should only invest in building systems that differentiate the business. These systems can co-exist. The key to microservices is finding the right bounded contexts and contract that facilitate rapid iteration. Those boundaries can interface with legacy systems.

The last generation of enterprise software tends to be monolithic, and platforms are being marketed to support custom development, but as the platforms standardize, expect to see a new generation of enterprise software designed to run on those platforms. Expect to see more and more COTS packaged as microservices running side by side with the custom development, leveraging PaaS to deliver SaaS behind the firewall.

Krish: A trusted partner is important because one of the biggest drivers for technical debt is the presence of unnecessary packaged hardware/software/services sold by vendors to lock-in their customers for long term. The key to future proofing your organization is to avoid these unnecessary lock-ins and a trusted partner can help here.

Andrew: Avoiding 'lock-in' is a red herring and a weak value proposition. Optionality has value, but also a cost. Technology selection should be about what commitments will be made at what cost for what value.

Krish: As I mentioned in answer to a previous question, avoiding lock-in is not about vendor lock-in but about easy portability to future-proof against new technologies, which is an important value proposition in this fast-evolving technology world.

InfoQ: Dan's opinion has been clear that containers are the wrong abstraction for application development (and deployment). Taking this back to the original topic -- are containers designed for an antiquated architecture model? -- what's the best abstraction for developers to use when building/deploying scalable modern applications? Is it a PaaS? A PaaS with support for Docker-style containers? Something else?

Krish: Containers are not the abstraction for application development or deployment. They are the right encapsulation for applications, and deployment is aided by orchestration and other components. Most modern enterprise PaaS offerings use containers as the encapsulation component underneath. The way I would look at the question is: "Should one use an IaaS+ equivalent for application orchestration and management, or take the PaaS route?"

The answer is: it depends. There are some instances where an IaaS+ kind of approach to containers might be right, and in most cases PaaS is a good abstraction. Technically, one can achieve the same end goal with the container services offered by Amazon and Google by using the right tools along with them. What PaaS does is take out the operational complexity and provide a more standardized abstraction. I also want to quash a myth advocated in sections of the industry that the use of containers (whether Docker or some other form of container underneath) takes away the abstraction provided by PaaS. It is pure FUD, and there is no evidence to support such claims unless you categorize them under marketing.

Whether it is Docker or some other container format is the choice of different vendors. Every vendor will have their own reasons and their own marketing reasons to justify the choice of the underlying container. As an end user, it is up to you to decide what fits your organizational needs and go with it.

Brent: I think I’m generally in agreement with Krish, though our points are somewhat different (I read his response just after completing mine):

This depends on the purpose of the application. A microservices architecture is not necessarily suitable for everything under the sun. Honestly, if you are responsible for a single, monolithic application, then PaaS is probably overkill for your use case. PaaS is a very efficient abstraction layer for applications made up of multiple components, assuming those applications need to scale up and down automatically; PaaS is particularly suited to organizations with multiple applications, regardless of their architecture.

I firmly believe that it is the responsibility of a good PaaS to use containers for encapsulation. As I noted in a previous answer, cloud-native applications should use as many standardized, off the shelf components as possible. Containers, particularly with the rise in popularity of Docker, provide an excellent method of allowing this — just look at the list of “Dockerized” applications, services, and components available in the market! At this point, it’s bordering on irresponsible for an application platform to *not* support Docker. The role of a PaaS should be to orchestrate, scale, integrate, and manage those containerized pieces.

Back to the questions, I don’t really understand the first one — "are containers designed for an antiquated architecture model?” As answered previously, containers are suitable for a microservices architecture. Is that antiquated? I don’t know — is UNIX antiquated? Maybe the answer to both those questions is the same…

JP: Brent, the antiquated architecture refers to my original blog post, where I state that containers (and I clarify here, as this clarification was not stated in my original blog) used for production deployment support continued use of n-tier architectures versus cloud-native architectures.

I like what containers can do for supporting development and QA. They deliver consistent environments for development and testing, and facilitate easy maintenance of images that the entire team can pull on demand. Containers in production, as the primary packaging endpoint for an application or for one or more of its components, have the issues that I related in my original blog post.

We need to be moving developers away from developing applications at the operating system level and toward cloud-native architectures. Otherwise, they cannot leverage the value these environments bring, such as inherent scalability, mobility (moves to available resources, not across providers), availability, and fewer configurable components. Application containers are great for supporting this abstraction. How that application container chooses to implement scalability and effective resource usage is mainly the concern of the container provider. That said, somewhere the rubber will meet the road and someone will need to integrate storage, compute, and networking to provide consumable resources for the application container. Moreover, the makeup of those consumed resources will directly impact the overall performance of the final product. However, in the future using this model, the likelihood is that four to six infrastructure engineers should be able to support the needs of 20 to 30 applications, which is drastically less than the current ratio that ranges between 1:2 and 1:6.

Dan: Let me clarify my stance. I don't think containers are the wrong way to deploy applications - quite the contrary. Containers are a great way to ensure application isolation on shared resources. I just don't think that developers should be the ones dealing with the containers. Which then of course speaks to the introduction of a layer in the middle to translate between developer artifacts and the containers. I believe you can make this platform approach work for both existing and new applications - you'll just get more capabilities as a new style application.

Brent: This point of view makes complete sense to me. Developers should not have to deal with containers, but containers are the preferred way to deploy and run applications.

Wouldn't it be great if an application platform existed that allowed a developer to push application code and have it become automatically containerized? Or, even better, Dockerized (for standardization reasons)? One that allowed for use of any language or framework, handled service binding, scaled instances across availability zones automatically, automated logging and monitoring, and allowed for automated versioning and rollbacks while connected to your Jenkins instance?

That would be awesome! That's absolutely what a PaaS is for. You can get all those capabilities and more with Stackato today! Try it out, Dan! ;-)

Andrew: Containers aren't really an abstraction, at least not a new one and certainly not a development or deployment abstraction. At best, what we are talking about here is a static packaging of configurations and code artifacts, with a potential for dynamic injection of environment variables at runtime. The actual development can use this packaging to create artifacts for deployment but that's an implementation detail, not an abstraction.

The industry has also been conflating the packaging of images with process management. This might be convenient, but it loses some opportunities for optimization. Containers are essentially just processes that can't see or impact each other, but the current approach to image management drags an incredible amount of cruft forward. Even with the best possible hygiene and frequent rebasing against a common base image (though in my anecdotal experience most people are not doing this with discipline), the images still carry all the baggage of a full general-purpose operating system. Since the packaging is agnostic with respect to this past, containers can be used to build many different architectural styles, some of which are antiquated.

I'm always skeptical about 'best', but putting that aside the question about the 'best' abstraction cannot be answered without more fully developing the idea of what we mean by scalable modern applications. There are a lot of different answers to what should be considered the appropriate application architecture. What is the nature of the application? The hardest questions inevitably run into what is required and acceptable for the data. In lieu of really solving this problem well, people fall back to what they know and there is a lot of intellectual investment in n-tier architecture across the industry.

PaaS does represent an abstraction for deployment, but being an abstraction, the implementation may or may not use containers. The same PaaS interface could conceivably drive VMs, unikernels, or even something else. The same interface could also potentially mask drastically different architectures, schedulers, routing, logging, and a longer list of capabilities. In my opinion, every developer wants the deployment experience of a PaaS; what is unclear are the optimal tradeoffs with respect to the capabilities and limitations of the implementation choices. Finally, people overemphasize the developers. Code will spend the vast majority of its existence running, and the impact of the architectural choices and platform capabilities on ongoing operations, and therefore ongoing cost, cannot be overstated.

About the Panelists

JP Morgenthal is Director, Cloud & DevOps Practices at Perficient. He is an internationally renowned thought leader in the areas of IT transformation, modernization, and cloud computing. JP has served in executive roles within major software companies and technology startups. Areas of expertise include strategy, architecture, application development, infrastructure and operations, cloud computing, DevOps, and integration. You can follow him on Twitter at @jpmorgenthal.

Brent Smithurst is Vice President of Product Management & Marketing at ActiveState, where he works on the Stackato PaaS. He enjoys using his experience in IT Management to help make life easier for IT staff everywhere through better software tools and more efficient processes. You can follow him on Twitter at @brentsmi.

Krishnan Subramanian is Director of OpenShift Strategy at Red Hat. OpenShift is Red Hat’s Platform as a Service offering helping enterprises embrace DevOps. Prior to joining Red Hat, Krish was an industry analyst and founded Rishidot Research, a research and advisory firm focused on modern technologies. You can follow him on Twitter at @krishnan.

Dan Turkenkopf, in his role as Senior Director of Strategic Research at Apprenda, explores the bleeding-edge of cloud and distributed application technologies. Prior to a sojourn in the front office of the Tampa Bay Rays, he filled a number of technical roles at Apprenda and worked as a solutions architect at IBM. Dan holds a B.S. in Economics with a concentration in Management of Information Systems from the Wharton School of Business and a B.A. in Mathematics from the University of Pennsylvania. You can follow him on Twitter at @dturkenk.

Andrew Clay Shafer, Senior Director of Technology at Pivotal, continues a career built on helping organizations deliver software with better tools and processes. Andrew is widely recognized as an expert on open source, agile software development, and large-scale web operations. He has had many adventures developing software and operating services over the last decade, including working on the team that spec'd what would become Fusion-io flash drives, co-founding Puppet Labs, and implementing high-profile CloudStack and OpenStack projects as VP of Engineering at Cloudscaling. Andrew also has a passion for events and organizational learning. He's currently a core organizer for devopsdays and a member of the program committee for several other conferences. You can follow him on Twitter at @littleidea.
