John Willis on the "State Of The Union" for DevOps

by João Miranda on Jun 22, 2014 |

John Willis, one of the leading lights of the DevOps community, addressed the "State Of The 'DevOps' Union" at DevOpsDays Amsterdam. He started by mentioning the findings of the 2014 State of DevOps Report, went on to discuss Software Defined Everything and asserted that the future will be built around "consumable composable infrastructure".

The 2014 State of DevOps Report, which gathered more than 9,000 responses, suggests that IT performance correlates directly with business performance. It confirms that DevOps practices increase IT performance, which makes the future of DevOps unusually important. The report also found that job satisfaction is the number one predictor of organizational performance.

John asserts that we are witnessing the decoupling of hardware and software: everything is becoming programmable. This, in turn, gives rise to the concept of Software Defined Everything (or Software Defined Data Center).

John also sees a trend towards consumable, composable infrastructure: infrastructure built from small, independent modules that do not require deep expertise to use and are therefore accessible to a wider community. The state of the art is approaching that ideal.

Software Defined Everything

John Willis thinks that we need to reach a point where an application just declares its infrastructure requirements, such as CPU, memory, or network security. From those requirements, future tools and the supporting infrastructure should create and configure both the servers that host the application and the virtual perimeter around them. This virtual perimeter should include all the expected pieces, such as firewalls and load balancers.
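
To make the idea concrete, here is a minimal sketch in Python of what such a declarative model could look like. The requirement keys and the `provision` helper are hypothetical, invented for illustration; they are not the API of any existing tool.

```python
# Hypothetical sketch: an application declares what it needs, and a
# provisioning layer (not a real tool) turns the declaration into a plan
# of servers plus a virtual perimeter of firewall rules and a load balancer.

app_requirements = {
    "name": "webshop",
    "cpu_cores": 2,
    "memory_mb": 4096,
    "instances": 3,
    "network": {
        "open_ports": [443],      # firewall: only allow HTTPS in
        "load_balanced": True,    # put a load balancer in front
    },
}

def provision(requirements):
    """Derive a concrete plan (servers + perimeter) from the declaration."""
    servers = [
        {"host": f"{requirements['name']}-{i}",
         "cpu": requirements["cpu_cores"],
         "memory_mb": requirements["memory_mb"]}
        for i in range(requirements["instances"])
    ]
    perimeter = {
        "firewall_rules": [{"port": p, "action": "allow"}
                           for p in requirements["network"]["open_ports"]],
        "load_balancer": requirements["network"]["load_balanced"],
    }
    return {"servers": servers, "perimeter": perimeter}

plan = provision(app_requirements)
print(len(plan["servers"]))                 # 3
print(plan["perimeter"]["firewall_rules"])  # [{'port': 443, 'action': 'allow'}]
```

The point of the sketch is the direction of flow: the application states *what* it needs, and the tooling derives *how* to build it.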

Software Defined Everything (or Software Defined Data Center) consists mainly of Software Defined Computing, Software Defined Networking and Software Defined Storage.

Software Defined Computing, a.k.a. virtualization, has been the focus of the IT community and of tools like Chef or Puppet. John Willis believes it is being disrupted by containers and commented that Docker must be the fastest-growing technology he has seen in a long while. As an example of this trend, Google has committed to using Docker, despite having used containers internally for years. Computing was also the first area where hardware and software decoupled: the advent of x86 servers fostered the birth of a myriad of OSes.

On the networking side, Software Defined Networking (SDN) requires the decoupling of the data plane from the control plane. The data plane handles packet forwarding, while the control plane determines how a switch or router interacts with its neighbors. OpenFlow leverages this decoupling and, according to John, is to SDN what HTTP is to the Web: a communications protocol that allows remote programmability of network switches and routers. The rise of bare-metal switches, pared-down switches designed to be controlled by software, reinforces that decoupling.
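
The split between the two planes can be illustrated with a toy model in Python. This is a deliberate simplification, not the actual OpenFlow protocol: the controller makes forwarding decisions centrally and pushes rules into each switch's flow table, while the switch itself only matches packets against that table.

```python
# Toy model of SDN's data/control plane split (not real OpenFlow).

class Switch:
    """Data plane: match packets against a flow table, nothing more."""
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # dest_ip -> output port

    def install_flow(self, dest_ip, out_port):
        # Invoked remotely by the controller (the control plane).
        self.flow_table[dest_ip] = out_port

    def forward(self, packet):
        # A real switch would punt unknown destinations to the
        # controller; here we simply drop them.
        return self.flow_table.get(packet["dest_ip"], "drop")

class Controller:
    """Control plane: decides routes centrally and programs switches remotely."""
    def __init__(self):
        self.switches = []

    def connect(self, switch):
        self.switches.append(switch)

    def set_route(self, dest_ip, out_port):
        # One central decision, pushed to every switch in the network.
        for sw in self.switches:
            sw.install_flow(dest_ip, out_port)

controller = Controller()
sw1, sw2 = Switch("sw1"), Switch("sw2")
controller.connect(sw1)
controller.connect(sw2)
controller.set_route("10.0.0.5", out_port=2)

print(sw1.forward({"dest_ip": "10.0.0.5"}))  # 2
print(sw2.forward({"dest_ip": "10.0.0.9"}))  # drop
```

The switches contain no routing logic at all; replace the controller and the network's behavior changes, which is exactly the programmability SDN is after.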

Even in the storage arena, the hardware/software decoupling is taking root. IP-based storage uses IP protocols to simplify the management of storage infrastructure, thereby facilitating Software Defined Storage approaches.

Consumable Composable Infrastructure

A few years ago a physical server might require eight weeks to provision. That time has steadily decreased with a succession of improvements: from virtual machines, through IaaS and PaaS, to containers that launch in 500 ms.

Docker has brought several significant improvements. It commoditized containers and applies software development metaphors to infrastructure management: Git-like workflows drive the container's changes. John considers portable container images, which work a bit like binaries, "game changers". A portable image is infrastructure packaged as an application. Docker lowers the barrier to entry and makes infrastructure both more consumable and more composable. John also suggested that hypervisors might not be needed at some point in the future, as containers might replace them.

Docker containers need to run on a server, but which one? Tools like Apache Mesos, used by Twitter and Airbnb, or Google's recently open-sourced Kubernetes perform that orchestration, or scheduling. John Wilkes has a presentation on how Google manages its container clusters.
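
The placement problem these schedulers solve can be sketched with a toy first-fit algorithm in Python. This is a deliberate simplification; Mesos and Kubernetes use far richer models (constraints, affinity, preemption), but the core question is the same: which server gets each container?

```python
# Toy first-fit scheduler: place each container on the first server with
# enough free CPU and memory. Real schedulers (Mesos, Kubernetes) weigh
# many more factors; this only illustrates the placement problem.

def schedule(containers, servers):
    """Assign each container to the first server that can hold it."""
    placements = {}
    for c in containers:
        for s in servers:
            if s["free_cpu"] >= c["cpu"] and s["free_mem"] >= c["mem"]:
                s["free_cpu"] -= c["cpu"]   # reserve the resources
                s["free_mem"] -= c["mem"]
                placements[c["name"]] = s["name"]
                break
        else:
            placements[c["name"]] = None    # no server has capacity
    return placements

servers = [
    {"name": "node-1", "free_cpu": 2, "free_mem": 4096},
    {"name": "node-2", "free_cpu": 4, "free_mem": 8192},
]
containers = [
    {"name": "web",   "cpu": 2, "mem": 2048},
    {"name": "db",    "cpu": 2, "mem": 4096},
    {"name": "cache", "cpu": 4, "mem": 1024},
]

placements = schedule(containers, servers)
print(placements)  # {'web': 'node-1', 'db': 'node-2', 'cache': None}
```

Even this toy version shows why scheduling matters: "web" fills node-1's CPU, pushing "db" to node-2, after which no node has four free cores for "cache".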

In a containerized, cloud-enabled world, operating systems also need to adapt. CoreOS and Project Atomic are examples of this new breed of operating system.

InfoQ.com and all content copyright © 2006-2014 C4Media Inc.