[Note: please be advised that this transcript contains strong language]
Transcript
Ertman: With a title like this, "Are we really cloud-native?" it is probably obvious that I'm going to talk about cloud-native computing. As the spin doctors in the marketing departments of large vendors are starting to own this term and apply it to their products and their offerings, I think it would be best to start off with some sort of a definition, to figure out what cloud-native computing actually is.
Don't worry, I did my homework. I scanned the entire Internet, collected all the definitions, and finally came up with this common denominator, which is probably something like "Blah, blah … Kubernetes." In case you don't know what Kubernetes is, Kubernetes is the Greek god of spending money on cloud services. This quote is not mine, it's from Corey Quinn - he's a great guy, follow him on Twitter. This is kind of the direction where we're going right now. Somebody coined a term and now it's the spin doctors trying to get ahold of it and applying it to products.
While we're doing introductions, let me introduce myself. My name is Bert Ertman, I'm from the Netherlands, I work for a company called Luminis. I'm a fellow and VP of technology, and basically, I consider myself to be a Java/cloud postmodernist, as I think that we've entered the postmodernist age with Java. We have seen classic Java in the '90s, we've seen more modern versions of Java in the 2000s and the 2010s. Now we're in the age of mixing Java with all sorts of other stuff, and everything goes again, so it's postmodern Java.
Cloud Computing
With that out of the way, let me talk to you about cloud computing. This is us - or me in this case - wondering what is happening with this mythical beast called the cloud? I can tell you a lot has been happening over the past couple years. The cloud is obviously not something new, but in recent years, we've seen a bunch of analyst papers showing up, where companies like Gartner and also others have predicted and then confirmed the rise of cloud computing, as opposed to more traditional non-cloud on-premise development styles.
Basically, they are saying that the cloud services market is really growing fast, and it's growing at the expense of traditional offerings, which is Gartner speak for "Java EE is dead, long live the cloud." I'm a Java guy, so I put Java EE there, which has nothing to do with the whole Jakarta/Java EE thing going on right now, but that's only making it worse. Still, if you replace Java EE with something like Spring, or even .NET, it's probably still true. This cloud thing is coming at the expense of something else. I think the only thing you can't replace it with is probably JavaScript, because as we know, JavaScript is not dead, it's the undead, so that will probably never go away.
What is cloud computing to people in general? What is cloud computing to enterprises these days? I call in the help of a friend of mine, I consider him to be the typical Dutch entrepreneur. If I asked a typical Dutch entrepreneur, "What is cloud computing to you?" then he will probably tell me something like this: "It's like computers on the Internet." We even designed our own bumper stickers for it. There are people wearing this with pride; there is no cloud, it's just somebody else's computer at the other end of the Internet. This was probably true, say, five or more years ago. That was the state of cloud computing, it was just virtualization, but then taken elsewhere.
Now, that isn't really doing the cloud any justice. If you have to explain the cloud to enterprises right now, what I would do is something like this. I created some excerpts from a slide in some Microsoft presentation where they put it up like this. They say, "There's always something like on-premise development." There will always be some on-premise component, whether you call that edge computing or just plain old on-premise, that doesn't really matter.
Then if you have to explain the cloud, the cloud these days basically consists of three main components. Of course, there's the old stuff that we now know, like infrastructure as a service, which provides the primary building blocks for compute, networking, storage, stuff like that. Then we have PaaS, which is more of that stuff but with an API stuck onto it, so we can just wire it into our application development. Think of web servers, databases, message queuing systems, stuff like that.
Then there is the third component, which is still a little bit newer and which also breaks with the "as a service" naming tradition: it's called serverless these days. It's kind of a funny name, in my opinion, but it is one of the most interesting things happening in the cloud right now.
The Evolution of Compute
When I talk about it in this way, then the typical enterprise customer would say, "What is this serverless witchcraft? Can you elaborate a little bit more?" I would say, "Yes, this serverless thing, you can see it as the evolution of compute, or you can see it as the evolution of virtualization, if you will." If you think about it, not that long ago, when we were done developing a piece of software, we were basically installing it on a real physical machine. That was like 15 years ago. Some people are still doing it.
Then virtualization took over, which basically meant we bought even bigger physical boxes, and then we started to emulate multiple virtual computers on top of that. Then at some point, somebody decided, "Let's move these computers to the other end of the Internet," and this is how cloud computing started. That improved for a little while and then we discovered something in the Linux kernel and we got containers. To some people, containers are still the pinnacle of virtualization or the pinnacle of compute. There's a little bit of news there because now serverless is taking over, which basically means we take a smaller piece of software that we want to run, and then we glue it to some sort of compute at the moment you want to run it.
This might seem funny at first, but there's one thing that has been holding us back for many years now. This is that maybe after 50 or 60 years of doing compute and developing software, we are still holding on to the concept of a computer. Whenever we create a piece of software, in order to run it, we need a computer. This is a strange thing, because computers in some ways compare to pets. Look at it, it's adorable. If you get a new computer or a new pet and you are really fond of it, you give it a name, and you take really good care of it. But then at some point, it becomes a little sluggish, and you also have to walk the dog when it's raining outside. The maintenance becomes a little bit of a drag. It gets even worse if your pet goes belly up, and the same thing can be said for computers. Then you get really upset.
Instead of thinking about computers as pets, we should learn to think about them as cattle. In this example, I have a group of cows. With a group of animals, I am not so much interested in the individual animal, I am interested in what the group of animals produces. In this case with cows, it could be milk; in the case of a bunch of computers, what they produce is compute. Serverless, to me, is like a saw, an instrument to saw off the last letter of the word "computer". This frees me from thinking in terms of deploying my software on an actual computer. If you can get rid of this idea, a whole new world will open up, as I will explain in a minute.
Middleware as Managed Services?
My little Dutch entrepreneur friend is like a little bit confused here, but what I'm actually talking about is reimagining computing without computers. Yes, spoiler alert, also in the serverless world, there will still be computers. There will still be servers that will eventually run your software, but I no longer care about them. The thing is, it all started off with running like a function on top of some sort of compute, but I think the direction in which this is going is far more interesting, because if you dare to take this one step further beyond compute, I would say, what if we can reimagine our middleware, or even higher-level services, as completely managed services which we can consume as an API? This will free up an enormous amount of time and investment that we usually have in installing and running these kinds of things on hardware, on infrastructure.
If we no longer have to take care of installing, running, patching, and managing our database, our web servers, our storage solutions, whatever, that will free up an enormous amount of time and effort that we can put into developing software. In a way, you can compare this whole trend of serverless with what happened to the steam machine back in the days when electricity was introduced. If you think about serverless in this way, then in a way, we can now start to take both compute, networking, storage, but also middleware services and even higher-level services, out of the wall socket, only when we need them.
It comes like a utility, and you also only pay for it when you actually use it, which is maybe the economic model that was evangelized along with the whole idea of cloud computing, but never became a reality, because every time we fired up a virtual machine in the cloud, we left it on 24/7, so we paid for it 24/7. Now, if you only use compute, or a database, or networking or whatever, on-demand, just for a little while, you only pay for it for a little while. This is a really interesting economic model. To me, this whole serverless thing is a very important aspect of the direction where the cloud is actually going right now. The old idea that the cloud is just a virtual computer at the other end of the Internet certainly does not do it justice; it's not really keeping up with the actual state of affairs.
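To make this idea of consuming middleware as an API a bit more concrete, here is a minimal sketch in Java. It assumes the AWS SDK for Java 2.x and a made-up bucket name; the point is that there is no storage server to install, patch, or run - you just call the managed service.

```java
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class ManagedStorageSketch {
    public static void main(String[] args) {
        // The managed object store is consumed as an API call,
        // instead of installing and operating our own storage server.
        try (S3Client s3 = S3Client.create()) {
            s3.putObject(PutObjectRequest.builder()
                            .bucket("example-orders-bucket") // hypothetical bucket name
                            .key("orders/42.json")
                            .build(),
                    RequestBody.fromString("{\"orderId\": 42}"));
        }
    }
}
```

You pay for the storage and the requests you actually make, which is exactly the utility-style billing described above.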
Serverless is really important. We're not there yet. Yesterday there was an excellent presentation on why serverless is improving in one direction, but then going back in some of the other directions. I would certainly not claim that it is perfect yet, but I think the direction where we're going is really interesting, and it's still a very young technology. I'm really eager to see where this will be going in the next couple of years.
What is Happening in the World?
Let's leave cloud for a little bit here. Let's talk about what is happening in the rest of the world. Don't worry, we’ve only got 35 minutes, so I will not turn it into some political rants. What I'm talking about is, apart from technology, where are we going? What's the direction where we are going in? This direction has been laid out for us a bunch of years ago, when this guy, Marc Andreessen wrote the famous paper about how software is eating the world. I think this is still true, and maybe it's even going faster right now. There are lots of young companies to prove that they have overhauled existing business models. There are lots of old companies no longer around to prove that this is actually happening.
In order to deal with this new world order that came upon us, with traditional business that needed to evolve and adapt faster, we needed ways to think about how we can move faster without breaking all the stuff that we have around. If you look at most enterprises, they are not like Silicon Valley unicorns which are only interested in doing new stuff, and more new stuff, and even newer stuff. They have their existing business to take into account. They have lots of stuff that is already there and that they need to take care of. You cannot just move fast without breaking stuff.
In the meantime, in the past 5 to 10 years or so, we came up with some of the answers in order to try and move fast and not break things. These answers comprise things like this. In order to adapt more easily to change, we invented the whole idea of microservices, and we tried to adopt that. Then in order to not break things, we came up with processes like CI/CD, and we introduced containers to make things portable across environments. Then we sprinkled a bit of process on top of that, so we tried to transform not only our development teams, but also our business, into an agile organization. We only build the stuff that we actually need, and we build it in a timely fashion.
Then we sprinkled a bit of DevOps on top of that, to integrate the ideas of CI/CD and containers into actually deploying things into production faster. While applying all of these things, we eventually ended up with something like this. This is good - this is a screenshot from a Netflix architecture document. If this is what Netflix has, then it must be good, because they are the poster child for microservices. If this is their architectural landscape, and our architectural landscape starts to look like that, then it must be good.
This screams microservices all over, so this is good, but somewhere along the way, we probably ran into some trouble. Transforming is not only about adding new and more technology into the mix; it also involves some very heavy lifting in terms of transforming the organization along the way. One of the first things that we actually learned when starting to do this is that 80% to 90% of real enterprise budgets are being spent on maintaining existing systems. That's just maintaining the status quo. The trouble with this is that you only have 10% to 15% left for doing new things, and that 10% to 15% margin is also where your money and time for experimentation have to come from. If you want to start experimenting with new stuff - new technology, new process, whatever is new - then you've got to find the time and the money to do it. This proved to be really hard.
Then, even if we take our existing workloads - let's say we have our Java EE app, our monolith, and we bring it to the cloud in a lift-and-shift fashion - we take it from real or virtualized hardware on-premise, we move it towards the cloud, and we run it on top of some virtual machine at the other end of the Internet. Is that really helping us out in our journey of getting more money and time available, saving money on the infrastructure, so we can spend it on other stuff? No.
It turned out to be a false promise of the cloud that if you just move your applications into the cloud, you start saving money. In most cases, that's just not true. If you look at the total cost of ownership, you probably spend more money running your existing apps in the cloud than you do running them on-premise. Then some companies started to say, "It's probably the framework and the app server that are evil," and some books showed up on doing cloud-native Java with Spring Boot, for example.
What did they do? They started out small, they started rewriting their logic. Then they checked all the boxes in the Spring Boot configuration. Then they ended up with what I would like to call a fat jar, an even fatter jar, and eventually an inverted app server. I don't see the difference between running an enterprise Java application in an application server and running it in a Spring Boot server with all the boxes checked. You may have all the Netflix OSS boxes checked, but it doesn't necessarily mean that you get any advantage from running your stuff in the cloud. To me, it's just the application server upside down.
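As a rough illustration of what such an "inverted app server" looks like, here is a minimal, hypothetical Spring Boot application - the class name and endpoint are made up. The web server is embedded inside the application, and everything is packaged into one fat jar.

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// The app server now lives inside the jar (an embedded servlet container),
// rather than the application living inside an app server.
@SpringBootApplication
@RestController
public class OrdersApplication {

    @GetMapping("/orders")
    public String orders() {
        return "[]"; // placeholder response, for illustration only
    }

    public static void main(String[] args) {
        SpringApplication.run(OrdersApplication.class, args);
    }
}
```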
Then some might say, "You forgot the magic ingredients," because you forgot to put Docker in. Put it in a Docker container, and then probably you're better off. Ok, now I made it a little bit more portable, so I could probably run it on my laptop on some test environment, and then move it into the cloud. Still, I'm not really getting the benefits of what the cloud has to offer. This is when the ideas of microservices were introduced. We say, "Ok, now we have to break up this monolith, and then we create lots of smaller services." Still, we put them in Docker containers and then we put them on virtual machines in the cloud.
While this may be a good attempt at doing microservices, we're probably wasting even more resources, because we now need even more virtual machines in the cloud, and we've got to find a way of managing a bunch of containers at scale. So we introduce new problems while trying to solve other problems. You can ask, "Are you saying that microservices are a bad idea?" No, that's not exactly what I'm trying to convey here, because to me, microservices are a modularity tool. Modularity is always a good thing, because it's breaking up bigger structures into smaller things so they are more easily solvable. It's hiding the details of implementation and then connecting stuff together, using some sort of contracts.
This is the first thing that you learn when you start to do software design: cohesion over coupling. There are a bunch of ways of introducing modularity. Some do it on the code level. If you have a large codebase, then you can start to introduce modularity on the language level. With things like the Java module system, or with OSGi if you like, you can start to break up a bigger codebase into smaller modules and then glue them together. When the pace of innovation or the pace of change goes up, you are more confident that when you start modifying things over here, things over there will not break down. Modularity is always a good tool. To me, microservices are just another level of doing modularity. If you break up the monolith, if you break up the large codebase into smaller parts, then you are doing modularity at another level. Microservices, to me, are in essence a modularity tool.
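As a small sketch of what modularity on the language level can look like, this is roughly what a module descriptor looks like with the Java module system; the module and package names are made up for illustration.

```java
// module-info.java - a minimal, hypothetical module descriptor.
// Dependencies are explicit and only the API package is exposed;
// implementation packages stay hidden behind the module boundary.
module com.example.orders {
    requires com.example.billing;    // hypothetical dependency on another module
    exports com.example.orders.api;  // the contract other modules are allowed to use
    // com.example.orders.internal is deliberately not exported
}
```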
Then, if you start to break up the monolith into smaller pieces and you put a network in between, you suddenly realize that there's a very high price to pay, because decomposing it into smaller services does not come for free. If you start moving those kinds of things to the cloud, especially to public cloud providers, you're probably not the only one on the network. What happens with dependencies between services? What happens when the network starts to play tricks on you? The fallacies of distributed computing, from the 1995 paper, have probably never been truer than today.
Designing distributed solutions, designing decomposed solutions, is very hard, because there's a whole bunch of things to take into account. We try to solve one thing, and we get a whole bunch of new problems or challenges in return. The answer to that is probably Kubernetes, because that's what everyone is saying, but is that true? Let me add Kubernetes into the mix. There we go, just another layer of abstraction. Yes, maybe we find some answers to the problems I listed on the previous slides. Maybe we now have some better resource utilization, so we can do with fewer virtual computers; that's a little bit of optimization there. But running Kubernetes as a platform, that is a complicated beast, and I've seen lots of companies struggle with it.
Everywhere you go these days, even a conference like this, there are at least a bunch of sessions on Kubernetes. Everybody's telling you, "Go do this. This is the future of everything. Go do Kubernetes." Don't get me wrong, I'm not against Kubernetes. I think that the problems that it solves, it solves pretty well, but then I'm trying to say here that Kubernetes is not for everyone to run and manage in production. I've seen lots of teams struggle with getting their setup right, and then running that into production. That is really hard and it's stalling lots and lots of progress, because it's such a complex beast.
DevOps
Another huge problem area is DevOps. DevOps is not new, how can this be a problem area? The thing with DevOps is that there are parallels to draw between DevOps and agile. If you think back in time, when we started doing agile, the whole idea of Agile was really well-received. I think it didn't take us long to understand what it was about. It didn't take us long to understand what the benefits were, and then we started introducing it.
It took us quite a few years to reach a certain amount of maturity with things like agile adoption. This is not because we were stupid as developers; this is because we had this heavyweight thing dragging behind our feet, in that we had to transform the organization as well. Agile only gains some maturity once you also have buy-in from the business. If you think, "Product owners, who needs them? We can ask the architects," that's probably a very stupid thing to do, and that's what we learned over the years. DevOps is basically the same thing. It is not about learning a few new tools, or mixing and matching some software engineers with some out-of-work DBAs; that's probably not the way to implement DevOps.
The problem with DevOps is dinosaurs - and I'm not talking about the DBAs here, by the way. It's called DINO, and DINO is DevOps In Name Only. The thing with that is, dinosaurs are extinct. If you don't want your organization to go extinct, then you have to pay attention to how you implement DevOps.
Let's think about how traditional organizations run their infrastructure and their operations. In most traditional operations, they try to standardize everything, and for good reasons. They try to standardize on a hypervisor, on all infrastructure components, even on programming languages. Everything. This is for good reasons, because on the one hand it's to manage complexity: we standardize on this, so we don't have to do that. It's also to apply some sort of a standard way of working, because in very heterogeneous landscapes, it's pretty nice if there's a select set of choices instead of "pick anything you like."
Also, in some way, standardization is some sort of a weird productivity tool. If you depict it like this - usually when somebody shows you a picture of an iceberg in the middle of a presentation, it's bad news, but in this case, it's not that bad - what we actually do by standardizing is create this waterline, and everything under the waterline is standardized: our infrastructure, programming language, everything. Then our development teams just concentrate on getting a deliverable out the door, which is usually something portable, like a container, or maybe an EAR file or a fat JAR. The only thing we have to do is land that thing on top of standardized infrastructure. In that way, it is some sort of a productivity tool, because everything under the waterline is what you expect - at least you know what to expect. Then you just create the software, you put software and infrastructure together, and this is how it works in most organizations.
To some degree, this solves problems. To some other degree, it's probably not what you want if you want to move fast and not break things. If you try to translate this to the current time and space that we live in, and you try to reap the benefits of cloud computing, then this iceberg will look a little bit different - it will probably look a bit like this. Now this whole separation of where the application is and where the infrastructure is, it's gone. What we're trying to create here is a mix of functional components, infrastructural components, even more infrastructural components to run our infrastructural components, and everything in between. We are mixing and matching all of that into a single solution.
Whether you're doing microservices or not, think for a second about what an application is, and try to come up with an answer from, let's say, 10 years ago - the 2009 answer to this. That would probably be, "An application is a bunch of code that I have to build and test together. I toss it over the wall to the ops team, and I hope they can get it to run on a bunch of servers that they have provisioned. I hope that there are users actually using this application, so we're not wasting too much money on our investment in infrastructure and everything else."
That was a pretty obvious way to do it, but now, the 2019 answer would probably be something like this: "An application is a mix of a bunch of managed services in the public cloud, connected and customized with our own highly differentiated business logic. Then we glue everything together, and it runs and bills only when needed. When nobody uses it, we pay nothing, and if there are a lot of users, then we pay some more money, but we're probably fine with that, because that seems to be a nice economic model."
Cloud-native, in this sense, is some sort of a DevOps journey to me. If you want to be cloud-native, it's not about picking a framework or two. It is a transformational journey and getting DevOps right. I'm not sure I'm allowed to say it, but what it is actually about is hooking stuff together, let's call it that. It's hooking stuff together, because if you want to reap the benefits of the cloud, you really have to know all the services that your cloud provider has to offer. If you package some software component as a Docker container and you take that container to your cloud provider, say AWS, then there are at least four or five ways to run Docker containers on AWS. Figuring out which way actually suits that particular application or that particular component, that is what cloud-native is actually about. It's about keeping up with all of the advancements that your cloud provider is making, and then trying to apply those to your software components.
Gluing your software and your infrastructure together is not an end state. This is a continuous journey. You keep on improving this as new services and new components see the light of day. Because we're doing microservices, these services could also be short-lived. If a service changes or gets replaced by another service, then maybe the new way of deploying it is different from the old way of deploying it, because new services have become available. It's not about thinking about servers and managing servers at scale; it is about managing a mixture of both software and infrastructure at scale. It's about services, consuming everything as APIs, and gluing that together as a deployment.
Like I said, it's a DevOps journey, because it's about modernizing your infrastructure and the process of actually moving faster and not breaking things. The organizational culture plays a large role in it. I will get back to that later on in the presentation. If you think about it - this is the picture that I showed you earlier - there are these three components that we can pick and choose from in cloud computing.
I want you to think of these as a collection of boxes full of Lego components. We have a box of Lego parts with the primary building blocks for infrastructure as a service, then we have the stuff for PaaS, and we have the serverless stuff. My advice would be that when you start assembling your logic and your infrastructure together, you start from the latter - in this case, you start from the serverless box. If the serverless box provides you with good options to run that logic on, then I would definitely go for that.
However, it highly depends on your workload, and whether you're meeting your non-functional requirements, and even whether you're meeting your economic requirements. Even when you do serverless, and it's pay-as-you-go, it can still be more expensive than if you solve it in a more traditional way. It highly depends on the workload, so start by assembling your application from the serverless box, and if something is not in there, if it's not fitting, then you go to the PaaS box and you start applying some of the PaaS pieces to the solution. If that still isn't the complete answer, then you mix in some of the old infrastructure as a service stuff.
Maybe you still mix in some virtual instances, and you mix in some of the storage types of the old cloud world. Eventually, you come up with a solution like this. Once again, this is an AWS example just for illustration purposes, but if you look at this example, I can highlight a few pieces of it. You see that this is a mixture of, say, serverless technologies like Lambda, S3 storage, API Gateway, and stuff like that. I mix in some PaaS stuff - maybe there's some Beanstalk for some old servers in there. Then I also have some infrastructure as a service, because I need to run some other pieces on some virtual computers.
Then tying everything together is eventually my application, or part of my application landscape, which I then deploy to my cloud provider. Getting to know the services that your cloud provider has to offer, and really tuning your software to the underlying infrastructure, that is what cloud-native is basically all about. Technologies or frameworks are not going to make you cloud-native. If somebody says, "This is the cloud-native framework," that's probably a big lie. It is the way you use them and embed them into your process of delivering value to your business, because then we are finally entering the stage where we get the true value of the cloud.
Getting the Best from the Cloud
If cloud is no longer just virtual computers, then what is it about? To me, it is about potential for economic disruption. Instead of doing these huge upfront investments in buying hardware, and having dedicated personnel around to basically operate your data center and run all your infrastructure and manage all the infrastructure, I can now just use it on the go. I can just integrate it as an API, so I can start experiments. Even if I'm in a classic enterprise situation where there's a tiny amount of money to start experimentation with new stuff, this is a really fitting model. I can take some cool or higher-level services from cloud providers, start to experiment with this, and if it's not going into the direction that I desire, I can just cut it off, switch everything off and I'm no longer paying for it, and my investment has been tiny. I still maybe save some money for other experiments.
If it is going the direction that I want it to go, then I can try to put it into a new development project. This is how you can start to adopt new technologies out of experiments. This is where the economic disruption lies. It doesn't necessarily mean that it's going to be cheaper, so, if your main motivation for going to the cloud is going to be cost savings, it highly depends on what you choose. Cost optimization is also a responsibility of the DevOps team. Your team of engineers has to keep an eye on the bill.
In some organizations, this is going to be really weird, because if you say, "My developers or my DevOps guys or girls, they have to look at the bill in order to figure out if this or that is going to be a better way of running things," then there will probably be departments that will say, "No, you can't look at our cloud bill. That's classified information." This is just a signal of how we need to transform our organizations, so we need this transparency in order to make the optimal choices here.
Cloud definitely has an edge for experimentation, like I already mentioned a couple of times. I think the key thing here is that in determining where your true strategic value lies as an enterprise, is it in operating your own Kubernetes cluster and trying to play cloud provider by yourselves? Or is it in a far more functional perspective, where you have a competitive edge because you can deliver a great service, instead of playing cloud provider? For some companies it is about playing cloud provider, and for some companies, their interests are elsewhere. In that case, I would highly recommend trying to reap the benefits of what the cloud providers have to offer in that space.
I would like to add that friends probably don't let friends build their own Kubernetes platform. If you think about it, if you want to get the best from the cloud - and I have nothing against Kubernetes - if you're fine with that, and if you think doing it yourself will yield you some strategic value, fine, go ahead. But we are probably not all networking experts, or experts on all the stuff that you will run into when you start doing distributed systems at scale. If I have the choice between doing this myself or leveraging some higher-level managed service offered by my cloud provider, and it's good, then I would definitely choose the latter.
Is Java a Natural Fit?
Another interesting question that we get asked a lot is, in this new world of cloud computing, is Java still a natural fit? If you look at the examples that you see online, it's JavaScript and Python all over the place. Is it time to get rid of Java and start learning another language? This morning, I think it was Brian Goetz who said in his talk, "If I got $1 for every time that Java was declared dead, I would be a very rich man." I think that that is very true. In this case, I would say there are advancements in the language and in the ecosystem which will make Java a good fit for cloud computing, if it's not already there. There are some really cool things like GraalVM, for example, and compiling to native code. Some of the cloud providers allow you to bring your own runtimes. If you want to do, say, Lambda on AWS, you can bring the GraalVM runtime along with that. That will probably solve some of the cold start problems which you can face if you just do regular Java on Lambda instead.
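As a rough sketch of what "just the business logic" looks like on a function service, here is a minimal Java handler for AWS Lambda using the RequestHandler interface from the aws-lambda-java-core library; the class name and input shape are made up. One approach people take is to compile something like this ahead of time with GraalVM native-image and ship it as a custom runtime, which helps with those cold starts.

```java
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// A minimal AWS Lambda handler in Java: only business logic,
// no server or container to install, patch, or keep running.
public class GreetingHandler implements RequestHandler<Map<String, String>, String> {

    @Override
    public String handleRequest(Map<String, String> input, Context context) {
        String name = input.getOrDefault("name", "world"); // hypothetical input shape
        return "Hello, " + name;
    }
}
```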
I think Java will find its way in the cloud. A better question would be, "Are Java programmers a good fit in this cloud age?" The answer to that is, probably, if Java is your only skill right now, then you're in for a hard time. Because with DevOps, there comes a whole new set of responsibilities and problem areas that you need to be knowledgeable about. I found this great roadmap online, I'll show you the link to it later on in the presentation. What does it take to go from being a programmer to being a cloud engineer? Apart from all the details, it starts with picking a language or learning a language. Then you go into understanding operating system concepts. You learn about using a terminal, you learn about managing servers, about scripting, low-level networking stuff, middleware services, infrastructure as code. There's a lot in here.
All of these things you need to master in your DevOps teams in order to create software components and run them in this brave new world of cloud nativeness. By the way, you have to pick a cloud provider, so let's look at one of those cloud providers. This is AWS' release pace over the last couple of years. In 2013, they released about one new feature a day. Last year, they broke the 2,000-new-features-a-year barrier, which comes down to seven features a day. How do we keep up with all of this new stuff being poured on top of us?
We knew this when we signed up for this profession: when you want to do IT, you're entering a profession of lifelong learning. To be honest, for the past 15 years, we have been way too comfortable with that, but now there are so many things changing at a very fast pace, and you have to master all of them - not by yourself, but at least within your teams. Doing this requires another piece of organizational change. The organization has to be aware of this, and there needs to be room in your daily work routine to master all of this. Organizational culture is very important.
Cloud-Native Culture
I ran across some really good cloud-native culture killers. Most of them are about the business not really trusting IT. Whatever we come up with as developers or as ops will probably never work, because there will never be any true BizDevSecOps - or whatever the latest abbreviation is these days - unless there is buy-in across the organization to get this to work. The other thing which is really dangerous is the cost-cutting mindset. Once again, if you're a heavily VC-funded unicorn, then you're probably not worried about this, because you only think about new opportunities. But if you're a traditional enterprise, then there's a whole bunch of cost centers there. Cost centers are places that we're not really fond of and that we try to optimize all the time. We do reorganizational stuff, and we try to optimize the cost centers out of the way.
If you think that IT is the place to do cost optimization, then you'll probably never reap the benefits of going down this cloud-native route, because the price you have to pay to get there is way too high. There's way too much upfront investment that you need to make in order to finally reap the benefits. If you just think that the cloud will save you money, there are now more than enough war stories of companies that had this exact same thought that will immediately prove you wrong: "By going to the cloud, we now pay far more for running our infrastructure." Those kinds of stories are all around.
This is a very dangerous mindset, not only because you will probably never get DevOps implemented in organizations like this, but also because culture seems to be the number one reason for people leaving. If you want your good people to leave, then make sure you have a really shitty culture; then they will all go. It's not about the pay, it's not about the managers, it's about culture. If people don't like the culture, then they leave. Culture is something that you should take into account.
Let's try and turn this around, then. What are some really good cultural aspects for trying to achieve this DevOps journey, this cloud-native journey? I think it starts with the whole thing that we call digital transformation these days, which is another horrible buzzword, so please forgive me. But I think for most of the enterprises, it is about time that they realize that they are no longer a bank or a financial institution, but they should realize that they're an IT company, which happens to be in the financial services space.
This is a really important mindset. We can tell from many stories out there that the companies that started to adopt this mindset made huge leaps forward, because then IT is not a cost center anymore, and they need to adapt their process of delivering software. This is a far more natural match with things like adopting agile and adopting DevOps. If you think back to this parallel between agile and DevOps: it took us maybe 5 to 10 years to get agile right. If we apply that metric to DevOps, it will take us 5 to 10 years to get DevOps right. With all of the power of the cloud out there for your competitors to reap, you might not have 5 to 10 years, because you will probably be out of business in that time. This is a very disturbing thought.
The cloud should actually be seen as a potential for economic disruption rather than as a way to save costs. It's also a perfect way to start experiments. That does not necessarily mean that if you start experiments in the cloud, you eventually have to run them in production in the cloud as well. If you like what you see, you can also bring it back on-premise and still run it on-premise, but the cloud will just give you easier access to all sorts of resources and higher-level services which you can start to leverage to really do some cheap experimentation. Then eventually, that will hopefully lead to faster time to market. That was basically what it was all about when we talked about software eating the world.
Then the final and maybe the key ingredient to it all is that you have to make sure that your engineers are broadly skilled. If Java programming is all you can do, that won't cut it anymore. Some of the customers that I have been working with say, "We do the 20%-time thing, where we just give you a day a week to start experimenting with services that our cloud provider of choice has to offer." It's not a day to build games or to play games; it's still work, and you start to experiment and explore the world that your cloud provider has to offer. You start to explore the tools and the frameworks which are out there, so that you can leverage these services and these skills.
Sometimes they also do a certification because for some reason, we like to get certified. Or maybe we don't like to get certified but our employers like us to get certified. Then you get some time for that as well. Having this time to experiment, this time to learn, also to learn from each other, that is a really important aspect in making this transition.