Bio: Simon has always found himself dealing with complex systems, whether in behavioural patterns, the environmental risks of chemical pollution, developing novel computer systems, or managing companies. These days Simon works as a researcher for CSC's Leading Edge Forum. He is a passionate advocate and researcher in the fields of open source, commoditisation, innovation and cybernetics.
Chris Swan's question: Hello. This is Chris Swan, one of the Cloud editors at InfoQ. I am here with Simon Wardley at QCon London 2014. Simon is going to be doing a presentation later on about mapping, and the video for that will be available elsewhere at another time. That is what Simon does for a living these days, but you have been very involved in Cloud over the years, Simon, and I know you have some strong opinions. So, let's start with OpenStack. Tell me about OpenStack.
Oh, what a question! Cloud, fundamentally, is all about a shift from product to utility. When somebody does that to you - you are used to building a product and somebody comes into your space and creates a utility service - you often have inertia. You need to react, and there are different ways of reacting; the sensible ways are to adopt, or to try to out-commoditize your opponent. In the space there were all these wonderful product infrastructure providers, and Amazon, not encumbered by a pre-existing model, provided a utility and created a change in the market. So you need to react, and one of the ways of reacting is to create a competitive market of multiple providers.
But for that you need semantic and syntactic interoperability, which means you need an open source reference model and a standard for that market, and that, as far as I understood it, was originally the idea. Rick Clark, who used to work for me and went to Rackspace, was one of the people instrumental in OpenStack. It was all about creating that competitive market. But you have to be mindful of the market, and particularly of the way it changes, so the sensible thing is to co-opt - and that is not what happened. What happened is that suddenly this idea appeared: let's differentiate from the dominant de facto standard, which was Amazon. That differentiation idea has now spread. So, within the OpenStack community you have effectively got what is known as a collective prisoner's dilemma, where the different implementations of OpenStack are not necessarily interoperable. The approach of using open source to out-commoditize competitors and create a market is perfectly sensible; the problem is how OpenStack has ended up playing it.
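The collective prisoner's dilemma Simon describes can be sketched in a few lines. This is only an illustration: the payoff numbers are invented purely to show the structure, where each vendor choosing between staying interoperable ("cooperate") and differentiating ("defect") finds defection individually rational, even though everyone defecting fragments the market.

```python
# Illustrative payoffs only: (my choice, rival's choice) -> my payoff.
PAYOFF = {
    ("cooperate", "cooperate"): 3,  # healthy competitive market for all
    ("cooperate", "defect"): 0,     # rival locks in customers, I lose
    ("defect", "cooperate"): 5,     # I lock in customers
    ("defect", "defect"): 1,        # fragmented market, nobody wins much
}

def best_response(rival_choice: str) -> str:
    """Return the choice that maximizes my payoff against a given rival move."""
    return max(["cooperate", "defect"],
               key=lambda me: PAYOFF[(me, rival_choice)])

# Whatever the rival does, defecting pays more individually...
assert best_response("cooperate") == "defect"
assert best_response("defect") == "defect"
# ...even though mutual cooperation (3 each) beats mutual defection (1 each).
```

The dilemma is that without a benevolent dictator enforcing interoperability, every implementation drifts toward the defect/defect corner.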
2. Let us look at foundations for a moment. We saw the announcement last week of Cloud Foundry getting a foundation, and the organization of that so far looks like it might be along similar lines to OpenStack. How can Cloud Foundry avoid some of the mistakes that may have happened with OpenStack?
If you look at something like OpenStack and the reason it has a foundation: any major open source project, the successful ones, has some form of benevolent dictator, whether that is an individual, a group of people or an organization, and the key words there are "benevolent" and "dictator". OpenStack, at the beginning, had a dictator, which was Rackspace. Rackspace seemed to be interested in using it to differentiate from Amazon, which is not necessarily what it should have been doing. So there was a lot of tension, and it was spun off into a foundation. The problem with the foundation around OpenStack is that there really has not been a benevolent dictator, so you ended up with what we call a collective prisoner's dilemma, everybody differentiating - and the seed of that idea came from the original start, which was differentiating from Amazon. Cloud Foundry is different. It does not matter that it is a foundation; it has a benevolent dictator.
In this case, it is called Pivotal, a company which basically invested in and helped create Cloud Foundry. They play a different game. They are much more mindful of the market, so, for example, they co-opted buildpacks from Heroku rather than going off to build their own stuff and trying to differentiate. They have the sense to look at what is going on in the market, at what they should co-opt, at what is becoming predominant. So they come from a different base, and I do not see the two as the same. I see one as an example of not necessarily the best way of playing the game, whereas I think Cloud Foundry is played well. There is a lot more strategic wit and game play around them, and they understand the market a lot better, so I do not mix the two foundations up. They are two separate things that just happen to be foundations.
3. Let's touch on Amazon for a moment. You talk about Amazon being the dominant player. I occasionally hear people talking about other companies catching up with Amazon, but it seems far from that; Amazon seems to be moving further ahead of the other efforts. How are they doing that?
There are many different things here. One - when we talk about the evolution of an activity from genesis through custom-built to products to commodity: the product phase is normally a time of relatively peaceful, sustaining competition, and the shift to, say, utility tends to be disruptive. That shift is often pretty rapid; it takes 10 to 15 years, which is actually quite a short time to completely change an industry. What catches us out is that it is not a linear change, it is exponential - it is called a punctuated equilibrium. So it takes the first 10 years to get to 3-6% of the market, and you say: "How big will they be three years later? 9%, or maybe less?" Well, actually they are at 30, 40, 50% because of that exponential change, and that catches us out all the time. This market is a classic example of that. Look at Amazon: they are basically doubling every year, if you look at their last quarter's figures.
They are probably at 6% of the worldwide server market at the moment, somewhere around that region, at 4 billion. If you go forward three years, to around the end of 2017, they will be at 30 to 50% of the market - they will be massive. The game is over. How did they do it? One - this sort of change is normal, punctuated equilibrium. But there is another game that can be played, a model known as ILC, which stands for innovate-leverage-commoditize. The model is pretty simple: you provide a commodity service and you enable everybody else to innovate on top of it. They are effectively your research and development group; you do not bother to do it yourself, you get everybody else to build on your services. Anything which is successful and starts to spread in your ecosystem, you can identify through consumption information, so you can leverage the ecosystem to spot successful developments and then commoditize those into new components.
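The punctuated-equilibrium arithmetic behind those market-share figures is worth making explicit. A minimal sketch, using only the numbers from the interview (roughly 6% share, doubling every year); the function name is mine:

```python
def project_share(start_share: float, years: int, growth: float = 2.0) -> float:
    """Project market share assuming it multiplies by `growth` each year."""
    return start_share * growth ** years

# Starting at ~6% of the server market and doubling yearly:
for year in range(4):
    print(f"year {year}: {project_share(0.06, year):.0%}")
# year 0: 6%, year 1: 12%, year 2: 24%, year 3: 48%
```

Linear intuition says 6% grows to maybe 9% in three years; exponential doubling lands it near 50%, which is exactly the range Simon projects for 2017.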
So, if you are Amazon, there is EC2, people do big data on it, you come out with Elastic MapReduce, and now people are building on top of that. Now, I am not saying Amazon uses that model, but that model has certain effects. One - you will get accused of eating the ecosystem, because that is what you will do; and second - your rate of innovation, your rate of customer focus and your rate of efficiency will all now increase with the size of the ecosystem, not the physical size of the company. If you look at Amazon, their rate of innovation, their rate of efficiency and their rate of customer service are accelerating all the time, which runs counter to popular management theory that says you can only do one of those. So two interesting things happen there: one - a standard economic effect, a shift from one thing to another, punctuated equilibrium, and they were in the right place at the right time; second - the level of game play they are engaged in. That combination made me say, back in 2008-2009, that Amazon was going to be the de facto standard for infrastructure. I have not changed my view since then.
Chris Swan's question: We have seen more recent entrants to the Cloud space, such as Google, and they appear to be learning from the few mistakes that Amazon did make. I see particularly, in examples like the way Google is doing their networking, that they have got away from the limitations we might see with AWS networking. When the dust settles, will that leave them with an advantage, or are they still playing too much catch-up?
That is a really interesting point. It depends on how effectively Amazon is playing the ecosystem game, if it is playing it at all, because they are very careful not to say what they are actually doing. When you get this transformation - when we go from, say, product to commodity - sometimes you get centralization and sometimes you get decentralization, and that all depends upon the game play of the actors in that market.
So, in 2008, if the big hardware providers had been switched on - we had had over 40 years of warning that this change was coming; Douglas Parkhill wrote the book in 1966, for example - they would probably have built clones of AWS and created a price war, and that would have increased demand probably beyond the ability of Amazon to supply it, because of the natural constraint, which is building data centers. But that did not happen, and because of that game play we got centralization. You often get one, two or three big players in such a space. So it is perfectly reasonable that Amazon will be one big player and Google will be another, and there will probably be a third - I am not quite sure who the third will be. Maybe Microsoft, maybe not. It then becomes the usual longer-term game of competition between different companies. Whether Google's advantages succeed is the typical Betamax versus VHS type of discussion: it depends on how big and how wide the ecosystem of supporting services around it is. The technology may be better, but that does not necessarily mean they are going to win the game.
Chris Swan's question: You touched on a price war there, and we saw what might have been perceived as a few opening salvos in that last year, with things like sub-hour billing, first announced by Google and very quickly followed by Microsoft. We saw Google changing the pricing model entirely for their storage, which in some ways just made it harder to compare one service with another. Then we also saw starvation of certain new instance types upon launch from Amazon, in that they could not keep up with demand for a new product they had created. But there has not really been a comprehensive price war yet. It seems that everybody just follows Amazon. Is that ultimately because the others cannot build out the capacity fast enough either?
OK. So you have a problem. Go back to 2008 and again - people talk about Cloud as being a disruptive innovation. When you get product-to-product substitution, which comes from a change in the value network, that can be disruptive for companies because it is unexpected; it is very difficult to predict that it is going to happen. But product to commodity and utility substitution is highly predictable, normally a long time in advance - in the case of Cloud, 30-40 years. It should not be disruptive at all. You have inertia and all the rest of it, but you have such a long period of time to prepare, and it is so easy to tap the weak signals that it is about to happen. Really, the only way you get disrupted is if your executives are just not keeping an eye on the landscape - literally blindness. It is more than inertia; it is blindness to what is going on. So really, they should have reacted, say, in 2008: created their price war - because demand is elastic - forced demand up beyond the ability of Amazon to supply because of the constraint, and that would have fragmented the market. That should have happened. It did not, and basically you can put that down to sucky game play by the other competitors. I am afraid they were just outplayed. That was it. At the end of the day, it is not their engineers, it is not their culture, it is not their staff; it is their executives. They were utterly outplayed. So now you have got Google up against Amazon, and Amazon almost certainly is making a massive margin on EC2. Many years ago we looked at it; we reckoned it was over 80%. So there is lots of scope for it to reduce.
But, of course, Amazon can't just massively reduce its price, because that would increase demand and it would again run into the same problem with its constraints. So it seems to be managing an ordered price reduction, to make sure it is doubling each year. That seems to be what is going on. And of course, Google is in that space, and Google can now stir things up a little by creating that price war - though that also means Google has to have the capacity to provide for the extra demand. So there is game play going on between those two. Every time Google reduces price or changes pricing, Amazon will respond, and I am sure every time Amazon does, Google will respond. This is good for the two of them, as long as they keep control of their ability to supply that extra demand, and it is pretty bad for everybody else. Mistakes will happen there.
Chris: It sounds more like a price tension than a price war.
Well, a price war is what you create if you are deliberately trying to change that market and fragment it. If the hardware providers had all been switched on, they would have created one in 2008. If their execs had known what the game was about, they would have done this. They did not. It is even worse than being outplayed: they were playing a game of chess without looking at the board, basically, and they just got torn apart.
OK. So now you are in a state where you realize you have lost this battle - and there are new execs coming in to some of these hardware companies, which is always good. They know they have lost the battle, so the sensible thing is to get out. These days, if you are going to take on Amazon and Google in the public infrastructure space, you have got to be looking at dropping 5-10 billion a year just to get a seat at the table; a couple of hundred million is not going to get you anything. So I think a lot of them seem to have realized: "OK, we have lost that battle. We need to refocus somewhere else", and so: "Let's get rid of those sorts of units". IBM sold off its x86 group to Lenovo - perfectly sensible. If you had gone to somebody 10-15 years ago and said "IBM is going to sell off x86", they would have said "Oh, that would never happen". But unfortunately they got outplayed, and given that, selling was the sensible reaction. I know they are also now heavily investing in platform, and that is a sensible counter too, because if you cannot adopt - as in co-opt - the thing being commoditized, you try to build on top of it. If you can build a market at the platform level, then you can play all sorts of substitution games with the underlying components. So that sort of approach from IBM certainly seems pretty sensible.
Back in 2008 I used to run strategy for Canonical, and we mapped out the environment, mapped out where it was going and decided to attack a number of different areas. We were a tiny fraction of the server OS market; by 2010 we were about 70% of Cloud, which was great - and it did not cost us a great deal of money either. One of the things we knew at the time was that in this shift you would get a private form of Cloud, and you would get the hybrid model. There are two different types: the hybrid of public-public and the hybrid of public-private. But private was always going to be transitional, driven by concerns and fears and all the rest of it, and many people would not provide it efficiently. Then there are power issues: even with the efficiency improvements in transistors, the doubling rate for transistor density is roughly 2.5 years, so power becomes a crunch point for computing.
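The power crunch point can be sketched as a back-of-envelope calculation. This is purely illustrative, under assumptions not stated in the interview: if demand for compute doubles roughly every year while compute-per-watt doubles only every ~2.5 years (the transistor figure above), total power draw grows at the gap between the two.

```python
def power_multiplier(years: float,
                     demand_doubling: float = 1.0,
                     efficiency_doubling: float = 2.5) -> float:
    """Factor by which total power draw grows, given doubling periods
    (in years) for compute demand and for compute-per-watt efficiency.
    The demand figure is an assumption for illustration."""
    demand = 2 ** (years / demand_doubling)          # compute needed
    efficiency = 2 ** (years / efficiency_doubling)  # compute per watt
    return demand / efficiency

# After 5 years: demand up 32x, efficiency up 4x -> ~8x the power bill.
print(power_multiplier(5))
```

Under these assumptions the power bill grows several-fold in a few years, which is one way to read why the economics of running your own estate get "pretty crunched".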
That is why you see these big players buying up dams and power stations and things like that. So we put a crunch point at round about 2016. The golden years of private Cloud would be last year, this year and next year; afterwards it gets pretty crunched, because it simply stops being economical. Of course, there are lots of people who will argue "Oh no, it is going to be the future", because that is what they are selling. There are several reasons why people use private clouds: concern over security - though often your security is less than you would get from a public provider - and the belief that somehow we can do it better, which is fairly fanciful. Remember Amazon: the idea of EC2 came from Ben Black in 2003, and Chris Pinkham built it in 2004. So they have had a long, long time, and they are very good at this.
The other problem, of course, is legacy environments, because when activities evolve, practices tend to co-evolve. For products, architectural practices were all about scale-up - bigger machines - and N+1 for resilience. As the activity evolved to utility, you got new architectural practices: design for failure, distributed systems. All the benefits are in the new practices of the more evolved world, but we often have estates which were built on the best practices of the old world. We want to move from one to the other and, of course, it does not work like that: you have to re-architect or change it in some way, and we do not want to do that because of the cost. So we want this fanciful world where we get all the benefits of commodity and volume operations, but built with non-commodity components customized just for us. It is a myth. It is not going to happen. There is always a trade-off. So there is lots of desire for things like private Cloud environments, and yes, probably last year, this year and maybe next year are the golden years of private Cloud, but by 2016 I expect it to be heading straight down towards a niche.
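The "design for failure" practice mentioned above can be sketched very simply: instead of relying on one scale-up machine with N+1 hardware resilience, the application assumes any instance can die and retries elsewhere. A minimal sketch; the region names and the fetch function are hypothetical stand-ins, not any provider's real API.

```python
import random

REGIONS = ["us-east-1", "us-west-2", "eu-west-1"]  # hypothetical regions

def fetch_from(region: str) -> str:
    """Stand-in for a real service call; fails randomly to simulate outages."""
    if random.random() < 0.3:
        raise ConnectionError(f"{region} unavailable")
    return f"response from {region}"

def resilient_fetch(regions=REGIONS) -> str:
    """Try each region in turn; only fail if every region is down."""
    last_error = None
    for region in regions:
        try:
            return fetch_from(region)
        except ConnectionError as e:
            last_error = e  # note the failure, move to the next region
    raise RuntimeError("all regions failed") from last_error
```

The resilience here lives in the application logic, not the hardware, which is exactly the architectural practice a scale-up legacy estate lacks and cannot gain without re-architecting.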
No, you don't. I have a hybrid Cloud: I use multiple data centers in different regions of the world - one is called US East, one is called US West, one is called EU. I could have an even better hybrid by mixing a Google component into that. This idea that hybrid has to be public plus private - I do not know where that comes from; probably some vendor trying to sell you private Cloud, saying "You have to have our components". A hybrid can perfectly well be public and public. The only reason why you would want multiple public providers, rather than multiple regions from a single provider, is the buyer-supplier relationship, second-sourcing options, etcetera, and of course there is a cost associated with that as well. If you have multiple providers who are very close to each other in terms of semantic and syntactic interoperability, the cost of switching is much less and it becomes much easier to use them. But I think in the case of Netflix they simply said "The cost is too much; we are going to use multiple regions of Amazon". It would be nice if there were AWS clones out there - I believe there are some really massive ones being built at the moment in places like India, but they are not publicly available yet.
9. Why don't we have more regions? We have seen many government Cloud initiatives, and public sectors often want to keep data within a certain jurisdiction. So why don't we have more choice over regions than we do at the moment?
Interesting question, one you should go ask Amazon or some of the other vendors. I mean, we have a number, but obviously we would like more.
10. Can some of the blame be pointed towards the telcos there? From the position of legacy public telecoms providers and the competition that arose as telecoms were de-regulated, they seemed like the natural providers of some of these more jurisdictionally confined government Clouds, but they failed to step up to the game.
Ah, so should the telcos have got in and become large-scale public providers of, say, AWS clones or something like that? It seems like a perfectly sensible thing for them to have done; I think today it is probably a bit late. The promise of OpenStack was the competitive market. The problem with it is twofold: one, it does not adopt the de facto standard, and that is what I would concentrate on - certainly it provides EC2 equivalents, etcetera, but it needs to focus on that as its core goal. And secondly, you need to cut out the differentiation between the versions, because your whole goal is creating a competitive market, and that doesn't really exist today. If that existed, the telcos could all come together and build different providers of the same thing, creating a competitive market - though they would have to be willing to bet big. For a single telco today, again: if you are willing to drop 5 or 10 billion into it, fair enough; if not, you are going to find it very tough. And they have left it very late as well.
11. Was one of the inhibitors of that, back when Eucalyptus came along and essentially provided a clone of Amazon, the concern over intellectual property rights around APIs? If we think back to that time, the whole Oracle versus Google case over the Java APIs in Android was still going on.
It is still going on. APIs are principles, not expressions, so you cannot copyright an API - unless, of course, Oracle wins its court case and suddenly we can copyright principles, which has quite some implications for the entire industry, far beyond just Cloud, because basically you won't be able to write a function without some lawyer checking whether somebody else in the world has already written that function and copyrighted it. Just literally the function: not what is in the code, just the name and the interfaces. So, back then, there was this whole thing of "Oh, we want to have open APIs" - fair enough, but APIs can't be copyrighted. And then there were concerns: "What if somebody has a patent in that space?" OK, that is perfectly true; you can create your own API, make it open, and somebody may still have a patent in that space. When somebody's API is becoming de facto, there is no reason why you could not re-implement it - none at all; reverse engineering an interface reproduces a principle, not an expression. But of course, people made a lot of fuss about it. So they didn't - or they used it as an excuse. Or, should I be unkind, some people wanted to promote their own APIs, so of course they wanted everybody to use what they were providing.
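The "principle, not expression" point can be made concrete: two independent implementations can expose the same API - the same names and interfaces - with entirely different code behind them. The put/get object API below is hypothetical, loosely modelled on an object store, purely for illustration.

```python
class VendorStore:
    """One vendor's implementation of a put/get object API."""
    def __init__(self):
        self._data = {}  # flat map keyed by (bucket, key)
    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._data[(bucket, key)] = body
    def get_object(self, bucket: str, key: str) -> bytes:
        return self._data[(bucket, key)]

class CloneStore:
    """A clean-room clone: identical API surface, different internals."""
    def __init__(self):
        self._buckets = {}  # nested maps: bucket -> key -> body
    def put_object(self, bucket: str, key: str, body: bytes) -> None:
        self._buckets.setdefault(bucket, {})[key] = body
    def get_object(self, bucket: str, key: str) -> bytes:
        return self._buckets[bucket][key]

# Client code written against the API works with either provider:
for store in (VendorStore(), CloneStore()):
    store.put_object("logs", "day1", b"hello")
    assert store.get_object("logs", "day1") == b"hello"
```

The expression (the code inside each class) differs completely; only the principle (the interface) is shared, which is what a clone such as Eucalyptus re-implemented.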
Chris: I will finish with another plug for your mapping presentation that you are going to be doing later on this morning.
Thank you very much. I retired from Cloud in 2010 - it must have been 2010. Of course, I keep getting dragged back in; it never lets me go. It has been the same with 3D printing. Today I am going to be talking about mapping and manipulating environments: how you actually see the chess board around you, how you attack other companies, etcetera. So hopefully people will find it interesting. Cloud is just a tiny subset of that subject - for me, anyway. Pleasure, as always.