Why the Serverless Revolution Has Stalled

Key Takeaways

  • For a few years now, some have predicted that serverless computing would usher in a new age in which developers no longer manage servers or operating systems to run their applications, and that it would solve a multitude of scalability problems. The reality hasn't quite matched the promise.
  • Though many view serverless technology as a new idea, its roots can be traced all the way back to 2006 and the Zimki PaaS and Google App Engine, both of which explored a serverless framework.
  • From limited programming language support to performance issues, there are four reasons the serverless revolution has stalled.
  • Serverless isn't useless. Far from it. However, it should not be viewed as a like-for-like replacement for servers. In certain application development environments, it can be a handy tool.

The server is dead, long live the server!

Or so the battle cry of the serverless revolution goes. Take even a quick glance through the industry press of the last few years, and it would be easy to conclude that the traditional server model is dead, and that within a few years we will all be running serverless architectures.

As anyone who works in the industry knows, and as we've also pointed out in our article on the state of serverless computing, this isn't true. Despite many articles expounding the virtues of the serverless revolution, it has not come to pass. In fact, recent research indicates that the revolution may have already stalled.

Some of the promises made for serverless models have undoubtedly been realized, but not all of them. Not by a long shot.

In this article, I want to take a look at why, despite serverless models finding great utility in specific, well-defined circumstances, it seems that the lack of agility and flexibility of these systems is still a bar to their more widespread adoption. 

The Promise of Serverless Computing

Before we get to the problems with serverless computing, let's look at what it was supposed to provide. The promises of the serverless revolution have been multiple and – at times – very ambitious. 

For those new to the term, a quick definition. Serverless computing refers to an architecture in which applications (or parts of applications) run on-demand within execution environments that are typically hosted remotely. That said, it's also possible to host serverless systems in-house. Building resilient serverless systems has been a major concern of sysadmins and SaaS companies alike over the past few years, because (it is claimed) this architecture offers several key advantages over the "traditional" server and client model:

  1. Serverless models don't require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can write generic code, upload it to the serverless framework, and watch it run (see the sketch after this list).
  2. The resources used on serverless frameworks are typically paid for by the minute (or even by the second), meaning that clients only pay for the time their code is actually running. This contrasts favorably with the traditional cloud-based virtual machine, where you often end up paying for a machine that sits idle much of the time.
  3. Scalability has also been a major draw. Resources in serverless frameworks can be dynamically assigned, meaning that they are able to deal with sudden spikes in demand.
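
To make the first promise concrete, here is a minimal sketch of the kind of function a serverless platform runs, written against the AWS Lambda Python convention of a handler(event, context) entry point. The event field used here is purely illustrative:

```python
# A minimal serverless function: the developer writes and uploads only
# this handler; the platform supplies the OS, runtime, and scaling.
import json

def handler(event, context):
    # "name" is a hypothetical field in the incoming event payload.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```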

In short, this means that serverless models are supposed to deliver flexible, cheap, scalable solutions. When put like that, it’s amazing that we didn’t come up with this idea earlier.

Is This a New Idea?

Though, actually, we did. The concept of letting users pay only for the time that code actually runs has been around since it was introduced as part of the Zimki PaaS in 2006, and the Google App Engine offered a very similar solution at around the same time. 

In fact, what we now call the "serverless" model is older than many of the "cloud native" technologies that achieve much the same thing. As some have noted, serverless models are essentially just an extension of a SaaS business model that has been around for decades.

It's also worth recognizing that the serverless model is not the same thing as a FaaS architecture, though the two are linked. FaaS is essentially the compute-focused portion of a serverless architecture: it can form part of a serverless system without representing the entire thing.

So why all the hype now? Well, as internet penetration in the developing world continues to rise rapidly, demand for computing resources has risen with it. Many countries with rapidly growing ecommerce sectors, for instance, simply don't have the computing infrastructure to handle the apps that run these platforms. That's where for-hire serverless platforms come in.

The Problems With Serverless

The issue is that serverless models have ... issues. Don't get me wrong: I'm not saying that serverless models are bad per se, or that they don't provide substantial value for some companies in some circumstances. But the central claim of the "revolution" – that serverless would rapidly replace traditional architectures – is never going to be realized.

Here's why.

Limited Programming Languages

Most serverless platforms only allow you to run applications that are written in particular languages. This severely limits the agility and adaptability of these systems.

Admittedly, most serverless platforms support most mainstream languages. AWS Lambda and Azure Functions also provide wrapper functionality that allows you to run applications and functions in non-supported languages, though this often comes with a performance cost. So for most organizations, most of the time, this limitation will not make that much difference. But here's the thing. One of the advantages of serverless models is supposed to be that obscure, infrequently used programs can be utilized more cheaply, because you are only paying for the time they are executing. And obscure, infrequently used programs are often written in ... obscure, infrequently used programming languages. 

This undermines one of the key advantages of the serverless model.
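
The usual workaround looks something like the sketch below: a thin handler in a supported language that shells out to a binary compiled from the unsupported one. The binary name is a placeholder, and spawning the process on every invocation is part of the performance cost mentioned above:

```python
# A hypothetical wrapper pattern: a thin Python handler invoking a
# binary (compiled from an unsupported language) bundled with the
# deployment package.
import json
import subprocess

def handler(event, context):
    # "./legacy-tool" is a placeholder name; each invocation pays the
    # cost of spawning the external process.
    result = subprocess.run(
        ["./legacy-tool"],
        input=json.dumps(event),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(result.stdout)
```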

Vendor Lock-In

The second problem with serverless platforms, or at least with the way they are implemented at the moment, is that few of these platforms resemble one another at an operational level. There is little standardization across platforms in the way functions are written, deployed, and managed, which means that migrating functions from one vendor-specific platform to another is extremely time-consuming.

The hardest part of migrating to serverless isn't the compute functions — which are generally just snippets of code — but the way in which applications are entangled with connected systems like object storage, identity management, and queues. Functions can move, but the rest of an application isn't as portable. This is the opposite of the cheap, agile platforms we were promised.
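
To see how little the platforms resemble one another, consider the "same" trivial HTTP function written against two vendors' conventions. This is a sketch: the Azure version additionally requires the azure-functions package and a function.json binding file that lives outside the code entirely:

```python
import azure.functions as func  # Azure-specific dependency

# AWS Lambda convention: positional (event, context) arguments and a
# plain dict describing the HTTP response.
def lambda_handler(event, context):
    return {"statusCode": 200, "body": "hello"}

# Azure Functions convention: typed request/response objects, wired up
# by binding configuration that lives outside the code.
def main(req: func.HttpRequest) -> func.HttpResponse:
    return func.HttpResponse("hello", status_code=200)
```

Neither function body, nor the deployment metadata around it, moves to the other platform without rework.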

Some would contend, I suspect, that serverless models are new, and that there hasn't yet been time to standardize the way they work. But they are not that new, as I've pointed out above, and plenty of other cloud-native technologies like containers have already been made much more usable via the development and widespread adoption of strong, community-based standards.

Performance

The computing performance of serverless platforms can be difficult to measure, partially because the companies that sell these services have a vested interest in keeping this information hidden. Most will claim that functions running on remote serverless platforms run just as fast as they would on in-house servers, barring a few unavoidable latency issues.

Anecdotal evidence, however, suggests the opposite. Functions that have not been run on a particular platform before, or have not been run in a while, take some time to initialize. This is likely because their code has been shifted to some less accessible storage medium, though – just as with their performance stats – most serverless computing vendors will not divulge whether this is the case.

There are a number of ways around this, of course. One is to optimize your functions for whichever cloud-native language your serverless platform runs on, but this somewhat undermines the claim that these platforms are "agile."

Another approach is to schedule performance-critical programs to run at frequent intervals in order to keep them "fresh," as sketched below. This second approach, of course, slightly contradicts the claim that serverless platforms are more cost-efficient because you only pay for the time your programs are running. Cloud providers have introduced new ways to reduce cold starts, but many require a "scale to one" model that undermines the original value proposition of FaaS.
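
A keep-warm setup typically looks like this: a scheduled rule fires a marker event every few minutes, and the handler returns early so the warming invocations stay cheap. The "keep-warm" marker below is an assumed convention between the schedule and the function, not a platform API:

```python
# A common keep-warm pattern (sketch): a scheduler periodically invokes
# the function so its execution environment stays resident.
def handler(event, context):
    # "keep-warm" is a hypothetical marker agreed between the scheduled
    # rule and this function; real deployments pick their own field.
    if isinstance(event, dict) and event.get("source") == "keep-warm":
        return {"warmed": True}  # exit early; minimal billed time
    # ... normal request handling would go here ...
    return {"statusCode": 200, "body": "real work"}
```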

This issue of "cold starting" can be reduced by running serverless systems in-house, but this comes with its own costs, and remains a niche option for well-resourced teams.

You Can't Run Entire Applications

Finally, perhaps the most crucial reason why serverless architectures are not going to replace traditional models anytime soon: you (generally) can't run entire applications on serverless systems.

Or rather, you could, but it would not be cost-efficient to do so. Your successful monolithic app probably shouldn't become a series of four dozen functions connected to eight gateways, forty queues, and a dozen database instances. For this reason, serverless suits greenfield development: virtually no existing application architecture ports over cleanly, so if you do migrate, expect to start from scratch.

This means that, in the vast majority of cases, serverless platforms are used as an adjunct to in-house servers, performing tasks that require large amounts of computational resources. This makes them quite different from two other forms of cloud-native technology – containers and virtual machines – which both offer a holistic way of performing remote computation. It also illustrates one of the difficulties in transitioning from microservices to serverless.

This is not necessarily a problem, of course. The ability to occasionally draw on huge computational resources, without paying for the hardware necessary to achieve this in-house, can be of real and lasting benefit in many organizations. However, managing the way in which applications run, with portions of this on in-house servers and other portions running on serverless cloud architectures, can bring another level of complexity to the deployment of these applications.

Viva la Revolucion?

Despite all these complaints, I'm not against serverless solutions per se. I promise. It's just that developers should realize – especially if they are exploring serverless models for the first time – that this technology is not a straight replacement for servers. Instead, take a look at our tips and resources for building serverless applications, and decide how best you can deploy this model.

About the Author

Bernard Brode is a product researcher at Microscopic Machines and remains eternally curious about where the intersection of AI, cybersecurity, and nanotechnology will eventually take us.