
Why the Serverless Revolution Has Stalled


Key Takeaways

  • For a few years now, serverless computing has been predicted by some to usher in a new age of computing that thrives without an operating system to execute applications. We were told this framework would solve a multitude of scalability problems. The reality hasn’t been exactly that.
  • Though many view serverless technology as a new idea, its roots can be traced all the way back to 2006 and the Zimki PaaS and Google App Engine, both of which explored a serverless framework.
  • From limited programming language support to performance issues, there are four reasons the serverless revolution has stalled.
  • Serverless isn't useless. Far from it. However, it should not be viewed as a drop-in replacement for servers. In certain application development environments it can be a handy tool.

The server is dead, long live the server!

Or so the battle cry of the serverless revolution goes. Take even a quick glance through the industry press of the last few years, and it would be easy to conclude that the traditional server model is dead, and that within a few years we will all be running serverless architectures.

As anyone who works in the industry knows, and as we've also pointed out in our article on the state of serverless computing, this isn't true. Despite many articles expounding the virtues of the serverless revolution, it has not come to pass. In fact, recent research indicates that the revolution may have already stalled.

Some of the promises made for serverless models have undoubtedly been realized, but not all of them. Not by a long shot.

In this article, I want to take a look at why, despite serverless models finding great utility in specific, well-defined circumstances, it seems that the lack of agility and flexibility of these systems is still a bar to their more widespread adoption. 

The Promise of Serverless Computing

Before we get to the problems with serverless computing, let's look at what it was supposed to provide. The promises of the serverless revolution have been multiple and – at times – very ambitious. 

For those new to the term, a quick definition. Serverless computing refers to an architecture in which applications (or parts of applications) run on-demand within execution environments that are typically hosted remotely. That said, it's also possible to host serverless systems in-house. Building resilient, serverless systems has been a major concern of sysadmins and SaaS companies alike over the past few years, because (it is claimed) this architecture offers several key advantages over the “traditional” server and client model:

  1. Serverless models don’t require users to maintain their own operating systems, or even to build applications that are compatible with particular OSs. Instead, developers can produce generic code, and then upload it to the serverless framework, and watch it run.
  2. The resources used on serverless frameworks are typically paid for by the minute (or even by the second). This means that clients only pay for the time they are actually running code. This contrasts favorably with the traditional cloud-based virtual machine, where often you end up paying for a machine that sits idle much of the time.
  3. Scalability has also been a major draw. Resources in serverless frameworks can be dynamically assigned, meaning that they are able to deal with sudden spikes in demand.
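To make these promises concrete, here is a minimal sketch of what "produce generic code, upload it, and watch it run" looks like in practice: a single Python entry point in the style of an AWS Lambda handler. The event fields used here are hypothetical, invented purely for illustration.

```python
import json

def handler(event, context):
    # The platform invokes this entry point on demand and bills only
    # for the time it actually executes: no OS to patch, no idle VM.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

The same shape scales from one invocation to thousands without the developer provisioning anything, which is the scalability claim in point 3 above.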

In short, this means that serverless models are supposed to deliver flexible, cheap, scalable solutions. When put like that, it’s amazing that we didn’t come up with this idea earlier.

Is This a New Idea?

Though, actually, we did. The concept of letting users pay only for the time that code actually runs has been around since it was introduced as part of the Zimki PaaS in 2006, and the Google App Engine offered a very similar solution at around the same time. 

In fact, what we now call the "serverless" model is older than many of the technologies now referred to as "cloud native" that achieve much the same thing. As some have noted, serverless models are essentially just an extension of a SaaS business model that has been around for decades.

It’s also worth recognising that the serverless model is not the same thing as a FaaS architecture, though the two are linked. FaaS is essentially the compute-focused portion of a serverless architecture; it can form part of such a system without representing the entire thing.

So why all the hype now? Well, as internet penetration in the developing world continues to rise rapidly, so does the demand for computing resources. Many countries with rapidly growing ecommerce sectors, for instance, simply don't have the computing infrastructure to handle the apps that run these platforms. That's where for-hire serverless platforms come in.

The Problems With Serverless

The issue is that serverless models have ... issues. Don't get me wrong: I'm not saying that serverless models are bad per se, or that they don't provide substantial value for some companies in some circumstances. But the central claim of the "revolution" – that serverless would rapidly replace traditional architectures – is never going to happen.

Here's why.

Limited Programming Languages

Most serverless platforms only allow you to run applications that are written in particular languages. This severely limits the agility and adaptability of these systems.

Admittedly, most serverless platforms support most mainstream languages. AWS Lambda and Azure Functions also provide wrapper functionality that allows you to run applications and functions in non-supported languages, though this often comes with a performance cost. So for most organizations, most of the time, this limitation will not make that much difference. But here's the thing. One of the advantages of serverless models is supposed to be that obscure, infrequently used programs can be utilized more cheaply, because you are only paying for the time they are executing. And obscure, infrequently used programs are often written in ... obscure, infrequently used programming languages. 

This undermines one of the key advantages of the serverless model.
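The wrapper approach mentioned above can be sketched as a thin Python shim that launches an executable built from an unsupported language, passing the event over stdin and reading JSON back from stdout. This is an illustrative sketch, not any vendor's actual custom-runtime API, and the per-invocation process launch is exactly where the performance cost comes from.

```python
import json
import subprocess

def make_handler(command):
    """Wrap an arbitrary executable as a FaaS-style handler (sketch)."""
    def handler(event, context):
        # Pass the event as JSON on stdin; expect JSON on stdout.
        # Spawning a fresh process on every invocation is the
        # performance penalty mentioned above.
        proc = subprocess.run(
            command,
            input=json.dumps(event).encode(),
            capture_output=True,
            check=True,
        )
        return json.loads(proc.stdout)
    return handler
```

In a real deployment, `command` would point at the bundled binary compiled from the obscure language in question.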

Vendor Lock

The second problem with serverless platforms, or at least with the way that they are implemented at the moment, is that few of the platforms resemble one another at an operational level. There is little standardization across platforms when it comes to the way that functions should be written, deployed, and managed, and this means that migrating functions from one vendor-specific platform to another is extremely time consuming.

The hardest part of migrating to serverless isn't the compute functions — which are generally just snippets of code — but the way in which applications are entangled with connected systems like object storage, identity management, and queues. Functions can move, but the rest of an application isn't as portable. This is the opposite of the cheap, agile platforms we were promised.
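The divergence is visible even at the level of the function signature: AWS Lambda expects `handler(event, context)` with a plain-dict event, while a Google Cloud Functions HTTP function receives a single Flask-style request object. One common mitigation, sketched below, is to keep the business logic portable and route it through thin per-vendor adapters:

```python
def business_logic(name):
    # The portable core: keep vendor-specific plumbing out of here.
    return f"Hello, {name}!"

def aws_handler(event, context):
    # AWS Lambda shape: a plain-dict event plus a context object.
    return {"statusCode": 200, "body": business_logic(event.get("name", "world"))}

def gcp_handler(request):
    # Google Cloud Functions (HTTP) shape: a single Flask-style request.
    name = request.args.get("name", "world")
    return business_logic(name)
```

The adapters stay trivial, but everything the functions touch, such as queues, object storage, and identity, still differs per vendor, which is the deeper entanglement described above.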

Some would contend, I suspect, that serverless models are new, and that there hasn't yet been time to standardize the way they work. But they are not that new, as I've pointed out above, and plenty of other cloud-native technologies like containers have already been made much more usable via the development and widespread adoption of strong, community-based standards.

Performance

The computing performance of serverless platforms can be difficult to measure, partially because the companies that sell these services have a vested interest in keeping this information hidden. Most will claim that functions running on remote, serverless platforms will run just as fast as they would on in-house servers, barring a few unavoidable latency issues.

Anecdotal evidence, however, suggests the opposite. Functions that have not been run on a particular platform before, or have not been run in a while, take some time to initialize. This is likely because their code has been shifted to some less accessible storage medium, though – just like with their performance stats – most serverless computing vendors will not divulge whether this is the case.

There are a number of ways around this, of course. One is to optimize your functions for whichever cloud-native language your serverless platform runs on, but this somewhat undermines the claim that these platforms are "agile."

 Another approach would be to make sure that performance-critical programs are scheduled to run at frequent intervals, in order to keep them "fresh." This second approach slightly contradicts, of course, the claim that serverless platforms are more cost-efficient because you are only paying for the time your programs are running. Cloud providers have introduced new ways to reduce cold starts, but many require a "scale to one" model that undermines the initial value of FaaS.
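A keep-warm scheme is typically just a scheduled invocation that short-circuits before doing any real work. The sketch below assumes a hypothetical `keep_warm` marker field in the event; note that even the short-circuited invocations are billed, which is the contradiction noted above.

```python
def do_real_work(payload):
    # Placeholder for the latency-sensitive path being kept warm.
    return payload.upper()

def handler(event, context):
    # A scheduled ping (hypothetical marker field) returns immediately,
    # keeping the execution environment warm at the cost of paid,
    # do-nothing runs.
    if event.get("keep_warm"):
        return {"warmed": True}
    return {"result": do_real_work(event.get("payload", ""))}
```

In practice the ping would be wired to a scheduler such as a cron-style trigger, with the interval tuned to the platform's observed idle-eviction window.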

This issue of "cold starting" can be reduced by running serverless systems in-house, but this comes with its own costs, and remains a niche option for well-resourced teams.

You Can't Run Entire Applications

Finally, perhaps the most crucial reason why serverless architectures are not going to replace traditional models anytime soon: you (generally) can't run entire applications on serverless systems.

Or rather, you could, but it would not be cost-efficient to do so. Your successful monolithic app probably shouldn't become a series of four dozen functions connected to eight gateways, forty queues, and a dozen database instances. For this reason, serverless suits greenfield development: virtually no existing application architecture ports over cleanly. You can migrate, but expect to start from zero.

This means that, in the vast majority of cases, serverless platforms are used as an adjunct to in-house servers, to perform tasks that require large amounts of computational resources. This makes them really quite different from two other forms of cloud-native technology, containers and virtual machines, that both offer a holistic way of performing remote computation. This illustrates one of the difficulties in transitioning from microservices to serverless.

This is not necessarily a problem, of course. The ability to occasionally draw on huge computational resources, without paying for the hardware necessary to achieve this in-house, can be of real and lasting benefit in many organizations. However, managing the way in which applications run, with portions of this on in-house servers and other portions running on serverless cloud architectures, can bring another level of complexity to the deployment of these applications.

Viva la Revolucion?

Despite all these complaints, I'm not against serverless solutions per se. I promise. It's just that developers should realize – especially if they are exploring serverless models for the first time – that this technology is not a straight replacement for servers. Instead, take a look at our tips and resources for building serverless applications, and decide how best you can deploy this model.

About the Author

Bernard Brode is a product researcher at Microscopic Machines and remains eternally curious about where the intersection of AI, cybersecurity, and nanotechnology will eventually take us.

 

 


Community comments

  • Not shocked

    by Mark N,

    To me, Serverless is the new Stored Procedures.

  • Serverless != Lambda/Functions

    by Michael Boker,

    I feel like the opinions in this article are based on one type of serverless offering - the serverless function ("Lambda" in AWS). How do you feel about serverless containers, like AWS Fargate? How about serverless data stream processing - like AWS Kinesis Data Analytics? I agree with many of the arguments made here in the context of serverless functions, but I think some generalizations were made that aren't generally true.

    Also, I feel that the argument about vendor lock would apply in many cases when using hosts. If you're developing a hosted service on AWS, you are likely interacting with other AWS components via the AWS SDK, and that will be true on servers as well as serverless.

    Any thoughts?

  • Re: Serverless != Lambda/Functions

    by Rafael Gumieri,

    I'm not sure we can call AWS Fargate serverless, because you still need to understand the operating system inside your container; what is out of your control is the Linux kernel on the host instance.
    But whether or not it counts as serverless, I agree with you that the "malleability" of containers is the biggest advantage here, and I feel more comfortable, and more capable, tuning my system environment.

    About vendor lock-in, it is very tricky indeed. We could say a well-built app could have its outer shell easily swapped to another cloud provider.

    But I cannot say much, because we found so many limitations in AWS Lambda that they pushed us back to AWS ECS. Our conclusion was: it is possible and effective to make a POC with Lambda, but when it has to be cost-optimized and stable… it will need to be placed in a container. In the end we just stayed with containers for our applications.

  • You Can't Run Entire Applications

    by Autarch Princeps,

    You cannot do a lot more than that. Entire frameworks become impossible to use: even if in theory they could run under a per-request runtime model, the fact that you have to redefine, at a fundamental level, what input means breaks most of them.
    Serverless is OK if you have to write a little glue code or wrap something else, but most full projects don't start from scratch, nor are many projects worth designing a custom solution for. In the vast majority of cases, what we see is: just use a standard webshop/cms/blogplatform/etc., then import a few images, stylesheets, fonts, and articles, and you're done. No programming necessary.
    And if we really do program something new, the extreme simplicity of serverless has you yearning for all the capabilities of Kubernetes within the first hours of a project.

  • Lack of "GNU" standards

    by Enrique Benito,

    To make a loose analogy, serverless is similar to the UNIX utilities. In UNIX we have serverless-like apps such as grep, sed, awk, sort, uniq, ... that do not consume resources until used. They take input from STDIN and write to STDOUT. This is quite similar to serverless, where input is read from an HTTP request and written back as an HTTP response. The core difference is that in UNIX there have been standard tools for decades, while in the serverless cloud there is nothing similar.
    We need a "GNU text utils" for the cloud era: some standard serverless services that anyone can use with the same freedom that we use GNU tools on Linux/Mac OSX/UNIX/POSIX systems. We cannot expect companies to create their own serverless utility apps from scratch, as is the case at this moment.

  • False premise of stateless applications

    by Andrea Del Bene,

    Since microservices became popular we have lived with the false premise that our applications can be completely stateless, and that they can be scaled by simply adding service instances or running more parallel serverless functions.
    As this assumption proves to be false, we usually need some kind of caching mechanism to keep performance at a decent level, and we end up with something very similar to the old 'server session'.
    So in the end, all the benefits of these new 'revolutionary' architectures turn out to be impossible to fully achieve.

  • Re: Lack of "GNU" standards

    by Frank Carver,

    We have effectively had a standard "serverless" platform since the 1990s (see en.wikipedia.org/wiki/Common_Gateway_Interface). This used to run almost all of the dynamic content on the web, but it has a lot of issues, not least ones of performance, security, and the difficulty of keeping any kind of context between requests. These days that space is mostly filled by PHP, which would also count as "serverless" according to this article.

  • I suppose it's more of the trough of disillusionment than the end of the revolution

    by Florin Jurcovici,

    If you look at almost any technology, it never completely replaced what came before it. New technologies evolve from new needs, but this usually doesn't make old needs go away very quickly, so old technologies stay in use for a long time. Some, like C for system code, for decades. Potentially for centuries, who knows? Therefore, considering all the limitations of serverless, it should have been obvious to any technologist from the get-go that it won't be a replacement for all that was before it.

  • Re: Lack of "GNU" standards

    by Florin Jurcovici,

    I'm not sure this is a valid analogy. The big problem with serverless for server applications is state. You can't reduce all computations to simple stream processing. Oftentimes, just setting up the stream-processing pipeline is a tedious task - loading a custom function is not as fast and easy as starting a process from a binary in a *nix shell. Other times, even if the input is simple, producing the output requires fetching data from multiple sources - which may not be already prepared to provide it. A microservice using a transient cache to speed things up is a better solution in such cases, regardless of how well optimized a serverless approach might be.

    Serverless becomes a widely usable approach only with data locality. Depending on the data, this approach might be difficult or outright impossible to implement.

  • Re: False premise of stateless applications

    by Florin Jurcovici,

    Microservices, even when they employ caching, are a far cry from server session based old style web applications. Session is managed on clients nowadays. Microservices, when caching, cache data which isn't session specific.

    Also the notion of stateless as employed by microservices is different, I believe, from what you understand by stateless. You always need state as something that's persisted independently of any use in some ongoing operation. You need to persist bank account balances, log files and whatnot. These might never ever be used in session state. Session state typically contains things like items added to a shopping cart, payment orders as they are edited, query results on indexed logs and the like. Microservices never cache or persistently store such typical session contents. Such contents, in microservices-based applications, are maintained by the client. Microservices just perform one-off operations, such as performing a query and returning the results - and then completely forgetting about it. If queries have a good chance of being repeated, results might be cached. But such cache contents are not state - that state being lost only causes a performance hit for one request, until the cache is rebuilt. Nothing gets lost, as is the case with typical session information.

  • Re: False premise of stateless applications

    by Andrea Del Bene,

    I agree that a microservice cache has different content compared to a server session, but I still see two main false premises about microservices:

    - You cannot simply scale by adding service instances. Cache is still crucial for performance, and as traffic grows you will need to scale the cache, which means scaling its state to be consistent and replicated, otherwise your cache will be useless. And scaling that kind of cache is not so different from scaling a server session.
    - Nearly every non-trivial application needs to authenticate its users. This requires keeping state about users, especially if you want to log out a previously authenticated user.
    Of course you can rely on a third-party service for authentication (for example an OAuth service) and keep your microservice stateless, but then you've just delegated the handling of your application state to someone else.

  • Re: False premise of stateless applications

    by Florin Jurcovici,

    You obviously can't scale beyond the hard limits of the universal scalability law.

    There's a very popular mechanism for avoiding delegating state management, very specifically for authentication/authorization: JWT.

    Caches maintained by stateless microservices usually contain data which doesn't change frequently, and which, if the microservice crashes and needs to restart, can be rebuilt from scratch, without much performance impact overall.

    Proper user sessions, OTOH, always need to be synchronized and consistent. They have completely different requirements than typical caches used by microservices, which are most often neither replicated nor consistent.

    In terms of the universal scalability law, user sessions maintained server-side force far too much of the data to require synchronization, whereas microservices reduce the data that requires synchronization to a minimum, don't cache it in any way, but cache all other data that isn't local. You gain more, with regard to both performance and scalability, this way.

    The problem with serverless is that it makes the kind of simple, local, inconsistent caching that microservices employ impossible. A vast array of applications are served well by a microservices-based approach; serverless doesn't fit that many.

  • Distributed procedural calls

    by Arman Kurtagić,

    We have already tried procedural coding style. Function calling another function, now distributed! :)

  • Re: Serverless != Lambda/Functions

    by Patrick Rodies,

    That is true, Michael, but mainly due to the vendors themselves and the maturity of these solutions.
    For vendor lock-in, you are correct, but it is a risk. Many of us work in hybrid environments with multiple cloud providers.
    Two major steps backward for me:
    First, they painfully ignore the importance of efficient data processing. Second, they stymie the development of distributed systems. This is curious, since data-driven, distributed computing is at the heart of most innovation in modern computing. This latter point is paramount for me and difficult to sell inside a large enterprise. The cost saving on infra will have repercussions on productivity for product teams.
    This is well explained in this paper: Serverless Computing: One Step Forward, Two Steps Back
    Joseph M. Hellerstein, Jose Faleiro, Joseph E. Gonzalez, Johann Schleier-Smith, Vik

  • Re: False premise of stateless applications

    by Enrico Piccinin,

    You rightly wrote "Caches maintained by stateless microservices usually contain data which doesn't change frequently, and which, if the microservice crashes and needs to restart, can be rebuilt from scratch, without much performance impact overall."
    If we consider that serverless functions have a certain maximum lifespan (about 15 minutes now for AWS), we can still consider using these types of cache in our FaaS implementations. Whether this makes sense depends on the specifics of the application logic, the shape of the load curve, and so on. But caching is still possible, at least to some extent, with serverless functions as well.
