Understanding Serverless: Tips and Resources for Building Servicefull Applications

Key Takeaways

  • Serverless is more than just Functions as a Service.
  • Do not fear vendor lock-in; embrace the power the vendor provides through event integration.
  • Open source tooling can help simplify building complex applications.
  • Use Infrastructure as Code solutions like CloudFormation to define your serverless applications and simplify DevOps. 
  • Powerful monitoring solutions provide visibility into function and integration performance, along with accurate cost management and estimation tools.

While serverless technology has been rapidly growing in popularity over the past few years, there are still many misconceptions and concerns regarding serverless solutions. Vendor lock-in, tooling, cost management, cold starts, monitoring and the development lifecycle are all hot topics where serverless technologies are concerned. This article aims to address some of these areas, as well as share tips and resources to guide serverless newcomers towards building powerful, flexible and cost-effective serverless applications.

Misunderstandings about serverless technology

One of the major misconceptions is that serverless and Functions as a Service (FaaS) are the same thing, and thus not a particularly radical change that’s worth adopting. While AWS Lambda has certainly been one of the stars of the serverless uprising, and arguably one of the more popular elements of a serverless architecture, there is more to serverless than FaaS.

The core tenets of serverless are that you don’t need to worry about managing infrastructure or scaling, and you pay only for what you use. Given these criteria, a lot of services qualify: AWS DynamoDB, S3, SNS and SQS, Graphcool, Auth0, Now, Netlify and Firebase, among many, many others. Ultimately, serverless offers the power of cloud computing without the burden and responsibility of managing infrastructure or optimising for scalability. This abstraction also means that security at the infrastructure layer is no longer your concern, which, given the difficulty and complexity of maintaining security standards, is a massive boon. Last but not least, if you aren’t using your services, you don’t pay for them.

Serverless can also be considered a ‘state of mind’: a mentality one adopts when architecting a solution, avoiding approaches that require you to maintain any infrastructure whatsoever. With a serverless approach, we redirect our time towards the things that have a more direct impact on and benefit to our users, like robust business logic, engaging user interfaces and responsive, reliable APIs. If we can avoid managing and maintaining a free-text search platform by paying Algolia, for example, then that is what we will do. Building applications in this way can greatly reduce time to market, as you no longer need to worry about managing complex infrastructure. Avoid the responsibility and cost of managing infrastructure and focus on building the applications and services your customers are looking for. Patrick Debois referred to this approach as ‘servicefull’, a term that has been embraced by the serverless community.

Functions should then be considered the glue that binds our services together, conceptualised as deployable units in their own right (rather than as parts of an entire library or web application). This allows incredibly granular control over deployments and changes to our application. If it’s not possible to deploy functions in this manner, it may be a code smell indicating that the function has too much responsibility and should be refactored.

Some are concerned about vendor lock-in when developing cloud applications, and serverless is no exception, but I think this stems from a misunderstanding of what serverless is really about. In my experience building serverless applications on AWS, embracing the ways AWS Lambda can glue other AWS services together is part of the power of serverless architectures; it’s a strong example of the sum being greater than its parts. Trying to avoid vendor lock-in can actually cause you greater problems than those you think you’re solving. When working with containers, it may be easier to manage your own abstraction layer between cloud providers, but when it comes to serverless, the effort wouldn’t be worth the cost, particularly given how cost-effective serverless is to begin with. Be sure to consider how vendors expose their services; some specialist services depend on strong integration points with other vendors and may provide plug-and-play style hooks out of the box. It’s easier to specify the Lambda function to invoke from an API Gateway endpoint than it is to proxy a request to an existing container or EC2 instance, and Graphcool provides simple, out-of-the-box configuration with Auth0, which is easier than using a custom identity provider.
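
As a minimal sketch of that kind of integration, here is what a Python Lambda handler behind an API Gateway proxy integration can look like. The endpoint-to-function wiring itself lives in configuration rather than code; the fields below follow the standard proxy event shape:

```python
import json

def handler(event, context):
    # API Gateway's proxy integration delivers the HTTP request as a plain
    # dictionary: method, path, headers, query string and body.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")

    # The response is also just a dictionary in the shape API Gateway expects.
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```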

Picking the right vendor for your serverless application is an architectural decision. You don’t build a serverless application in such a way that one day you might go back to managing servers. Picking a cloud vendor is no different from choosing to build your application with containers, deciding which database to use, or even which language to write your code in.

It’s good to consider:

  • What services you need and why. 
  • The different services cloud providers offer, and how you can glue these together using your chosen FaaS implementation.
  • What programming languages are supported (dynamic vs static typing, compiled vs interpreted code, benchmarking, cold start performance, open source ecosystem, etc.).
  • What your security requirements are (SLAs, 2FA, OAuth, HTTPS, SSL, etc.).
  • How to manage your CI/CD and software development cycles.
  • What kind of infrastructure-as-code solutions you can take advantage of.

If you’re extending an existing application and adding serverless features incrementally, this may limit your options somewhat. However, nearly all serverless technologies provide some form of API (via REST endpoints or message queues) that allows extensions to be developed independently of the core application, with an easy integration point. Look for services that provide comprehensible APIs, solid documentation and strong communities, and you won’t go wrong. When it comes to serverless technologies, the simplicity of integration is often your key metric, and it is probably one of the greatest contributing factors to the success AWS has enjoyed since Lambda was released in 2015.

Where serverless can be beneficial

Serverless can be used just about anywhere, though I find that its benefits go beyond any single use case. Thanks to serverless technologies, the barrier to entry for cloud computing is now remarkably low. If developers have an idea but don’t know how to manage cloud infrastructure and optimise costs, they don’t need to find someone with an engineering background to help them out. If a startup is trying to build a platform but is worried about costs getting away from them, it can rest easy with a serverless approach.

Thanks to the cost-saving and scaling aspects of serverless, it is equally applicable to an internal IT system as to a global, multi-million-user web application. When it comes to billing, rather than talking in euros (or whatever your currency), you can talk in cents, which is incredibly powerful. Leaving even the most basic AWS EC2 instance (a t1.micro) on for a month will cost you around €15, even if you do nothing with it (who hasn’t forgotten to turn one off?). For comparison, you would need to run a 512MB Lambda function with a 1-second duration up to 3 million times in the same period to incur the same level of cost. Equally, if you do nothing with that function, it costs you nothing.
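
As a rough back-of-the-envelope check on those numbers, using the published pay-per-use Lambda rates at the time of writing (verify current pricing for your region):

```python
# Illustrative Lambda pricing (us-east-1 at the time of writing).
PRICE_PER_GB_SECOND = 0.00001667       # USD per GB-second of compute
PRICE_PER_MILLION_REQUESTS = 0.20      # USD per million invocations
FREE_GB_SECONDS = 400_000              # monthly free tier: compute
FREE_REQUESTS = 1_000_000              # monthly free tier: requests

invocations = 3_000_000
memory_gb = 0.5      # a 512MB function
duration_s = 1.0     # one second per invocation

gb_seconds = invocations * memory_gb * duration_s
compute_cost = max(0, gb_seconds - FREE_GB_SECONDS) * PRICE_PER_GB_SECOND
request_cost = (max(0, invocations - FREE_REQUESTS)
                / 1_000_000 * PRICE_PER_MILLION_REQUESTS)

# Roughly $18-19 per month: the same ballpark as the idle t1.micro,
# while an idle function costs nothing at all.
print(f"~${compute_cost + request_cost:.2f}")
```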

As serverless is predominantly event-based, it can be straightforward to add serverless infrastructure to legacy systems. For instance, one could create an analytics service for a legacy retail system using AWS S3, Lambda and Kinesis that receives data via API Gateway endpoints, or improve a free-text search system using DynamoDB streams and Algolia.
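
As a rough illustration of the second example, the sketch below reacts to DynamoDB stream records and mirrors them into Algolia. It assumes the official algoliasearch Python client, a stream configured to emit new images, and illustrative index, key and attribute names:

```python
import os

from algoliasearch.search_client import SearchClient  # assumed dependency

# Illustrative configuration; the app, key and index names are hypothetical.
client = SearchClient.create(os.environ["ALGOLIA_APP_ID"],
                             os.environ["ALGOLIA_API_KEY"])
index = client.init_index("products")

def handler(event, context):
    # A DynamoDB stream delivers batches of table changes as event records.
    updates, deletions = [], []
    for record in event["Records"]:
        object_id = record["dynamodb"]["Keys"]["id"]["S"]  # string key 'id'
        if record["eventName"] == "REMOVE":
            deletions.append(object_id)
        else:
            # INSERT/MODIFY; NewImage requires a NEW_IMAGE stream view type.
            image = record["dynamodb"]["NewImage"]
            updates.append({
                "objectID": object_id,            # Algolia's required key
                "name": image["name"]["S"],       # illustrative attributes
                "price": float(image["price"]["N"]),
            })
    if updates:
        index.save_objects(updates)
    if deletions:
        index.delete_objects(deletions)
```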

Most serverless platforms support various languages, the most common being Python, JavaScript, C#, Java and Go. Generally, there are no limitations on which libraries you use with each language, so feel free to use your favourite open source libraries. However, it’s a good idea to keep your dependencies minimal so that your functions perform optimally and can take full advantage of the colossal scalability of your serverless application: the more packages the container needs to load, the longer your cold start time.

Cold starts occur when the container, runtime and function handler need to be initialised before use. This can lead to function durations of around three seconds, which is not ideal when you’re trying to provide a response to impatient users. However, cold starts only occur on the first invocation after a function has been idle for several minutes. Many consider this a minor inconvenience that can be circumvented by pinging the function to keep it warm, or simply ignored altogether.
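
A minimal sketch of the keep-warm approach: a scheduled rule (CloudWatch Events, for example) invokes the function every few minutes with a recognisable payload, and the handler short-circuits on it. The "warmup" key below is purely a convention of your own choosing, not an AWS API:

```python
def handler(event, context):
    # A scheduled keep-warm rule can send a payload such as {"warmup": true}.
    # Returning early keeps the container initialised without running any
    # business logic (or incurring meaningful cost).
    if isinstance(event, dict) and event.get("warmup"):
        return {"warmed": True}

    return process(event)

def process(event):
    # Placeholder for the function's real work.
    return {"statusCode": 200}
```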

While AWS has released Serverless Aurora, a serverless SQL database, SQL databases are not an ideal use case for serverless computing: they depend on connections to perform transactions, which can quickly become a bottleneck when AWS Lambda scales to high throughput. While Serverless Aurora is constantly improving and is definitely worth checking out, NoSQL solutions like DynamoDB are currently considerably better suited to serverless computing. No doubt this situation will change in the coming months, though.

Tooling is also a bit of a limitation, specifically around local testing. While solutions like docker-lambda, DynamoDB Local and LocalStack exist, they can often be quite fiddly and require considerable configuration. All of these projects are under active development, however, and it is only a matter of time before tooling reaches the standard we are all accustomed to.

The impact of serverless on the development lifecycle

As your infrastructure is always just configuration, it is possible to specify and deploy your code using scripts. These could be simple shell scripts, or you could use a configuration-as-code solution like AWS CloudFormation. What I like about CloudFormation is that, while it doesn’t provide configuration for every area, it does allow you to specify custom resources, which can simply be Lambda functions. This means that wherever CloudFormation fails you, you can write a custom resource (a Lambda function) to pick up the slack. You can use a Lambda-backed custom resource to do just about anything, even configure dependencies outside of your AWS environment.
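
A minimal sketch of such a Lambda-backed custom resource handler is below. The important part is the response protocol: CloudFormation passes a pre-signed ResponseURL, and the function must PUT a status document to it before the stack operation continues. The provisioning work itself is elided:

```python
import json
import urllib.request

def handler(event, context):
    # CloudFormation invokes the function with a RequestType of
    # 'Create', 'Update' or 'Delete'.
    status, data = "SUCCESS", {}
    try:
        if event["RequestType"] in ("Create", "Update"):
            # ... provision or update anything CloudFormation can't,
            # even resources outside your AWS environment ...
            data = {"ExampleOutput": "value"}  # illustrative output
        # 'Delete' would tear the resource down here.
    except Exception:
        status = "FAILED"

    body = json.dumps({
        "Status": status,
        "Reason": f"See CloudWatch log stream: {context.log_stream_name}",
        "PhysicalResourceId": event.get("PhysicalResourceId",
                                        context.log_stream_name),
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode()

    # CloudFormation waits for this PUT before continuing the stack operation.
    request = urllib.request.Request(event["ResponseURL"], data=body,
                                     method="PUT",
                                     headers={"Content-Type": ""})
    urllib.request.urlopen(request)
```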

Again, as everything is simply configuration, it is possible to parameterise your deployment scripts on a per-environment, per-region or per-user basis, particularly if you use an Infrastructure as Code solution like CloudFormation. For example, you could deploy a replica of your infrastructure for every branch in your repo and test each in complete isolation during development, which dramatically reduces the feedback loop for developers trying to figure out whether their code performs adequately in a live environment. Managers don’t need to worry about how many environments are deployed, as everything is billed on a per-use basis.
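
As a sketch of that idea, the following uses boto3 to stand up an isolated stack per git branch; the stack naming scheme, template parameter and waiter usage are illustrative, not prescriptive:

```python
import subprocess

import boto3

cloudformation = boto3.client("cloudformation")

def deploy_branch_stack(template_body: str):
    # Derive an isolated stack name from the current git branch
    # (stack names only allow letters, digits and hyphens).
    branch = subprocess.check_output(
        ["git", "rev-parse", "--abbrev-ref", "HEAD"], text=True).strip()
    branch = branch.replace("/", "-")
    stack_name = f"myapp-{branch}"  # illustrative naming scheme

    cloudformation.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        # 'Stage' is a hypothetical template parameter.
        Parameters=[{"ParameterKey": "Stage", "ParameterValue": branch}],
        Capabilities=["CAPABILITY_IAM"],
    )
    # Block until the replica environment is ready to test against.
    cloudformation.get_waiter("stack_create_complete").wait(
        StackName=stack_name)
```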

DevOps engineers have less to worry about, as they only need to make sure developers have their configuration correct; there are no more instances, load balancers or security groups to manage. The term ‘NoOps’ is often bandied around in this respect, but it’s still important to have expertise in infrastructure configuration, especially when it comes to IAM configuration and optimising cloud resources.

Powerful monitoring tools like Epsagon, Thundra, Dashbird and IOPipe provide visibility and traceability for serverless applications. They offer logging, tracing, function performance metrics, architectural bottleneck detection, cost analysis, estimates and more. Not only does this give DevOps engineers, developers and architects great insight into how the application is performing, it also gives management a much better picture of real-time, to-the-second resource costs and billing projections. This is considerably harder to do with a managed infrastructure.

Architecting serverless applications is considerably simpler as you no longer need to consider web servers, managing VMs or containers, server patching, operating systems, internet gateways, jump boxes, etc. The abstraction of these responsibilities allows serverless architectures to focus on what’s most important — addressing the needs of your business and customers.

While tooling could be better (it’s improving every day), the developer experience is great: developers get to focus on writing business logic and on how best to offload application complexity to the different services in the architecture. Serverless application orchestration is event-based and abstracted away by the cloud provider (through SQS, S3 events or DynamoDB streams, for example), so all a developer needs to do is specify how their business logic responds to those events; they no longer need to worry about how best to implement databases and message queues, or how to operate on data on a specific storage device as optimally as possible.

Code can be run and debugged locally just as you would with any other application development process. Unit testing doesn’t change. As I mentioned earlier, the ability to deploy the entire application infrastructure using a customisable stack configuration empowers developers to gain critical feedback fast, without having to worry about the cost of testing or the impact on expensive managed environments. 
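
Because a handler is just a function that takes an event, a conventional unit test only needs a synthetic event. The sketch below exercises the API Gateway handler shape from earlier; the module path is hypothetical:

```python
import json

from my_service.handler import handler  # hypothetical module under test

def test_handler_greets_by_name():
    # A synthetic API Gateway proxy event; only the fields used are needed.
    event = {"queryStringParameters": {"name": "Ada"}}

    response = handler(event, context=None)

    assert response["statusCode"] == 200
    assert json.loads(response["body"]) == {"message": "Hello, Ada!"}
```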

Tools and techniques for building serverless applications

There is no definitive way to build a serverless application and the services you use to build it are no exception. AWS is a clear leader when it comes to offering strong serverless solutions, but Google Cloud, Zeit and Firebase are all worth checking out. If you are using AWS, I quite like the Serverless Application Model (SAM) as an approach to building applications, especially when working with C#, as the tooling in Visual Studio is exceptional. Anything Visual Studio can do, the SAM CLI can do, too, so don’t fear you’re missing out on anything if you’re using a different IDE or text editor. SAM works with other languages, too, of course.

When working with other languages, the Serverless Framework is a superb open source tool that allows for everything to be configured by very powerful YAML configuration files. The Serverless Framework also supports multiple cloud providers, so if you’re looking for a multi-cloud solution, this framework will certainly make your life easier. It also has a huge community with loads of plugins to suit your needs. I’ve contributed to it in the past and the team working on it are incredibly friendly, helpful and welcoming. 

For local testing, docker-lambda, Serverless Local, DynamoDB Local and LocalStack are all great open source tools. Serverless is still in its infancy, as is its tooling, so getting local testing working for more complex scenarios can take a bit of effort. However, it is incredibly cheap to simply deploy the stack to your own environment and test it there, which has the added advantage that you don’t need to worry about accurately replicating cloud environments locally.

Use AWS Lambda Layers to reduce the size of deployed packages and improve load times.

Make sure you’re using the right programming language for the job. Different languages have different benefits and drawbacks. There are a lot of benchmarks out there, but JavaScript, Python and C# (.NET Core 2.1+) are clear leaders when it comes to AWS Lambda performance. AWS Lambda recently introduced the Runtime API, which allows you to bring the language and runtime you wish to use, so feel free to experiment (C++, anyone?).

Keep deployment packages small; smaller packages are faster to load. Avoid including large libraries, particularly if you’re only using one or two features from them. If you’re programming in JavaScript, make sure you use a build tool like Webpack to optimise your build and include only exactly what you need. .NET Core 3.0 introduces QuickJit and Tiered Compilation, which improve performance generally and help a lot with cold starts.

The event-based nature of serverless functions can make business logic difficult to coordinate at first. Message queues and state machines can be incredibly useful in this regard. Lambda functions can call each other, but you should only really do this if you aren’t waiting for a response (fire and forget), as you don’t want to be billed while waiting for another function to complete. Message queues are useful for decoupling areas of business logic, controlling application bottlenecks and processing transactions (when using FIFO queues). AWS Lambda functions can be assigned SQS dead-letter queues that keep track of failed events for analysis. AWS Step Functions, a state machine service, is incredibly helpful when managing complex processes that require chaining functions together. Instead of having one Lambda function invoke another, Step Functions can coordinate state transitions, pass data between functions and manage global function state. They also allow you to specify retry conditions and what to do when specific errors occur, which can be incredibly powerful.
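
For reference, fire and forget corresponds to the 'Event' invocation type in the AWS SDK. A minimal boto3 sketch (the function name is illustrative):

```python
import json

import boto3

lambda_client = boto3.client("lambda")

def enqueue_work(payload: dict):
    # InvocationType='Event' invokes the function asynchronously: the call
    # returns immediately, so you aren't billed waiting for it to finish.
    lambda_client.invoke(
        FunctionName="process-order",  # illustrative function name
        InvocationType="Event",
        Payload=json.dumps(payload).encode(),
    )
```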

Conclusion

Serverless technologies have developed at an incredible rate over the past few years, and this paradigm shift is not without its misunderstandings. Serverless solutions have great strengths throughout the development lifecycle, from simplifying development and DevOps to greatly reducing operating costs thanks to abstracted infrastructure and scaling management. While serverless is not without its caveats, there are solid techniques and design patterns that can be used to build strong serverless applications or integrate serverless elements into existing architectures.

About the Author

Christopher Paton is a senior developer at Johnson Controls in Cork, Ireland. He has worked in various industries from broadcast media to public sector organisations, in both the UK and Ireland. He has been using serverless technologies since Lambda was released in 2015. He has spoken at AWS User Groups on serverless and hosts the Serverless Framework Meetup in Cork. He will be giving a talk at RebelCon entitled, ‘Build and Deploy Rapidly with Serverless’.
