
Deterministic, Reproducible, Unsurprising Releases in the Serverless Era


Summary

Ixchel Ruiz explores good practices, tips and lessons learned to make a release to production without surprises.

Bio

Ixchel Ruiz has developed software applications & tools since 2000. Her research interests include Java, dynamic languages, client-side technologies, and testing. She is a Java Champion, Groundbreaker Ambassador, Hackergarten enthusiast, open source advocate, author, public speaker, and mentor.

About the conference

QCon Plus is a virtual conference for senior software engineers and architects that covers the trends, best practices, and solutions leveraged by the world's most innovative software organizations.

Transcript

Ruiz: My name is Ix-chel Ruiz. I'm a Java champion. I work for JFrog. I would like to tell you a story. It all begins on the day before the weekend. It's almost the end of the day, and slowly but surely, the team is getting ready to leave, and they're enjoying their time off. Here I am, getting ready for one last but important step, releasing the latest version of our product. For many, even a past version of myself, this would be an opportunity to question my state of mind. It's not that I am in need of an adrenaline rush. I like to think that I have a cool and clear mind. In reality, I'm one of those people that check more than one weather forecast and will bring an umbrella even if the chance of rain is low. Why? Why do I ever consider releasing a new version at this hour? It's simple. I trust that our software development process is robust. I don't expect or want despair or drama in my professional life, nor have a weekend full of dramatic incidents. What I want, need, and expect is a successful release with the confidence that this version is an improvement on the last one. This success is not the result of an odd chance event. We can reproduce this particular release version anytime we want, and all the releases to come will follow the same deterministic process. This is the goal. We have a plan. It's a good one.

Software Development

Let's review that title again: deterministic, reproducible, unsurprising releases. Building software of any kind, frameworks, libraries, packages, or modules, requires an iterative process involving development, operations, and runtime or execution. If we increase the number of feedback loops between these phases and bring important concerns earlier into the development process, we can improve the quality of the software under development. That's easy, isn't it? Let's increase the challenge here. We are aiming for deterministic, reproducible, unsurprising releases in the serverless era. Microservices have been quickly adopted in the last decade, almost becoming the default style for building enterprise applications. Or at least we have all been tempted to migrate our existing monolithic applications to microservices. Martin Fowler once described microservices as an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms: built around business capabilities, independently deployable by fully automated deployment machinery, with perhaps a bare minimum of centralized management of the services, and perhaps written in different programming languages.

Challenges

As software developers, we are already focused on improving the quality of the software and releasing more valuable features. Fulfilling these goals is hard. Sometimes we even have to decide which one to focus on, because resources are always limited. From requirements specification, documentation, architecture, testing, security, and automation, to cross-team collaboration, the challenges are plenty. Having these services potentially written in different languages, evolving at different rates as wholly different products, and changing and adapting API contracts makes the already existing challenges around testing, security, and monitoring more difficult.

Tooling for the Java Ecosystem

In the Java world, we have adopted the microservice architecture style with gusto. There are several frameworks that support our microservice development journey, for example Spring Boot, Quarkus, Micronaut, and Dropwizard, among others. They provide their own testing libraries, or leverage known libraries like JUnit, Hamcrest, Mockito, AssertJ, and REST Assured, because the Java ecosystem is a very mature one. The Java platform is one of the most interesting pieces of engineering that I have come across. I want to spend a few minutes showcasing my all-time favorite tools, highlighting features that we can leverage when testing microservices. This session is all about tools, because we are only as good as our tools. We need to master them, and sometimes even change them. These are my champions. They have saved my professional life more than once. We're talking about Java, so these are not new tools, but maybe I can show them in a different light. Sometimes by doing that, you can see the benefits.

WireMock

WireMock is a simulator for HTTP-based APIs. Some people say that it is a service virtualization tool, others say that it's a mock server. What is interesting is that it can run embedded in your tests, as a standalone process, or in a Docker container. You can do proxying, even selectively proxying requests to other hosts based on matching criteria. That's very powerful. There is record and replay, and simulating faults or stateful behavior. When you record, it generates the stub mappings as JSON objects and the files that it requires as resources. Later, you can go and inspect the requests and change everything, then start it again as a standalone process and play them back. For me, this is super powerful, because it's really easy to modify and create these mocks for our frontend development team. They don't need to wait until we have implemented all the APIs. We can agree on a contract, and instead of saying, there's your contract, work with it, we can help them create these mocks, which are not so difficult to create. When I say simulating faults, you can create them randomly, and you can return different HTTP codes.

For me, it was a way to push our frontend development team to figure out these things early. How are you going to behave if your call, if the service, is not there, or it's not responding as fast as you expected? What are we going to do? For me, it is really one of my favorite tools; it's flexible. You can use it in different ways. For me, the capability of helping other teams is super important. We are at version 2.32.0, which was released last December. The team introduced the ability to run WireMock without needing the HTTP server, for a serverless deployment model. They are already working on this idea of serverless. It was already possible to do it using the WireMock app, which relies on internal APIs; those will break when they move to Java modules.
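As a small illustration of the kind of stub you can hand to a frontend team, here is a minimal sketch using WireMock's Java API. The endpoint paths, the JSON payload, and the fault path are assumptions for the example, not anything specific from the talk.

```java
import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;
import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.options;

public class PaymentsApiStubExample {

    public static void main(String[] args) {
        // Start WireMock on a free port and register two stubs.
        WireMockServer server = new WireMockServer(options().dynamicPort());
        server.start();

        // A happy-path stub the frontend team can develop against.
        server.stubFor(get(urlEqualTo("/api/payments/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"id\": 42, \"status\": \"SETTLED\"}")));

        // A simulated fault, so they can exercise their error handling.
        server.stubFor(get(urlEqualTo("/api/payments/broken"))
                .willReturn(serverError()));

        System.out.println("Stub running at " + server.baseUrl());
    }
}
```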

REST Assured

My second tool is REST Assured. REST Assured is a Java DSL for simplifying the testing of REST calls. When we are working with JSON in Java, I don't want to say it's difficult, but in Groovy or in Ruby, for example, working with JSON or XML is trivial: how we can parse it, how we can create it. It is even fun. Testing with these tools is very good because you can specify request data, for example multi-value parameters, cookies, headers, and path parameters. You can verify response data, again cookies, status, Boolean content matching, and measure response times. It supports authentication, OAuth1, OAuth2, and has Spring support. Quarkus also shows you how to use REST Assured in their guides. It's really one of the most helpful tools for testing REST APIs. Microservices do communicate a lot over REST. It's not a must; we usually tend to do it because it's so commonplace. When we start defining our contracts, or we start by defining our different models, it is easy to start with JSON because it is human readable. Everybody knows it. We are now at version 4.5.1, released on February 11, 2022. One important thing is that in the previous version, 4.5.0, they upgraded Groovy from 3.0.8 to 3.0.9.
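To make that concrete, here is a minimal sketch of a REST Assured test. The base URI, path, and JSON field are hypothetical; the point is the given/when/then style, the request specification, and the response-time assertion.

```java
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.Matchers.equalTo;
import static org.hamcrest.Matchers.lessThan;

class PaymentsApiTest {

    @Test
    void returnsSettledPayment() {
        given()
            .baseUri("http://localhost:8080")       // hypothetical service under test
            .header("Accept", "application/json")   // request headers
            .queryParam("verbose", "true")          // query parameters
        .when()
            .get("/api/payments/{id}", 42)          // path parameter
        .then()
            .statusCode(200)
            .body("status", equalTo("SETTLED"))     // content matching on the JSON body
            .time(lessThan(2000L));                 // response time in milliseconds
    }
}
```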

Testcontainers

This one, Testcontainers, I have been using for quite a long time. Actually, when I discovered Testcontainers, it was because we were in a difficult position in my particular project at that time. When I started using Testcontainers, it actually changed our minds. The problem at that moment was that we were doing some interesting stuff. It was silly stuff, but it was still something that we required for that project. When you're testing against the database, the easiest way to start was with H2. The problem is that if you're not testing with the proper database, then you're not testing with the proper database; you're too far away from production and everything can happen. At that time, we were trying to start our own Docker images with Docker Compose, but we had tons of problems. We had port clashing, you name it. We couldn't run many tests concurrently, and when something failed during a test, some of the containers were left behind, and we got the call from our operations team saying, you're killing me.

Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. You have the generic container; that's why they can say anything else that can run in a Docker container. They also have predefined modules, for example for most of the common databases: Postgres, MariaDB, and even Oracle XE. Oracle XE actually starts really fast and is really easy to set up nowadays. The advantage of using one of these dedicated, already defined modules is that they take into account, where are the logs? What are the health checks or the waiting policies? They already know, and they can provide you with accurate information about when your container, or the service inside the container, is ready to be used.

There are other modules available as well. We have the Elasticsearch container and the WebDriver containers. I'm going to highlight these two because, for example, the Elasticsearch container comes with the basic license, which contains the security feature; you can turn on security by providing a password. The WebDriver containers contain web browsers that are compatible with the Docker Selenium project, which means that you have access to Firefox, Edge, and Chrome out of the box. They support VNC screen recording, so Testcontainers can actually automatically record video of test runs, optionally only the ones that fail. Testcontainers is at version 1.16.3, released on January 18, 2022. One of the most interesting things that recently happened is that they introduced the K3s module. People were asking the team to use Testcontainers to help test Kubernetes components; Kubernetes controllers or operators actually need something more than just mocked Kubernetes APIs. In this release, they brought in the K3s module to spin up K3s, a lightweight Kubernetes, inside a container. We're already starting to think about it.
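Here is a minimal sketch of what testing against a real database with Testcontainers can look like, using the JUnit 5 extension and the Postgres module. The image tag and the trivial query are assumptions for illustration.

```java
import org.junit.jupiter.api.Test;
import org.testcontainers.containers.PostgreSQLContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;

import static org.junit.jupiter.api.Assertions.assertEquals;

@Testcontainers
class PostgresIntegrationTest {

    // A throwaway Postgres instance; the module knows the waiting policy,
    // so the test only runs once the database is actually ready.
    @Container
    static PostgreSQLContainer<?> postgres = new PostgreSQLContainer<>("postgres:14-alpine");

    @Test
    void talksToARealDatabase() throws Exception {
        try (Connection conn = DriverManager.getConnection(
                postgres.getJdbcUrl(), postgres.getUsername(), postgres.getPassword())) {
            ResultSet rs = conn.createStatement().executeQuery("SELECT 1");
            rs.next();
            assertEquals(1, rs.getInt(1)); // the real database answered
        }
    }
}
```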

JMeter

The last of the tools that I want to talk about is JMeter. This tool has been around longer than my time in Java. Why do I love JMeter? Again, because it saved me, it saved the day. People usually know it because it performs load and performance testing, and distributed testing. It can use different protocols like HTTP, HTTPS, SOAP, REST, FTP, databases with JDBC, LDAP, SMTP, POP, and IMAP. What actually saved me in another project was the scriptable samplers, the support for JSR223. For example, it supports Groovy. At that time, we were doing some distributed testing. We had a REST API, but it had some interesting logic. We couldn't use only record and replay because we were passing some values in the headers. They were not complicated, because they followed a certain format. What we actually did was create these scripts that modified the specific header that we needed, and nothing more sophisticated than JMeter was required. Version 5.4.3 is the most recent one. What is important to note is that this was only a fix release: they updated Apache Log4j from 2.16.0 to 2.17.0. Version 2.16.0 was unfortunately still inside the range of Apache Log4j versions that have a problem. Remember, that was between versions 2.0-alpha1 and 2.16.0, excluding 2.12.3. If you're running that particular one, it's not a matter of life and death that you change, but I still recommend you upgrade.
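As an illustration only, here is a minimal sketch of what such a JSR223 PreProcessor script could look like. JMeter's JSR223 elements run Groovy, which also accepts plain Java-style code like this; the header name and token format are invented for the example, and vars and log are the objects JMeter injects into the script.

```java
// JSR223 PreProcessor attached to the HTTP sampler.
// Builds a hypothetical header value: a version prefix, a request id, and a timestamp.
String requestId = java.util.UUID.randomUUID().toString();
String token = "v1:" + requestId + ":" + System.currentTimeMillis();

// Expose it as a JMeter variable; the HTTP Header Manager can then send it
// as, for example, X-Request-Token: ${requestToken}
vars.put("requestToken", token);
log.info("Generated request token " + token);
```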

Those were my favorite tools. They are great when we are working with Java applications, and I'm not going to say just with monoliths, because they are still useful for multiple servers and microservices. It is important that we use them, because testing is part of what we should be doing to improve the quality of our software. Even if we have a healthy number of unit, integration, and contract tests, that is, consumer and provider tests, both the ones that the team building the provider is creating and the ones that we write to verify that the contract hasn't changed, plus UI end-to-end, REST API end-to-end, acceptance, and exploratory tests, these tests still live in a controlled and well-defined world, bound by our imagination and assumptions of what could possibly happen in production. Even if we try to bring in chaos engineering and drop some services, we're still in a very small, protected place.

Monitoring and Observability

In production, things can be different. Sometimes this is one of our problems: our testing environments are very different from our production environments. How, where, what, how long, how fast, is defined by our imagination, beliefs, technical capabilities, and assumptions. One possible solution is testing in production. I know it sounds wrong; we should rather say, observe our services in production closely and understand the system state better using a predefined set of metrics and logs. Monitoring applications lets us detect a failure. What I'm proposing now is monitoring. What is monitoring? Monitoring is crucial for analyzing long-term trends, providing information on how the services are growing and how they are being utilized.

The other thing that has happened is that now that we have a lot of cloud native applications and microservices, we need even more information. Here comes observability. This term originated in control theory. It measures how well we can understand a system's internal state from its external outputs. Observability uses instrumentation to provide insights that aid monitoring. An observable system allows us to understand and measure its internals, and to figure out the cause from the effects more easily. These are the three pillars of observability. First, traces. Traces track the progression of a single request, called a trace, as it is handled by the services that make up an application. A request can be initiated by a user or an application. Distributed tracing is a form of tracing that traverses process, network, and security boundaries. A trace is made up of spans. What is a span? Each unit of work in a trace is called a span. Spans are objects that represent the work being done by individual services or components as a request flows through a system.

The second pillar is metrics. A metric is a measurement about a service, captured at runtime. Logically, the moment of capturing one of these measurements is known as a metric event, which consists not only of the measurement itself, but also the time at which it was captured and associated metadata. Logs are the third pillar. A log is a timestamped text record, either structured, which is the recommendation, or unstructured, with metadata. While logs are an independent data source, they may also be attached to spans.

Tooling for Cloud Services

Let's talk about tools, but for cloud services. I believe in open source and in promoting standards in the industry. I will, most of the time, join efforts that foster and sustain an ecosystem of open source projects or tools that implement those standards. Let's talk about the Cloud Native Computing Foundation, the CNCF. The Cloud Native Computing Foundation seeks to drive adoption of technologies and techniques by fostering and sustaining an ecosystem of open source, vendor-neutral projects. The technologies that they are promoting are there to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds: containers, service meshes, microservices, immutable infrastructure, and declarative APIs. The techniques enable loosely coupled systems that are resilient, manageable, and observable, and combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil. It's important to know that Kubernetes was a project that graduated from the CNCF.

If you go right now to the CNCF site and look at the landscape, at all the tools that they are showing for monitoring, you can see that there is another project that graduated. Prometheus was actually the second project to graduate from the foundation. OpenMetrics is also there. I'm going to talk about both of those projects. There is also the landscape that they show for tracing. Here we have OpenTelemetry and Jaeger, and in our world we had Zipkin, which is another tool that I really recommend you look at. One thing that may be interesting for you: the gray cells represent non-open-source projects. They are still shown so you have an idea of where they belong. For logging, they have another project that graduated from the foundation. Then you have Logstash, and there is Elasticsearch. You probably remember why: we have been using ELK a lot, which means Elasticsearch, Logstash, and Kibana. It's written in Java.

Prometheus

Prometheus is an open source monitoring system developed by engineers at SoundCloud in 2012. Prometheus was the second project accepted into the Cloud Native Computing Foundation, after Kubernetes. It's a monitoring system that includes a rich, multi-dimensional data model, a concise and powerful query language called PromQL, an efficient embedded time-series database, and over 150 integrations with third-party systems. It started early and built up those integrations with third-party systems, and Prometheus became the default tool for monitoring. One of the things to notice is the cardinality. Everybody will tell you that cardinality may become an issue. High-cardinality labels, that is, labels with a big number of unique values, aren't dangerous on their own. The danger is the total number of active time series. At this point, I think we, or a lot of the companies out there doing cloud native, are using Prometheus.
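As a hedged sketch of what instrumenting a Java service for Prometheus can look like, here is an example using the Prometheus Java simpleclient (one option among several; the talk doesn't prescribe a client library). The metric and label names are invented, and the labels are deliberately low cardinality so the number of active time series stays bounded.

```java
import io.prometheus.client.Counter;
import io.prometheus.client.exporter.HTTPServer;

public class MetricsExample {

    // A counter with low-cardinality labels (bounded sets of endpoints and status codes).
    static final Counter REQUESTS = Counter.build()
            .name("payments_requests_total")
            .help("Total requests handled by the payments service.")
            .labelNames("endpoint", "status")
            .register();

    public static void main(String[] args) throws Exception {
        // Expose the default registry on /metrics for Prometheus to scrape.
        HTTPServer metricsEndpoint = new HTTPServer(9400);

        // Somewhere in request handling:
        REQUESTS.labels("/api/payments", "200").inc();
    }
}
```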

OpenMetrics

OpenMetrics creates an open standard for transmitting cloud native metrics at scale. It acts as an open standard for the Prometheus exposition format. Even if you think that it's quite new, which is true, it was a spinoff of Prometheus; it was created within the CNCF in 2017. Since then, they have published a stable 1.0 specification. It's used in production by many large enterprises, including GitLab, DoorDash, and Grafana Labs. It's primarily a wire format, independent of any particular transport for that format. The format is expected to be consumed on a regular basis and to be meaningful over successive expositions. This standard expresses all system states as numerical values; counts, current values, enumerations, and Boolean states are common examples. Contrary to metrics, single events occur at a specific moment in time, while metrics tend to be an aggregation of data.
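For a feel of what that wire format looks like, here is a rough, hand-written illustration of a counter in the OpenMetrics text exposition; the metric name, labels, and values are made up, and the specification should be consulted for the exact details.

```
# TYPE payments_requests counter
# HELP payments_requests Total requests handled by the payments service.
payments_requests_total{endpoint="/api/payments",status="200"} 1027
payments_requests_total{endpoint="/api/payments",status="500"} 3
# EOF
```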

OpenTelemetry

OpenTelemetry is more than just a new way to visualize data across applications. This project aims to change the way engineers use instrumentation without requiring a change in monitoring tools. OpenTelemetry is actually a collection of tools and features designed to measure software performance. It is the amalgamation of two open source traceability projects that already existed: OpenTracing and OpenCensus. They are a thing of the past, and OpenTelemetry is the future. OpenTracing was developed to provide a vendor-agnostic, standardized API for tracing; it was inside the foundation. OpenCensus was the initial traceability platform from Google that later evolved into an open source standard. OpenTelemetry, a CNCF incubating project, combines the strengths of both of these standards to form a unified traceability standard that is both vendor and platform agnostic, and it's available for use across various platforms and environments. It provides the API and SDK, among other tools, to measure and collect telemetry data for distributed and cloud native applications. It also supports transmitting the collected telemetry data to measurement and visualization tools.
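To ground that, here is a minimal sketch of creating a span with the OpenTelemetry Java API. The tracer name, span name, and attribute key are hypothetical, and it assumes an SDK and exporter have been configured elsewhere (for example, via the Java agent or autoconfiguration).

```java
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;

public class CheckoutTracing {

    private static final Tracer tracer =
            GlobalOpenTelemetry.getTracer("com.example.checkout");

    public static void processOrder(String orderId) {
        // Each unit of work becomes a span; child spans created inside the
        // scope join the same trace as the request crosses service boundaries.
        Span span = tracer.spanBuilder("process-order").startSpan();
        try (Scope scope = span.makeCurrent()) {
            span.setAttribute("order.id", orderId);
            // ... business logic goes here ...
        } catch (RuntimeException e) {
            span.recordException(e);
            throw e;
        } finally {
            span.end();
        }
    }
}
```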

Questions and Answers

Cummins: You do have a book, is it coming out or has it just come out?

Ruiz: Just came out, yes, "DevOps Tools for Java Developers."

Cummins: Would they find the tools list in that book, or is it quite a different content?

Ruiz: It's different content. Actually, several authors joined in, and my chapters in particular are about microservices, but I don't go into testing; I go more into the configuration of how to build them.

Cummins: I think now you've got your action item, which is to take this away and write another book, or at the very least, a set of blog articles, because I think it would be so useful.

Ruiz: For me, testing is so important. I think we don't spend a lot of time doing it or thinking about it. Finding ways for people to leverage it as much as possible, reducing the amount of effort but still getting the most out of it, makes me want them to see different tools. We have always heard that you're only as good as how you use your tools. That's why: pick one, learn it, love it, and use it a lot.

Cummins: That's an interesting perspective, because I think some of us might be tempted to go, if one tool is good, then 26 tools is better. How do you balance being open to the new tools, while still really getting a solid grasp of the tools that you do already have in your toolkit?

Ruiz: There, I think about spikes. We have to really limit the amount of resources that we are going to invest in something. Every single time that you add a dependency or a new tool, there is an associated cost, first because you may later have to migrate and decide what to migrate to, and second because if your level of dependency on a certain tool or technology is too high, you have to invest even more. I would start by asking, what are the tests that provide the most value, either to our customer or to our project? On those, see if there is already something out there. Just read about it. See who is adopting it. What are the use cases? Then, if it makes sense, try one new avenue: this is something we wanted to try, with a new use case for a new story. That's it. See if it actually fits what you need.

Cummins: Can Testcontainers and Mockito go hand in hand? So far, we were mocking all dependencies like the database; however, Testcontainers points in a different direction?

Ruiz: Yes, of course, you can mock your databases. You can try to make them run as fast as possible. You can try to control the initial state of your database and use H2. In my experience, when you have a project that has to interact or integrate with different databases, do not mock it; run the real databases. That is where Testcontainers shines; you wouldn't believe how nice this tool is. I jumped into Testcontainers because our customer had one specific use case, and it was not your typical use case. Where it actually saved our lives was testing against a pseudo version of their production data, and that you cannot mock. In that case, you need to automate as many of the steps as possible. I recommend you have a look at Testcontainers; their database support is magnificent. Yes, you can still mock if you want. There are some tests that, for whatever reason, you don't want to run against real databases. That's ok. They can coexist perfectly fine.

Cummins: I've gotten into trouble in the past with mocking, because it seems like you don't want the real thing because it's going to be messy, and so you start mocking. Then two weeks later, you've basically just rewritten the whole thing, but without getting paid for it. The worst part is it doesn't behave like the real thing. You're still not actually catching some of the bugs that are going to turn up in production.

Ruiz: Exactly. Then later on, for whatever reason, there is a change in the contract, and you're investing a lot of time in code that is not actually worth the time. With this, I'm not saying let's not write tests. I mean let's not complicate things; let's not add more tasks on top of writing tests.

Cummins: What do you think about using Pact.io for microservice contract testing?

Ruiz: I have used it in the past. I was really happy with it. In this case, what I usually propose is OpenAPI, and several of the tools that rely on OpenAPI. Pact.io is really nice. They actually support OpenAPI 3.0. It's a win for me.

Cummins: I like Pact because I always think it goes a bit deeper than OpenAPI as well. You've got the OpenAPI spec, but then you've got some of the functional validation as well, which I think is nice.

I love antipatterns. I learn by hearing people do things that worked well, but I really learn by hearing all the disasters. Have you got any good tools, antipatterns to share? Like when you approach a new tool, and you thought this is going to solve my problem, and then you used it in the slightly wrong way or for the slightly wrong use case, and it just didn't do what you hoped at all.

Ruiz: I have to say that JMeter was like that at the beginning; the UI is awful. JMeter is a good example of what UX sometimes shouldn't look like. It's too much. There are a lot of things happening on your screen. It's not the most intuitive way to do it. Again, I still persevered. This is not a tool where it's two clicks and everything is green and everything is running. The documentation sometimes is not so great. At that time, I was fortunate enough that I had a colleague; she was coming from a very complicated project where they were using it a lot. I learned a lot from her instead of from the documentation. That was my first attempt with JMeter. Later on, I found a series of blog posts that actually helped me a lot, and I will look for those. Then you can leverage JMeter.

Cummins: JMeter top tip, find someone who knows it. If you can't do that, find a blog post.

If you're going to use Testcontainers for a database, you need a schema, do you have any schema management tools that you would recommend to spin up that database schema?

Ruiz: If you're talking about management, Spring has its own way to manage the schema. If you're talking about migration of the schema, I prefer Flyway and Liquibase. Both of them are really nice. Liquibase even has tools where you can check the schema, recreate the schema and compare them, or just get the diff. That is quite nice. I'm more partial to Flyway, because at the beginning, when we started with it, it supported Java and Ruby, not only SQL, for the migration scripts. That's why I was partial. I haven't used Liquibase in recent years, so I cannot say there's anything to put you off. The tools for Liquibase were also really nice. The other thing is that you could create validations on the schemas. That is super nice.

 


 

Recorded at:

Nov 18, 2022
