
Reduce Carbon Dioxide Emissions with Serverless and Kubernetes Native Java


Key Takeaways

  • More organizations are moving application workloads to multi- and hybrid-cloud platforms that offer better scalability and performance, but still produce carbon dioxide emissions.
  • Java isn’t the first choice for performance-sensitive workloads on Kubernetes, even though Java is still the 3rd most popular programming language according to the TIOBE Index.
  • Serverless architectures aim to reduce compute consumption by running workloads only on demand.
  • Native Java compilation not only solves performance issues but also reduces compute resource utilization on Kubernetes and serverless platforms.
  • Quarkus, the Kubernetes Native Java framework, provides container-first design and native compilation, with millisecond boot times and a tiny resident set size (RSS) that minimizes the resource consumption of cloud services at speed and scale.

With the rise of cloud adoption, IT departments consume fewer physical servers and less electric power, which ultimately helps mitigate climate change by reducing carbon emissions. Cloud architectures help because organizations no longer maintain siloed compute resources; instead, they share available resources efficiently on the cloud, where and when they are needed to keep business services running.

However, these benefits of the move to the cloud haven’t significantly reduced carbon dioxide emissions in the short term, because cloud adoption is happening much faster than the shift to carbon-free infrastructure. For example, Google Cloud is carbon neutral today, but it is still working toward fully carbon-free, sustainable cloud computing.

In the meantime, developers and architects continue to put maximum effort into optimizing application performance in terms of smaller container images, faster startup and response times, and smaller memory footprints. This eventually enables the reduction of compute consumption in the application layer.

Java Designed For a Different Time

Java was born 27 years ago to run business services, with benefits such as high network throughput, long-running processes, and dynamic behavior for mutable systems. Decades ago, these were great features for developers writing flexible, rich applications for the Internet and running them on application servers across infrastructure ranging from physical servers to virtual machines.

However, things changed when Kubernetes and Linux containers were unleashed upon the world. They brought a new paradigm for re-architecting existing applications, which should now be treated as cattle rather than pets on the cloud. The main characteristics of these new applications are portability, immutability, and scalability at speed.

Unfortunately, the dynamic nature of Java is not a great benefit in this new era. Nevertheless, enterprises still maintain many mission-critical business applications built on the Java stack, which can be a roadblock to lifting and shifting workloads to cloud platforms. It also costs enterprises opportunities to reduce carbon dioxide emissions, since they must spend more money to keep monolithic applications running on traditional infrastructure.

Ironically, Java is still the 3rd most popular programming language according to the TIOBE Index. In response to this trend, many open source projects and tools (e.g., Shenandoah GC) came out to optimize Java performance in terms of throughput management by scaling, ephemeral status, and smaller memory footprints on immutable systems. Unfortunately, those efforts weren’t enough to convince developers to keep Java applications on Kubernetes clusters rather than switching to alternatives such as JavaScript and Python.

Serverless Java

As part of the endless effort to reduce compute resources in cloud computing, many enterprises, by periodically monitoring application workloads and resource usage, have realized that not all business services need to run all the time (e.g., 24 x 7 x 365).

For example, certain services (e.g., an order service) receive less than 10% of total traffic from end users and third parties. In that case, a serverless architecture can automatically scale the application down to zero when there is no network traffic to it for a certain period of time (e.g., 5 minutes or 30 seconds).
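On Knative, for instance, the idle window that triggers scale-to-zero can be tuned per revision with an autoscaling annotation. The sketch below is illustrative, not from this article’s project: the service name order-service and the 30-second window are assumptions.

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: order-service   # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Autoscaler observation window: with no traffic for 30s,
        # the revision becomes a candidate for scaling to zero
        autoscaling.knative.dev/window: "30s"
```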

In fact, the serverless behavior can be applied not only to HTTP-based microservices but also to distributed streaming services from the Internet of Things (IoT) edge devices and Kafka message servers.

Here is your question as a Java developer: “How does Java deal with a serverless architecture?” The bigger question is: “Can Java even fit into serverless application development?” Traditionally, developers tended not to run Java applications on AWS Lambda, according to the New Relic survey, due to the heavyweight packaging and dynamic behavior, as shown in Figure 1.

Figure 1. For the Love of Serverless

That’s why more developers would rather bring Node.js and Python applications to serverless platforms and Function as a Service (FaaS) than evolve existing Java applications. Don’t give up your Java skill set! You’ll see how to make your Java applications a better fit for serverless architecture in the next section.

Natively Native Java

Building a native executable Java application not only brings huge benefits, such as faster startup and response times with a smaller memory footprint, but also solves the above challenges of the traditional Java stack. Let’s take a deeper look at how a native executable works! The native executable is built with an ahead-of-time (AOT) compiler that produces a standalone native image containing the application classes, dependency libraries, and runtime. It’s similar to a Linux container image, which contains everything needed to run an application on any container runtime, including Kubernetes.

With native executables, you no longer need a Java Virtual Machine (JVM) to run your Java applications. Instead, the native image runs on SubstrateVM, the runtime component of GraalVM (e.g., garbage collector, thread scheduling).

Native compilation also helps developers stick with Java for serverless workloads, because native executables reduce cold-start time, one of the biggest challenges enterprises face when adopting a serverless architecture.

Here is a quick start on installing the C libraries and dependencies needed to compile your Java applications into a native executable image on your operating system.

Install C Libraries

Native compilation requires GCC and related C libraries. Install them using the following commands:

  • Fedora distributions:

$ sudo dnf install gcc glibc-devel zlib-devel libstdc++-static

  • Debian distributions:

$ sudo apt-get install build-essential libz-dev zlib1g-dev

  • macOS:

$ xcode-select --install

For more information on how to install GraalVM, visit this website.

Configure GraalVM

Set the GRAALVM_HOME environment variable based on your operating system:

  • Linux:

$ export GRAALVM_HOME=$HOME/Development/graalvm/

  • macOS:

$ export GRAALVM_HOME=$HOME/Development/graalvm/Contents/Home/

Install the native-image tool:

$ ${GRAALVM_HOME}/bin/gu install native-image

If you haven’t already done so, set the JAVA_HOME environment variable using the following command:

$ export JAVA_HOME=${GRAALVM_HOME}

However, producing a native image requires providing a lot of information about your application in advance. Reflection only works if a class or method has been explicitly registered, so Java developers must register reflection metadata for existing applications before building a native executable image.
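To see why this matters, here is a minimal plain-Java sketch (the class and method names are hypothetical). On the JVM, the reflective lookup below just works; in a GraalVM native image it fails at runtime unless Greeting is registered for reflection, e.g., in a reflect-config.json file or with Quarkus’s @RegisterForReflection annotation.

```java
// A minimal sketch of why native images need reflection metadata.
// Greeting is a hypothetical class looked up by name at runtime.
public class ReflectionDemo {
    public static class Greeting {
        public String say() { return "Hello from reflection!"; }
    }

    public static String callByName(String className) throws Exception {
        // Dynamic class lookup: the AOT compiler cannot see this dependency,
        // so the class must be explicitly registered for reflection.
        Class<?> clazz = Class.forName(className);
        Object instance = clazz.getDeclaredConstructor().newInstance();
        return (String) clazz.getMethod("say").invoke(instance);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(callByName("ReflectionDemo$Greeting"));
    }
}
```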

Getting Started with Kubernetes Native Java: Quarkus

What if you could continue developing your cloud-native microservices without spending too much time handling reflection, and simply build a native executable image before deploying it to the Kubernetes cluster? I’m pretty sure that would be great for Java developers.

Quarkus is an open source project that aims to provide a standard Java stack that enables Java developers not only to build a container-first application on OpenJDK, but also to compile a native executable for running on the Kubernetes cluster with the following benefits:

  • Move as much processing as possible to the build phase
  • Minimize runtime dependencies
  • Maximize dead code elimination
  • Introduce clear metadata contracts
  • Enhance the developer experience (e.g., Dev UI, Dev Services, Command Line)

Quarkus also provides an extension, Funqy, for writing portable serverless functions across serverless platforms such as OpenShift Serverless, Knative, AWS Lambda, Azure Functions, and Google Cloud Platform.

Here is a quick guide on how to create a new serverless function with a native executable compilation using Quarkus.

Create a New Serverless Java Project

Let’s scaffold a new Quarkus project to create a function using the Quarkus command line tool:

$ quarkus create quarkus-serverless-example -x funqy-http

The -x funqy-http flag adds the Funqy HTTP extension, enabling Quarkus Funqy capabilities. The output should look like this:

Creating an app (default project type, see --help).
-----------
selected extensions: 
- io.quarkus:quarkus-funqy-http

applying codestarts...
  java
  maven
  quarkus
  config-properties
  dockerfiles
  maven-wrapper
  funqy-http-codestarts

-----------

[SUCCESS] ✅  quarkus project has been successfully generated in:

--> /Users/USERNAME/quarkus-serverless-example
-----------

Explore the Function

Go to the root directory of the project, then open the MyFunctions.java file in the src/main/java/org/acme directory. A simple function method (fun) is generated by default to return a greeting message. The @Funq annotation turns an ordinary method into a function that you can access through a RESTful API.

@Funq
public String fun(FunInput input) {
  return String.format("Hello %s!", input != null ? input.name : "Funqy");
}

You can add your business logic with a new function or an existing one. Let’s leave the default code for now.
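As a sketch of what such business logic might look like, here is a hypothetical second function. In the Quarkus project the method would live in MyFunctions.java and carry the @Funq annotation so it is exposed as an endpoint; the method name and logic below are illustrative, not part of the generated project.

```java
// Hypothetical business logic for a second Funqy function.
// In the Quarkus project this method would be annotated with @Funq,
// making it reachable at the /shout endpoint.
public class ShoutFunction {
    public static String shout(String name) {
        // Fall back to "Funqy" when no name is supplied, like the default fun()
        String who = (name == null || name.isBlank()) ? "Funqy" : name;
        return String.format("HELLO %s!", who.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(shout("daniel")); // HELLO DANIEL!
    }
}
```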

Build and Deploy a Native Executable to Kubernetes

Quarkus provides an OpenShift extension to build the application and then deploy it to the Kubernetes cluster. Execute the following Quarkus command line to add the extension:

$ cd quarkus-serverless-example
$ quarkus ext add openshift

The output should look like this:

Looking for the newly published extensions in registry.quarkus.io

[SUCCESS] ✅  Extension io.quarkus:quarkus-openshift has been installed

Add the following configurations for the Kubernetes deployment to the application.properties file in the src/main/resources directory. You will need to replace YOUR_NAMESPACE with a namespace (e.g., doh-dev) where you deploy the function. 

quarkus.container-image.group=YOUR_NAMESPACE
quarkus.container-image.registry=image-registry.openshift-image-registry.svc:5000
quarkus.kubernetes-client.trust-certs=true
quarkus.kubernetes.deployment-target=knative
quarkus.kubernetes.deploy=true
quarkus.openshift.build-strategy=docker
quarkus.openshift.expose=true

You can also build a native executable image using a container runtime (e.g. Docker or Podman) when you set the following configuration:

quarkus.native.container-build=true

Note that you can find the solution repository here.

To deploy the function, you can use your own Kubernetes cluster (e.g., minikube), but I encourage you to use the Developer Sandbox for Red Hat OpenShift, which provides a shared Kubernetes cluster when you sign up for a free account. The sandbox lets you spin up a new Kubernetes cluster in 10 minutes without any local installation or configuration.

Execute the following Quarkus command line to build and deploy the function to the Kubernetes cluster:

$ quarkus build --native --no-tests

The output should end with a BUILD SUCCESS message.

When you go to the Topology view in the OpenShift Dev Console, your Java function (e.g., quarkus-serverless-example-00001) is already deployed. The function might be scaled down to zero, since by default the Knative service scales the function down after 30 seconds without network traffic to the function pod, as shown in Figure 2.

Figure 2. Function in Topology view

Note that you can add new labels to the REV (revision) and KSVC (Knative service) to display the pod as a Quarkus function, which makes it easy to distinguish from the other pods in the Topology view. Use the oc command line as below:

  • Add a Quarkus label to the REV:

$ oc label rev/quarkus-serverless-example-00001 app.openshift.io/runtime=quarkus --overwrite

  • Add a Function label to the KSVC:

$ oc label ksvc/quarkus-serverless-example boson.dev/function=true --overwrite

Copy the route URL, then paste it into the following curl command to access the function. For example, the route looks like https://quarkus-serverless-example-doh-dev.apps.sandbox.x8i5.p1.openshiftapps.com.

$ curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"name":"Daniel"}' \
 YOUR_ROUTE_URL/fun 

The output should look like this:

Hello Daniel!

When you go back to the Topology view, you will see the function pod spin up automatically within a second, as shown in Figure 3.

Figure 3. Scale-up Function

When you view the pod’s logs, you will see that the Java serverless function is running as a native image, and that it took only 17 milliseconds to start up, as shown in Figure 4.

Figure 4. Startup time for a native executable

What a supersonic, subatomic application! From now on, Java serverless functions enable you to optimize resource usage and reduce carbon dioxide emissions on Kubernetes.

Conclusion

This article showed how Java serverless applications can help your organization reduce carbon dioxide emissions through higher resource density than other programming languages on a container platform (e.g., Kubernetes), as shown in Figure 5.

Figure 5. Resource density of multiple applications on a container platform

Developers can also build a native image for a Java application by choosing one of three GraalVM distributions: Oracle GraalVM Community Edition (CE), Oracle GraalVM Enterprise Edition (EE), and Mandrel. Find more about the differences between GraalVM and Mandrel here. To continue your Kubernetes native Java journey, visit this website.
