
Live Coding 12-Factor App

Summary

Emily Jiang performs live coding of building 12-factor microservices using the MicroProfile programming model and gets them running on Open Liberty and Quarkus. Jiang deploys them to Kubernetes and Istio to see how they function.

Bio

Emily Jiang is a Java Champion. She is Liberty Microservices Architect and Advocate, and an STSM at IBM, based at the Hursley Lab in the UK. She is a senior MicroProfile lead, has been working on MicroProfile since 2016, and leads the MicroProfile Config, Fault Tolerance, and Service Mesh specifications. She is a CDI Expert Group member.

About the conference

Software is changing the world. QCon empowers software development by facilitating the spread of knowledge and innovation in the developer community. A practitioner-driven conference, QCon is designed for technical team leads, architects, engineering directors, and project managers who influence innovation in their teams.

Transcript

Jiang: I'm Emily Jiang. I work for IBM, based at the Hursley Lab in the UK. I'm a Senior Technical Staff Member at IBM. I mainly work on open-source projects, MicroProfile and Open Liberty, as an architect. This is my Twitter handle; if you want to know anything about MicroProfile or Open Liberty, follow me on Twitter.

Twelve-Factor In a Nutshell

First, the basic concepts of Twelve-Factor. Who knows about Twelve-Factor? Who is doing Twelve-Factor? What is Twelve-Factor? Twelve-Factor is a methodology of best practices, published by Heroku. If you do not know Twelve-Factor, go to that website and check it out. Why do we need to care about Twelve-Factor? Basically, it's like working in a team: you need to lay out the roles and responsibilities between the parties, otherwise they will keep on fighting. Twelve-Factor is here to lay out the boundary between the application and the cloud infrastructure. That's Twelve-Factor.

What Is a Twelve-Factor App?

This is the TL;DR version of Twelve-Factor. Basically, as it says, it clearly defines the contract between the application and the underlying operating system. Once you comply with the twelve factors, your application can be deployed to cloud infrastructure without much problem, and it can scale up or scale down without much hassle.

Why It Is Called Twelve-Factor

Why is it called Twelve-Factor? Because it has 12 factors. These are the 12 things. Before I go through each factor in greater detail, I want to lay out some background. The basic ingredients I'm going to use are Kubernetes and MicroProfile. Who knows Kubernetes? Who knows MicroProfile? Not many people know about MicroProfile.

MicroProfile

MicroProfile has been around for three years. It was established by IBM, Red Hat, Payara, the London Java Community, and [inaudible 00:03:53]. It grew really fast. [inaudible 00:03:55], Microsoft, and Oracle all joined this initiative. It is completely community driven, with a very flat structure: everyone's opinion matters. Who has a Java EE background? MicroProfile is basically built on top of Java EE. One difference is that MicroProfile doesn't have the concept of a reference implementation; it has dozens of implementations. It releases very fast, with lightweight and iterative releases. All the APIs and TCKs are freely available on Maven Central.

The MicroProfile Community

The MicroProfile community has over 1000 vendors, around 169 contributors. As you can see here, from IBM we have Open Liberty and WebSphere Liberty. Who knows Open Liberty? A few. WebSphere Liberty? Open Liberty is the open-source version of WebSphere Liberty. Red Hat has Quarkus, WildFly, and Thorntail. Who knows Quarkus? In this demo, I'm going to demonstrate Open Liberty and Quarkus. Basically, I've created two microservices: service A on Open Liberty and service B on Quarkus. You will see how they interact with each other on Kubernetes. This one is the productized version of GlassFish. This is Apache TomEE. This is Helidon from Oracle. This is Launcher from Fujitsu. This is KumuluzEE, a very small one. Hammock. This is Piranha; it is quite new. That's enough information about MicroProfile. It has a very fast release cadence: every year we have three releases. In February we released 3.3, which has many specifications inside.

Codebase

The first one is the codebase. Basically, it says: one microservice, one codebase. One team should own the microservice, and the microservice should have a dedicated release plan, so the team controls when and how often to release. It's a small team as well. In your daily work, this translates to: one microservice, one GitHub repo.

Dependencies

Once you do microservices, you should clearly specify all third-party dependencies. Who is using Maven? Gradle? There are more Maven users, and that's what I see across the board: more Maven developers than Gradle. Basically, in your Maven pom.xml, you clearly specify all the dependencies. Do not package your third-party dependencies in your application; you should always be free to update a version in case it has a security vulnerability, instead of staying with outdated third-party libraries. Do not expect your cloud infrastructure to supply these dependencies for you.

Config

Externalize your configuration. This enables build once, configure everywhere. Basically, if you externalize your configuration, your application gets the config value from some underlying config source, whether that is a Kubernetes ConfigMap, a database, or a secret. The beauty is that MicroProfile Config can inject the config values directly into your microservices; you don't even need to know where these values come from. If you deploy to cloud infrastructure, you freely configure it anywhere and it will get the value for you. In this way, you achieve build once, configure everywhere: you do not need to repackage your application just to change a config value, because it has been externalized.

This shows you how. MicroProfile Config comes with two flavors of API. The first is a programmatic way to look up the config value: a value like myProp could come from JVM system properties, environment variables, ZooKeeper, or somewhere else, and this API gets it directly from the config source. The second is the injection model: you say @Inject @ConfigProperty with the name my.string.property, and the value is injected into the variable. It's that easy. Who knows CDI? For those who do not know CDI, do you know Spring injection? @Inject is equivalent to @Autowired; when you see @Inject, just think @Autowired.
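
For reference, a minimal sketch of both styles, assuming a CDI bean, the javax.* namespace used by the MicroProfile 3.x generation, and illustrative property names taken from the description above:

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.Config;
import org.eclipse.microprofile.config.ConfigProvider;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class ConfiguredBean {

    // Injection model: the value is resolved from whichever config source
    // (system properties, environment variables, microprofile-config.properties,
    // a ConfigMap-backed source, ...) wins for this key.
    @Inject
    @ConfigProperty(name = "my.string.property", defaultValue = "some-default")
    String myStringProperty;

    // Programmatic lookup of a value such as myProp.
    public String lookUpMyProp() {
        Config config = ConfigProvider.getConfig();
        return config.getValue("myProp", String.class);
    }
}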

Backing Services

You should treat backing services as attached resources. Basically, say service A directly calls into service B. For example, I'm trying to post to social media: I could use Twitter, or, if Twitter is not up, I could use Facebook. The beauty is that if you define this as an API, you can plug in Twitter or you can plug in Facebook. Treat backing services as attached resources so you can easily swap them out.

Here, I show how you can do that. With the MicroProfile REST Client, you just declare the API as an interface and mark it as a REST client. Then in your service A, in a JAX-RS resource, you inject it directly; the underlying implementation creates the instance and injects it into the variable. Then you would ask, where does the service come from? That is the binding: the fully qualified class name followed by /mp-rest/url, whose value is where the service lives. For example, you could set this value to Twitter, and later change it to Facebook. This is configuration, so you can configure it in your Kubernetes ConfigMap.
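
A sketch of that pattern, with hypothetical class names (ServiceBClient, ServiceAResource) and again the javax.* namespace:

// The API that service A expects from service B, declared as a REST client.
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

@RegisterRestClient
@Path("/data")
public interface ServiceBClient {
    @GET
    String doSomething();
}

// In service A, a JAX-RS resource injects the client; the implementation is
// generated for you.
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@Path("/client")
@RequestScoped
public class ServiceAResource {

    @Inject
    @RestClient
    ServiceBClient serviceB;

    @GET
    public String callServiceB() {
        return serviceB.doSomething();
    }
}

The binding lives in configuration: assuming the interface sits in a package like com.example.client, the key would be com.example.client.ServiceBClient/mp-rest/url=http://localhost:8080, so swapping the backend is just a config change, for instance in a Kubernetes ConfigMap.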

Build, Release, Run

You should strictly separate build, release, and run as different stages. In the build stage you do your development, then a Maven clean install, and test it out. Then you say, "Let me release it," and you can release it to Maven Central. Releasing doesn't mean it automatically runs; you then decide how to run it. You could use Kubernetes, or you can do A/B testing with some users first, then a blue-green deployment, and gradually phase out the old version of the service. Also, you need to consider the CI/CD pipeline. Have you heard of Tekton? It's a new open-source CI/CD pipeline initiative, doing continuous integration and continuous delivery. Check it out.

Processes

Basically, Twelve-Factor recommends stateless processes. In this way, when you scale up or scale down, you lose nothing because the processes don't hold any state. You can use JAX-RS to interact with this microservice.

Port Binding

You export your service via port binding. If you know Kubernetes already: when you deploy microservices to Kubernetes, it assigns you a port, and the port you specified is only visible inside the pod. If you want the external world to reach you, you have to do some wiring; you cannot hardcode a port in your application. MicroProfile Config can inject the correct port for you: if you deploy to Kubernetes and in your deployment YAML file you say, "I live on this port," MicroProfile Config can inject the correct value for you.
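
Loosely, as a sketch (the key service.b.port and the matching environment variable are made up for illustration): environment variables are one of MicroProfile Config's config sources, so a value set in the deployment YAML can be injected rather than hardcoded.

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@ApplicationScoped
public class DownstreamPortHolder {

    // Hypothetical key: the deployment YAML could set an environment variable
    // for this (recent MicroProfile Config releases map SERVICE_B_PORT to
    // service.b.port), so the injected value follows whatever the deployment says.
    @Inject
    @ConfigProperty(name = "service.b.port", defaultValue = "8080")
    int serviceBPort;

    public int getServiceBPort() {
        return serviceBPort;
    }
}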

Concurrency

Concurrency is about scaling. It says that in your application you don't need to worry about when to scale up or scale down; the cloud infrastructure can do it for you. With a Kubernetes deployment you can do the autoscaling: scale up and scale down depending on the load, and you can even go down to zero instances.

Disposability

Basically, disposability says your microservice needs to be very fast: fast startup and fast shutdown. I've been wondering why, and it comes down to money. While it is starting up, it is not doing anything; while it is shutting down, again, it is not doing anything; either way, you still need to pay the bill. The other thing is that your microservice needs to be robust: it should keep functioning whatever the situation is. We use MicroProfile Fault Tolerance to make it resilient. MicroProfile Fault Tolerance has retries, a circuit breaker, a bulkhead to control how many concurrent requests can access an endpoint, and a timeout for time-critical operations. A fallback is provided as a last-resort solution, a contingency plan.
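
A sketch of what those annotations look like on a single guarded method (illustrative names, javax.* namespace):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.faulttolerance.Bulkhead;
import org.eclipse.microprofile.faulttolerance.CircuitBreaker;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.faulttolerance.Timeout;

@ApplicationScoped
public class ResilientClient {

    @Retry(maxRetries = 3)                      // retry transient failures
    @CircuitBreaker(requestVolumeThreshold = 4) // stop hammering a failing backend
    @Bulkhead(10)                               // at most 10 concurrent calls
    @Timeout(100)                               // give up after 100 ms (milliseconds is the default unit)
    @Fallback(fallbackMethod = "contingency")   // last-resort plan
    public String callBackend() {
        // call the remote endpoint here
        return "real response";
    }

    // Must have the same signature as the guarded method.
    public String contingency() {
        return "fallback response";
    }
}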

Development/Production Parity

Basically, keep development and production as similar as possible. Ideally, the same team does everything: you do the development and also the maintenance, so you know how hard it is to debug if you don't write good code.

Logs

Treat logs as event streams. There are two best practices. One is to write logs to standard out, to stream them out and direct them to the [inaudible 00:17:40]. The other is the format: who is using JSON format? If you use JSON format, you are good; JSON is the mainstream choice. If you are not using JSON format, you should seriously think about it.

Admin Process

The last one is the admin process. Basically, if it's a one-off process, don't code it into your microservice. Utilize Kubernetes Jobs, and you can execute them using kubectl.

The Factors

Codebase: GitHub. Dependencies: Maven or Gradle. Config, backing services, processes, port binding, disposability: MicroProfile can do those for you. Concurrency: Kubernetes can do that for you, with replicas. Dev/prod parity: a best practice. Logs: again, a best practice. Admin processes: Kubernetes can do that for you. Basically, MicroProfile plus Kubernetes is all you need.

How to Get Started

How can I create a microservice? There are a couple of ways. One is using MicroProfile Starter, which comes directly from the MicroProfile community. The other is using Appsody, IBM's open-sourced cloud-native microservice generator. I will show you how to use MicroProfile Starter to create a microservice.

Demo - MicroProfile Starter

First, I need to make sure everything is set up. They're all good. This is MicroProfile Starter. I'm going to create two microservices. For the first one I use MicroProfile 3.2 and choose Open Liberty, select all the specifications, and download it. Let me give it a good name. I have two services. I want to open this and show you what it looks like. Do you use VS Code, IntelliJ, Eclipse? They're all good. Here, I just have a JAX-RS application saying Hello World, running on Open Liberty. I'm going to start this application in development mode, which means it starts up, I can code, and you see the changes straight away. While it's doing that, let me create another service on Quarkus. I choose Quarkus and open it up as service B. Basically, I want service A on Open Liberty talking to service B on Quarkus. Quarkus also has a development mode; I do mvn compile quarkus:dev. For this live demo, it is pair programming: if it doesn't work, you share the responsibility. Quarkus is on port 8080, and Liberty is on port 9080. Let's go and have a look. If I go to localhost:9080, this is Liberty, and it says Hello World. Just to demonstrate that I'm not lying, I can say "at QCon". That's dev mode in action; you can see it clearly, and I can go back.
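
The generated project is essentially a single JAX-RS endpoint; a sketch of what the Starter's hello-world resource looks like (class and path names may differ from the actual generated code; javax.* namespace):

import javax.enterprise.context.RequestScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
@RequestScoped
public class HelloController {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String sayHello() {
        // Edit this string while the dev-mode server is running and refresh:
        // the change shows up straight away.
        return "Hello World";
    }
}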

It works the same way as on Open Liberty, and Quarkus is also really cool. Let's see Quarkus; it's running on port 8080. Let me open the code there. The Quarkus page shows the injected parameter values, the port 8080 and the host, which you can see from the URL. If I change something and say "Quarkus is serious", it's also in development mode, so the change shows up straight away. At the moment, Quarkus lives in its own little world and Open Liberty lives in isolation.

Now I want service A on Open Liberty to call directly into Quarkus. How do I do it? I go back to the code here. This is service A, and I have modeled the API here as a client: this is the API, doSomething. In Quarkus, it is doSomething, the same API; Quarkus has implemented that API. In service A, I just declare this API, register it as a REST client, and inject it directly. How do I know which server I'm going to call? How can I bind them? The binding happens through the fully qualified class name followed by /mp-rest/url. Here it was generated as 9081. If I click on here now and try to invoke this client, I get nothing, because I haven't wired it to anywhere: 9081 doesn't exist. I want to call into Quarkus, so I change it to 8080; notice that this is exactly the URL I used out here. I save that, I call into here, and you can see the Quarkus response. It's that easy in the local environment. Then you say, that's local, nothing fancy. How can I manage it in Kubernetes?

The next step is to dockerize them and deploy to Kubernetes. We're happy with the development, so let me stop this one, and also stop service A. To dockerize it, I need a Dockerfile. I have a pre-canned Dockerfile, so I just quickly copy it over; this is the Liberty one. Then I can do a Docker build: docker build -t liberty, that's the image name, and I give it version 1. It will do the build, but I need to do a Maven clean install first. While that runs, let me show you what the Dockerfile looks like: it copies the files used by Liberty, the server.xml, bootstrap.properties, and server.env, copies the war, and runs the configuration script. The Maven build has finished, so I can do the Docker build.

Similarly, I can do service B. I can copy its Dockerfile; before I copy it, I need to build service B as well. You might want to take a look at Quarkus for services like this, because it's very small and very fast. Then you copy the Dockerfile here and do a Docker build. Then you can do a docker run with a port binding on 8080 and test whether you can still access the application. Let's quickly have a look at port 8080. Yes, it seems to work. That's good stuff. Liberty has been dockerized as well, so that's good to go. We can also test it out, if you want, using this command.

The next one I'm not going to test; I want to deploy it to Kubernetes. To deploy to Kubernetes, I need a YAML file, a deployment file, and here I can directly copy one I prepared. Let me first show you the Quarkus version of the Dockerfile: it copies the JAR files into the image and then runs Java. How do I deploy them? I have two services, so I need to create a deployment and a service. For service A, I have a deployment: this is my Docker image, version 1.0, and this is port 9080. Then this is the binding, which is very important. How do I do the service binding? This is my MicroProfile REST Client; it was previously bound to localhost:8080, but once you are in Kubernetes there is no localhost:8080 anymore. That's the deployment. For service B, I have declared a Kubernetes service: its selector is the service B app, it has a port, 8080, on a cluster IP, so other pods in the same cluster can reach it on that port, but it is not externalized. I point the backend URL at that service. If you want Kubernetes to do a health check, which endpoint should it call? In my app, I have two endpoints, health/live and health/ready; they come from the MicroProfile Health check.
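
A sketch of where those two endpoints come from: MicroProfile Health serves /health/live and /health/ready from beans annotated @Liveness and @Readiness (class names here are illustrative):

import javax.enterprise.context.ApplicationScoped;
import org.eclipse.microprofile.health.HealthCheck;
import org.eclipse.microprofile.health.HealthCheckResponse;
import org.eclipse.microprofile.health.Liveness;

// Reported under /health/live; an analogous bean annotated @Readiness is
// reported under /health/ready.
@Liveness
@ApplicationScoped
public class ServiceALivenessCheck implements HealthCheck {

    @Override
    public HealthCheckResponse call() {
        return HealthCheckResponse.up("service-a-liveness");
    }
}

The Kubernetes deployment can then point its livenessProbe and readinessProbe at those two paths.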

Next, I can do a kubectl apply with [inaudible 00:34:56] and deploy it. Then I can do kubectl get pods and keep refreshing. There are two pods. Why two? Because in my deployment I asked for two replicas. If you use a Kubernetes autoscaler, it can adjust that for you. How do I know which port is exposed? I can do kubectl get services, and here is the node port, 30080. Let's go straight there: instead of localhost:8080, it's this node port. It works, and you get the Quarkus response. I've got the basic plumbing working: service A calls into service B in Kubernetes without much hassle. That's all working.

Demo - Configuring the Application

I want to demonstrate how I can configure my application. This is incremental development: I've got the basic plumbing working. Let's go back to service A. I have this Hello World controller that says Hello World at QCon, but that's only for QCon, and maybe I want to say some other conference. I want to externalize this configuration. You just use MicroProfile Config to inject it: @Inject @ConfigProperty, all from MicroProfile. I say name = "conference", and the type is String conf, then import that. I need to declare the conference value somewhere; I can declare it in my application first. It says conference, QCon London Wednesday. I save it. Let me delete the deployment for the time being and go back to mvn liberty:dev, because I'm doing a second round of incremental development. I start this application on 9080 again; I just need to do a Maven clean build first. That's the configuration.
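
A sketch of that change in service A (the controller and property names mirror the ones described in the demo; the properties file path is the MicroProfile default):

import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.config.inject.ConfigProperty;

@Path("/hello")
@RequestScoped
public class HelloController {

    @Inject
    @ConfigProperty(name = "conference")
    String conf;

    @GET
    public String sayHello() {
        return "Hello World at " + conf;
    }
}

// src/main/resources/META-INF/microprofile-config.properties:
// conference=QCon London Wednesday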

What if service B sometimes isn't functioning well? Let's put a bug in service B. Service B is very simple, but I want it to fail every third time: I increment a counter, and every third call I throw a new RuntimeException. Let me restart Quarkus. Service A is up; let's have a look. Where did I save it? Let's go to service A, over here. I inject "QCon London Wednesday": the configuration is externalized, and later on you can override it in Kubernetes. Now, on the Quarkus side, sometimes it isn't working: service A calls directly into service B, and when I call it, sometimes it fails. What are you going to do? Call your friend — "service B is down, come and fix it"? What if you don't know anybody on the service B team? You have to handle it yourself; nobody else is available at the moment you need something fixed. This is where you can rely on MicroProfile Fault Tolerance. While calling this service, I can say: if something doesn't work, retry it. Let's see — now it always works. Look carefully at the top: for one brief moment you can see the loading circle, because it did a retry, but it always works.
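
A sketch of both halves of that step, the artificial failure in service B and the @Retry on service A's call (counter handling simplified; names carried over from the earlier sketches):

// Service B (Quarkus): every third request fails on purpose.
import java.util.concurrent.atomic.AtomicInteger;
import javax.enterprise.context.ApplicationScoped;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

@Path("/data")
@ApplicationScoped
public class ServiceBResource {

    private final AtomicInteger count = new AtomicInteger();

    @GET
    public String doSomething() {
        if (count.incrementAndGet() % 3 == 0) {
            throw new RuntimeException("simulated failure in service B");
        }
        return "hello from service B";
    }
}

// Service A (Open Liberty): retry so the caller rarely sees the failure.
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.faulttolerance.Retry;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@Path("/client")
@RequestScoped
public class ServiceAResource {

    @Inject
    @RestClient
    ServiceBClient serviceB;

    @GET
    @Retry(maxRetries = 2)
    public String callServiceB() {
        return serviceB.doSomething();
    }
}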

The other thing is, what if you are doing time-critical work? You don't have a lot of time to wait. Let's plug another delay into service B: again, if the count is divisible by five, I sleep for one second. Now, in service A, when you call it, sometimes it takes longer to come back. What if this is really time-critical, lifesaving stuff, and you really want the response within 100 milliseconds? I'll show you how MicroProfile Fault Tolerance can help you with that.

Because you cannot control service B, you put the control in service A. In your service, you can say: I don't have patience, I can only wait for 100 milliseconds. It's very important that you make it asynchronous: if the underlying thread cannot be interrupted, the asynchronous call still lets the caller give up after the timeout and carry on. You return a CompletionStage. Who is familiar with CompletionStage? If you're thinking about asynchronous work, you should look into CompletionStage. Here, I do a CompletableFuture.completedFuture around the service call. Save it, and there is no long delay anymore.
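
A sketch of the time-boxed, asynchronous version of the same call (continuing the hypothetical ServiceAResource):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.faulttolerance.Asynchronous;
import org.eclipse.microprofile.faulttolerance.Timeout;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@Path("/client")
@RequestScoped
public class ServiceAResource {

    @Inject
    @RestClient
    ServiceBClient serviceB;

    // @Asynchronous runs the call on another thread, so even if that thread
    // cannot be interrupted, the caller gets a timeout after 100 ms instead of
    // waiting for a slow service B.
    @GET
    @Asynchronous
    @Timeout(100) // milliseconds is the default unit
    public CompletionStage<String> callServiceB() {
        return CompletableFuture.completedFuture(serviceB.doSomething());
    }
}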

The next thing is that you really want to make a bulletproof microservice. For that, you need to think about a fallback, because sometimes you do need your friend. Here, you can say: if something is not working, I want to fall back. In the fallback method you can do whatever makes sense; interestingly, you can even call into a different service if you want. The essential thing is that it needs to have the same signature as the guarded method, and the visibility can be private, public, or protected. Then you say: if something is not working, please fall back, with fallbackMethod = "fallback", the method name. In this demo, because the call now always works, you may not actually see the fallback kick in, but you can always plug it in just in case. Here, I used MicroProfile to do all of it. After all these changes, I can dockerize the services again, push a new version to Kubernetes, and get it working there.
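
And a sketch with the fallback added (same hypothetical resource; the fallback method's signature matches the guarded method):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import javax.enterprise.context.RequestScoped;
import javax.inject.Inject;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import org.eclipse.microprofile.faulttolerance.Fallback;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@Path("/client")
@RequestScoped
public class ServiceAResource {

    @Inject
    @RestClient
    ServiceBClient serviceB;

    @GET
    @Fallback(fallbackMethod = "fallback")
    public CompletionStage<String> callServiceB() {
        return CompletableFuture.completedFuture(serviceB.doSomething());
    }

    // Same signature as the guarded method; it could call a different service
    // entirely, the contingency plan.
    private CompletionStage<String> fallback() {
        return CompletableFuture.completedFuture("fallback response from service A");
    }
}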

Recap

Basically, I demonstrated how to use MicroProfile to fulfill the clear contract. I demonstrated config, backing services, processes, disposability, and port binding. My replicas demonstrate concurrency. The admin process is one of the tasks you handle yourself. For my microservices, I have three parts: service A is checked into its own Git repo, service B is checked into its own repo, and my deployment YAML files are checked in as well. They're all in version control.

 


Recorded at:

Aug 26, 2020


Community comments

  • Again, hello world coding plus Kubernetes

    by Armen Arzumanyan,

    Thanks for the presentation, but in the past two years we have seen hello world plus Kubernetes demos with no frontend. In the real world, public clouds provide primitive services at high cost and with a lot of problems. MicroProfile is good, but it does not contain anything new, and I do not see any real reason to choose MicroProfile. Java EE's main advantages are lost; we see only primitive web apps.

  • Interesting presentation!

    by Anit Shrestha Manandhar,

    I would love to see a full-fledged demo involving CI/CD next time. Thank you.
