Cellery: A Code-First Approach to Deploy Applications on Kubernetes

Key Takeaways

  • Despite the benefits of a microservice architecture (MSA), managing hundreds of loosely-coupled microservices can quickly become a hassle. This is why cell-based architecture (CBA) was designed. 
  • CBA, a microservice architecture pattern, primarily requires multiple microservices (and other components) to be grouped into easily manageable and reusable building blocks called cells. 
  • Creating a CBA on a container orchestration platform from scratch is laborious. Kubernetes is the de facto container orchestration platform in the industry at the time of this writing; however, writing Kubernetes artifacts in YAML for this purpose is not an easy task. 
  • Cellery allows developers to follow a code-first approach and takes care of the underlying complexities of implementing a CBA.
  • Cellery is a combination of an SDK, a runtime, and a management framework.
     

Introducing Cellery

What exactly is Cellery and how can it help with deploying and managing applications on Kubernetes? Cellery is a code-first approach to building, integrating, running, and managing composite applications on Kubernetes. The building blocks of such composite applications are called cells—hence the name Cellery, NOT Celery. In order to understand cells and Cellery, let’s see how an existing Kubernetes application written by Google can be deployed, managed, and observed using Cellery. But, before we look at that, let’s first learn what cells are and how Cellery works.

What’s a Cell?

Let’s take a look at why it is valuable to have composite components in a microservices architecture.

Microservices are a popular choice for building complex and evolving applications with a shorter time to production and the ability to innovate faster. Each service can be developed independently by a team that is focused on that service, and that team is free to choose whatever technologies make sense. Above all, microservices are reusable, and each service can be scaled independently, allowing teams to use optimized deployment infrastructure that best matches a service’s resource requirements. Developers can make changes that are local to their service and deploy them as soon as they have been tested. So, what are the challenges?

The use of microservices (including serverless functions) is growing at a rapid rate as organizations aim to increase the speed of development, improve scalability, and organize around business-capability-oriented teams. In an enterprise with tens or hundreds of applications, the prospect of managing so many loosely-coupled microservices is not only an operational nightmare, but also poses challenges with respect to team communication as well as discovery, versioning, and observability of services. There are more services, more communication paths, much more complicated networking arrangements, and more areas of potential failure. This creates a need for higher-level constructs that aggregate multiple microservices and serverless functions into easily manageable and reusable building blocks.

Cell-based architecture, a microservice architecture pattern, proposes that microservices, data, and other functional components of a system (including front-end applications, legacy services, brokers, gateways, and adapters to legacy systems) should be grouped into cohesive, individually deployable architecture units, known as cells.

This grouping of components is usually done based on scope, ownership, and inter-dependencies between these components. Each cell should be individually designed and developed and should be independently deployable, manageable, and observable. Furthermore, components inside the cell can communicate with each other using supported transports for intra-cell communication. However, all incoming service requests must first come through the cell gateway, which provides secure APIs, events, or streams via governed network endpoints using standard network protocols. Teams can self-organize to produce cells which are continuously deployed and incrementally updated. Below is a depiction of a simple cell in a cell-based architecture:  


Figure 1: A self-contained architecture unit: cell

A Practical Approach to Cell-Based Architecture

Cellery is designed to create applications that follow the principles of cell-based architecture on Kubernetes. With Cellery, we can write code to define a cell and its components by pointing to existing container images of the cell’s constituent microservices (and other components) and define the relationships between those components, dependencies on other cells, and the cell API (gateway). The cell definition code can then be used to produce cell images. In fact, we can trigger a CI/CD pipeline once we commit the cell definition code to a version control repository. A CI/CD system such as Jenkins can build the cell image, test it, and push it to a container repository. Thereafter, the CI/CD system can pull the cell image and deploy it in the respective production environment. Moreover, we can update, scale, and observe the deployed cells just like any other application deployed on Kubernetes.



Figure 2: The DevOps flow for Cellery
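
To make this flow concrete, such a pipeline essentially boils down to a handful of Cellery CLI invocations. Below is a minimal sketch; the organization, image, and instance names are hypothetical, and a real pipeline would also run tests and apply environment-specific configuration:

# CI stage: build a cell image from the committed cell definition and push it to a cell image repository
$ cellery build my-cell.bal myorg/my-cell:1.0.0
$ cellery push myorg/my-cell:1.0.0

# CD stage: pull the published cell image and start an instance of it in the target environment
$ cellery pull myorg/my-cell:1.0.0
$ cellery run myorg/my-cell:1.0.0 -n my-cell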

Creating Cells with Cellery

In a nutshell, Cellery is a combination of an SDK, a runtime, and a management framework. When you install Cellery, you can run commands using the Cellery CLI to carry out various tasks.

First of all, you should create a Kubernetes cluster on your local machine or point to an existing Kubernetes cluster as your Cellery runtime environment. When you run the simple command cellery setup, an interactive CLI prompts you to select your deployment preferences, and Cellery configures the Kubernetes cluster for you accordingly.

Once the Cellery runtime environment is set up, you can start coding cells with the Cellery language. The Cellery language is based on the Ballerina programming language and therefore comes with IDE support via VS Code and IntelliJ IDEA. To auto-generate a cell definition file with the standard import statements and required functions, you can use the cellery init command. Next, you can define the components and complete the build and run logic using the Cellery language. The Cellery compiler will then compile the code and create the corresponding Kubernetes artifacts with a simple cellery build command.

To deploy the cells on the Cellery runtime environment, run the cellery run command with the necessary parameters. You can push a built image to a cell image repository with the cellery push command, pull a cell image from a repository with the cellery pull command, and integrate the build and deployment flows into a CI/CD pipeline. Moreover, you can view a visual representation of the cell through the cellery view command. Cellery also offers capabilities to test the cells and tools for observability, which allow cells to be monitored, logged, and traced.
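
Putting these commands together, a typical development loop might look like the following sketch (the file, image, and instance names are hypothetical):

$ cellery setup                                    # select deployment preferences interactively and configure the cluster
$ cellery init                                     # auto-generate a cell definition file with the build and run functions
$ cellery build my-cell.bal myorg/my-cell:latest   # compile the cell definition into a cell image
$ cellery run myorg/my-cell:latest -n my-cell      # create a running instance of the cell image
$ cellery view myorg/my-cell:latest                # open a visual representation of the cell
$ cellery list instances                           # list the running cell instances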

Why Cellery? Why Not Configure a Kubernetes Deployment for Cells with YAML?

For organizations that have adopted containers, developers not only have to create microservices, but also understand the nuances of a container orchestration system such as Kubernetes. Furthermore, creating a CBA from scratch involves tasks such as configuring a service mesh, handling service authentication, and configuring security policies that adhere to CBA principles, in addition to provisioning Kubernetes clusters and creating, deploying, and managing apps. Therefore, configuring cells with standard Kubernetes resources requires some serious Kubernetes expertise.

Moreover, Kubernetes resources, such as pods, services, and deployments, are created declaratively using YAML files. As deployments grow, DevOps teams struggle with ever-growing, increasingly complex YAML code without the support of sophisticated IDEs to keep them productive. Also, by its nature, YAML encourages the repetition of large chunks of code because it lacks support for programming concepts such as functions, abstraction, and encapsulation. So, creating complex deployments on Kubernetes means that DevOps teams have to go through the tedious and daunting process of writing and maintaining YAML files that can be thousands of lines long. This can be highly error-prone.
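
For a sense of the scale involved, even a single stateless microservice needs at least a Deployment and a Service in raw Kubernetes YAML. The snippet below is a minimal, purely illustrative sketch (the service name, image, and port are hypothetical), and it still omits the resource limits, probes, autoscaling, and security configuration that a production deployment would repeat for every microservice:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: example.org/my-service:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
        env:
        - name: PORT
          value: "8080"
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - port: 8080
    targetPort: 8080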

Not only does Cellery use type-safe, validated code instead of YAML to define deployments, it also takes care of underlying complexities such as configuring the deployment, wiring cells and services together, and autoscaling. Also, cells written using Cellery are secure by default through security mechanisms such as single sign-on, tokens, policy-based access control, and mTLS. Cellery is designed around DevOps practices so that building, pushing, pulling, testing, deploying, and updating using blue/green and canary deployments can be done seamlessly. The deployments are also observable with monitoring and tracing.

In brief, Cellery aims to simplify the configuration, building, testing, and deployment of applications on Kubernetes. As explained above, the project has tried to address this problem from different angles, including development, DevOps, security, and observability.

Using Cellery: An Example

Let’s take a look at a real-world microservices sample that you can try out for yourself.  (To learn how to code cells with Cellery, you can check out the Cellery syntax and try out a few samples.)

We made use of Google’s "Hipster Shop" demo application for this purpose. All details, source code, Docker files, etc. of the original Hipster Shop demo application can be found here. This sample is compatible with Cellery version 0.3.0.

The Hipster Shop application is a multi-tier, polyglot, web-based e-commerce application. Users can browse items, add them to the cart, and purchase them using the application. Hipster Shop is composed of a front end and several microservices, which communicate with each other over gRPC. The service architecture is shown in Figure 3, and descriptions of the Hipster Shop microservices can be found in Table 1.


Figure 3: Service Architecture of the Hipster Shop Application

Service               | Language      | Description
frontend              | Go            | Exposes an HTTP server to serve the website. Does not require signup/login and generates session IDs for all users automatically.
cartservice           | C#            | Stores the items in the user’s shopping cart in Redis and retrieves them.
productcatalogservice | Go            | Provides the list of products from a JSON file and the ability to search products and get individual products.
currencyservice       | Node.js       | Converts one money amount to another currency. Uses real values fetched from the European Central Bank. It is the highest-QPS service.
paymentservice        | Node.js       | Charges the given credit card info (mock) with the given amount and returns a transaction ID.
shippingservice       | Go            | Gives shipping cost estimates based on the shopping cart. Ships items to the given address (mock).
emailservice          | Python        | Sends users an order confirmation email (mock).
checkoutservice       | Go            | Retrieves the user’s cart, prepares the order, and orchestrates the payment, shipping, and email notification.
recommendationservice | Python        | Recommends other products based on what’s in the cart.
adservice             | Java          | Provides text ads based on given context words.
loadgenerator         | Python/Locust | Continuously sends requests imitating realistic user shopping flows to the frontend.

Table 1: Existing services of the Hipster Shop Application

In order to map the Hipster Shop microservices to a cell-based architecture, we grouped the microservices into five cells: ads, products, cart, checkout, and front-end. We designed this categorization based on the individual tasks each microservice performs and how closely connected it is to the rest of the microservices in a cell. A cell will be owned by a single team, but a team can own one or more cells. Defining cell boundaries can be based on other criteria, such as the number of component-component connections, and is not limited to only functionality and ownership. Read more about cell granularity here.

It’s also important to note that none of the original Hipster Shop microservices have been changed in order to work with Cellery; Cellery merely refers to the existing container images of the microservices. The cells and their respective components are listed in Table 2 and depicted in Figure 4 below.

Cell      | Components
ads       | adservice
products  | productcatalogservice, recommendationservice
cart      | cartservice, cacheservice
checkout  | checkoutservice, emailservice, paymentservice, shippingservice, currencyservice
front-end | frontendservice

Table 2: Hipster Shop services mapped to cells



 Figure 4: Cell-based architecture of the Hipster Shop Application

The front-end cell contains the front-end application as its sole component, and HTTP traffic enters the cell through its gateway. front-end talks to the rest of the cells over gRPC.

The checkout cell is the only other cell in this architecture that communicates with external cells (products and cart), while the remaining cells, products, ads, and cart, are independent cells in which only internal communication between their components takes place. All the complete cell definition files (files with the .bal extension) and instructions to run and deploy the Hipster Shop cells can be found here. Please note that this sample has been tested on Cellery version 0.3.0.

Creating a Simple Cell

Let’s take a look at the code of the ads cell, which contains a single component: adservice.

ads.bal

import ballerina/config;
import celleryio/cellery;

public function build(cellery:ImageName iName) returns error? {
   int adsContainerPort = 9555;
   // Ad service component
   // This component provides text ads based on given context words.
   cellery:Component adsServiceComponent = {
       name: "ads",
       source: {
           image: "gcr.io/google-samples/microservices-demo/adservice:v0.1.1"
       },
       ingresses: {
           grpcIngress: <cellery:GRPCIngress>{
               backendPort: adsContainerPort,
               gatewayPort: 31406
           }
       },
       envVars: {
           PORT: {
               value: adsContainerPort
           }
       }
   };

   // Cell Initialization
   cellery:CellImage adsCell = {
       components: {
           adsServiceComponent: adsServiceComponent
       }
   };
   return cellery:createImage(adsCell, untaint iName);
}
public function run(cellery:ImageName iName, map<cellery:ImageName> instances) returns error? {
   cellery:CellImage adsCell = check cellery:constructCellImage(untaint iName);
   return cellery:createInstance(adsCell, iName, instances);
}

First of all, a cell definition begins with the standard import statements and contains two functions: build and run (these are auto-generated if you use the cellery init command). The build and run functions are invoked when a user executes the cellery build and cellery run commands, respectively.

Within the build function, a component named adsServiceComponent is defined to represent adservice by pointing to its public Docker image URL (source) and declaring its network-accessible entry points (ingresses) and environment variables (envVars). Next, a cell named adsCell is initialized, and the previously defined adsServiceComponent is added to its list of components. A cell image is then created with the method cellery:createImage.

Finally, the run function will take the built cell image (cellery:ImageName iName), which contains both the cell image and the corresponding instance name, and create a running instance out of the cell image with the method cellery:createInstance.

Inter-Component (Intra-Cell) Communication within a Cell

Now that we have seen the code structure of a basic cell file, let’s take a look at the code of a cell that has two or more components and how those components are configured to talk to each other.

The cell products has two components: productcatalogservice and recommendationservice. As shown in Figure 4, recommendationservice needs to talk to productcatalogservice because it recommends products based on what products are included in the cart. The communication between components in a cell is enabled through environment variables.

As shown in the code snippet below, recommendationServiceComponent expects the address of the productCatalogServiceComponent through the environment variable (envVars) PRODUCT_CATALOG_SERVICE_ADDR. Additionally, productCatalogServiceComponent is labeled as a dependency under the dependencies field, which ensures that productCatalogServiceComponent is up and running in order to resolve dependencies.

products.bal

..
 // Recommendation service component
 // Recommends other products based on what's given in the cart.
 cellery:Component recommendationServiceComponent = {
     name: "recommendations",
     source: {
        image: "gcr.io/google-samples/microservices-demo/recommendationservice:v0.1.1"
     },
     ingresses: {
         grpcIngress: <cellery:GRPCIngress>{
             backendPort: recommendationsContainerPort,
             gatewayPort: 31407
         }
     },
     envVars: {
        PORT: {
            value: recommendationsContainerPort
        },
        PRODUCT_CATALOG_SERVICE_ADDR: {
            value: cellery:getHost(productCatalogServiceComponent) + ":" + productCatalogContainerPort
        },
        ENABLE_PROFILER: {
            value: 0
        }
     },
     dependencies: {
        components: [productCatalogServiceComponent]
     }
   };
 ..

The cell is initialized and defined by the name productsCell, and both productCatalogServiceComponent and recommendationServiceComponent are added to its list of components as shown in the code snippet below.

..
   // Cell Initialization
   cellery:CellImage productsCell = {
       components: {
           productCatalogServiceComponent: productCatalogServiceComponent,
           recommendationServiceComponent: recommendationServiceComponent
       }
   };
..

Inter-Cell Communication

Now that we have covered inter-component communication, let’s take a look at how the components in one cell can talk to the components in another cell. We already know that CBA dictates that all external incoming communication must happen through the cell gateway. So, let’s inspect the code of the front-end cell, which has the front-end web app as its sole component and has to talk to various components residing in different cells.

front-end.bal

..
   cellery:Component frontEndComponent = {
       name: "front-end",
       source: {
           image: "gcr.io/google-samples/microservices-demo/frontend:v0.1.1"
       },
       ingresses: {
           portal: <cellery:WebIngress> { // Web ingress is exposed globally.
               port: frontEndPort,
               gatewayConfig: {
                   vhost: "my-hipstershop.com",
                   context: "/"
               }
           }
       },
..


The code above shows how frontEndComponent exposes an HTTP server to serve the Hipster Shop website. The same component expects the values of several environment variables in order to talk to the relevant internal and external microservices. Let’s take a look at the code that allows frontEndComponent to talk to the components in the products cell.

envVars: {
    ..
    PRODUCT_CATALOG_SERVICE_ADDR: {
      value: ""
    },
    RECOMMENDATION_SERVICE_ADDR: {
      value: ""
    },
    ..
},

As shown in the code snippet above, frontEndComponent expects the addresses of productCatalogServiceComponent and recommendationServiceComponent through the environment variables (envVars) PRODUCT_CATALOG_SERVICE_ADDR and RECOMMENDATION_SERVICE_ADDR respectively.

dependencies: {
    cells: {
        productsCellDep: <cellery:ImageName>{ org: "wso2cellery", name: "products-cell", ver: "latest" },
        ..
    }
}

The front-end cell depends on the products cell, and this dependency is defined via the dependencies field in frontEndComponent as shown above.

cellery:Reference productReference = cellery:getReference(frontEndComponent, "productsCellDep");

frontEndComponent.envVars.PRODUCT_CATALOG_SERVICE_ADDR.value = <string>productReference.gateway_host + ":" + <string>productReference.products_grpc_port;

frontEndComponent.envVars.RECOMMENDATION_SERVICE_ADDR.value = <string>productReference.gateway_host + ":" + <string>productReference.recommendations_grpc_port;

The method cellery:getReference(frontEndComponent, "productsCellDep") gives us a reference to the deployed products cell instance, and by using this reference, we can resolve the values of the environment variables PRODUCT_CATALOG_SERVICE_ADDR and RECOMMENDATION_SERVICE_ADDR as shown in the code above. The front-end cell communicates with the rest of the cells by following the same approach.

The two remaining cell definition files follow the same principles and are available in the GitHub repo.

cart.bal
The cart cell is an independent cell with two components.

checkout.bal
The checkout cell contains five components and talks to the cart and products cells in order to invoke cartservice and productcatalogservice, respectively, from checkoutservice.

Now that we have completed coding all the cell definitions, we can build and deploy these cells. You can also compare the complete Kubernetes YAML file required to deploy the Hipster Shop microservices with the Hipster Shop Cellery code, where the latter not only deploys the microservices on Kubernetes, but also creates a cell-based architecture around those microservices.

Building and Deploying Cells

Please follow the instructions provided here to build and run all the Hipster Shop cells.

Running Independent Cells

Let’s now see how we can build and run the ads cell, which is an independent cell.
Open a terminal in the directory where the ads.bal file resides and run the following command to build the ads cell:

$ cellery build ads.bal wso2cellery/ads-cell:latest

Our organization name in Docker Hub is wso2cellery, and we used ads-cell as the name of the cell image and latest as the tag. The following output can be seen after executing the build command:

✔ Building image wso2cellery/ads-cell:latest
✔ Removing old Image
✔ Saving new Image to the Local Repository


✔ Successfully built cell image: wso2cellery/ads-cell:latest

What's next?
--------------------------------------------------------
Execute the following command to run the image:
  $ cellery run wso2cellery/ads-cell:latest
--------------------------------------------------------

To run the cell image wso2cellery/ads-cell:latest with the instance name ads-cell, run the following command:

$ cellery run wso2cellery/ads-cell:latest -n ads-cell

The following output can be observed:

✔ Extracting Cell Image wso2cellery/ads-cell:latest

Main Instance: ads-cell

✔ Reading Cell Image wso2cellery/ads-cell:latest
✔ Validating dependencies

Instances to be Used:


  INSTANCE NAME           CELL IMAGE            USED INSTANCE   SHARED  
 --------------- ----------------------------- --------------- --------
  ads-cell        wso2cellery/ads-cell:latest   To be Created    -      

Dependency Tree to be Used:

 No Dependencies

? Do you wish to continue with starting above Cell instances (Y/n)? y

✔ Starting main instance ads-cell


✔ Successfully deployed cell image: wso2cellery/ads-cell:latest

What's next?
--------------------------------------------------------
Execute the following command to list running cells:
  $ cellery list instances
--------------------------------------------------------   

Running Dependent Cells

Let’s now see how we can build and run a cell that depends on other cells. Let’s take the front-end cell as our example. The build command will be similar to the command we executed for the ads cell.

$ cellery build front-end.bal wso2cellery/front-end-cell:latest

However, when running a cell image with dependencies, we must also list the names of the running instances of the other cells that the cell depends on. This can be seen in the run command for the front-end cell, shown below.

$ cellery run wso2cellery/front-end-cell:latest -n front-end-cell -l cartCellDep:cart-cell -l productsCellDep:products-cell -l adsCellDep:ads-cell -l checkoutCellDep:checkout-cell -d

The output will be as follows:

✔ Extracting Cell Image wso2cellery/front-end-cell:latest

Main Instance: front-end-cell

✔ Reading Cell Image wso2cellery/front-end-cell:latest
⚠ Using a shared instance cart-cell for duplicated alias cartCellDep
⚠ Using a shared instance products-cell for duplicated alias productsCellDep
✔ Validating dependency links
✔ Generating dependency tree
✔ Validating dependency tree

Instances to be Used:

  INSTANCE NAME               CELL IMAGE                  USED INSTANCE       SHARED  
 ---------------- ----------------------------------- ---------------------- --------
  checkout-cell    wso2cellery/checkout-cell:latest    Available in Runtime    -      
  products-cell    wso2cellery/products-cell:latest    Available in Runtime   Shared  
  ads-cell         wso2cellery/ads-cell:latest         Available in Runtime    -      
  cart-cell        wso2cellery/cart-cell:latest        Available in Runtime   Shared  
  front-end-cell   wso2cellery/front-end-cell:latest   To be Created           -      

Dependency Tree to be Used:

 front-end-cell
   ├── checkoutCellDep: checkout-cell
   ├── productsCellDep: products-cell
   ├── adsCellDep: ads-cell
   └── cartCellDep: cart-cell

? Do you wish to continue with starting above Cell instances (Y/n)? y

✔ Starting dependencies
✔ Starting main instance front-end-cell


✔ Successfully deployed cell image: wso2cellery/front-end-cell:latest

What's next?
--------------------------------------------------------
Execute the following command to list running cells:
  $ cellery list instances
--------------------------------------------------------

We can also view a graphical representation of a single cell along with its dependencies using the view command. For example, to view the front-end cell, type the following command:

$ cellery view wso2cellery/front-end-cell:latest

This gives us a web page depicting the front-end cell, as shown in Figure 5.


 Figure 5: A generated graphical representation of "front-end"

Observability

In order to monitor and troubleshoot the deployed cells, Cellery provides observability tools, including a dashboard. The Cellery dashboard offers numerous views of the cells, including dependency diagrams, runtime metrics, and end-to-end distributed tracing of the requests that pass through the gateways and the cell components. All metrics are collected from the components and gateways, and these include system metrics pertaining to Kubernetes pods and nodes (including CPU, memory, network, and file system usage) and request/response metrics (application metrics).


Figure 6: Cellery observability dashboard

Do You Really Need Cellery?

If you are looking for an answer to this question, you will also need to ask yourself whether your microservices project is going to be cloud-native and whether the project will grow and evolve over time. If the answer is yes, keep in mind that managing hundreds of loosely-coupled microservices can quickly become a nightmare. This is why cell-based architecture was designed, but creating a CBA on a container orchestration platform such as Kubernetes from scratch using YAML is by no means an easy task. This is where Cellery comes into the picture: it enables developers to follow a code-first approach and takes care of the underlying complexities of implementing a CBA, allowing them to truly reap the benefits of cloud-native microservices while avoiding their pitfalls.

You can find more interesting content and learning material on the Cellery website and in the GitHub repository.

About the Author

Dakshitha Ratnayake is an enterprise architect at WSO2 with over 10 years of experience in software development, solution architecture, and middleware technology. This is the author’s first article for InfoQ.
