Writing Cloud Native Network Functions (CNFs): One Concern per Container

Key Takeaways

  • The Docker and Kubernetes documentation both promote the concept of packaging one application or "one concern" per container. This can be extended to a guideline of running one "process type" per application and container.
  • Telecommunication-based Cloud Native Network Functions (CNFs) have specific requirements, such as low latency, high throughput, and resilience, which incentivize a multi-concern/multi-process-type approach to containerization.
  • There are seven benefits to having "one concern, one process type" applications packaged within a container, and they are lost when tightly coupling process types.
  • Having multiple process types exacerbates the cloud native "zombie" process and "signal (SIGTERM)" problems.
  • High-performance telco applications that are implemented with multiple process types should explore using Unix domain sockets for communication instead of TCP or HTTP, as this can speed up communication between containers.

There is value in defining both thick and thin definitions for microservices. A thick microservice is any service that harnesses Conway’s law, deploying code along the boundaries of a product team. A thin microservice adheres to a coarse-grained deployment of code, typically in a container, with only one concern.

Cloud native network functions (CNFs) are network applications in the telecommunications space with different non-functional requirements from most cloud native enterprise applications. CNFs are often stateful while requiring low latency, high throughput, and resilience. Any architecture that compromises any of these requirements is either a bad fit for telecommunications development or requires a special exception in implementation. This is where the challenge arises for the thin microservice model, which promotes a "one concern, one process" design for containers and CNFs.

One Concern for Containers

The Google Cloud documentation, Docker documentation, and Kubernetes documentation all promote the concept of one application or one concern per container. While the Google Cloud documentation uses the term "application," the Docker documentation uses the term "concern" and further describes a concern as a set of parent/child processes that make up one aspect of your application. A good example of this is nginx, which creates a set of child worker processes on startup. Another way to state the one concern rule is that only one process type (such as the set of nginx worker processes) should exist within a container.

Why does this rule exist? At first glance, the rationale might seem to be reducing complexity within a single module, component, or object, but the real driver behind this rule is adherence to the code’s rate of change[1], a concept borrowed from traditional architecture and biology. An artifact should be deployed at a rate consistent with how often it changes, and the cloud native way is to decouple code as far as possible to make this the case. The pushback against decoupling is often driven by the need for performance optimizations, which we will return to later.
 
The telecommunications industry, like some other industries, has a history of development in isolation: code, libraries, and code deployment have been developed within one large organization. Even when multiple sub-organizations were jointly involved in developing a large project, such as a commercial-grade switch, the deployment of the resulting libraries, projects, and end products was funneled together in lock-step fashion. Given this history, which is problematic even for the thick definition of microservices referred to earlier, it shouldn’t be surprising that network functions have even more difficulty adhering to the thin definition of microservices and the one concern rule.

The Seven Benefits of One Concern, One Process Type

Tom Donohue illustrates the benefits of the one concern principle well; they are rephrased here:

  • Isolation: When processes use the container namespace system, they are protected from interfering with one another.
  • Scalability: Scaling one process or process type is easier to reason about than scaling multiple types. This could be for complexity reasons (multiple process types are harder to scale than a single one) or because the rates of change differ (one process needs to be scaled up under different conditions than the others).
  • Testability: When a process is assumed to be running by itself, it can be tested in isolation from other processes. This allows developers to locate the root cause of problems more easily by eliminating extra variables.
  • Deployability: When a process’s binary and dependencies are deployed in a container, the deployment is coarse grained relative to the rate of change of the binary and the container, but fine grained relative to the rate of change of other processes and their dependencies. This makes deployments adjustable to where and when a change happens in your dependency tree instead of redeploying everything in lockstep.
  • Composability: One concern, and therefore one process type, per container is much easier to reason about because its contents are easier to share and communicate about, digitally or verbally. This makes it easier to reuse in other projects.
  • Telemetry: It is easier to reason about the log messages that come from one concern or process type than log messages that are interleaved with other concerns. This is even more true in a container that prints all log messages to standard out, such as in a 12-factor cloud native app.
  • Orchestration: If you have more than one process type in a container, you have to manage the lifecycle of the secondary concerns within the container, which effectively means building your own orchestrator inside the parent process type.

The impact of the open-source cloud native movement on the telecommunications industry is an explosion of collaboration between vendors. Instead of developing tightly coupled software under the umbrella of one organization, the call for more collaboration and interoperability has driven multiple projects from different organizations to revisit the benefits of the one concern principle.

Cloud Native Best Practices for Processes

Process order independence, a devil’s bargain

One of the arguments for putting multiple process types in the same container is the need for more control over the startup order of concerns. An example would be a traditional application that needs a database. The application and web server may fail to start properly if the database isn’t available first, so someone may have the database start first in a Dockerfile and then start the application. While this does work, you lose the seven benefits of loosely coupled concerns. A better approach is to make your concerns and process types order independent wherever possible, as sketched below.
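
As a rough sketch of what order independence can look like in application code (Go is used for illustration; the service name "db", the port, and the retry counts are assumptions, not details from the original), the application simply retries its dependency until it is reachable:

```go
// Order independence sketch: instead of forcing the database container to
// start first, the application polls its dependency until it is reachable.
package main

import (
	"log"
	"net"
	"time"
)

// waitFor blocks until addr accepts TCP connections or attempts run out.
func waitFor(addr string, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		conn, dialErr := net.DialTimeout("tcp", addr, 2*time.Second)
		if dialErr == nil {
			conn.Close()
			return nil // dependency is up; startup order no longer matters
		}
		err = dialErr
		log.Printf("dependency not ready (attempt %d): %v", i+1, err)
		time.Sleep(2 * time.Second)
	}
	return err
}

func main() {
	// "db" stands in for the database's Service name; both are assumptions.
	if err := waitFor("db:5432", 30); err != nil {
		log.Fatalf("giving up: %v", err) // let the orchestrator restart us
	}
	log.Println("database reachable; starting application")
}
```

If the dependency never appears, the process exits and the orchestrator restarts it, in the spirit of the "let it fail" approach discussed below.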

Your process will be terminated

Kubernetes has settings for pod priorities that allow users to preempt or terminate pods if a set of conditions is not met. This means pods need to be responsive to the graceful shutdown requests of these scheduling policies or be exposed to data corruption and other errors. These graceful shutdown requests come in the form of a SIGTERM signal, which typically gives the process 30 seconds before a SIGKILL forcefully terminates it. When running multiple processes, all of the child processes need to be able to handle graceful shutdown signals. As we will see later, handling graceful shutdowns causes subtle problems that are made worse with multiple processes.
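
As a minimal sketch of what this looks like for a single process (Go; the port and the 25-second budget, chosen to fit inside the default 30-second grace period, are illustrative assumptions):

```go
// Graceful shutdown sketch: trap SIGTERM and finish in-flight work before
// the orchestrator follows up with SIGKILL.
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}
	go func() {
		if err := srv.ListenAndServe(); err != http.ErrServerClosed {
			log.Fatalf("server error: %v", err)
		}
	}()

	// Block until the orchestrator asks us to terminate.
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	<-sigs

	// Finish in-flight requests, then exit before SIGKILL arrives.
	ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
	defer cancel()
	if err := srv.Shutdown(ctx); err != nil {
		log.Printf("forced shutdown: %v", err)
	}
}
```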

In telecommunications, process order independence and preemption have typically been handled by orchestrators and supervisors tightly coupled to the processes they manage. With application-agnostic orchestrators like Kubernetes, the days of these custom, tightly coupled orchestrators are coming to an end, since declarative scheduling configuration is now possible. A telecommunications cloud native approach should probably resemble the Erlang community’s "let it fail" approach to processes, where the calling process is robust with respect to the processes it calls.

Multiple processes and the application lifecycle

Google Cloud recommends that you package a single "app" per container. At a more technical level, a single application is defined as a single parent process with potentially multiple child processes. A major part of the rationale is harnessing the different rates of change in the application’s lifecycle. What do we mean by lifecycle? The lifecycle is the startup, execution, and termination of an application. Any process with different reasons for starting, executing, or terminating should be separated from (i.e., not tightly coupled with) other processes. When we disentangle these concerns, we can express them as separate health checks, policies, and deployment configurations. We can then declaratively express these concerns, track them in source control, and version them semantically. This allows us to avoid upgrading things in lockstep, which would pin separate lifecycles together.

The problem of managing the lifecycles of multiple applications, or process types, in a container stems from the fact that they all have different states. For instance, if you have a parent process that starts Apache and then also starts Redis, the parent process needs to know how and when to start, monitor, and terminate both Apache and Redis. This problem is considerably more difficult with code or binaries you don’t control, since you don’t control how those applications express their health. This is why the best place to express process health (especially for a process you don’t control) is within configuration exposed to a container management system or orchestrator like K8s (Kubernetes), which is designed to accommodate lifecycles, and not within a makeshift bash script.
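
One common way to hand this responsibility to the orchestrator is to expose health over an HTTP endpoint that a Kubernetes liveness probe can poll. The sketch below shows the application side (Go; the /healthz path and port are conventions assumed for illustration, not mandated by Kubernetes):

```go
// Health endpoint sketch: the application reports its own health and lets
// the orchestrator, not an in-container script, decide when to restart it.
package main

import (
	"log"
	"net/http"
	"sync/atomic"
)

var healthy atomic.Bool // flipped by the application as its state changes

func main() {
	healthy.Store(true)
	http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
		if healthy.Load() {
			w.WriteHeader(http.StatusOK)
			return
		}
		w.WriteHeader(http.StatusServiceUnavailable)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

The pod configuration would then declare a livenessProbe against this endpoint, keeping restart policy in declarative, version-controlled configuration rather than in the application itself.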

Multiple processes exacerbate the cloud native signal and zombie problems

Improperly handling what is known as the PID 1 process in a container leads to insidious problems that are extremely hard to detect. These problems are exacerbated when multiple processes are involved. The two main issues in handling PID 1 properly are termination signals and zombies.

SIGTERM

All applications and processes must be aware of two types of shutdowns: graceful shutdowns and immediate shutdowns. Suppose a stateful application expects to open an important file, write data, and close that file, all without interruption. Given the preemptive capabilities of K8s, that application will eventually corrupt that file. One way to handle this type of problem is a graceful shutdown. This is what a SIGTERM signal does: it tells the application that it is going to be shut down so it can take action to avoid corruption or other errors. Within orchestrated systems, all processes should be designed to handle a graceful shutdown if needed. But what about processes that start other processes? To allow the graceful termination of child processes, a parent process needs to pass the SIGTERM signal on to all of its children so that they can, in turn, shut down gracefully as well. This is where inappropriately handling PID 1 is a problem. Simple scripts like bash won’t pass the SIGTERM signal on to processes they start unless told to do so explicitly. Without this forwarding of SIGTERM, very hard-to-detect errors pop up.
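
A minimal sketch of the forwarding behavior a parent process needs (Go; the `sleep 300` child is a hypothetical stand-in for a real worker process):

```go
// Signal forwarding sketch: a parent process passes SIGTERM on to its child
// so the child can shut down gracefully too, which bash does not do by default.
package main

import (
	"log"
	"os"
	"os/exec"
	"os/signal"
	"syscall"
)

func main() {
	child := exec.Command("sleep", "300") // stand-in for a real child process
	child.Stdout, child.Stderr = os.Stdout, os.Stderr
	if err := child.Start(); err != nil {
		log.Fatalf("start: %v", err)
	}

	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGTERM, syscall.SIGINT)
	go func() {
		sig := <-sigs
		// Pass the signal on instead of letting the child run blind.
		child.Process.Signal(sig)
	}()

	child.Wait() // waiting also reaps the child, avoiding a zombie
}
```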

An insidious SIGTERM error example

Some of the insidious errors caused by having multiple processes have been documented by GitLab. They had an issue where a 502 error would appear on a page but would mysteriously fix itself after a certain amount of time. The cause was that the aforementioned graceful termination signal (SIGTERM) was not being sent to child processes, which still had open connections after the page-serving resources were already removed. This problem was notoriously difficult to track down.

Zombies

The PID 1 process in a container is also responsible for cleaning up child processes after they terminate. This may seem simple enough, but by default, a PID 1 bash script will not do this cleanup properly. What are the implications of not cleaning up, or reaping, child processes? These unreaped processes, known as zombies, fill up the process table. They eventually prevent you from starting new processes, effectively stopping your whole node from functioning.

What do you do about this? One solution is to use a proper init system in your containers. Such a system registers the correct signal handlers and passes those signals on to child processes. It also calls the waitpid() function on terminated child processes, removing them from the process table so they don’t linger as zombies.
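
For illustration, this is roughly what that reaping loop looks like at the system-call level (Go, Unix-only; a real init system such as tini does this, plus signal forwarding, on your behalf):

```go
// Zombie reaping sketch: on every SIGCHLD, call wait4() until no terminated
// children remain, clearing their entries from the process table.
package main

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func main() {
	sigs := make(chan os.Signal, 1)
	signal.Notify(sigs, syscall.SIGCHLD)
	for range sigs {
		for {
			var status syscall.WaitStatus
			pid, err := syscall.Wait4(-1, &status, syscall.WNOHANG, nil)
			if pid <= 0 || err != nil {
				break // no more terminated children to clean up
			}
			log.Printf("reaped child %d", pid)
		}
	}
}
```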

A proper init system handles zombies and signals

One way to limit the effect of zombie processes is to have a proper init system. This is especially true if your PID 1 process runs code you don’t control, e.g., a Postgres database. That process could start other processes and then forget to reap them. With a proper init system, any child processes that terminate will eventually be reaped by the init system.

There are proper init systems and sophisticated supervisors that you can run inside a container. Sophisticated supervisors are generally overkill: they take up too many resources and are sometimes too complicated. Examples of sophisticated supervisors are supervisord, monit, and runit. Proper init systems are smaller than sophisticated supervisors and are therefore suitable for containers. Examples of proper container init systems are tini, dumb-init, and s6-overlay.

Performance and Cloud Native Telco Processes

One of the main motivators for running multiple processes in a container is the desire for performance. Running processes in separate containers instead of the same container (assuming the interprocess communication mechanism is the same) can decrease performance, a decrease attributable to the isolation and security measures built into the container system. The penalty can sometimes be removed by running the container in privileged mode, but this has the tradeoff of reduced security.

One misconception about separating processes into multiple containers is that any communication between them will take a performance hit because it must occur over TCP or, even worse, HTTP. This isn’t entirely true. You can retain the performance of multiple processes while separating them into different containers by using Unix domain sockets for communication. In Kubernetes, this can be configured with a volume mount shared between all containers within a pod, as sketched below.
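
Here is a sketch of the server side (Go; the /sock/app.sock path is an assumption and would live on a volume, such as an emptyDir, mounted into every container in the pod):

```go
// Unix domain socket sketch: one container listens on a socket file placed
// on a volume shared with its peer containers in the same pod.
package main

import (
	"io"
	"log"
	"net"
	"os"
)

func main() {
	const sock = "/sock/app.sock" // illustrative path on the shared volume
	os.Remove(sock)               // clean up a stale socket from a previous run
	ln, err := net.Listen("unix", sock)
	if err != nil {
		log.Fatalf("listen: %v", err)
	}
	defer ln.Close()

	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Printf("accept: %v", err)
			continue
		}
		// Echo back whatever the peer container sends, with no TCP or HTTP
		// overhead in the path.
		go func(c net.Conn) {
			defer c.Close()
			io.Copy(c, c)
		}(conn)
	}
}
```

A client in a neighboring container would connect with net.Dial("unix", "/sock/app.sock") over the same mounted volume.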

In the context of telecommunications, data planes require maximum performance between concerns and therefore use threading, shared memory, and interprocess communication for performance. This comes at the expense of increased complexity if those concerns are tightly coupled. Interprocess communication between separate containers within the same pod should help here. Telecommunications control planes usually require less performance and can therefore be developed as traditional applications.

Conclusion

To reap the maximum interoperability and upgradeability benefits of the cloud native ecosystem, the telecommunications industry will need to adhere to the one concern rule for containers and deployments. Vendors that can do this will enjoy a competitive advantage over those that cannot.

To learn more about cloud native principles, join the CNCF’s cloud native network function working group. For information on verifying cloud native best practices in your network function, see the CNCF’s CNF certification program.

Special thanks go to Denver Williams for his technical review of this article.

Endnotes

[1] "O’Neill’s A Hierarchical Concept of Ecosystems. O’Neill and his co-authors noted that ecosystems could be better understood by observing the rates of change of different components. Hummingbirds and flowers are quick, redwood trees slow, and whole redwood forests even slower. Most interaction is within the same pace level-hummingbirds, and flowers pay attention to each other, oblivious to redwoods, who are oblivious to them. Meanwhile, the forest is attentive to climate change but not to the hasty fate of individual trees." Brand, Stewart. How Buildings Learn (p. 33). Penguin Publishing Group. Kindle Edition.
