Key Takeaways
- The security of Docker depends on how you use it: it is overly simplistic to ask ‘Is it Secure?’ when security lies in fine-tuning it for your use case.
- You must have a thorough understanding of the difference between a Docker image and the Docker container runtime, and the separate security priorities for each.
- Working with Docker should involve the principle of least privilege: minimum permissions given whilst still achieving functionality.
- You can reduce the permissions your Docker images have by avoiding images that run as the root user, restricting access to the binaries, and only including binaries that are absolutely necessary at runtime - even going back and removing those used only during the build.
- For the container runtime, ensure your containers are isolated from the host, adjust the default security profile to suit your project, and use newer implementations like containerd and CRI-O, which include fewer binaries.
Docker is a platform most developers are now familiar with. It makes it easier to create, deploy and run your applications in packages called containers. The required dependencies are ‘packaged’ and run as a process on the host Operating System, rather than the Operating System being duplicated for each workload as with virtual machines. This avoids small configuration differences between machines.
Since Docker made this approach popular, many of us talk about Docker Containers and Docker Images. In fact, images and containers don't need to be 'Docker' at all: they can be built by any tool that follows the same specifications.
As cloud-native programming grows in popularity, so does Docker and a Docker-style approach. Cloud-native is a term with a number of definitions, but it largely means running an application, most likely one with a microservices architecture, on cloud infrastructure. It uses automation tools, and the resources and functionality of cloud providers. A containerization tool like Docker is often useful with this style of programming, because the container content and setup result in a repeatable environment regardless of the underlying system.
If you are using Docker, you will also want to know if it is secure enough for your application. As with many systems, you cannot answer the question "Is Docker Secure?" with a simple yes or no. It is possible to use Docker in a secure way, but you need to take a number of precautions.
In this blog, we will explore the most important security considerations around Docker.
Docker vs Docker Images
To address Docker security, we need to understand the difference between Docker, the runtime that runs images as containers, and the Docker images themselves. These are two different things.
You start a container from a Docker image. A Docker image is a layered structure where you define the process which needs to be run and the files needed to run it. For example, if you are a Jakarta EE developer, this could be the Jakarta EE server installation and your application.
Docker Hub is a repository where you can store and share your Docker images. You can use these images to start a container directly from them, or you can extend these images, customising them to your needs and using them instead. The way you customise your image, choosing the binaries to include and their permissions, has an impact on the security of your application.
You then have the program that actually runs your container. It has a daemon process (a background process not under direct control of the user) that hosts images, containers, networks, and storage volumes. This could be Docker Engine, or another implementation. It is responsible for running your process in an isolated fashion. How you run your containers also has an impact on security.
Security Considerations for the Image
The Docker images you build, which comply with the rules of the Open Container Initiative (OCI) specification, do not necessarily provide comprehensive security out of the box. You can take certain steps to make the process reasonably secure within the container and on the host system.
The main problem with running your process in a container is that if the application is 'hacked', the attacker can gain access to the underlying host and thus pose a great risk to many other systems.
When running in a container, we need to be more alert for security concerns, because a container has a tighter integration with the host (as mentioned above, it runs on the host's operating system) than a virtual machine. A security vulnerability is therefore more severe when it happens in a container. When applications run on different physical machines, they are separated from each other to an extent. But when there is a vulnerability in the container software, one application/process might be able to access an application in another container, and thereby reach its vulnerabilities or expose it to its own.
One of the precautions you should take is that your application or process within the container should never run as the 'root' user. As root, the process has many more permissions and can thus access low-level resources and processes. At some point in your container script, you should always indicate the user that runs the main process:
USER myuser
And ideally, all the binaries of your process and application should be owned by root, but with only read and execute permissions for the user running your process. In that way, the process itself is unable to modify the binaries and scripts that make up your application in the container, so in case of a breach, less severe things can happen.
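As a minimal sketch, the combination of a non-root user and root-owned, read-and-execute-only binaries could look like the following Dockerfile fragment (the user name, paths, and start script are illustrative, and the --chmod flag on COPY requires BuildKit):

```dockerfile
# Create an unprivileged user and group (syntax for an Alpine-based image)
RUN addgroup -S appgroup && adduser -S myuser -G appgroup

# Copy the application owned by root, readable and executable but
# not writable by the user that runs the process
COPY --chown=root:root --chmod=555 app/ /opt/app/

# Run the main process as the unprivileged user
USER myuser
CMD ["/opt/app/run.sh"]
```

If the process is later compromised, it cannot overwrite its own binaries or scripts, because it does not own them and has no write permission on them.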
The above scenario is an implementation of the principle of least privilege: forcing code to run with the lowest permission level possible. When the process doesn't have a permission, that permission can't be abused. Another principle is reducing the potential attack surface. Any binary in your image that isn't strictly needed can be the source of a security vulnerability.
So, only include in the image the binaries that are absolutely necessary. If possible, start from a 'scratch' image and add only those binaries that you need, or start from a small base image like the Alpine images. The fewer binaries and executables present, the less they can contribute to potential security vulnerabilities.
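As a sketch of the 'scratch' approach, a statically linked binary can be packaged on top of an empty base image like this (the binary name and the numeric user id are illustrative):

```dockerfile
# Start from an empty image: no shell, no package manager, no extra binaries
FROM scratch

# Add only the single statically linked binary the application needs
COPY --chown=root:root myapp /myapp

# Run as a non-root user; a scratch image has no /etc/passwd,
# so a numeric user id is used instead of a name
USER 1000
ENTRYPOINT ["/myapp"]
```

An attacker who breaks into such a container finds no shell and no other tools to work with, which makes further exploitation considerably harder.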
A third option to remove unnecessary parts from the image is to use a multistage build. Especially if you use the image itself to build the final application that needs to run in the container, all the extra build steps can be done in a separate stage. This not only allows you to structure the image correctly in layers, but also allows you to remove everything that is not needed at runtime.
#
# Build stage
#
FROM maven:3.6.0-jdk-11-slim AS build
COPY src /home/app/src
COPY pom.xml /home/app
RUN mvn -f /home/app/pom.xml clean package
#
# Package stage
#
FROM payara/micro:5.2021.10-jdk11
COPY --from=build /home/app/target/hello.war ${DEPLOY_DIR}
The above multistage build shows an example of keeping only the required files and processes in the final image. The source code and the Maven tool are of no use in the final image, where we only need the web application WAR file. By having two separate stages, we don't make unnecessary things available at runtime. We should apply the same methodology to processes and applications, even if they are part of some standard images. If possible, we should start from a base image (from scratch) and only add what is really needed.
Security Considerations for the Container Runtime
The way you run the images, and the software you run them with, can also lead to security vulnerabilities.
We already mentioned that you should not use the root user to run your process in the container. But when starting the container, you can also indicate that it runs in a privileged way. With this flag, you give all capabilities to the container and also lift all the limitations enforced by the device cgroup controller. Thus, in the event of a security issue, the impact is much greater.
Containers should run in a 'sandbox' so they are isolated from the host and from other running containers. The privileged flag removes this sandbox and thus should never be used. Also avoid setting the option --net=host, as it too weakens the sandbox: it allows the container to open low-numbered ports just like root processes, which potentially affects the isolation.
When you use the host network option to run the container, there is no port mapping in effect and no isolation from the host network: the container uses the same network resources as the host. Ports in this lower range are considered well-known ports and are typically only bound by super-user processes, so people might be less attentive when they connect to such a port. The container process also has access to the entire network stack and might perform a scan for other well-known ports. These are probably not accessible from the outside world, but they can be probed from within the container process since it uses the host's network.
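As an illustration, instead of --privileged or --net=host you can publish only the ports your application needs and explicitly drop the Linux capabilities it does not use (the image name and port number below are examples):

```shell
# Map only container port 8080 to the host, keeping the network
# sandbox intact, and drop all capabilities the process does not need
docker run --rm \
  --cap-drop=ALL \
  -p 8080:8080 \
  my-app-image
```

With this setup, the container keeps its own network namespace, and even a compromised process has no capabilities left to abuse.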
The Docker runtime is not the only program that can use a Docker image to start a container. As mentioned, Docker was the tool that made working with containers popular, but since it was the first implementation, a lot has been learned over the years about how such a container runtime should operate. Together with the definition of the Container Runtime Interface (CRI) within Kubernetes, other implementations arose that follow this CRI with a better and more secure design.
Today, the containerd and CRI-O runtimes can also be used to run containers based on Docker images. These implementations omit several binaries and processes, making them leaner, faster, and more secure. For example, they don't include SSH access into the running container by default, as dockerd (the name of the Docker runtime process) does. Since the attack surface is smaller, fewer issues can arise.
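For example, with containerd installed you can pull and run the same OCI image through its ctr command-line client (the image tag and container name here are illustrative):

```shell
# Pull an OCI image and start a container with the containerd CLI
ctr image pull docker.io/library/alpine:3.15
ctr run --rm docker.io/library/alpine:3.15 test-container echo "hello"
```

The image format is the same; only the runtime executing it differs.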
But even with these newer binaries, security risks are still not zero. So it is recommended to tailor the security to your process. There is a default security profile associated with containers, but you can fine-tune it by making use of the AppArmor Linux Security Module. You can define capabilities like folder access, network access, and permission (or not) to read, write, or execute files. Defining the following entries in an AppArmor file denies write and listing access to the /etc and /home directories:
deny /etc/** wl,
deny /home/** wl,
Based on the knowledge of what the process within the container requires, you should only open up those permissions that are required for the proper functioning of the application. This profile can be specified when we run a container:
docker run <other options> --security-opt apparmor=my_profile <container-image>
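Note that the profile must first be loaded into the kernel before Docker can reference it by name, for example with apparmor_parser (the file path is illustrative; my_profile is the profile name used above):

```shell
# Load (or replace) the AppArmor profile in the kernel
sudo apparmor_parser -r /etc/apparmor.d/containers/my_profile
```

If the profile is not loaded, the docker run command above will fail with an error stating that the AppArmor profile could not be found.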
In Conclusion: Fine-tune for Maximum Security
Since the Docker Images and containers need to be used in a wide variety of scenarios, you need to tune them for your specific use case. The general principles of security are still the guidelines to determine what is needed for your case.
The principle of least privilege says we should give the minimum permissions possible whilst still achieving functionality, to avoid security breaches. For containerization, this means we should not run the main process in the container as the root user. We should also use appropriate permissions on files and restrict access using a specific AppArmor profile.
To reduce the attack surface, we should only include what is strictly required for our use case and, for example, use the newer implementations like containerd and CRI-O to run our containers, as they include fewer binaries and processes.