Securing Docker Containers

Securing our infrastructure is essential, and safeguarding our Docker instances is a key component to prevent security risks within our organization. Here, we outline the most important points to maintain a strong security posture for our Docker instances. These practices help ensure our environment remains resilient against potential threats.

This list comes from the article: An Overview of Docker Security Essentials
💡
This is an unordered list covering some important topics aimed at hardening our containers. Although these practices are encouraged, we don't always have to comply with every one of them; which to apply should be based on our own analysis.
💡
When using a third-party Docker image, it's best to obtain the Dockerfile for the official image and customize it to suit our specific needs.

Securing the hosts.

Since Docker makes use of our host's kernel to run containers, we need to make sure any host running Docker instances is hardened to prevent security issues.

Run the latest version of Docker and of the packages we use.

Self-explanatory.

Always run containers with unprivileged users

This is so we can prevent privilege escalation attacks.

Here I have to mention that it depends on what kind of container we are talking about. A container with a very short lifecycle, used only for build steps inside a CI/CD pipeline, is less exposed than a container that is available to the public, or even one only reachable from the LAN. There are still ways a build-only CI/CD container could be harmful, but the point is that there are priorities.

One way to ensure we only run containers as non-privileged users is to control the container build process by securing the image definition, the Dockerfile. To do this, we add a line to our Dockerfiles that defines the user the container will run as.

Example:

RUN groupadd -r unprivileged-user && useradd -r -g unprivileged-user unprivileged-user

# Rest of the configuration here

That gives us an unprivileged user, which cannot do anything that requires root privileges.
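Putting this together, a minimal Dockerfile sketch might look like the following (the base image is a placeholder for illustration; adjust to your needs):

```dockerfile
# Hypothetical base image; any hardened base works
FROM debian:bookworm-slim

# Create a system group and user without a login shell
RUN groupadd -r unprivileged-user && \
    useradd -r -g unprivileged-user -s /usr/sbin/nologin unprivileged-user

# Rest of the configuration here (COPY, RUN, etc.)

# All subsequent instructions, and the container itself, run as this user
USER unprivileged-user
```

With USER set in the image, the container runs unprivileged by default, and the -u flag on docker run becomes an extra safety net rather than a requirement.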

To run containers under this user we now need to build:

docker build . -t imagename

And then run the container with the unprivileged user:

docker run -u unprivileged-user -it image-id-here123123 /bin/bash

With Docker Compose, the equivalent would be:

 my_service:
    image: my_image
    user: "unprivileged-user"  # specify the user by name, or by UID and GID
    volumes:
      - ./my_data:/data
    environment:
      - ENV_VAR=value

Now we can't escalate to root in this container: although we haven't removed the root user, we cannot switch to it because no password was set up for it.
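To confirm which user the container actually runs as, we can ask for its identity (this assumes the image tagged imagename from the build step above and a running Docker daemon):

```shell
# Should print a non-root identity, e.g. something like uid=999(unprivileged-user)
docker run --rm -u unprivileged-user imagename id
```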

Block root access in the container.

To do this, we change the default shell for the root user inside the Dockerfile:

RUN chsh -s /usr/sbin/nologin root

Now, after we build and run the image, we will not be able to switch to the root user, even if a root password had been set up.

Restrict running containers on privileged mode

If we have done some basic penetration testing, we may be familiar with the set of vulnerabilities and abuses that come with SUID binaries.

As an attacker we can leverage the functionality of SUIDs to perform privilege escalation.

To prevent this we can use a set of options provided to us by Docker.

An example of this with the CLI is:

docker run -it --security-opt=no-new-privileges image-id

As an added security measure we will also run the container with the non privileged user created before:

docker run -u unprivileged-user -it --security-opt=no-new-privileges image-id
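In Docker Compose, the same option goes under security_opt. A sketch, reusing the placeholder service from above:

```yaml
services:
  my_service:
    image: my_image
    user: "unprivileged-user"
    security_opt:
      - no-new-privileges:true
```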

Limit Docker Container Kernel Capabilities.

When we give a container the --privileged flag, we're essentially granting it all Linux kernel capabilities. This is a powerful but potentially risky option.

If you're unfamiliar with Linux capabilities, you can learn more by referring to:

capabilities(7) - Linux manual page

The Docker documentation for this can be found at:

Running containers

In this section, our goal is to ensure that, even if there is a way to escalate privileges within a container to a privileged process, we still won't have access to all kernel capabilities.

We can drop all capabilities entirely, in the case we don't need any of them:

docker run --cap-drop all <IMAGE-ID>

Or we can give specific kernel capabilities to the container this way:

docker run --cap-drop all --cap-add <CAPABILITY> <IMAGE-ID>

# Make sure to also add the non-privileged user (-u) to the command so we do not run as root

Again, the list of kernel capabilities can be found at the Capabilities man-page cited above.
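As a concrete sketch, a service that only needs to bind a port below 1024 could keep just NET_BIND_SERVICE and drop everything else (my-web-image is a placeholder; requires a running Docker daemon):

```shell
# Drop all capabilities, then re-add only the one needed to bind low ports
docker run --cap-drop all --cap-add NET_BIND_SERVICE my-web-image

# We can also inspect the effective capability set from inside a container:
# with all capabilities dropped, CapEff shows an all-zero mask
docker run --rm --cap-drop all alpine grep CapEff /proc/self/status
```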

File System Permissions and Access

When needed, we can restrict the container's filesystem to read-only with the command:

docker run --read-only -u unprivileged-user -it image-id /bin/bash

And then if we try to make any file changes inside of the container we will get this error even if we are the root user:

bash-4.2# touch somefile.txt
touch: cannot touch 'somefile.txt': Read-only file system
bash-4.2#

Now, if we want a way to temporarily make changes to files, to store sensitive data that we don't want to persist in either the host or the container's writable layer, we can make use of tmpfs mounts.

Example:

docker run \
  --read-only \
  --mount type=tmpfs,destination=/tmp,tmpfs-size=64M \
  my-image
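We can verify the combined behaviour quickly (assumes the alpine image and a running Docker daemon): writes to the root filesystem fail, while writes to the tmpfs-backed /tmp succeed.

```shell
docker run --rm --read-only \
  --mount type=tmpfs,destination=/tmp,tmpfs-size=64M \
  alpine sh -c 'touch /somefile.txt; touch /tmp/scratch.txt && echo "tmpfs write ok"'
```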

Disabling Inter-Container Communication

By default, Docker does not isolate containers from one another, so they can easily communicate with each other. If we want to disable this, we need to create a new network with that option set to false.

If we take a look at the options of the default bridge network:

docker network inspect --format='{{json .Options}}' f9b75399c00f

We get a result like this:

{
  "com.docker.network.bridge.default_bridge": "true",
  "com.docker.network.bridge.enable_icc": "true",
  "com.docker.network.bridge.enable_ip_masquerade": "true",
  "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
  "com.docker.network.bridge.name": "docker0",
  "com.docker.network.driver.mtu": "1500"
}

We can see that enable_icc, which stands for “inter-container communication”, is set to true.

To get isolated containers, we can create a new network with that option set to false using the command:

 docker network create -d bridge -o com.docker.network.bridge.enable_icc=false isolated-net

If we now inspect that network, we get:

docker network inspect isolated-net
[
    {
        "Name": "isolated-net",
        "Id": "9edf164992a83efad96f5f632b274c10de24294eb15644a31fc5cec1630d1787",
        "Created": "2024-11-10T22:53:16.933394181Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.28.0.0/16",
                    "Gateway": "172.28.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {},
        "Options": {
            "com.docker.network.bridge.enable_icc": "false"
        },
        "Labels": {}
    }
]

We can see that the enable_icc option is set to false.

And then we can add a container to that network:

docker run --network <NETWORK-NAME> <IMAGE-ID>

This means containers attached to this network cannot communicate with each other, but they still have access to the external network.
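A quick way to observe the isolation (assumes the alpine image and a running Docker daemon): start a container on the isolated network, then try to reach it from a second container on the same network. Name resolution still works on user-defined networks, but with enable_icc=false the traffic between the two containers is dropped.

```shell
# Long-running target container on the isolated network
docker run -d --rm --name target --network isolated-net alpine sleep 300

# This ping resolves the name but gets no replies, since ICC is disabled
docker run --rm --network isolated-net alpine ping -c 2 target
```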