Why I switched to Podman in my DMZ1

When I created my DMZ, I realized that the next weak link would be my Docker host. I had been made aware that Docker has its disadvantages security-wise:

  • A central daemon running as the root user, which provides a big attack surface for container breakouts
  • Lack of isolation between containers and the host

Some examples:

  • CVE-2018-15664: A flaw in Docker’s symlink resolution that allowed an attacker to escape container file systems and access the host system.
  • CVE-2019-5736: Runc container breakout vulnerability
  • CVE-2019-14271: A code-injection flaw in docker cp, where the helper process loaded NSS libraries from within the container, allowing a compromised container to execute arbitrary code as root on the host.
  • CVE-2021-21285: Uncontrolled resource consumption
  • CVE-2022-0847: Dirty Pipe
  • etc.

I had heard of Podman a long time ago. It is supposed to be a drop-in replacement for Docker and to address these disadvantages. So far I had been happy with Docker and kept it up to date to keep it as secure as possible, but I always wanted to try out Podman, and this was the right time. I decided to run a Docker instance in my private network, where nothing is exposed to the open internet, and to install a Podman instance in the DMZ.

In general, I found the following sources very valuable:

Why I use a RHEL2 based OS for Podman

I really liked Portainer for managing my Docker containers and stacks, and I knew that this wouldn’t be possible with Podman. Some people have tried and got it running more or less, but they couldn’t convince me. Then I read about the Cockpit web interface of RHEL-based distributions like CentOS, AlmaLinux, etc., which allows managing the entire host, Podman included. It does almost everything Portainer does, plus it gives you an overview of the entire OS (e.g. services, storage, logs, a web console, etc.). Additionally, Podman is installed by default on RHEL, because it is developed by Red Hat’s container tools team. On top of that, these systems provide SELinux3, which makes a lot of sense for such an exposed system.
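For reference, enabling Cockpit together with its Podman plugin looked roughly like this on my host (a minimal sketch, assuming a CentOS Stream 9 install with firewalld; package names may differ on other RHEL-based distributions):

# install the web console plus the Podman integration
sudo dnf install -y cockpit cockpit-podman

# start the web console socket and open its firewall service
sudo systemctl enable --now cockpit.socket
sudo firewall-cmd --permanent --add-service=cockpit
sudo firewall-cmd --reload

Cockpit then listens on port 9090 of the host.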

This convinced me to give the new CentOS Stream 9 a shot. CentOS Stream has a rolling release model and is positioned between Fedora and RHEL. I wouldn’t recommend it for production; there I would rather pick AlmaLinux or Rocky Linux, which replace the old CentOS as direct rebuilds of RHEL and therefore provide more stability.

Commands to get started

Here I describe how I set up my Podman instance, mainly for my personal documentation. Maybe others find it useful too, because there are a lot of guides on Docker, but almost nothing on Podman that goes deeper than the basic podman run command… The main documentation and other Red Hat sources provide excellent information, but in my opinion they can be a bit overwhelming. Nevertheless, I also provide these sources. If one finds the time to study them, it is worth it!

I used this Red Hat lesson for setting up my first pod: https://www.youtube.com/watch?v=frMRuhtMafk

Pods instead of docker-compose stacks

A pod is a group of one or more tightly-coupled containers that are scheduled to run on the same host and share the same network namespace.

Create a pod:

podman pod create --name comments-stack --publish 27018:27017 --publish 8072:8080

One may be used to publishing ports per container; in Podman / K8s, ports are configured at the pod level. When the pod gets created, a so-called “pause” container gets added. The purpose of this container is to keep the pod’s network namespace open, so that other containers can be added to the pod and share that namespace. The “pause” container is a standard practice also used in other container orchestration platforms.

$ podman ps

CONTAINER ID  IMAGE
99bc0f426600  localhost/podman-pause:4.3.1-1669638068
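
The pod itself shows up with podman pod ps, including how many containers it holds (the pause/infra container counts as one), and podman ps --pod shows which pod each container belongs to:

podman pod ps
podman ps --pod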

Create a container and attach it to the pod:

podman run -dt --pod comments-stack \
--name comments-db \
--restart=always \
-e MONGO_INITDB_ROOT_USERNAME=root \
-e MONGO_INITDB_ROOT_PASSWORD=aStrongPassword \
-v comments_db_data:/data/db \
docker.io/library/mongo:5.0.8

One can also use --pod new:comments-stack on the podman run command and omit the separate podman pod create, as sketched below.
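
A minimal sketch of that combined form (same names and ports as above; since the pod is created implicitly here, both port mappings have to be passed to podman run so they end up on the new pod):

podman run -dt --pod new:comments-stack \
--name comments-db \
--restart=always \
-p 27018:27017 \
-p 8072:8080 \
-e MONGO_INITDB_ROOT_USERNAME=root \
-e MONGO_INITDB_ROOT_PASSWORD=aStrongPassword \
-v comments_db_data:/data/db \
docker.io/library/mongo:5.0.8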

Create another container:

podman run -dt --pod comments-stack \
--name comments-app \
--restart=always \
--label io.containers.autoupdate=registry \
-e SPRING_DATA_MONGODB_HOST=comments-db \
-e SPRING_DATA_MONGODB_DATABASE=comments \
-e SPRING_DATA_MONGODB_AUTHENTICATION_DATABASE=admin \
-e SPRING_DATA_MONGODB_USERNAME=root \
-e SPRING_DATA_MONGODB_PASSWORD=aStrongPassword \
registry.tobisyurt.net/comments

Export as Kubernetes config:

podman generate kube comments-stack > comments-stack-kube.yml
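
That YAML can later be replayed, on this host or another one, to recreate the whole stack; podman play kube also has a --down flag to tear it down again:

podman play kube comments-stack-kube.yml
podman play kube --down comments-stack-kube.yml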

Now I want to run this stack as systemd service, so that I can benefit from further features such as auto-update and rollbacks (more on that later). With the following command I generated the necessary systemd unit files:

cd ~/.config/systemd/user
podman generate systemd --new --files --name comments-stack

This generates three service files: one per container and an additional master service, which orchestrates the others. I only enabled the master service, called pod-comments-stack, and left the other services disabled. I assume that is the way it is supposed to work, because the master service takes care of the proper start order.
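
Concretely, enabling it looked roughly like this (the loginctl line makes the user’s services start at boot, even without an active login session):

systemctl --user daemon-reload
systemctl --user enable --now pod-comments-stack.service
loginctl enable-linger $USER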

Here, as an example, is my master service; the others look fairly standard to me:

# pod-comments-stack.service
# autogenerated by Podman 4.3.1
# Sat Feb 18 19:41:20 CET 2023

[Unit]
Description=Podman pod-comments-stack.service
Documentation=man:podman-generate-systemd(1)
Wants=network-online.target
After=network-online.target
RequiresMountsFor=/run/user/1000/containers
Wants=container-comments-app.service container-comments-db.service
Before=container-comments-app.service container-comments-db.service

[Service]
Environment=PODMAN_SYSTEMD_UNIT=%n
Restart=on-failure
TimeoutStopSec=70
ExecStartPre=/bin/rm \
        -f %t/pod-comments-stack.pid %t/pod-comments-stack.pod-id
ExecStartPre=/usr/bin/podman pod create \
        --infra-conmon-pidfile %t/pod-comments-stack.pid \
        --pod-id-file %t/pod-comments-stack.pod-id \
        --exit-policy=stop \
        --name comments-stack \
        --publish 27018:27017 \
        --publish 8072:8080 \
        --replace
ExecStart=/usr/bin/podman pod start \
        --pod-id-file %t/pod-comments-stack.pod-id
ExecStop=/usr/bin/podman pod stop \
        --ignore \
        --pod-id-file %t/pod-comments-stack.pod-id  \
        -t 10
ExecStopPost=/usr/bin/podman pod rm \
        --ignore \
        -f \
        --pod-id-file %t/pod-comments-stack.pod-id
PIDFile=%t/pod-comments-stack.pid
Type=forking

[Install]
WantedBy=default.target

For further reading: How to run pods as systemd services with Podman.

Auto-updates and rollbacks

The following summarizes what I read about this feature and how I applied it to my Podman containers (see the Red Hat Sysadmin post).

One can enable Podman’s built-in auto-update mechanism by setting the label:

io.containers.autoupdate={registry,local}

I already did that above, so I tested the feature straight away. I pushed a new image with the tag latest to my private registry (with a minor change, so that I could see the difference). Then I ran the following (without --dry-run it would update immediately):

podman auto-update --dry-run

which outputs:

UNIT                            CONTAINER                    IMAGE                            POLICY      UPDATED
container-comments-app.service  32eeb4ff9cdf (comments-app)  registry.tobisyurt.net/comments  registry    pending

If one wants to automate this, the alternative JSON output format is a nice option:

podman auto-update --dry-run --format json

outputs:

[
    {
        "Unit": "container-comments-app.service",
        "Container": "32eeb4ff9cdf (comments-app)",
        "ContainerName": "comments-app",
        "ContainerID": "32eeb4ff9cdf1f119563e047dc80df804168a6eee4b6cf19c206ba2ec0a1a358",
        "Image": "registry.tobisyurt.net/comments",
        "Policy": "registry",
        "Updated": "pending"
    }
]
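
For actual automation, Podman ships a systemd timer that periodically runs podman auto-update; for rootless setups it is enabled per user:

systemctl --user enable --now podman-auto-update.timer

As far as I understand, this is also where the rollback part comes in: if a container fails to restart after an update, podman auto-update rolls back to the previous image.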

ROOT/USER inside and outside the container…

I thought I was clear on this topic, but the following posts helped me to understand it better, and I try to summarize it here. I highly recommend reading them and also experimenting with the examples they provide:

  1. https://www.redhat.com/en/blog/understanding-root-inside-and-outside-container
  2. https://www.redhat.com/sysadmin/user-flag-rootless-containers

I set up my first pod, described above, as an unprivileged user, but I assumed that if I were ever facing a container breakout, the entire namespace of my unprivileged user on the host would be compromised. Fortunately, this is not the case, because (quoted from 1.):

Well, the short answer is because with newer kernels and newer shadow-utils packages (useradd, passwd, etc.) each new user is given a range of user IDs at their disposal. Traditionally, on a Unix system, each user only had one ID, but now it’s possible to have thousands of UIDs at each user’s disposal for use inside containers.

This means the user IDs inside the container map to completely different IDs on the host system. Therefore, if a container breakout happens, the attacker will be severely restricted on the container host, yeah!
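
This mapping can be inspected directly: the subordinate UID range assigned to my user is listed in /etc/subuid, and the kernel’s uid_map inside the user namespace Podman sets up shows how container UIDs translate to host UIDs:

cat /etc/subuid
podman unshare cat /proc/self/uid_map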

The second link mentions a possibility to go a step further and use the --user flag, which is not obsolete even with rootless containers. This adds another layer of separation on top of the user namespace mapping…
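
As a small experiment (the container name and UID here are just placeholders of mine), podman top can show both the user inside the container and the user the process actually runs as on the host, which ends up somewhere in the subordinate UID range:

podman run -dt --rm --user 1000:1000 --name usertest docker.io/library/alpine sleep 300
podman top usertest user huser
podman stop usertest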


  1. A DMZ, or “demilitarized zone,” is a network segment that acts as a buffer between a company’s internal network and the Internet. 

  2. RHEL stands for Red Hat Enterprise Linux, and it is a commercial open-source Linux distribution. 

  3. Security-Enhanced Linux (SELinux) is a Linux kernel security module that provides a mechanism for supporting access control security policies, including mandatory access controls (MAC).