Building Containers with Podman vs Umoci in a Yocto Workflow
Introduction
Containers have revolutionized how we package, deploy, and manage software across environments. They enable consistency, portability, and isolation — three critical features for modern application development, especially in embedded systems where reproducibility and deterministic builds are key.
In this post, we’ll walk through:
- Traditional container build approaches (single- and multi-stage)
- Their limitations in embedded and secure environments
- How to build application containers using Yocto with umoci
- A practical example with OpenCV and GStreamer
Why Use Containers in Embedded Development?
Some benefits of using containers in embedded systems:
- Consistency: Identical environments across dev, test, and production.
- Isolation: Cleaner dependency management and runtime separation.
- Deployability: Application and its dependencies bundled together.
- Security: Ability to tightly control the runtime surface.
Traditional Approach: Single-Stage Builds
The most straightforward container image is a single-stage build:
#+begin_src Dockerfile
FROM debian:bullseye
RUN apt update && apt install -y build-essential libopencv-dev ...
COPY . /app
RUN make -C /app
ENTRYPOINT ["appmy-binary"]
#+end_src
Drawbacks
- Security risk: Container includes compilers, headers, and tools.
- Bloated size: Everything, including cache and build tools, stays in image.
- Ugly RUN lines: To save layers, you end up chaining commands:
#+begin_src Dockerfile
RUN apt update && apt install -y ... && \
    make -C /app && \
    apt purge -y build-essential && apt clean
#+end_src
Traditional Approach: Multi-Stage Builds
Multi-stage builds improve on single-stage by separating build from runtime:
#+begin_src Dockerfile
# Stage 1: build
FROM buildpack-deps AS builder
RUN apt update && apt install -y build-essential libopencv-dev ...
COPY . /src
RUN make -C /src
# Stage 2: runtime
FROM debian:bullseye-slim
COPY --from=builder /src/my-binary /usr/local/bin/
ENTRYPOINT ["/usr/local/bin/my-binary"]
#+end_src
Benefits
- Smaller images: No compiler or package manager leftovers.
- Cleaner: No need to manually clean up.
- More secure: Less attack surface in production container.
However, this approach is still imperative: the result depends on whichever package versions the base image and apt mirrors serve at build time, so images can drift between builds.
Yocto + Umoci for Container Builds
Yocto provides a declarative, reproducible environment to build full Linux systems — and now also container images using umoci.
Why use umoci + Yocto?
- Reproducibility: Exact same image every time.
- No compilers/tools: Only runtime components go in.
- Fine-grained control: Easily configure build-time features.
- Layer optimizations: Done automatically via recipes.
- OCI-compliant: Output works with Podman, Docker, containerd, etc.
Drawbacks
- Longer setup time: Initial Yocto builds can be slow.
- Steeper learning curve: Recipes instead of containerfiles.
- Maintenance: Keeping meta-layers and sstate cache clean.
Building Containers the Yocto Way: meta-virtualization, multiconfig, and umoci
When working with containers in Yocto, a few powerful features and layers enable this advanced build setup.
meta-virtualization
The [[https://git.yoctoproject.org/meta-virtualization][meta-virtualization]] layer is the cornerstone of container support in Yocto. It provides:
- Recipes for container runtimes (Docker, Podman, containerd)
- Tools like umoci and runc
- Classes to build OCI-compatible images directly from BitBake
To use it, add it to your bblayers.conf:
#+begin_src bitbake
BBLAYERS += "${TOPDIR}/../meta-virtualization"
#+end_src
This brings in image-oci.bbclass, which handles OCI image layout generation and metadata.
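The result of that generation is a standard OCI image layout on disk. As a rough illustration (directory name and contents simplified), it looks like this:
#+begin_src
<image-name>-oci/
├── blobs/
│   └── sha256/       # content-addressed filesystem layers, manifest, and config
├── index.json        # top-level index pointing at the manifests and tags
└── oci-layout        # marker file with the OCI layout version
#+end_src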
multiconfig: Separating container and system builds
Yocto's multiconfig feature allows you to build containers alongside your system image, but in a completely separate configuration.
You might have:
#+begin_src
meta-viso-containers/
├── conf/
│   └── multiconfig/
│       ├── visosystemcontainer.conf
│       └── visoappcontainers.conf
#+end_src
This makes it easy to build containers independently, targeting different architectures (e.g., system image for ARM, container for AMD64), or with their own dependencies.
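To enable these configurations, list them in your build's local.conf; each multiconfig name matches its file name under conf/multiconfig/ (a minimal sketch based on the files above):
#+begin_src bitbake
BBMULTICONFIG = "visosystemcontainer visoappcontainers"
#+end_src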
In conf/multiconfig/visoappcontainers.conf:
#+begin_src bitbake
TMPDIR="${TOPDIR}/tmp-visoappcontainer"
DISTRO="${@bb.utils.contains('VISO_BUILD', 'production', 'visolinux-appcontainer', 'visolinux-appcontainer-dev', d)}"
IMAGE_VERSION_SUFFIX="-${VISOVERSION}"
OPENCV_CUDA_SUPPORT=' '
DISTRO_FEATURES:remove = " pam"
PACKAGECONFIG:pn-opencv ?= " python3 v4l libv4l gstreamer"
PACKAGECONFIG:pn-opencv:remove = " cuda gtk"
# It must be installed for python
# TBD Upstream bug report
IMAGE_INSTALL:append = " busybox"
#+end_src
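With that in place, the container image can be built in its own configuration, alongside or independently of the system image. A sketch, using the image recipe that appears later in this post:
#+begin_src shell
bitbake mc:visoappcontainers:video-feed-ip-camera-container-image
#+end_src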
umoci: OCI image builder
[[https://umo.ci][umoci]] is a tool used by Yocto to turn rootfs images into OCI-compliant bundles. It's responsible for setting up the image layout, config.json, metadata, and filesystem layers — fully compatible with Docker, Podman, and Kubernetes.
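For a sense of what this involves, here is roughly how umoci is driven by hand: a sketch of the workflow that the Yocto tooling automates for you.
#+begin_src shell
# Create an empty OCI layout and a new, empty image tag inside it
umoci init --layout my-image
umoci new --image my-image:latest

# Unpack the image to a runtime bundle, populate its rootfs, repack it as a new layer
umoci unpack --image my-image:latest bundle
# ... copy the root filesystem into bundle/rootfs ...
umoci repack --image my-image:latest bundle

# Set runtime metadata, e.g. the entrypoint
umoci config --image my-image:latest --config.entrypoint /usr/bin/python3
#+end_src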
In your container image recipe, include:
#+begin_src bitbake
inherit image
inherit image-oci
#+end_src
The image-oci class takes care of umoci-based image construction.
IMAGE_FSTYPES: Generating containers
By setting:
#+begin_src bitbake
IMAGE_FSTYPES = "container oci"
#+end_src
...you tell BitBake to output the root filesystem as an OCI-compliant container image. This avoids the need for Dockerfiles altogether and lets you define the container using pure Yocto metadata.
Supported values:
- container: Legacy, simple tarball
- oci: Modern OCI bundle format (via umoci)
You can then load or push the image using tools like podman:
#+begin_src shell
podman load < tmp/deploy/images/qemux86-64/video-feed-ip-camera-container-image.qemux86-64.oci.tar
#+end_src
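Once loaded, it runs like any other OCI image. For example (the tag comes from OCI_IMAGE_TAG in the recipe below; if podman assigns only an image ID on load, use that instead):
#+begin_src shell
podman run --rm -it video-feed-ip-camera:amd64-1.2.0-gst-dev
#+end_src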
Use Case: Computer Vision Camera Feed with Secure Streaming
The container we're building is a lightweight computer vision application designed to capture video input from a camera and perform basic image processing using OpenCV. It's intended for embedded edge devices that stream video — for example, IP cameras in surveillance or industrial monitoring.
In typical Python setups, OpenCV is installed via pip, which brings a prebuilt version that relies on FFmpeg as its video backend. However, FFmpeg in OpenCV lacks support for some modern RTSP authentication methods — notably, SHA-256 digest authentication. This is a significant issue when connecting to secured video streams, such as those from modern IP cameras.
The solution is to use GStreamer as the video backend for OpenCV, which supports SHA-256 authentication. Unfortunately, the OpenCV Python wheels on PyPI do not include GStreamer support, and manually rebuilding OpenCV with the correct flags and dependencies is non-trivial, especially if you want to avoid dragging in unwanted features like OpenGL.
This is where Yocto shines. You can declaratively configure OpenCV with exactly the features you want, such as:
#+begin_src bitbake
PACKAGECONFIG:pn-opencv ?= " python3 v4l libv4l gstreamer"
#+end_src
This enables GStreamer support, disables OpenGL, and results in a minimal, secure, and reproducible runtime container — built automatically, with no need to touch cmake, pip, or complex Docker build logic.
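On the application side, the GStreamer backend is then selected explicitly when opening the stream. A minimal Python sketch (the RTSP URL and pipeline elements are illustrative, not taken from the actual application):
#+begin_src python
import cv2

# GStreamer pipeline: rtspsrc handles the RTSP session (including digest
# authentication), the H.264 stream is depayloaded, decoded with openh264dec
# (from gstreamer1.0-plugins-bad-openh264), converted, and handed to OpenCV
# through appsink.
pipeline = (
    "rtspsrc location=rtsp://user:password@camera.local/stream latency=100 ! "
    "rtph264depay ! h264parse ! openh264dec ! videoconvert ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # ... basic image processing on `frame` goes here ...
cap.release()
#+end_src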
Real-World Container Recipe with Umoci
#+begin_src bitbake
SUMMARY = "Video feed Video IP camera container"
LICENSE = "MIT"
PREFERRED_PROVIDER_virtual/kernel = "linux-dummy"
IMAGE_FSTYPES = "container oci"
NO_RECOMMENDATIONS = "1"
FORCE_RO_REMOVE = "1"
IMAGE_INSTALL:append = " \
    viso-video-feed-ip-camera-app viso-sdk-python python3-magic vidgear \
    gstreamer1.0-plugins-base-meta \
    gstreamer1.0-plugins-bad-openh264 \
    gstreamer1.0-plugins-bad-videoparsersbad \
    gstreamer1.0-plugins-good-rtp \
    gstreamer1.0-plugins-good-rtpmanager \
    gstreamer1.0-plugins-good-rtsp \
    gstreamer1.0-plugins-good-udp \
"
OCI_IMAGE_ENTRYPOINT = "/usr/bin/python3"
OCI_IMAGE_ENTRYPOINT_ARGS = "-u /usr/src/app/main.py"
OCI_IMAGE_WORKINGDIR = "/usr/src/app"
OCI_IMAGE_TAG = "video-feed-ip-camera:amd64-1.2.0-gst-dev"
inherit image
inherit image-oci
#+end_src
This container can be built, signed, and deployed via Podman or containerd — all without writing a single Dockerfile.
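For instance, pushing (and optionally signing) the loaded image to a registry with Podman might look like this; the registry and signing identity are placeholders:
#+begin_src shell
podman push --sign-by maintainer@example.com \
    video-feed-ip-camera:amd64-1.2.0-gst-dev \
    registry.example.com/viso/video-feed-ip-camera:amd64-1.2.0-gst-dev
#+end_src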
Podman vs Umoci: Feature Comparison
| Feature | Podman | Umoci via Yocto |
|----------------------------------+-------------------------------+----------------------------------|
| Build style | Imperative | Declarative |
| Reproducibility | Medium | High |
| Security (no compilers/tools) | Manual (via multistage) | Out of the box |
| Image size optimization | Manual | Automatic |
| Build speed | Fast | Slower (initially) |
| Use case | Dev/test containers | Production-grade containers |
Podman vs Umoci: Resulting container size comparison
[[images/podman-unoci-size.png]]
Conclusion
If you're building embedded applications with strict requirements around reproducibility, security, and image size — combining Yocto and umoci gives you full control.
While Podman and multi-stage Dockerfiles are convenient for quick iteration, they fall short for deterministic builds at scale.
Use Yocto + umoci when:
- You already use Yocto for firmware or rootfs
- You need reproducible, controlled containers
- You care about keeping unnecessary tools out of production