Adding docker and docker-compose to a Yocto build

Introduction

Containers, and especially docker as the management tool, are seeing ever-increasing interest and demand in embedded Linux as well. Let's look at how you can add them to your Yocto Project® (YP) based build!

This tutorial will guide you through adding the necessary layers and configuration to an existing initial setup, such as the one described in the setup guide on Mender Hub, so that docker, and eventually docker-compose, can be installed on the device.

This is a high-level tutorial and the intention is not to cover the Yocto Project in detail. For detailed information we recommend that you read the Yocto Project Mega-Manual.

The Yocto Project is an open source collaboration project that helps developers create custom Linux-based operating systems regardless of the hardware architecture.

The project provides a flexible set of tools and a space where embedded developers worldwide can share technologies, software stacks, configurations, and best practices that can be used to create tailored Linux images for embedded and IoT devices, or anywhere a customized Linux OS is needed.

Version notes

The tutorial has been verified on Debian 11, as of 2023-08-02

This tutorial uses kirkstone as the primary target, which is the current LTS release of the Yocto Project. You can find more information on releases here. Supported releases for following the tutorial are:

Yocto Project       Tutorial applies   Maintenance
scarthgap (5.0)     :test_works:       :test_works: development
nanbield (4.3)      :test_works:       :test_works: stable
mickledore (4.2)    :test_works:       :test_fails: EOL
langdale (4.1)      :test_works:       :test_fails: EOL
kirkstone (4.0)     :test_works:       :test_works: LTS
honister (3.4)      :test_works:       :test_fails: EOL
hardknott (3.3)     :test_works:       :test_fails: EOL
gatesgarth (3.2)    :test_fails:       :test_fails: EOL
dunfell (3.1)       :test_works:       :test_works: LTS
zeus (3.0)          :test_fails:       :test_fails: EOL
warrior (2.7)       :test_fails:       :test_fails: EOL
thud (2.6)          :test_fails:       :test_fails: EOL
sumo (2.5)          :test_fails:       :test_fails: EOL
rocko (2.4)         :test_fails:       :test_fails: EOL

Please note: a failure in the “tutorial applies” column indicates that the instructions do not work without modification. Depending on the combination of release and host Linux distribution, installing other Python versions or dependencies might restore a functional state.

Prerequisites

To follow this tutorial, you will need:

  • A prepared build setup. We will be using an existing Yocto Project build setup with the Raspberry Pi 4 as the example target, as described in the setup guide on Mender Hub.
  • An initialized shell. The commands assume that the shell has been initialized for the build setup, which is usually done by invoking source poky/oe-init-build-env. Given the linked setup, we start in the yocto directory (see the sketch below).
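
A minimal sketch of the assumed starting point; the directory name yocto and the sources/build layout follow the linked setup guide and may differ in your setup:

cd yocto
# initialize the build environment; this also changes into the build directory
source poky/oe-init-build-env build
# return to the top-level directory, where the following sections start
cd ..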

The meta-virtualization layer

The meta-virtualization layer is the canonical place for all virtualization and containerization-related pieces of metadata. This mostly means recipes, but also commonly required kernel configuration fragments.

Prominent examples of provided recipes are docker, podman, lxc and kubernetes.

Getting the layer

Clone the repository into the sources directory. We are checking out the kirkstone branch here; adapt this in case you are using a different release.

cd sources
git clone git://git.yoctoproject.org/meta-virtualization -b kirkstone
cd ..

Adding the layer

Change into the build directory:

cd build

meta-virtualization has an additional layer dependency compared to our base setup, so let's add that first:

bitbake-layers add-layer ../sources/meta-openembedded/meta-filesystems

Then add the virtualization layer, and check the layer listing:

bitbake-layers add-layer ../sources/meta-virtualization
bitbake-layers show-layers
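
To double-check that the recipes provided by the new layer are now visible to bitbake, you can list them by name pattern. A quick sketch; the exact list varies with the release:

bitbake-layers show-recipes "docker*"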

docker

Building

docker is provided as a virtual target, as there are various provider recipes for it. You can test its build via

bitbake virtual/docker

or by selecting a specific provider, such as docker-ce by

bitbake docker-ce

Note: this just checks if docker builds for your setup. As with every package, it will not be installed without adding it to the image.
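
If you want to pin which implementation satisfies virtual/docker, you can set a preferred provider. A minimal sketch for local.conf, assuming docker-ce is the desired choice:

PREFERRED_PROVIDER_virtual/docker = "docker-ce"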

Enabling the virtualization features

You might have noticed this warning:

WARNING: You have included the meta-virtualization layer, but 'virtualization' has not been enabled in your DISTRO_FEATURES. Some bbappend files may not take effect. See the meta-virtualization README for details on enabling virtualization support.

This means you need to add a distribution-wide setting in order to fully enable the things that meta-virtualization provides. An example is the overlayfs support, which docker requires: without this kernel option, docker could be installed on the device but would not work.

For our example setup, you can just add the DISTRO_FEATURES extension to local.conf. In a real-life situation, this should go into your custom distribution configuration file.

echo 'DISTRO_FEATURES:append = " virtualization"' >> conf/local.conf
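
To verify that the feature is actually active, you can inspect the expanded variable. One way of filtering the output, using core-image-base as the example target:

bitbake -e core-image-base | grep '^DISTRO_FEATURES='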

Adding docker to the image and testing it

To add docker to your resulting image, you can append it to the IMAGE_INSTALL variable. For testing, we put that modification into local.conf:

echo 'IMAGE_INSTALL:append = " docker-ce"' >> conf/local.conf

Then build the desired image, such as core-image-base, and test docker on the device:

bitbake core-image-base
<put on board, boot and log in>
docker run -it --rm hello-world

Congratulations, you have added docker to your build!
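
If the hello-world test fails, first confirm that the docker daemon is actually up. A small sketch, assuming a systemd-based image; adapt the service check to your init system:

# check that the daemon is running
systemctl is-active docker
# query the daemon for basic information; this fails if it is not reachable
docker info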

For proper maintenance, it is highly recommended to move the DISTRO_FEATURES extension into your custom distribution configuration now, and add the IMAGE_INSTALL modification to your project image recipe!
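
As an illustration only, with hypothetical file names (my-distro.conf and my-image.bb are placeholders, not part of the example setup), the two pieces could look like this:

# conf/distro/my-distro.conf in your custom layer
require conf/distro/poky.conf
DISTRO = "my-distro"
DISTRO_FEATURES:append = " virtualization"

# recipes-core/images/my-image.bb in your custom layer
require recipes-core/images/core-image-base.bb
IMAGE_INSTALL:append = " docker-ce"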

Advanced: docker-compose

The mickledore release

Unfortunately, docker-compose is only available from the mickledore release onwards, see the OpenEmbedded layer index for reference. This means that if you want to install it, you need to move your whole build setup forward to the mickledore release branch. Usually this is straightforward: check out the mickledore branch of all layers in the sources directory, as sketched below. After that, the next image build will be based on the new release. Please note that this will probably take a noticeable amount of time!
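
A minimal sketch of the branch switch, assuming all layers live in sources/ and each has a mickledore branch; poky, if checked out separately, needs the same treatment:

cd sources
for layer in */; do
    git -C "$layer" fetch origin
    git -C "$layer" checkout mickledore
done
cd ..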

Adding docker-compose

Once the build setup and especially meta-virtualization are moved to mickledore, docker-compose can be added to the IMAGE_INSTALL variable. Again, for testing, putting this into local.conf can be done like this:

echo 'IMAGE_INSTALL:append = " docker-compose"' >> conf/local.conf

This will add docker-compose to any image.
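
After building and deploying the new image, a minimal compose file is enough to test it on the device. A sketch, assuming network access on the device; the file and service names are arbitrary:

cat > docker-compose.yml <<EOF
services:
  hello:
    image: hello-world
EOF
docker-compose up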

Adding iptables NAT to the kernel

By default, meta-virtualization brings a kernel configuration fragment which enables NAT as a module. This can cause problems with software defined network configurations in docker-compose. To get around that, you can add a configuration fragment which moves this functionality to built-in.

This requires that you have a custom layer prepared which can hold recipes. Please see steps 1 and 2 of this article if you still need to create one, or use the sketch below.
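
If you need a quick start, bitbake-layers can scaffold a layer for you. A sketch, run from the build directory, with meta-custom as a placeholder name:

bitbake-layers create-layer ../sources/meta-custom
bitbake-layers add-layer ../sources/meta-custom
cd ../sources/meta-custom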

In your layer, create a directory structure to hold kernel modifications. On the Raspberry Pi, the linux-raspberrypi kernel recipe is used, adapt accordingly if you use a different board.

mkdir -p recipes-kernel/linux-raspberrypi
cd recipes-kernel/linux-raspberrypi

Add a .bbappend file to incorporate the additional configuration fragment, and then create the fragment itself. Note the quoted heredoc delimiter, which keeps the shell from expanding the BitBake variables ${THISDIR} and ${PN}:

cat > linux-raspberrypi_6.1.bbappend <<'EOF'
FILESEXTRAPATHS:prepend := "${THISDIR}/${PN}:"
SRC_URI += "file://netfilter.cfg"
EOF

mkdir linux-raspberrypi && cd linux-raspberrypi

cat >> netfilter.cfg <<EOF
CONFIG_NETFILTER_NETLINK=y
CONFIG_NF_NAT=y
CONFIG_NF_TABLES=y
CONFIG_NFT_NAT=y
CONFIG_NETFILTER_XTABLES=y
CONFIG_NETFILTER_XT_NAT=y
CONFIG_NETFILTER_XT_TARGET_NETMAP=y
CONFIG_NETFILTER_XT_TARGET_REDIRECT=y
CONFIG_NETFILTER_XT_TARGET_MASQUERADE=y
CONFIG_IP_NF_IPTABLES=y
CONFIG_IP_NF_NAT=y
CONFIG_IP_NF_TARGET_MASQUERADE=y
CONFIG_IP_NF_TARGET_NETMAP=y
CONFIG_IP_NF_TARGET_REDIRECT=y
EOF

Note: this matches Linux kernel version 6.1, which corresponds to the mickledore release. Adapt as necessary to your specific board and release.

After rebuilding the image, which will also trigger a kernel rebuild, you are able to fully use docker-compose on your device.
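
To confirm that the fragment made it into the kernel, you can inspect the running configuration on the device. A sketch, assuming the kernel exposes its configuration via /proc/config.gz (CONFIG_IKCONFIG_PROC); otherwise, check the generated .config in the kernel build directory:

zcat /proc/config.gz | grep CONFIG_NF_NAT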

Conclusion

In this tutorial we covered the general requirements and strategy to install docker and eventually docker-compose on your Yocto Project based Linux image. We used the Raspberry Pi 4 as a reference platform, with the core concepts being applicable to any target which has a Yocto Project based board support package.

Addendum - Troubleshooting

docker: x509: certificate has expired

As the Raspberry Pi used in this example neither has a battery-backed RTC nor synchronizes the clock with NTP by default, you might get the error docker: Error response from daemon: Get "https://registry-1.docker.io/v2/": x509: certificate has expired or is not yet valid: current time 2018-03-09T12:35:33Z is before 2023-05-05T00:00:00Z., or a similar timestamp mentioned.

Fix

Manually set the system time on your Raspberry Pi using the date command. An example setting August 3rd 2023, 13:15 is

date -s 2308031315

For more details, please consult the manpage.
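
A more durable fix is to enable time synchronization on the device. A sketch, assuming a systemd-based image with systemd-timesyncd available:

timedatectl set-ntp true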


If this tutorial was useful to you, please press like, or leave a thank you note to the contributor who put valuable time into this and made it available to you. It will be much appreciated!


Dear TheYoctoJester,

Thank you for this write-up. This is amazing. Having remote docker-compose support on Yocto builds is great.

Forgive me if I am making novice mistakes here.

I followed your tutorial, but I tried to modify your example in two ways:

  1. Create a meta-layer where your modifications are put in their correct places
  2. Modify the MACHINE such that I can run this example in qemux86-64

To this end, I uploaded my work as an example to GitHub. It is unfinished, but compiles.

And I modified the conf/local.conf as such:

MACHINE ??= "qemux86-64"
DISTRO ?= "robotics-deployment"

EXTRA_IMAGE_FEATURES ?= "debug-tweaks"
BB_NUMBER_THREADS = "${@oe.utils.cpu_count() - 1}"
PARALLEL_MAKE = "-j ${@oe.utils.cpu_count() - 1}"

USER_CLASSES ?= "buildstats"

PATCHRESOLVE = "noop"

BB_DISKMON_DIRS ??= "\
    STOPTASKS,${TMPDIR},1G,100K \
    STOPTASKS,${DL_DIR},1G,100K \
    STOPTASKS,${SSTATE_DIR},1G,100K \
    STOPTASKS,/tmp,100M,100K \
    HALT,${TMPDIR},100M,1K \
    HALT,${DL_DIR},100M,1K \
    HALT,${SSTATE_DIR},100M,1K \
    HALT,/tmp,10M,1K"

PACKAGECONFIG:append:pn-qemu-system-native = " sdl"

CONF_VERSION = "2"

For brevity, I removed all comments that come by default in local.conf.

Along with the conf/bblayers.conf:

# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"

BBPATH = "${TOPDIR}"
BBFILES ?= ""

BBLAYERS ?= " \
  ${TOPDIR}/../meta \
  ${TOPDIR}/../meta-poky \
  ${TOPDIR}/../meta-yocto-bsp \
  ${TOPDIR}/../sources/meta-openembedded/meta-oe \
  ${TOPDIR}/../sources/meta-openembedded/meta-filesystems \
  ${TOPDIR}/../sources/meta-openembedded/meta-python \
  ${TOPDIR}/../sources/meta-openembedded/meta-networking \
  ${TOPDIR}/../sources/meta-virtualization \
  ${TOPDIR}/../sources/meta-robotics-deployment \
  "

After all this, I did manage to compile it all and that is all well.

However, after

runqemu qemux86-64

and a successful boot, I run into the following:

I have a feeling I did something wrong with the netfilter.cfg; can it be that your configuration does not apply to qemux86-64? I have not tried this yet for a Raspberry Pi.

I have never seen the http: invalid Host header error

What I tried is modifying the following in local.conf:

PREFERRED_PROVIDER_virtual/docker = "docker-moby"

Doing so changed the installed docker version from 23.0.2 to 24.0.0; I thought there might be an issue in docker itself, but sadly the issue remains.

Edit: I decided to build it with MACHINE ??= "raspberrypi4" instead and flashed my Raspberry Pi, and I am getting the identical error. I would love to get your input on this.

Some debug output that could maybe help:

Greetings and thank you very much for your post.

Deniz

In addition to this, it seems that podman does work out of the box, even with the netfilter.cfg configuration removed from the build:

root@qemux86-64:~# podman run hello-world
Resolved "hello-world" as an alias (/var/cache/containers/short-name-aliases.conf)
Trying to pull docker.io/library/hello-world:latest...
Getting image source signatures
Copying blob 719385e32844 done  
Copying config 9c7a54a9a4 done  
Writing manifest to image destination
Storing signatures

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

root@qemux86-64:~# 

Hi @dHofmeister,

Thanks a lot for getting in touch. I have verified the behaviour, and talked to the meta-virtualization maintainers about it. It is actually a currently known bug. Fixes should already be on the master branch, and are planned to appear on mickledore shortly. I will keep you in the loop!

Greetz,
Josef


@TheYoctoJester What would be the best way to get docker-compose working in a kirkstone Yocto environment? I have tried upgrading my entire build process to the mickledore branch and master-next for the meta-mender layer, but I'm not really comfortable running development branches in a production build. Is there another way to get docker-compose into kirkstone without upgrading?

For reference, I'm running python3-docker-compose right now, but for usage with app-gen to generate a docker-compose artifact I need to use docker-compose v2.

Hi @sieger,

There is no path to kirkstone that I could really recommend. I tried to build and run as-is, but the go versions were giving me a hard time. In the current situation, my bet would rather be to move forward as much as possible, to nanbield or even master, until the next release, scarthgap, branches directly into an LTS. This might mean a bit more integration, maintenance and testing right now, but it will give the best long-term strategy.

Greetz,
Josef

