Adding k3s to a Yocto build


Containers are seeing ever-increasing interest, and building upon their technology, Kubernetes has emerged as the de facto standard for container orchestration. Let’s look at how you can add it to your Yocto Project® (YP) based build!

This tutorial will guide you through adding the necessary layers and configuration for k3s to an existing initial setup, such as the one described in the setup guide on Mender Hub. K3s is a Kubernetes distribution suitable for resource-constrained devices, such as IoT or edge computing devices.

This is a high-level tutorial and the intention is not to cover the Yocto Project in detail. For detailed information we recommend that you read the Yocto Project Mega-Manual.

The Yocto Project is an open source collaboration project that helps developers create custom Linux-based operating systems regardless of the hardware architecture.

The project provides a flexible set of tools and a space where embedded developers worldwide can share technologies, software stacks, configurations, and best practices that can be used to create tailored Linux images for embedded and IoT devices, or anywhere a customized Linux OS is needed.

Version notes

The tutorial has been verified on Debian 12, as of 2024-03-08

This tutorial uses kirkstone as the primary target, which is the current LTS release of the Yocto Project. You can find more information on releases here. Supported releases for following the tutorial are:

| Yocto Project | Builds | Tutorial applies | Maintenance |
|---------------|--------|------------------|-------------|
| scarthgap (5.0) | :test_works: | :test_works: | development |
| nanbield (4.3) | :test_works: | :test_works: | stable |
| mickledore (4.2) | :test_works: | :test_fails: | EOL |
| langdale (4.1) | :test_works: | :test_fails: | EOL |
| kirkstone (4.0) | :test_works: | :test_works: | LTS |
| honister (3.4) | :test_works: | :test_fails: | EOL |
| hardknott (3.3) | :test_works: | :test_fails: | EOL |
| gatesgarth (3.2) | :test_works: | :test_fails: | EOL |
| dunfell (3.1) | :test_works: | :test_works: | LTS |
| zeus (3.0) | :test_fails: | :test_fails: | EOL |
| warrior (2.7) | :test_fails: | :test_fails: | EOL |
| thud (2.6) | :test_fails: | :test_fails: | EOL |
| sumo (2.5) | :test_fails: | :test_fails: | EOL |

Please note: a failure in the “tutorial applies” column indicates that the instructions do not work without modification. Depending on the combination of release and host Linux distribution, installing other Python versions or dependencies may still yield a working setup.


To follow this tutorial, you will need:

  • A prepared build setup. We will be using an existing Yocto Project build setup and QEMU for AArch64 as the example target. As the basis for the instructions, the setup described on Mender Hub is used.
  • An initialized shell. The commands assume that the shell has been initialized for the build setup, which is usually done by invoking source poky/oe-init-build-env. Given the linked setup, we start in the yocto directory.

The meta-virtualization layer

The meta-virtualization layer is the canonical place for all virtualization and containerization-related pieces of metadata. This mostly means recipes, but also commonly required kernel configuration fragments.

Prominent examples of provided recipes are docker, podman, lxc, and k3s.

Getting the layer

Clone the repository into the sources directory. We are checking out the kirkstone branch here, adapt in case you are using a different release.

cd sources
git clone git://git.yoctoproject.org/meta-virtualization -b kirkstone
cd ..

Adding the layer

Change into the build directory:

cd build

meta-virtualization has additional layer dependencies compared to our base setup, so let’s add those first:

bitbake-layers add-layer ../sources/meta-openembedded/meta-filesystems
bitbake-layers add-layer ../sources/meta-openembedded/meta-networking
bitbake-layers add-layer ../sources/meta-openembedded/meta-python

Then add the virtualization layer, and check the layer listing:

bitbake-layers add-layer ../sources/meta-virtualization
bitbake-layers show-layers



You can test building k3s via

bitbake k3s

Note: this just checks if k3s builds for your setup. As with every package, it will not be installed without adding it to the image.

Enabling the virtualization features

You might have noticed this warning:

WARNING: You have included the meta-virtualization layer, but 'virtualization' has not been enabled in your DISTRO_FEATURES. Some bbappend files may not take effect. See the meta-virtualization README for details on enabling virtualization support.

This means you need to add a distribution-wide setting in order to fully enable what meta-virtualization provides. In the k3s context, two additional DISTRO_FEATURES flags need to be set:

  • some kernel configuration tweaks. These are triggered by the DISTRO_FEATURES setting k3s.
  • seccomp support. This is enabled by the DISTRO_FEATURES setting seccomp.

For our example setup, you can just add the DISTRO_FEATURES extension to local.conf. In a real-life situation, this should go into your custom distribution configuration file.

echo 'DISTRO_FEATURES:append = " k3s"' >> conf/local.conf
echo 'DISTRO_FEATURES:append = " seccomp"' >> conf/local.conf
echo 'DISTRO_FEATURES:append = " virtualization"' >> conf/local.conf

Image adaptations

The k3s recipes in meta-virtualization rely on systemd as the init manager for automated start-up. Just like the DISTRO_FEATURES changes, this should be set in your distribution configuration file; for the sake of this tutorial we will use local.conf.

echo 'INIT_MANAGER = "systemd' >> conf/local.conf

In order to have enough free storage for running containers, allocate 2GB of additional image space. This should be done in a custom image recipe, but local.conf is also possible:

echo 'IMAGE_ROOTFS_EXTRA_SPACE = "2097152"' >> conf/local.conf
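IMAGE_ROOTFS_EXTRA_SPACE is expressed in kilobytes, so the value above works out to the 2GB mentioned:

```shell
# 2097152 KiB / 1024 = 2048 MiB; 2048 MiB / 1024 = 2 GiB
echo $((2097152 / 1024 / 1024))
# -> 2
```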

Adding k3s to the image and testing it

To add a working standalone k3s instance to your resulting image, you need to add a few items to the IMAGE_INSTALL variable. For testing, we put these modifications into local.conf:

echo 'IMAGE_INSTALL:append = " packagegroup-k3s-host"' >> conf/local.conf
echo 'IMAGE_INSTALL:append = " packagegroup-k3s-node"' >> conf/local.conf
echo 'IMAGE_INSTALL:append = " ca-certificates"' >> conf/local.conf
echo 'IMAGE_INSTALL:append = " kernel-modules"' >> conf/local.conf

Then build the desired image, such as core-image-base:

bitbake core-image-base

Unlike the standard images, which can be tested with runqemu directly, running the k3s cluster requires substantially more RAM than the default of just 256MB. Start runqemu with 2048MB of RAM:

runqemu nographic slirp qemuparams="-m 2048"

Log into the started instance. After waiting a few moments for the start-up process to complete, you can test the k3s cluster by running:

root@qemuarm64:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.6-k3s1", GitCommit:"c29ac40427297294b851f1edecce76b17eaf0e96", GitTreeState:"clean", BuildDate:"2022-01-20T00:22:17Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"22", GitVersion:"v1.22.6-k3s1", GitCommit:"c29ac40427297294b851f1edecce76b17eaf0e96", GitTreeState:"clean",
BuildDate:"2022-01-20T00:22:17Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"linux/arm64"}
root@qemuarm64:~# kubectl get nodes
NAME           STATUS   ROLES                  AGE     VERSION
qemuarm64   Ready    control-plane,master   2m13s   v1.22.6-k3s1

Congratulations, you have added k3s to your build!

For proper maintenance, it is highly recommended to move the DISTRO_FEATURES extension into your custom distribution configuration now, and add the IMAGE_INSTALL modifications to your project image recipe!


In this tutorial we covered the general requirements and strategy to install k3s on your Yocto Project based Linux image. We used QEMU as a reference platform, with the core concepts being applicable to any target which has a Yocto Project based board support package.

Additional resources

meta-virtualization ships a README file specifically for k3s; see recipes-containers/k3s/ in the layer.

If this tutorial was useful to you, please press like, or leave a thank you note to the contributor who put valuable time into this and made it available to you. It will be much appreciated!