Need advice on setting up CI/CD for Yocto


Hi everyone, I need to set up CI/CD for our Yocto environment, but I’m not sure where to start. Could you help me out? What tools do you use, and what would you recommend?

Thanks in advance.

Note: I have a server, so I am able to use a self-hosted runner.

Hi @stzn,

Thanks for the thread, as discussed on IRC I’ll focus on the self-hosted, single runner GitHub Actions scenario.

In fact I’ve been using something very similar for the meta-mender-community builds. It is defunct now, as I have since moved to a fully self-hosted Forgejo instance, but the files are still visible.

I think the closest blueprint for you is meta-mender-community/.github/workflows/build_boards.yml at f42b2a0f3bab0b23c3368eaae9b3cafa10a3cc63 · mendersoftware/meta-mender-community · GitHub

The concept is essentially these steps:

(one time)
0.1. Prepare a container which holds a known-good build tooling set, including the setup automation you want. I currently use kas, but pick whatever suits you. Common starting points might be CROPS and Pyrex, but creating a new one is also kinda trivial.
0.2. Prepare the directories on the build server where you want to persist sstate, downloads, and artifact collection.
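The one-time directory preparation can be as small as a few lines of shell. This is only a sketch; the base path defaults to `$PWD/yocto` here for illustration, on a real server you’d likely pick something like /srv/yocto:

```shell
#!/bin/sh
# One-time setup of the persistent directories on the build server.
# BASE is an assumption; point it wherever your server keeps build state.
set -eu
BASE="${BASE:-$PWD/yocto}"

mkdir -p "$BASE/sstate"      # shared sstate-cache, reused across builds
mkdir -p "$BASE/downloads"   # shared DL_DIR, reused across builds
mkdir -p "$BASE/artifacts"   # collected build output, one subdir per run

# Make sure the runner user owns everything, so a container started
# with the runner's uid/gid can write into these directories.
chown -R "$(id -u):$(id -g)" "$BASE"
```

The chown matters more than it looks: it is exactly the uid/gid handoff that bites later if other processes on the box also touch these directories.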

the build workflow:

  1. Prepare: remove any leftover top-level directory which holds the builds.
  2. Build all boards in sequence: create a subdirectory of the top-level directory and kick off the build container inside it. Give the container the following key information: the uid/gid of the runner user, and bind mounts of the build directory, sstate, and downloads.
  3. Collect the build output by copying the relevant files from the build directory to the artifact collection (remember to implement some clean-up strategy here). Some ideas for artifact collection are in meta-mender-community/.github/workflows/build_boards.yml at 2d14506873aa98b2357c4309a43ad26704114997 · mendersoftware/meta-mender-community · GitHub
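The three steps above can be sketched as one script. Everything named here is hypothetical (image name, board list, kas config), and DOCKER defaults to `echo docker` so the container invocations are only printed as a dry run; set DOCKER=docker on the actual runner:

```shell
#!/bin/sh
# Sketch of the build workflow: prepare, build per board, collect.
# All names (my-kas-image, boardA/boardB, kas-*.yml) are made up.
set -eu

DOCKER="${DOCKER:-echo docker}"            # dry run by default
BUILD_ROOT="${BUILD_ROOT:-$PWD/build-root}"
BASE="${BASE:-$PWD/yocto}"                 # holds sstate/downloads/artifacts

# 1. prepare: remove any leftover top-level build directory
rm -rf "$BUILD_ROOT"
mkdir -p "$BUILD_ROOT" "$BASE/sstate" "$BASE/downloads" "$BASE/artifacts"

# 2. build all boards in sequence, one subdirectory each; the container
#    gets the runner's uid/gid plus the three bind mounts
for board in boardA boardB; do
    mkdir -p "$BUILD_ROOT/$board"
    $DOCKER run --rm \
        --user "$(id -u):$(id -g)" \
        -v "$BUILD_ROOT/$board:/work" \
        -v "$BASE/sstate:/sstate" \
        -v "$BASE/downloads:/downloads" \
        my-kas-image kas build "/work/kas-$board.yml"
done

# 3. collect: copy the relevant images into a timestamped artifact dir,
#    and prune runs older than 30 days as a simple clean-up strategy
stamp="$(date +%Y%m%d-%H%M%S)"
mkdir -p "$BASE/artifacts/$stamp"
find "$BUILD_ROOT" -name '*.wic*' -exec cp {} "$BASE/artifacts/$stamp/" \;
find "$BASE/artifacts" -mindepth 1 -maxdepth 1 -type d -mtime +30 -exec rm -rf {} +
```

The `rm -rf` of the whole top-level directory at the start is deliberate: with sstate and downloads persisted outside of it, a fresh build tree per run is cheap and avoids stale-state surprises.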

the test workflow:

  1. Make sure you have an automated way of testing in an actual hardware-in-the-loop setup, maybe using labgrid.

  2. This is usually quite straightforward - invoke the test with a known artifact set.
    The magic here is more in passing the information on which artifact is the one to test. Combining both into one workflow would solve that problem, but it means you can’t trigger the hardware test without running all the builds before it, which usually makes turnaround time super bad, so I definitely recommend splitting the two.
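With a single self-hosted runner, one low-tech way to pass "which artifact set to test" between the split workflows is a plain pointer file on persistent disk. This is just a sketch of the idea; the layout and the fixed stamp are assumptions:

```shell
#!/bin/sh
# Hand-off between the build and test workflows via a pointer file.
# Works only because both workflows land on the same self-hosted runner.
set -eu
BASE="${BASE:-$PWD/yocto}"
mkdir -p "$BASE/artifacts"

# --- last step of the build workflow: record the freshly built set ---
stamp="20240101-120000"                    # would normally be $(date +%Y%m%d-%H%M%S)
mkdir -p "$BASE/artifacts/$stamp"
printf '%s\n' "$stamp" > "$BASE/artifacts/LATEST"

# --- first step of the test workflow: resolve the pointer ---
to_test="$(cat "$BASE/artifacts/LATEST")"
echo "testing artifact set: $BASE/artifacts/$to_test"
```

A workflow_dispatch input naming an explicit artifact set would serve the same purpose when you want to re-test an older build instead of the latest one.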

What are the problems I’ve run into:

  • Passing information between workflows. GitHub Actions just doesn’t expect anything to be persistent, except through its own artifact uploading, and that totally does not work for Yocto-style things. It has its reasons: they assume workflows can be scheduled across a huge number of runners, so you never know where you end up. It just doesn’t translate well to a single-runner instance.
  • uid/gid woes, especially if the box has other duties too. Make sure you really have a grip on which uid/gid owns sstate, downloads, and so on.
  • Container name and registry: for whatever reason those have to be hardcoded (or at least had to be when I was using them). Don’t waste time trying to work around that.
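For the uid/gid woes, a cheap sanity check before each build can save a lot of debugging. This sketch (paths assumed, same hypothetical layout as above) lists anything under the persistent directories not owned by the runner user, which is exactly what breaks a container started with `--user`:

```shell
#!/bin/sh
# Flag files under the persistent dirs that are NOT owned by the current
# (runner) user -- those are the ones a --user container cannot write.
BASE="${BASE:-$PWD/yocto}"
mkdir -p "$BASE/sstate" "$BASE/downloads"

bad="$(find "$BASE/sstate" "$BASE/downloads" ! -user "$(id -un)" 2>/dev/null)"
if [ -n "$bad" ]; then
    echo "wrong ownership detected:"
    echo "$bad"
    exit 1
fi
echo "ownership ok"
```

Running this as a first workflow step turns a mysterious mid-build sstate permission error into an immediate, readable failure.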

That’s what comes to mind immediately. If you have additional specific questions, please let me know and I’ll try to help.

Greetz,
Josef