Giada VM23

The official Mender documentation explains how Mender works. This is simply a board-specific complement to the official documentation.

Board description

The Giada VM23 is a versatile PC optimized for digital signage, desktop virtualization, and noise-sensitive environments. The VM23 is available with two Intel Apollo Lake based options: an Intel® Celeron® N3350 based dual-core version with 2 GB of embedded low-power DDR3, and an Intel® Celeron® N3450 based version with 4 GB of low-power DDR3. Both models come with 32 GB of on-board eMMC, with a full-size mSATA connector available for expanded storage. The VM23 supports Windows 10 (64-bit) and Linux.


Test results

The Yocto Project releases in the table below have been tested by the Mender community. Please update it if you have tested this integration on other Yocto Project releases:

Yocto Project | Build        | Runtime
thud (2.6)    | :test_works: | :test_works:

  • Build means that the Yocto Project build using this Mender integration completes without errors and outputs images.
  • Runtime means that Mender has been verified to work on the board. For U-Boot-based boards, the integration checklist has been verified.

Getting started


  • A supported Linux distribution and dependencies installed on your workstation/laptop as described in the Yocto Mega Manual
    • NOTE. Instructions depend on which Yocto version you intend to use.
  • Google repo tool installed and in your PATH.

Configuring the build

Setup Yocto environment

Set the Yocto Project branch you are building for:

# set to your branch, make sure it is supported (see table above)
export BRANCH="thud"

Create a directory for your mender-giada setup to live in and clone the
meta information.

mkdir mender-giada && cd mender-giada

Initialize repo manifest:

repo init -u \
           -m meta-mender-intel/scripts/manifest-intel.xml \
           -b ${BRANCH}

Fetch layers in manifest:

repo sync

Setup build environment

Initialize the build environment:

source setup-environment intel

Configure Mender server URL (optional)

This section is not required for a successful build, but images generated by default are only suitable for use with the Mender client in standalone deployments, since they lack server configuration.

You can edit the conf/local.conf file to provide your Mender server configuration, ensuring that the generated images and Mender Artifacts connect to the Mender server you are using. There should already be a commented section in the generated conf/local.conf file; simply uncomment the relevant configuration options and assign appropriate values to them.

Build for Hosted Mender:

# To get your tenant token:
#    - log in to
#    - click your email at the top right and then "My organization"
#    - press the "COPY TO CLIPBOARD" button
#    - assign content of clipboard to MENDER_TENANT_TOKEN
MENDER_TENANT_TOKEN = "<copy token here>"
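For reference, the Hosted Mender section of conf/local.conf typically ends up looking like the following sketch (MENDER_SERVER_URL and MENDER_TENANT_TOKEN are standard meta-mender variables; the token value is a placeholder you must replace):

```
# Hosted Mender settings in conf/local.conf -- uncomment and adjust the
# generated section rather than adding duplicate lines
MENDER_SERVER_URL = "https://hosted.mender.io"
MENDER_TENANT_TOKEN = "<paste your tenant token here>"
```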

Build for Mender demo server:

# Update IP address to match the machine running the Mender demo server

Building the image

You can now proceed with building an image:

MACHINE=intel-corei7-64 bitbake core-image-full-cmdline

Replace core-image-full-cmdline with your desired image target.

Using the build output

After a successful build, the images and build artifacts are placed in tmp/deploy/images/intel-corei7-64/.

  • tmp/deploy/images/intel-corei7-64/core-image-full-cmdline-intel-corei7-64.uefiimg
  • tmp/deploy/images/intel-corei7-64/core-image-full-cmdline-intel-corei7-64.mender

The disk image with .uefiimg suffix is used to provision the device storage for devices without Mender running already. Please proceed to the official documentation on provisioning a new device for steps to do this.

On the other hand, if you already have Mender running on your device and want to deploy a rootfs update using this build, you should use the Mender Artifact files, which have .mender suffix. You can either deploy this Artifact in managed mode with the Mender server (upload it under Releases in the server UI) or by using the Mender client only in Standalone deployments.

Booting from on-board eMMC disk

The default settings assume that you will write the uefiimg to the internal eMMC disk. To do this you must first boot the system from a USB flash drive: choose your favorite Linux distribution and create a live USB image from it.
NOTE! Flashing the Yocto-generated uefiimg directly to a USB flash drive will not boot on this device.

To build an image suitable to flash onto the on-board eMMC disk you must set the following in your local.conf and re-build:

MENDER_STORAGE_DEVICE = "/dev/mmcblk0"

Once the system has booted from the USB flash drive, transfer the core-image-full-cmdline-intel-corei7-64.uefiimg.gz file to the device, e.g. using scp:

scp tmp/deploy/images/intel-corei7-64/core-image-full-cmdline-intel-corei7-64.uefiimg.gz root@<device-ip>:/tmp/

NOTE! Replace the IP address in the example with the actual IP address of your device.
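To find that address, the live system's standard iproute2 tooling can be queried on the device itself (a sketch; interface names and addresses will vary per setup):

```shell
# List IPv4 addresses per interface in brief form (run on the device)
ip -4 -brief addr show
```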

Once the file is transferred you need to write the uefiimg to the on-board eMMC disk by running the following command:

zcat /tmp/core-image-full-cmdline-intel-corei7-64.uefiimg.gz | dd of=/dev/mmcblk0 bs=32M
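Before pointing dd at /dev/mmcblk0, the same zcat | dd pipeline can be exercised safely against a scratch file to confirm the compressed image decompresses and writes intact (a sketch; the file names are arbitrary):

```shell
# Simulate the eMMC write using a throwaway file instead of /dev/mmcblk0
printf 'fake-image-contents' > /tmp/test.img
gzip -kf /tmp/test.img                          # creates /tmp/test.img.gz, keeps original
zcat /tmp/test.img.gz | dd of=/tmp/out.img bs=32M status=none
cmp /tmp/test.img /tmp/out.img && echo "write verified"
```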

Known issues

  • Booting the uefiimg directly from USB fails (not yet investigated). As a workaround, boot a live distro image and copy the uefiimg to the eMMC as described above.

If this post was useful to you, please press like, or leave a thank you note to the contributor who put valuable time into this and made it available to you. It will be much appreciated!
