Read-only compressed rootfs in raw flash?

On my platform I have NOR flash that is smaller than my rootfs size, so I would like to use a compressed rootfs that decompresses and runs out of RAM on boot.

As far as I can tell, this doesn’t seem to be something that’s supported by Mender out of the box (please correct me if I’m wrong!) so I’ve been looking into how I can achieve this. I wanted to ask if I’m on the right track or if I’m missing anything with my current plan:

  1. I’m thinking of using squashfs to compress the rootfs, but am not strictly tied to it and could be convinced to use something different if it’s better supported with Mender or if squashfs is a bad idea.
  2. It looks like Mender only supports ubifs for flash devices, so I was thinking of having a standard setup (like what’s in meta-mender-qemu) that’s something like:
  • mtd0: U-Boot
  • mtd1: ubi layer with ubifs volumes rootfsA, rootfsB & data

ubifs doesn’t look strictly necessary for what I want to do (I could just use raw mtd flash devices), but it does give the advantage of wear leveling and allows file-based access, which is useful for the next point below.

  3. In each of the rootfs ubifs volumes, I would then have:
  • Kernel image
  • Device tree binary
  • Squashfs rootfs file
  4. It looks as though the Mender U-Boot code assumes that the ubifs volume it boots from contains the rootfs directly.

So I would need to update the boot arguments to tell the kernel to boot with the squashfs rootfs on the current partition.

  5. Will I need to do any work to allow mender-artifact to wrap the above three items into an artifact that clients can install? I suppose all it really needs to do is treat the kernel, squashfs file and dtb as a ubifs volume, so it should just work?

  6. Are there any hints about how I can go about achieving this in Yocto? Should I be making a new image.bbclass and extending the mtdimg class, similar to what meta-mender-qemu does?

Just FYI, we have done integrations similar to what you are describing in your thread; e.g. OpenWrt uses squashfs + UBI volumes. So it is fully possible, but it might require some integration work.

squashfs should work just fine, and Mender has no real hard requirement on filesystem type. It is just a payload that gets transferred and written as-is to the storage medium.
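For reference, the squashfs payload itself is just the output of mksquashfs; a minimal sketch (the directory name and the compressor choice are placeholders, not from this thread) would be:

# Build an xz-compressed squashfs image from a staged root directory.
# Paths and the compressor are examples only.
mksquashfs rootfs-staging/ rootfs.sqsh -comp xz -noappend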

  2. It looks like Mender only supports ubifs for flash devices, so I was thinking of having a standard setup (like what’s in meta-mender-qemu) that’s something like:
  • mtd0: U-Boot
  • mtd1: ubi layer with ubifs volumes rootfsA, rootfsB & data

Mender supports writing UBI volumes, and the UBIFS part is not involved at all; just to avoid any confusion in terminology :slight_smile:

For this reason I would suggest that you use UBI volumes in “raw mode” to store your squashfs images. This will pretty much work out of the box.
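For illustration, doing the same thing manually from userspace looks roughly like this (device and volume numbers are placeholders); the Mender client performs the equivalent of the write step for you during an update:

# Write the squashfs image into UBI volume 2 on UBI device 0
ubiupdatevol /dev/ubi0_2 rootfs.sqsh

# Expose the volume as a read-only block device and mount the squashfs
ubiblock --create /dev/ubi0_2
mount -o ro /dev/ubiblock0_2 /mnt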

  4. It looks as though the Mender U-Boot code assumes that the ubifs volume it boots from contains the rootfs directly.

Yes, the default logic assumes that the kernel image and dtb are in the rootfs image. This will be problematic when using squashfs, though, because there is no native support in U-Boot for loading files from squashfs images.

For this reason, in our project, we put the kernel images in separate RAW UBI volumes, so the layout would be:

mtd1: ubi layer with the following volumes:

- kernelA + dtbA (can be solved by appending the dtb to the kernel image or by using FIT images; in our project we used FIT images)
- rootfsA
- kernelB + dtbB (same as above)
- rootfsB
- data
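To make that layout concrete, a ubinize configuration for it could look roughly like the sketch below. Everything here is illustrative: the volume ids, sizes, image names and the static/dynamic split are assumptions, not what we shipped.

# ubinize.cfg -- illustrative only; build the UBI image with something like:
#   ubinize -o ubi.img -m <min-io-size> -p <peb-size> ubinize.cfg
[kernelA]
mode=ubi
image=fitImage
vol_id=0
vol_type=static
vol_name=kernelA

[rootfsA]
mode=ubi
image=rootfs.sqsh
vol_id=1
vol_size=16MiB
vol_type=dynamic
vol_name=rootfsA

[kernelB]
mode=ubi
image=fitImage
vol_id=2
vol_type=static
vol_name=kernelB

[rootfsB]
mode=ubi
image=rootfs.sqsh
vol_id=3
vol_size=16MiB
vol_type=dynamic
vol_name=rootfsB

[data]
mode=ubi
image=data.ubifs
vol_id=4
vol_type=dynamic
vol_name=data
vol_flags=autoresize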

We had to adjust the boot logic scripts to load the kernel from kernelA/B depending on whether rootfsA/B is booted.

You would also need to write a custom state script to update the kernelA/B partitions when you update rootfsA/B.

Will I need to do any work to allow mender-artifact to wrap the above three items into an artifact that clients can install? I suppose all it really needs to do is treat the kernel, squashfs file and dtb as a ubifs volume, so it should just work?

Should work with “stock” mender-artifact, as long as you package everything inside the squashfs image.
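For reference, packaging the squashfs as a normal rootfs-image Artifact is a single mender-artifact call; a sketch (device type and names are placeholders, and flag names differ a bit between mender-artifact versions, e.g. older releases use -u/--update instead of -f/--file):

# Package the squashfs (with kernel + dtb inside it) as a rootfs-image Artifact
mender-artifact write rootfs-image \
    --device-type my-device \
    --artifact-name release-1 \
    --file rootfs.sqsh \
    --output-path release-1.mender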

Here is the state-script that we used,

ArtifactReboot_Enter_50

#!/bin/sh

set -e

cleanup() {
    # Only unmount if the boot candidate is actually mounted
    if mount | grep -q /mnt/boot_candidate; then
        umount /mnt/boot_candidate
    fi
}
trap cleanup EXIT

echo "$(cat /etc/mender/artifact_info): $(basename "$0") was called!" >&2

# At the point when this script is called the "mender_boot_part" variable
# is set to the boot candidate, meaning that we can use this variable to pick
# up the "new" kernel image from the boot candidate to make sure the next
# boot will use it.
boot_candidate=$(fw_printenv mender_boot_part | sed 's/[^=]*=//')

if [ ${boot_candidate} -eq 3 ]; then
    inactive_kernel="/dev/ubi0_1" # kernel A
else
    inactive_kernel="/dev/ubi0_2" # kernel B
fi

mkdir -p /mnt/boot_candidate

echo "Mounting boot_candidate: /dev/ubiblock0_${boot_candidate}"
mount -o ro /dev/ubiblock0_${boot_candidate} /mnt/boot_candidate

echo "Updating kernel part: ${inactive_kernel}"
ubiupdatevol ${inactive_kernel} /mnt/boot_candidate/boot/kernel

echo "Cleaning up mount points"

cleanup

echo "Finished updating the Linux kernel"

exit 0

Thanks @mirzak! Very helpful.

Good to hear! Any chance that these implementations are online somewhere? Would be really helpful to use as a reference.

I think that makes sense (sorry, still coming up to speed with all the flash terminology). I’m guessing you would still need to use UBIFS on the /data volume though, as it will need some kind of r/w filesystem on it?

Trying to get my head around this from what you’ve posted, is this accurate?

  1. The Mender Artifact contains the squashfs rootfs; in this rootfs there is a fitImage that contains the kernel and dtb.
  2. Mender receives an update and flashes the squashfs to the inactive rootfs volume (I imagine essentially doing something like ubiupdatevol /dev/ubi0_{inactive_volume} rootfs.sqsh)
  3. ubiblock is used so that the (recently written to) inactive rootfs volume can be mounted as a block device
  4. A Mender state script mounts the inactive rootfs volume and copies out the fitImage into its own UBI volume, so that it can be read by U-Boot.

When booting:

  1. U-Boot loads the kernel from the active kernel UBI volume
  2. Via changing the kernel command line arguments, U-Boot tells the kernel to mount the active rootfs volume as a block device (this post looks helpful for this: linux - Using squashfs on top of ubi as root file system - Unix & Linux Stack Exchange)

Does this mean that I’m “wasting” a bit of my flash storage to store the fitImage twice (once in the squashfs file and once in the separate volume)? I wonder how hard it would be to modify the Mender client to take two separate artifacts: a squashfs rootfs (without the fitImage) and a fitImage?

Maybe a stupid question, but if the UBI volumes that contain the fitImages on flash are ‘raw’ volumes, how does U-Boot know how much of the volume to read in order to load the kernel/fitImage? Or is it safe to read the entire volume and not mind that you may have read past the end of the kernel?

Currently not available online, but I will look into making it available. Note that this was not a Yocto integration; it was based on OpenWrt (which used a similar setup to what you are describing).

I think that makes sense (sorry, still coming up to speed with all the flash terminology). I’m guessing you would still need to use UBIFS on the /data volume though, as it will need some kind of r/w filesystem on it?

Yes, you are correct. You would need UBIFS for the /data volume to allow read/write.
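For completeness, mounting that UBIFS data volume is then just a normal ubifs mount; an /etc/fstab sketch (assuming UBI device 0 and a volume named data) could be:

# /etc/fstab entry (UBI device number and volume name are assumptions)
ubi0:data    /data    ubifs    defaults    0  0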

Trying to get my head around this from what you’ve posted, is this accurate?

  1. The Mender Artifact contains the squashfs rootfs; in this rootfs there is a fitImage that contains the kernel and dtb.
  2. Mender receives an update and flashes the squashfs to the inactive rootfs volume (I imagine essentially doing something like ubiupdatevol /dev/ubi0_{inactive_volume} rootfs.sqsh)
  3. ubiblock is used so that the (recently written to) inactive rootfs volume can be mounted as a block device
  4. A Mender state script mounts the inactive rootfs volume and copies out the fitImage into its own UBI volume, so that it can be read by U-Boot.

You have understood it correctly :slight_smile:

  1. U-Boot loads the kernel from the active kernel UBI volume
  2. Via changing the kernel command line arguments, U-Boot tells the kernel to mount the active rootfs volume as a block device (this post looks helpful for this: linux - Using squashfs on top of ubi as root file system - Unix & Linux Stack Exchange)

Correct.
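For the boot side, the kernel command line for mounting a squashfs rootfs from a UBI volume via ubiblock typically ends up along these lines (MTD and volume numbers are placeholders, in the spirit of the Stack Exchange post mentioned above):

# Attach MTD partition 1 as ubi0, expose volume 1 (rootfsA) as a read-only
# block device and mount the squashfs on it as root. Numbers are examples.
ubi.mtd=1 ubi.block=0,1 root=/dev/ubiblock0_1 rootfstype=squashfs ro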

Does this mean that I’m “wasting” a bit of my flash storage to store the fitImage twice (once in the squashfs file and once in the separate volume)? I wonder how hard it would be to modify the Mender client to take two separate artifacts: a squashfs rootfs (without the fitImage) and a fitImage?

Yes, you are “wasting” a bit of space to reduce complexity I would say.

If you want, you are able to split up the update in two:

  • update only rootfs
  • update only kernel

And you can create a Mender Artifact for each. The rootfs support is already there (rootfs-image updates); the kernel update you can implement using Update Modules.

Currently Mender does not support deploying multiple payloads in one Artifact (it will be supported sometime in the future), so you will need to create two separate deployments to update the rootfs and the kernel.

But there is certainly some additional flexibility in the Update Modules framework, and you might be able to solve your use-case with one custom Update Module.
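To make the idea concrete, a kernel Update Module could be as small as the sketch below. This is purely illustrative and not an official module: the module name (kernel-ubi), the volume numbering and the use of mender_boot_part to pick the target volume are assumptions that must match your own A/B layout.

#!/bin/sh
# /usr/share/mender/modules/v3/kernel-ubi -- illustrative sketch only.
# Writes a single kernel/FIT image payload to the inactive kernel UBI volume.
set -e

STATE="$1"
FILES="$2"

case "$STATE" in
    ArtifactInstall)
        # Volume selection is an assumption; derive it from your own
        # bootloader environment in a real integration.
        boot_part=$(fw_printenv mender_boot_part | sed 's/[^=]*=//')
        if [ "$boot_part" -eq 3 ]; then
            kernel_vol="/dev/ubi0_1"   # kernel A
        else
            kernel_vol="/dev/ubi0_2"   # kernel B
        fi
        # Assumes the Artifact carries exactly one payload file (the FIT image)
        ubiupdatevol "$kernel_vol" "$FILES"/files/*
        ;;
    NeedsArtifactReboot)
        echo "Automatic"
        ;;
    SupportsRollback)
        echo "No"
        ;;
esac
exit 0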

Maybe a stupid question, but if the UBI volumes that contain the fitImages on flash are ‘raw’ volumes, how does U-Boot know how much of the volume to read in order to load the kernel/fitImage? Or is it safe to read the entire volume and not mind that you may have read past the end of the kernel?

Apologies, I mixed up the terminology. What I am referring to is static UBI volumes, from the UBI docs,

There are 2 types of UBI volumes - dynamic volumes and static volumes. Static volumes are read-only and their contents are protected by CRC-32 checksums, while dynamic volumes are read-write and the upper layers (e.g., a file-system) are responsible for ensuring data integrity.

And the UBI volume should contain the information about how large the image written to it is (as it is CRC-32 protected as well).

And in U-Boot you would run:

ubi read $loadaddr kernel; bootm $loadaddr#config@1

The ubi read command should take care of how many bytes to read.
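Putting the boot side together, the logic could look roughly like the boot-script sketch below. The MTD partition name, the volume names, the mender_boot_part values and the pairing between kernel and rootfs volumes are all assumptions, and the stock Mender boot code would need to be adjusted accordingly.

# Illustrative U-Boot boot script; names, numbers and the A/B pairing are
# placeholders that must match your own layout.
ubi part ubi
if test "${mender_boot_part}" = "2"; then
	setenv kernel_vol kernelA
else
	setenv kernel_vol kernelB
fi
ubi read ${loadaddr} ${kernel_vol}
setenv bootargs console=${console} ubi.mtd=1 ubi.block=0,${mender_boot_part} root=/dev/ubiblock0_${mender_boot_part} rootfstype=squashfs ro
bootm ${loadaddr}#config@1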


Thanks again, very helpful! :slight_smile:

I like the idea of using Update Modules for this. My kernel is ~5MB so I’d be forfeiting ~10% of storage to store the kernel twice, which is acceptable now but, if my rootfs were to grow, might not be in the future. I’m thinking for now to ‘waste’ the space using your originally suggested approach and maybe migrate (through OTA updates) to the Update Modules approach if necessary in the future.

I also noticed that there’s squashfs support in fitImages. Do you have any opinion about using this as an alternative? I think it would mean that an update artifact could contain a fitImage with:

  • kernel
  • dtb
  • squashfs rootfs (without the kernel installed)

and then I could have a more ‘traditional’ Mender setup with:

  1. boot volume
  2. active fitImage volume
  3. inactive fitImage volume
  4. data volume

Which has the nice benefits of:

  • Not ‘wasting’ space to store each kernel twice
  • Simplifying both the boot and update processes, as you only need to boot/update a single volume rather than two

The downsides are that there’s probably a greater integration effort involved as:

  • It doesn’t look like Yocto has great (or any?) support for building fitImages with a squashfs
  • I’m currently unsure how well U-Boot supports fitImages that contain a squashfs rootfs

This looks like a great approach to me and is probably preferable to the one previously mentioned.

The downsides are that there’s probably a greater integration effort involved as:

  • It doesn’t look like Yocto has great (or any?) support for building fitImages with a squashfs

I would not expect this to be hard to extend, either with a custom image class or by extending the existing one with support for “squashfs”.

  • I’m currently unsure how well U-Boot supports fitImages that contain a squashfs rootfs

I would not expect this to be a problem; U-Boot will only copy it from the FIT image to a RAM address and pass it along to the kernel. This should be a “filesystem type agnostic” operation.

Based on the example provided in https://lists.denx.de/pipermail/u-boot/2018-January/318553.html, it seems to be using a gzip-compressed squashfs.
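A hand-rolled sketch of building such a FIT image, in the same heredoc style as the bbclass quoted below, might look like this. File names, load/entry addresses, the arm architecture and the sha1 hash are all placeholders, and the squashfs is gzip-wrapped as in the mailing-list example.

#!/bin/sh
# Sketch only: assemble a FIT image containing kernel, dtb and a
# gzip-wrapped squashfs rootfs. All names and addresses are placeholders.
set -e

gzip -9 -c rootfs.sqsh > rootfs.sqsh.gz

cat << 'EOF' > fit-image.its
/dts-v1/;
/ {
        description = "Kernel + dtb + squashfs rootfs";
        #address-cells = <1>;
        images {
                kernel@1 {
                        description = "Linux kernel";
                        data = /incbin/("zImage");
                        type = "kernel";
                        arch = "arm";
                        os = "linux";
                        compression = "none";
                        load = <0x80008000>;
                        entry = <0x80008000>;
                        hash@1 { algo = "sha1"; };
                };
                fdt@1 {
                        description = "Device tree";
                        data = /incbin/("board.dtb");
                        type = "flat_dt";
                        arch = "arm";
                        compression = "none";
                        hash@1 { algo = "sha1"; };
                };
                ramdisk@1 {
                        description = "squashfs rootfs";
                        data = /incbin/("rootfs.sqsh.gz");
                        type = "ramdisk";
                        arch = "arm";
                        os = "linux";
                        compression = "gzip";
                        hash@1 { algo = "sha1"; };
                };
        };
        configurations {
                default = "config@1";
                config@1 {
                        kernel = "kernel@1";
                        fdt = "fdt@1";
                        ramdisk = "ramdisk@1";
                };
        };
};
EOF

# mkimage (from u-boot-tools) turns the ITS into a bootable fitImage
mkimage -f fit-image.its fitImage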

Looking at https://git.yoctoproject.org/cgit.cgi/poky/plain/meta/classes/kernel-fitimage.bbclass,

#
# Emit the fitImage ITS ramdisk section
#
# $1 ... .its filename
# $2 ... Image counter
# $3 ... Path to ramdisk image
fitimage_emit_section_ramdisk() {

	ramdisk_csum="${FIT_HASH_ALG}"
	ramdisk_ctype="none"
	ramdisk_loadline=""
	ramdisk_entryline=""

	if [ -n "${UBOOT_RD_LOADADDRESS}" ]; then
		ramdisk_loadline="load = <${UBOOT_RD_LOADADDRESS}>;"
	fi
	if [ -n "${UBOOT_RD_ENTRYPOINT}" ]; then
		ramdisk_entryline="entry = <${UBOOT_RD_ENTRYPOINT}>;"
	fi

	case $3 in
		*.gz)
			ramdisk_ctype="gzip"
			;;
		*.bz2)
			ramdisk_ctype="bzip2"
			;;
		*.lzma)
			ramdisk_ctype="lzma"
			;;
		*.lzo)
			ramdisk_ctype="lzo"
			;;
		*.lz4)
			ramdisk_ctype="lz4"
			;;
	esac

	cat << EOF >> ${1}
                ramdisk@${2} {
                        description = "${INITRAMFS_IMAGE}";
                        data = /incbin/("${3}");
                        type = "ramdisk";
                        arch = "${UBOOT_ARCH}";
                        os = "linux";
                        compression = "${ramdisk_ctype}";
                        ${ramdisk_loadline}
                        ${ramdisk_entryline}
                        hash@1 {
                                algo = "${ramdisk_csum}";
                        };
                };
EOF
}

I do not see why it would not support squashfs. The ${3} argument is simply “# $3 ... Path to ramdisk image”, and there is nothing filesystem-specific here.

The part that is lacking is:

	#
	# Step 4: Prepare a ramdisk section.
	#
	if [ "x${ramdiskcount}" = "x1" ] ; then
		# Find and use the first initramfs image archive type we find
		for img in cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.gz ext2.gz cpio; do
			initramfs_path="${DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.${img}"
			echo "Using $initramfs_path"
			if [ -e "${initramfs_path}" ]; then
				fitimage_emit_section_ramdisk ${1} "${ramdiskcount}" "${initramfs_path}"
				break
			fi
		done
	fi

It should be possible to just add squashfs to the above list, and update fitimage_emit_section_ramdisk to map squashfs to a suitable compression type (gzip if the blob is gzip-wrapped).
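An untested sketch of those two changes against the bbclass quoted above; whether the ramdisk section declares gzip (if you additionally gzip-wrap the squashfs, as in the mailing-list example) or none (for a plain squashfs blob) is something you would have to verify on your target.

# In fitimage_emit_section_ramdisk(), extend the compression mapping:
	case $3 in
		*.gz)
			ramdisk_ctype="gzip"
			;;
		*.bz2)
			ramdisk_ctype="bzip2"
			;;
		*.lzma)
			ramdisk_ctype="lzma"
			;;
		*.lzo)
			ramdisk_ctype="lzo"
			;;
		*.lz4)
			ramdisk_ctype="lz4"
			;;
		*.squashfs|*.squashfs-xz|*.squashfs-lzo)
			# "none" for a plain squashfs; use "gzip" instead if the
			# blob is gzip-wrapped before being put into the FIT image.
			ramdisk_ctype="none"
			;;
	esac

# In Step 4, let the search loop pick up Yocto's squashfs image types too:
		for img in squashfs-xz squashfs-lzo squashfs cpio.lz4 cpio.lzo cpio.lzma cpio.xz cpio.gz ext2.gz cpio; do
			initramfs_path="${DEPLOY_DIR_IMAGE}/${INITRAMFS_IMAGE_NAME}.${img}"
			if [ -e "${initramfs_path}" ]; then
				echo "Using $initramfs_path"
				fitimage_emit_section_ramdisk ${1} "${ramdiskcount}" "${initramfs_path}"
				break
			fi
		done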

Great, I will proceed with this approach then! There does seem to be a slight lack of references for using Mender with raw flash (the documentation is a good starting point, but there’s seemingly a large amount of manual integration still needed beyond what’s prescribed there), and I would think that using squashfs on raw flash is a fairly common use case for embedded devices. So, if there’s interest and I get something working, I might contribute my findings back to the community with a blog post. Will update here if I do so. :slight_smile:

I did get a fitImage with kernel, dtb and squashfs (xz compressed) booting on my hardware today, although I manually hacked it all together. I will have to look into getting it built with Yocto, so thanks for your comments on that! The only other downside I didn’t mention before is that the squashfs filesystem is copied to a RAM disk on boot (as you mention above), rather than being read from flash as it would be in your initial approach. I’m lucky in that I have an abundance of RAM (512MB) and a fairly small compressed rootfs (~30MB), so I think it is OK for me, but it’s worth noting.

Because of the copy to the RAM disk, I also needed to increase the ramdisk_size that the kernel uses. I’m still trying to find out whether there are any downsides (apart from using up a chunk of memory) but, so far, it seems OK.

On the bright side, it is a bit of a simplification that Mender doesn’t need to modify bootargs in U-Boot in order to boot, because the bootargs of root=/dev/ram0 work regardless of which UBI volume it boots from.
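For reference, the boot commands in that setup end up static, something like the sketch below. The volume name, console and the ramdisk_size value are placeholders; ramdisk_size is in KiB and must be at least as large as the squashfs image, and picking the active FIT volume (fitA/fitB) still follows the Mender A/B boot logic.

# Illustrative only: load the active FIT volume and boot from the RAM disk
ubi read ${loadaddr} fitA
setenv bootargs console=${console} root=/dev/ram0 ramdisk_size=65536 ro
bootm ${loadaddr}#config@1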

There does seem to be a slight lack of references for using Mender with raw flash (the documentation is a good starting point, but there’s seemingly a large amount of manual integration still needed beyond what’s prescribed there)

One reference integration that you can take a look at is the Toradex iMX7 (note that there is an iMX7 variant with eMMC as well, but you can ignore the parts that apply to that),

meta-mender/meta-mender-toradex-nxp at rocko · mendersoftware/meta-mender · GitHub

meta-mender-community/meta-mender-toradex-nxp at rocko · mendersoftware/meta-mender-community · GitHub

Another one is one of the Variscite SoMs,

meta-mender-community/meta-mender-variscite at sumo · mendersoftware/meta-mender-community · GitHub

if there’s interest and I get something working, I might contribute my findings back to the community with a blog post. Will update here if I do so. :slight_smile:

This would definitely be valuable to our community, and I am looking forward to reading it :slight_smile:

@mirzak I am able to flash the squashfs image using standalone Mender.

ArtifactReboot_Enter_50

I renamed your Mender state script and tried it once inside the rootfs and once with the Artifact:

  1. Renamed to Download_Leave_50 and kept as part of the rootfs.
  2. Renamed to ArtifactInstall_leave_50 and integrated with release-1.mender.

Both made mender exit with "ERRO[0007] stderr collected while running script". Please see the log for details and let me know how to fix this issue.

DEBU[0000] statescript: timeout for executing scripts is not defined; using default of 1h0m0s seconds module=executor
INFO[0000] no public key was provided for authenticating the artifact module=installer
INFO[0000] Update Module path “/usr/share/mender/modules/v3” could not be opened (open /usr/share/mender/modules/v3: no such file or directory). Update modules will not be available module=modules
DEBU[0000] checking if device [sav530] is on compatible device list: [sav530]
module=installer
DEBU[0000] installer: successfully read artifact [name: release-1; version: 3; compatible devices: [sav530]] module=installer
.DEBU[0000] Trying to install update of size: 14036992 module=dual_rootfs_device
DEBU[0000] Active partition: ubi0_3 module=partitions
WARN[0000] Could not resolve path link: ubi0_3 Attempting to continue module=partitions
WARN[0000] Could not resolve path link: ubi0_3 Attempting to continue module=partitions
DEBU[0000] Detected inactive partition ubi0_2, based on active partition ubi0_3 module=partitions
INFO[0000] native sector size of block device /dev/ubi0_2 is 126976, we will write in chunks of 2031616 module=dual_rootfs_device
… 7% 1024 KiB
…INFO[0000] opening device /dev/ubi0_2 for writing module=block_device
INFO[0000] partition /dev/ubi0_2 size: 16887808 module=block_device
. 14% 2048 KiB
… 22% 3072 KiB
… 29% 4096 KiB
… 37% 5120 KiB
… 44% 6144 KiB
… 52% 7168 KiB
… 59% 8192 KiB
… 67% 9216 KiB
… 74% 10240 KiB
… 82% 11264 KiB
… 89% 12288 KiB
… 97% 13312 KiB
…INFO[0006] wrote 14036992/14036992 bytes of update to device /dev/ubi0_2 module=dual_rootfs_device
100% 13715 KiB
DEBU[0006] statescript: timeout for executing scripts is not defined; using default of 1h0m0s seconds module=executor
DEBU[0006] start executing script: Download_Leave_50 module=executor
ERRO[0007] stderr collected while running script /etc/mender/scripts/Download_Leave_50 [artifact_name=release-1: Download_Leave_50 was called!
] module=executor
DEBU[0007] statescript: timeout for executing scripts is not defined; using default of 1h0m0s seconds module=executor
DEBU[0007] Inactive partition: ubi0_2 module=partitions
DEBU[0007] Marking inactive partition (ubi0_2) as the new boot candidate. module=dual_rootfs_device
INFO[0007] Enabling partition with new image installed to be a boot candidate: 2 module=dual_rootfs_device
DEBU[0007] Have U-Boot variable: mender_check_saveenv_canary=0 module=bootenv
DEBU[0007] List of U-Boot variables:map[mender_check_saveenv_canary:0] module=bootenv
DEBU[0008] Marking inactive partition as a boot candidate successful. module=dual_rootfs_device
