Mender-Convert 4.0.2 to 4.3.0 Upgrade Causes 2.6 GB Rootfs Size Increase, Is This Expected?

Hi everyone,

I’m experiencing a significant rootfs size increase after upgrading mender-convert from 4.0.2 to 4.3.0, and I’m trying to understand if this is expected behavior or if something has changed that I’m not aware of.

The Problem:

  • 4.0.2: Rootfs filesystem size was 23,074 MiB
  • 4.3.0: Rootfs filesystem size is now 25,715 MiB (an increase of roughly 2.6 GB)

A ~2.6 GB increase seems excessive for just a version update, and it causes my conversion to fail with:

[WARN] The calculated rootfs partition size 23224 MiB is too small.
[WARN] The actual rootfs image size is 25715 MiB
[FATAL] You can try adjusting the MENDER_STORAGE_TOTAL_SIZE_MB variable to increase available space

My Setup:

  • Same golden image input (25 GB total size in both cases)
  • Identical configuration file
  • Same overlay directory
  • Both runs install the same addons (Connect + Configure)

Questions:

  1. Is this size increase expected when upgrading to mender-convert 4.3.0?
  2. What specifically changed between these versions that could cause such a large increase?
  3. Are there any configuration options I should adjust to maintain the previous behavior?

Has anyone else experienced this issue during the upgrade? Any insights into what changed in mender-convert 4.3.0 that could cause this would be greatly appreciated.

I have attached the detailed logs (log 1708435600-129446 for 4.0.2 and log 1756398228-171055 for 4.3.0) to help with analysis.

1708435600-129446-convert.yaml (45.2 KB)
1756398228-171055-convert.yaml (35.7 KB)
custom_x86-64_hdd_config.yaml (2.4 KB)

Thanks in advance.

Hi @Trab40,

That’s definitely surprising. From a quick glance over the changelog, I couldn’t spot anything that would cause such a change.

I take it that both tests were conducted starting from the same image?
What you could do is add a step to your configuration that shows the storage usage, for example

du -h --max-depth 2 work/

or similar, to figure out where in the filesystem the space actually ends up.
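If you want this to run automatically as part of the conversion, a minimal sketch could look like the following, assuming your mender-convert version sources the configuration as shell and supports PLATFORM_MODIFY_HOOKS the way the shipped platform configs do (the hook name and the work/rootfs path are assumptions on my side):

# Assumed debugging hook; verify the hook mechanism and the work/rootfs
# layout against the configs shipped with your mender-convert version.
function report_rootfs_usage() {
    echo "Storage usage of the extracted rootfs:"
    du -h --max-depth=2 work/rootfs/ | sort -rh | head -n 20
}
PLATFORM_MODIFY_HOOKS+=(report_rootfs_usage)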

Greetz,
Josef

Hi @TheYoctoJester,

You were absolutely right, and thanks for the quick reply!

That was a classic case of not comparing apples to apples. I had updated the newer image with additional dependencies and didn’t register that this would increase the rootfs size by 2.5 GB. Definitely a facepalm moment on my part :person_facepalming:.

What I think tripped me up was a misunderstanding of how IMAGE_OVERHEAD_FACTOR works.

  • My incorrect assumption: I thought the overhead was calculated based on the total partition size. I figured that as long as my partition didn’t change, the overhead would remain constant, regardless of how full the root filesystem was. After updating my mender-convert repo and re-running the conversion, I assumed the additional 2.5 GB was something that Mender was injecting into the image.

  • The reality: the overhead is a multiplier applied to the actual populated size of the root filesystem’s content. If the resulting size is larger than the space available within MENDER_STORAGE_TOTAL_SIZE_MB, an error is thrown.

The actual calculation is roughly:

Final image size ≈ rootfs content size × IMAGE_OVERHEAD_FACTOR

subject to the constraint

Final image size ≤ MENDER_STORAGE_TOTAL_SIZE_MB / 2

As my rootfs content grew, the calculated overhead also grew proportionally, which caused the final image to exceed my fixed partition size. I had mistakenly attributed this to a change in Mender, but the problem was my own expanding image.
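To make the arithmetic concrete, here are the numbers from my failing run plugged into that formula as a small shell check. The rootfs content size and the 1.5 overhead factor are back-calculated assumptions for illustration, not values read from my logs:

# Size check using the numbers from the [WARN] lines above.
# ROOTFS_CONTENT_MIB and IMAGE_OVERHEAD_FACTOR are assumed for illustration.
ROOTFS_CONTENT_MIB=17143
IMAGE_OVERHEAD_FACTOR="1.5"
CALCULATED_PARTITION_MIB=23224

REQUIRED_MIB=$(awk -v c="$ROOTFS_CONTENT_MIB" -v f="$IMAGE_OVERHEAD_FACTOR" \
    'BEGIN { printf "%d", c * f }')   # 17143 × 1.5 ≈ 25714 MiB

if [ "$REQUIRED_MIB" -gt "$CALCULATED_PARTITION_MIB" ]; then
    echo "Too small: need ${REQUIRED_MIB} MiB, only ${CALCULATED_PARTITION_MIB} MiB calculated"
fi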

I have now set

IMAGE_ROOTFS_SIZE=-1

which tells mender-convert to use the maximum size that fits inside the created partition.
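For context, the relevant excerpt of my configuration now looks roughly like this (the MENDER_STORAGE_TOTAL_SIZE_MB value is a placeholder for illustration, not my actual setting):

# Config excerpt; the storage size below is a placeholder value.
MENDER_STORAGE_TOTAL_SIZE_MB="49152"
IMAGE_ROOTFS_SIZE=-1  # use the maximum size that fits the created partition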

Cool fact:
While investigating this I also figured out something else: the partition size on the device where the golden image was created does not seem to matter. As long as the rootfs content fits within MENDER_STORAGE_TOTAL_SIZE_MB/2 as set in the config, a successful image can be made. Interesting!
