Further to this discussion, I’m currently attempting to use Mender to update my flash storage.
In my setup, I’m loading a rootfs from flash and booting from RAM, so that my mount looks like:
$ mount
/dev/ram0 on / type squashfs (ro,relatime)
This seems to confuse the Mender client as it believes it needs to update /dev/ram0 rather than /dev/ubi0_x (for example). Looking at the mender source code this makes sense, as mender first looks at the mount output to determine the device to update.
Has anyone had any success telling Mender to update a device other than the root device? This seems like it might be a candidate for Update Modules but, imo, Update Modules seem like overkill here, since a config setting or environment variable could fulfill the same need. Is there an easier method I'm missing?
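For context, this is the dual-rootfs configuration in `/etc/mender/mender.conf` that I would expect to take priority; a minimal fragment (the device paths here are examples for an SD-card layout, not necessarily the right ones for a flash setup):

```
{
  "RootfsPartA": "/dev/mmcblk0p2",
  "RootfsPartB": "/dev/mmcblk0p3"
}
```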
This should be possible, and I have come across this problem a couple of times when working with a similar setup. There have been a couple of iterations on the probe logic in the Mender client, and e.g. this one was an attempt to solve the case that you are describing.
Hmm, maybe I’m misunderstanding the logic but won’t this line always cause it to pick /dev/ram0 as the root device? The device file is mounted as the root directory, after all, and it matches up what I’m seeing (Mender prints Setting active partition from mount candidate: /dev/ram0 when run). Should there be a check to see if the detected root directory differs from the RootfsPartA/RootfsPartB devices set in the config?
At the moment I have rootfsA and rootfsB set to mmcblk0p2 & mmcblk0p2, as I'm still booting my image from the SD card for the moment (I don't think this should matter? The mmcblk devices still exist on my device and are accessible).
Hmm, maybe I’m misunderstanding the logic but won’t this line always cause it to pick /dev/ram0 as the root device?
You are correct.
Should there be a check to see if the detected root directory differs from the RootfsPartA / RootfsPartB devices set in the config?
Yes, I believe there should be, and you want it to reach this code, which only takes the U-Boot variables and the configuration in /etc/mender/mender.conf into consideration. I believe that is the only valid way to support this specific case, because probing for it will not work.
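To illustrate what "env-and-config-only" selection would mean, here is a minimal sketch. The function name, signature, and the convention of matching the boot environment's partition number against the last character of the configured device path are all my own simplifications for illustration, not the real Mender client code:

```go
package main

import "fmt"

// activeFromEnvAndConfig is a hypothetical sketch: pick the active partition
// using only the U-Boot environment (bootEnvBootPart, e.g. "2") and the
// configured RootfsPartA/RootfsPartB device paths, with no probing of
// mounted devices at all.
func activeFromEnvAndConfig(bootEnvBootPart, rootfsPartA, rootfsPartB string) (string, error) {
	// Compare the partition number reported by the boot environment
	// against the number at the end of each configured device path.
	for _, dev := range []string{rootfsPartA, rootfsPartB} {
		if dev != "" && dev[len(dev)-1:] == bootEnvBootPart {
			return dev, nil
		}
	}
	return "", fmt.Errorf("boot environment partition %q matches neither %q nor %q",
		bootEnvBootPart, rootfsPartA, rootfsPartB)
}

func main() {
	active, err := activeFromEnvAndConfig("2", "/dev/mmcblk0p2", "/dev/mmcblk0p3")
	if err != nil {
		panic(err)
	}
	fmt.Println(active)
}
```

The point is that nothing here looks at `mount` output, so booting from `/dev/ram0` cannot confuse the result.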
```go
// First check if mountCandidate matches rootDevice
if mountCandidate != "" {
	if rootChecker(p, mountCandidate, rootDevice) {
		p.active = mountCandidate
		log.Debugf("Setting active partition from mount candidate: %s", p.active)
		return p.active, nil
	}
	// If mount candidate does not match root device check if we have a match in ENV
	if checkBootEnvAndRootPartitionMatch(bootEnvBootPart, mountCandidate) {
		p.active = mountCandidate
		log.Debug("Setting active partition: ", mountCandidate)
		return p.active, nil
	}
	// If not see if we are lucky somewhere else
}
```
Should be converted to:
```go
// First check if mountCandidate matches rootDevice and ENV
if mountCandidate != "" {
	if rootChecker(p, mountCandidate, rootDevice) && checkBootEnvAndRootPartitionMatch(bootEnvBootPart, mountCandidate) {
		p.active = mountCandidate
		log.Debugf("Setting active partition from mount candidate: %s", p.active)
		return p.active, nil
	}
	// If not see if we are lucky somewhere else
}
```
I think that's a good start, but it still fails because the call to getRootFromMountedDevices returns /dev/ram0, which then fails this check.
To me it makes sense for there to be more of a differentiation between the root device and the active partition. At least in this example, they are not one and the same.
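To make that differentiation concrete, here is a rough sketch of the decision I have in mind. All names are illustrative, not the real Mender client code: the mounted root only wins when it actually is one of the configured partitions; otherwise the boot environment decides:

```go
package main

import "fmt"

// pickActive is a hypothetical sketch of treating the mounted root device
// and the active flash partition as separate concepts. mountedRoot is what
// "mount" reports for /, bootEnvPart is the device the boot environment
// points at, and partA/partB come from RootfsPartA/RootfsPartB in the config.
func pickActive(mountedRoot, bootEnvPart, partA, partB string) string {
	if mountedRoot == partA || mountedRoot == partB {
		// Normal case: we booted directly from one of the rootfs partitions,
		// so the mounted root and the active partition coincide.
		return mountedRoot
	}
	// Root is somewhere else (e.g. /dev/ram0 for a rootfs running from RAM):
	// fall back to the boot environment and the configured partitions.
	if bootEnvPart == partA {
		return partA
	}
	return partB
}

func main() {
	// With / on /dev/ram0, the boot environment decides which flash
	// partition is "active".
	fmt.Println(pickActive("/dev/ram0", "/dev/mmcblk0p2", "/dev/mmcblk0p2", "/dev/mmcblk0p3"))
}
```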
How about this, as a start? It works for my use-case but does change the current functionality quite a bit, as you can see by the changes in the unit tests. (I understand such a big change is probably a no-go?).
I probably don't fully understand all the nuances of the current implementation, but the way it works currently doesn't completely make sense to me. Shouldn't the RootfsPartA/RootfsPartB devices specified in the mender.conf file be used in preference to the detected root device names?
It's possible it would work if both checks are removed, but TBH I'm a bit reluctant to do so. They are important safety checks, there to make sure that the environment actually matches what we see in mounted devices. If they don't match, the environment could be corrupted, and we might end up making things even worse by modifying it.
Using rootfs in RAM is a relatively esoteric use case. You might want to look into using an Update Module for this instead, which will essentially achieve exactly the same thing, but with the ability to customize it. See our demo rootfs-image-v2 Update Module for inspiration, and the docs on Update Modules.