Workflow of updates

Another question from a user came in over email that was generic enough to warrant a reply here for further discussion. Essentially the question relates to how to handle updates when using a combination of full image and application (or other incremental update module) updates.

Obviously when installing, for instance, app-v2 on top of full-v1, you need to consider how to roll the app update back into your full golden master image so that future devices can be provisioned to match. There are a couple of strategies I can envision here.

  1. Do nothing. This basically means when you provision new devices, you will also need to explicitly do all the application-based updates on top of the golden master to ensure the devices are in sync. This can work for small device fleets with only a small number of updates, but it can quickly become unwieldy. Additionally, it doesn’t necessarily account for future updates to the golden master, which we all know are coming as issues are discovered in the upstream projects.

  2. Keep the golden master updated with all the application updates as they are made. This may be difficult depending on your build tooling, the number of developers, and the number of updates. The big advantage here is that you always have an up-to-date golden master that can be provisioned onto new devices in a single step. With a little thought in your build tooling, this can be simplified, but you will need to plan up front (see the sketch after this list). This is one of the big advantages of using a build system of some sort.

  3. A hybrid of the above where you update the golden master on a less frequent basis.
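
For strategy 2, one way to simplify the build tooling is to keep the application payload in a rootfs overlay directory and let mender-convert fold it into every new golden image. A minimal sketch, assuming a Raspberry Pi style config and an `app-overlay/` directory maintained by your build system (both names are illustrative):

```
# Hedged sketch: the overlay directory mirrors the target root filesystem,
# e.g. app-overlay/usr/local/bin/myapp and app-overlay/etc/myapp/config.yaml.
# Each run produces both a new golden .img and a .mender artifact with the
# application changes baked in. Paths and config name are assumptions.
MENDER_ARTIFACT_NAME="release-2" ./docker-mender-convert \
    --disk-image input/golden-master.img \
    --config configs/raspberrypi4_config \
    --overlay app-overlay/
```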

Additionally, assuming you are using mender-convert, there are basically two workflows for full image updates that will have interactions with the above:

  1. Do all your work on a non-mender-integrated device and then run mender-convert when you want to generate the next round of full system updates. In this case the non-mender-integrated system is your golden master which is then post-processed with mender-convert to generate the OTA artifact.

  2. Do all your work on a live device that already has Mender integrated using mender-convert, and then use the new Mender snapshotting feature to create an OTA artifact directly from the live device (see the sketch below). Full details of this feature are here: https://docs.mender.io/2.3/artifacts/snapshots
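
For workflow 2, a minimal sketch of pulling the snapshot to a workstation and wrapping it into an artifact there, assuming a reachable device, mender-artifact installed locally, and illustrative hostnames, device type, and artifact names (workflow 1 would instead use a mender-convert invocation like the one sketched earlier):

```
# Hedged sketch: stream a snapshot of the live device's active root
# filesystem to the workstation, then package it as a Mender Artifact.
# Host, user, device type, and artifact name are assumptions.
ssh root@test-device "mender snapshot dump" > golden-snapshot.img

mender-artifact write rootfs-image \
    --file golden-snapshot.img \
    --device-type raspberrypi4 \
    --artifact-name release-2 \
    --output-path release-2.mender
```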

We’d love to hear from other users how they are handling the workflows with both full image and incremental updates.


This is exactly what I’m working through: coming up with the best workflow to keep master images up to date while still doing regular updates using artifacts.

If you just use the new snapshotting tool (2), you’ll end up with only an artifact. If there were a way to generate a full image out of it, that’d be the most pleasant workflow to have.

What I currently consider the only viable solution is to have a master image per device type (without any mender bits integrated) which would get loaded onto the device whenever you want to do an upgrade. You’d make whatever changes you need on it, unmount the thing, dd/shrink it back, and run mender-convert again. That way you end up with a new master image and with an artifact that lets your existing devices get updated seamlessly. (1)
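
A rough sketch of that round-trip, with the SD card reader path and image names as illustrative assumptions:

```
# Hedged sketch of the workflow described above.
# 1. Flash the non-Mender master image, boot the device, and make your changes.
# 2. Read the modified card back into a new master image (adjust /dev/sdX):
sudo dd if=/dev/sdX of=golden-master-v2.img bs=4M status=progress

# 3. Shrink golden-master-v2.img with a tool of your choice, then run
#    mender-convert on it to get both the new master image and the .mender
#    artifact for updating devices already in the field.
```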

One downside to this approach (1) is the inability to do end-to-end tests with a testing device that is fully integrated with Mender and has all the pending updates installed. That’d be possible via the snapshot approach (2) if we could also get the entire image out of it somehow.

Any thoughts?

Hi @mkozjak, thanks for contributing your thoughts here.

Your comments about keeping your master image without Mender integrated match up with how I recommend people use mender-convert. That way you are free to do whatever you want with the image, since there is no way to break it from Mender’s perspective. One potential problem with the snapshot approach is that distro vendor changes could interfere with Mender; consider a kernel update via apt-get. One of the things mender-convert does is copy the kernel from the vendor-provided location into the rootfs so that it is part of any OTA update. Not all distros store their kernels in the rootfs by default, so that is something that needs to be carefully managed. Using mender-convert solely as a post-processing step avoids this potential issue.

I think the main problem with generating the SDIMG using a snapshot mechanism is: where does that image go? With the artifact, it’s a straightforward decision to load it directly to the server, but the SDIMG does no good there. We would have to devise a mechanism to deliver the snapshotted SDIMG back to your developer workstations somehow, and I don’t know how that can be done securely given the remote nature of Mender-enabled device fleets.

@eystein some ideas for consideration here.

Drew


Good discussion!

From my PoV I think it is OK to have Mender integrated into the snapshot/sdimg if you consider Mender as part of the system and you’re not planning to remove it later, e.g. just like a graphics driver. But if you don’t want that, then you can certainly use the mender-convert flow of building the sdimg, then dumping, shrinking, and converting. This is more advanced, cumbersome, and takes longer, but in some cases it would be desirable.

I was also thinking a bit about dumping the full .sdimg using the snapshot feature, which I think should be OK design-wise and makes a lot of sense for provisioning use-cases. The Mender Artifact is already being dumped to the workstation/laptop (and can be uploaded to the Mender server from there), so adding a .img file there in addition should be OK? Where it goes from there is a bit out of scope for Mender I think?

If that makes sense, perhaps we should add an option to dump the .img together with the Mender Artifact in the snapshot feature?


Can we have both system and application updates on a single device? For example, sometimes I want to do a whole system update, but on most occasions I will only be doing an application update (something like a directory overlay).
Thanks

Yes, that’s no problem. Just remember that a system update will overwrite everything on the root filesystem, so if your application has persistent data you don’t want to lose, make sure it is stored on the data partition.
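
To make the persistence point concrete, a minimal sketch, assuming an application called myapp (name and paths are illustrative):

```
# Hedged sketch: the data partition (mounted at /data) survives full
# rootfs updates, so persistent state belongs there. The symlink lives
# in the rootfs image, so it is recreated by every system update.
mkdir -p /data/myapp
ln -sfn /data/myapp /var/lib/myapp   # myapp reads/writes under /var/lib/myapp
```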

Is the method of @mkozjak still the best approach, considering that he proposed it 3 years ago?

Furthermore, in response to the following remark:

One downside to this approach (1) is the inability to do end-to-end tests with a testing device that is fully integrated with Mender and has all the pending updates installed. That’d be possible via the snapshot approach (2) if we could also get the entire image out of it somehow.

Can this not be mitigated by having a device in-house that is running the mender-convert’ed image, so that it can be used for testing?

Using mender-convert, yes, I believe it’s still the best approach.

Support for doing apt-get upgrades directly on Mender-enabled devices has improved in recent years, though. With grub.d integration (which is the default at least in mender-convert 4.0.0), Mender is directly integrated with the normal GRUB boot process, and therefore doing bootloader upgrades should be safe. So probably many upgrades can be done directly on the Mender-integrated device now.
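
For reference, a hedged sketch of the relevant mender-convert configuration knob; the variable name MENDER_GRUB_D_INTEGRATION is assumed from mender-convert's configuration documentation, and newer releases may already enable it by default:

```
# Hedged sketch of a mender-convert config fragment (e.g. configs/my_config,
# passed via --config). Variable name assumed from mender-convert docs.
MENDER_GRUB_D_INTEGRATION=y
```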