Docker Update Module without registry

Hi, I'm wondering how safe that is. I don't want anyone to be able to just download my Docker image, so I was thinking about making the repo private, but then I would need to add some tokens to allow devices to download it from the registry. I'm wondering how this should be done so that it is safe and easy to maintain. It also adds an additional source of errors (e.g. problems with your account token).

Hi @livelove1987,

Why not just create an Update Module that packs up the container in the generator, and directly imports it in the installation? It shouldn’t be too complicated, presuming that you can build the container on the development or build host. Rough pseudocode idea:

Generator script:

docker build -t mypayload .
docker save --output=mypayload.tar mypayload
mender-artifact write module-image \
  -T packaged_docker \
  -t yourdevice \
  -o mycontainer_1.0.mender \
  -n mycontainer-1.0 \
  -f mypayload.tar

And in the packaged_docker Update Module:

  docker load --input "$FILES/mypayload.tar"

Adjust as needed, but I think it conveys the concept. Note that `docker save` produces an image tarball, so the matching command on the device is `docker load` (`docker import` is for filesystem tarballs created with `docker export`). For the basics of writing an Update Module, look at the single-file module in the mendersoftware/mender repository (support/modules/single-file), along with its generator (support/modules-artifact-gen/single-file-artifact-gen), plus the documentation on creating a custom Update Module in the Mender docs.
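To make the sketch above a bit more concrete, a `packaged_docker` module could look roughly like this. The module name and payload file name come from the pseudocode above; everything else follows the standard Update Module interface (`$1` is the state, `$2` is the file tree, payload files live under `$2/files/`). The logic is wrapped in a function here only so it is easy to exercise; a real Update Module is a standalone script.

```shell
# Sketch of the packaged_docker Update Module logic. In a real module
# this would be the body of a standalone script; it is wrapped in a
# function here purely for illustration.
packaged_docker_module() {
    STATE="$1"    # current state, e.g. ArtifactInstall
    FILES="$2"    # file tree; payload files live in $FILES/files/
    case "$STATE" in
        ArtifactInstall)
            # docker save produces an image tarball, so the matching
            # command on the device is docker load.
            docker load --input "$FILES/files/mypayload.tar"
            ;;
        NeedsArtifactReboot)
            echo "No"
            ;;
        SupportsRollback)
            echo "No"
            ;;
    esac
}
```

States not listed in the case statement simply fall through, which the client treats as "nothing to do" for that state.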


Hi, I'm having trouble with that solution (I've been trying to solve it all day with different approaches). The last error I got was “2024-05-10 20:02:57.199 +0000 UTC error: Input/output error: Unexpected number of written bytes to download stream”. Generally, I deploy my app and after about 10 minutes I get that error. My tar.gz file is about 3GB, so that could be the reason as well. I also tried both the single-file module and a custom one, and both resulted in errors. I added a screenshot showing my single-file artifact which failed to deploy.
I also tried with mender-artifact write module-image:

./mender-artifact write module-image -t "raspberrypi4" -n "test-license-plate" \
  -o "test-license-plate.mender" \
  -T "license_plate_production" \
  -f image.tar.gz \
  --software-name test-app \
  --software-version 1.2

That failed as well.
Is it possible to reach you via Discord (my nickname is livelove)? I could send you some screenshots and we could find the solution much faster.

Hi @livelove1987,

Sorry, I am not available for 1:1 support over Discord.

Generally speaking, 3GiB is already very heavy, but it should still be okay, given enough free storage on the device. Does deploying smaller artifacts work? Is this on Hosted Mender? How about a different device and/or storage medium? The single-file Update Module is definitely not known to be flaky, so the question is where your setup goes wrong.


Okay, I completely understand.
My Raspberry Pi has a lot of free space. Also, when I try to deploy, there is no progress shown in the UI. It worked with the deployment from the guide; I didn't try others. I don't currently have other devices available, so I can't check whether it works on another one.


Umm no, there is not exactly a lot of space. You have a large /data partition, but that’s about it. The root filesystem has less than 1GiB free, and I guess that you also don’t have 4GB of RAM free? If I remember correctly, intermediate data is not put on /data, and hence you probably just run out of space somewhere along the way. I strongly suggest trying with something smaller to figure it out, like a 500MiB artifact, then a 1GiB one, etc.
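A quick way to bisect the size limit is to generate dummy payloads of increasing size and ship each one as a single-file artifact. The file names here are arbitrary; note that `truncate` creates sparse, all-zero files, which compress extremely well, so for tests where realistic transfer size matters you may want to fill the files from /dev/urandom instead.

```shell
# Create dummy payloads of increasing size for bisecting where
# deployments start to fail.
for size in 500M 1G 2G; do
    truncate -s "$size" "payload-${size}.bin"
done
ls -lh payload-*.bin
```

Each of these can then be packed with the single-file artifact generator mentioned earlier and deployed in turn, until the failure reappears.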

The simple approach, which also the single-file Update Module uses, does require the full download to happen first, then the file is copied to the destination. The Mender Client supports direct streaming, but by default this is just used for rootfs updates.
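As a rough illustration of the streaming alternative: during the Download state the Update Module file tree contains a `stream-next` FIFO that yields the relative path of the next payload stream, and an empty line when no streams remain (per the Update Module v3 protocol). Piping such a stream into `docker load` is my assumption for this use case, not something the single-file module does; the function wrapper is only for easy testing.

```shell
# Hypothetical Download-state handler that pipes each payload stream
# straight into docker load instead of storing the tarball on disk
# first. $1 is the Update Module file tree.
download_streams() {
    tree="$1"
    while :; do
        # Each read of the stream-next FIFO yields the relative path
        # of the next payload stream; an empty line means we are done.
        stream=$(cat "$tree/stream-next")
        if [ -z "$stream" ]; then
            break
        fi
        docker load < "$tree/$stream"
    done
}
```

This avoids ever materializing the multi-GiB tarball in the payload tree, at the cost of more careful error handling in the module.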



Okay, I will check it out. I have 1.5GB of free RAM; does that mean I can only handle 1.5GB artifacts? Also, is it possible to relocate the intermediate data?

I’m really grateful to you for helping me figure it out.

@livelove1987 I just gave it a rough test and it seems that I’m wrong; at least for the artifact download, the /data partition is used. However, unless you have modified your Docker setup to also use that storage, it will use /var/something, and that definitely doesn’t have the free space.


I tried uploading just a file (as I showed in the picture above) and it also failed. I used a single-file artifact for that, so there was no Docker operation involved. So there is a problem with uploading the single-file artifact as well.

Hello folks,

The payloads of the artifacts get decompressed in /var/lib/mender/modules/v3/payloads/0000/tree/files after the Download state. (During the Download state they can be handled as Streams if needed).

On devices with the four-partition layout, this folder will point to /data/mender/modules/v3/payloads/0000/tree/files.
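To check how much room that payload tree actually has on a given device, something along these lines helps (paths as above; the fallback to / is only so the snippet also runs on machines without a Mender installation):

```shell
# Where Mender unpacks module payloads after the Download state.
PAYLOAD_DIR=/var/lib/mender/modules/v3/payloads/0000/tree/files
# Show free space on the filesystem backing that path; fall back to /
# on machines where the Mender tree does not exist.
df -h "$PAYLOAD_DIR" 2>/dev/null || df -h /
# RAM matters for large payloads too, so check memory as well.
free -h
```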

Do you have the deployments log?



Yes, you’re right, it’s there. That’s weird, because in my UI the progress bar doesn’t move, and my logs also show that it just started installing, and at the end it fails.
So I realized that the files are downloaded to …/tree/files/…, but there is something else which indicates that the server returns an error. The same files are also in /var/lib/mender/modules/v3/payloads/0000/tree/files, as you wrote. So maybe there is just not enough space for them and that’s why I’m getting errors. But I checked disk usage and there wasn’t anything which could be a potential cause of running out of storage.

I’ve got it. RAM usage is the problem here (check out buff/cache). Any ideas how to solve that? I’m sure I will need to buy a device with more RAM for production usage, but any ideas for testing?

EDIT by @TheYoctoJester - moved this over from another thread which does not have any docker context so does not apply. Keeping it here for the sake of transparency.

Hi, I’m having the same issue while deploying a Docker image compiled to tar.gz format. I’m deploying a Mender image of 5GB and I’m also running out of RAM. Could you tell me how you solved that problem?

Do I need as much RAM as my Mender artifact contains to make deployments possible?

Back to the original question: packing the container as a tar into the Mender artifact has the downside that you lose the container's layered updates. You always have to transfer the complete container, even if only one layer of it changed.

Another approach would be to pack the credentials into the artifact and use Mender State Scripts to do a docker login before pulling the container.

For me, this approach works pretty well.

When packing credentials into the artifact, I could just as well log in to Docker inside a custom Update Module, but I’m also worried about how secure that method is. I mean, passing your Docker access token in an artifact doesn’t sound really secure.


So, getting back to that after the weekend. Here’s some food for thought:

  • try to understand the problem properly first. Is it the Mender Client that gives you a headache, or rather Docker? A simple test would be shipping a file of similar size to the device and just storing it on /data without any docker import. If that passes without issues, then it’s the import process that is the actual problem.
  • docker import should support importing from a stream instead of a fully downloaded tarball. Maybe you can use the stream-based API that is available in the Update Module during the Download state. From man docker-import(1): “Create an empty filesystem image and import the contents of the tarball (.tar, .tar.gz, .tgz, .bzip, .tar.xz, .txz) into it, then optionally tag it.”
  • generally, I would suggest revisiting your update strategy as a whole. Shipping a multi-GiB artifact might work in a lab setting, but it’s not exactly desirable in production. Not every one of your devices will be on a fast connection. So think about these ideas:
    • what does actually change between updates? Maybe split the update into a smaller part that changes frequently and a larger one that doesn’t?
    • do you really need Docker, or do you just use it because it’s the first and most comfortable option? How about moving the actual application into the root filesystem?
    • moving to a registry-based approach can also help, as it enables the layered updates @ruben pointed out.
    • what do you actually want to protect? What are the threat models? What are the mitigations? Do you have a strategy, or is it currently all based on a gut feeling that says “I don’t want to use a registry or credentials”? I know this might sound a bit offensive, but if you take a step back and find that this applies, then it definitely is not a good approach.



I’m really thankful for that answer.

  1. Mender fails when pushing single-file artifacts which contain a Docker image saved in .tar.gz format, which behaves the same as a regular file saved in that format. So there are no docker import operations involved.
  2. About updates: the main updates pushed in the future would probably be code-related, like fixing bugs or adding features.
  3. I decided to use Docker because it gives me strong control over the application environment; I can make sure that there won’t be any problems with installation on other devices.
  4. Okay, I think the Docker registry idea may make sense; I’m just quite skeptical about passing my login credentials (the access token of the account used for integration) into Mender artifacts.

Hi @livelove1987,

I see. Well, from a purely practical point of view: you trust Mender to ship the container, but not to ship the credentials for a container registry? That sounds a bit illogical, assuming that the registry has proper credential management.
Personally I think that (3) is not exactly true, but if you consider it beneficial, well - it’s your project. Have fun :slight_smile:

Concerning (1), it would be interesting to dive a bit deeper where the out of memory situation arises, just out of curiosity.



Okay, you’re right, trusting Mender but still not wanting to use Docker credentials doesn’t make sense. I’m also curious about the out-of-memory situation, so I will investigate too. Also, I decided to use Docker because I think it will be more stable than deploying just “pure” code (by the way, I’m really curious about other people’s opinions, because deploying raw code is much easier, but I prefer stable solutions over easier ones, haha).