We have a script to migrate the user information, similar to this, which works fine.
But we don’t want to migrate user info on data partition updates, so one approach would be to put the script inside the artifact.
So we put the script inside ArtifactInstall_Enter_10 (also tried with Install_Leave), which sometimes works and sometimes doesn’t. When it doesn’t work, it complains that /mnt/etc doesn’t exist, and after adding an `ls -al` of the new root, we got something like this,
Am I doing something wrong? Is user migration not supposed to happen during the install? If I need to do it during Download_Leave, how can I distinguish between a rootfs update and a data partition update?
Not sure if I can add anything enlightening, but some thoughts here.
Distinguishing between a full root filesystem update and a data update would usually be done through the Update Module being used.
The “sometimes works, sometimes doesn’t” case is not exactly revealing. Have you tried adding debug statements that dump information to a place like /data/debug_log.txt and looking at it?
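For illustration, such a debug dump from inside a state script could look like the sketch below. The log path follows the suggestion above; the fallback to /tmp is only there so the sketch runs anywhere, and everything else is an assumption about what is worth logging.

```shell
#!/bin/sh
# Hypothetical debug helper for a Mender state script: append the
# script name, timestamp, mount table, and the contents of the new
# root's mount point to a log that survives the update.
LOG_DIR=/data                       # persistent partition on a Mender device
[ -d "$LOG_DIR" ] || LOG_DIR=/tmp   # fallback so the sketch runs anywhere

{
    echo "=== $0 at $(date) ==="
    mount                 # which partitions are mounted where, right now
    ls -al /mnt           # what the script actually sees at this state
} >> "$LOG_DIR/debug_log.txt" 2>&1
echo "logged to $LOG_DIR/debug_log.txt"
```

Comparing the mount table between a run that worked and one that failed is usually the quickest way to spot a difference.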
A long shot: could it be that you are migrating users and groups, but not the shadow files?
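To make that concrete, a full migration would need to carry over all four user databases. A minimal sketch, where NEWROOT stands for the freshly mounted new rootfs (/mnt in this thread) but defaults to a scratch path here so the sketch is runnable anywhere:

```shell
#!/bin/sh
# Sketch: migrate the full set of user databases, not just passwd
# and group. NEWROOT is the mount point of the new root partition.
NEWROOT="${NEWROOT:-/tmp/demo-newroot}"
mkdir -p "$NEWROOT/etc"

for f in passwd group shadow gshadow; do
    # -p preserves ownership and mode; the shadow files must stay root-only
    [ -f "/etc/$f" ] && cp -p "/etc/$f" "$NEWROOT/etc/$f"
done
echo "user databases copied to $NEWROOT/etc"
```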
Do we have that info in the Download state scripts? We are already using the available rootfs-image Update Module for rootfs updates and the directory Update Module for the data partition updates.
Ah, I missed mentioning that the output I posted is from an `ls -al` I put in the ArtifactInstall_Enter script, so yes, I have tried logging the partition and artifact details. After a reboot it is actually fine; the problem only occurs while the script is running, which I am unable to understand.
The output I posted is from between mounting the new partition and migrating the users, where we check whether the /etc directory exists, similar to this; it complains that the directory doesn’t exist in the new partition.
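The kind of guard described here could be sketched as below. The helper name and paths are illustrative, not the actual script from this thread:

```shell
#!/bin/sh
# Sketch of the guard described above: before migrating users, verify
# that /etc actually exists under the new root's mount point.
# check_newroot is a hypothetical helper; the paths are illustrative.
check_newroot() {
    if [ ! -d "$1/etc" ]; then
        echo "ERROR: $1/etc does not exist in the new partition" >&2
        return 1
    fi
    echo "$1/etc present, safe to migrate"
}

check_newroot /                              # the running root has /etc
check_newroot /nonexistent 2>/dev/null \
    || echo "guard tripped, skipping migration"
```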
The simplest trick is to take the existing directory Update Module, rename it to data, and add your logic. If you’re in that specific Update Module’s script, you obviously know where you are. For artifact generation it just needs a changed type string, and it removes the need for handling state scripts.
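A minimal sketch of what that renamed module could look like, following the Update Module shell interface (state name passed as $1). The deploy logic itself is elided, and the echoed answers are illustrative rather than the directory module's actual defaults:

```shell
#!/bin/sh
# Sketch of a custom Update Module made by renaming the stock
# "directory" module to "data". On a device this would live at
# /usr/share/mender/modules/v3/data.
module() {
    case "$1" in
        ArtifactInstall)
            # the original directory-module deploy logic goes here;
            # because this module only ever handles data-partition
            # payloads, no state script has to guess the update type
            echo "installing data payload"
            ;;
        NeedsArtifactReboot) echo "No" ;;
        SupportsRollback)    echo "No" ;;
    esac
}

module ArtifactInstall
```

Artifacts for it would then be generated with the new type string, e.g. `mender-artifact write module-image -T data ...` (exact flags assumed from the module-image generator).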
My guess would be that there is some effect where the system/shell doesn’t like the user-defining configuration files being touched in a live session, possibly in the context of mounts. But that’s really just a guess; no real idea.
For the record, I won’t be able to add anything here in the next about two weeks for personal reasons, sorry.
Sorry, I totally forgot to update the thread. After sniffing around a bit with the log files, I realized that since we had udev-extraconf installed, it was also auto-mounting the other root partition, and it seems that flashing a disk while it is mounted corrupts it. Looking at the mender-flash source, it avoids writing blocks whose contents are unchanged, which added up: the only successful updates were the ones where very little was actually being written.
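For anyone hitting the same thing and wanting to keep udev-extraconf: its mount.sh consults a blacklist file, if memory serves, so the inactive rootfs partitions can be excluded from auto-mounting. The file path is udev-extraconf's; the device names below are purely illustrative:

```
# /etc/udev/mount.blacklist -- devices udev-extraconf must not auto-mount
# (illustrative partition names; list both A/B rootfs partitions)
/dev/mmcblk0p2
/dev/mmcblk0p3
```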