But my question is quite simple: how much will it save in terms of data?
The question is simple; unfortunately there is no simple answer, and the best approach is to test on your specific use-cases.
This feature operates on binary filesystem images, so the result will depend on which type of filesystem you use, how it behaves with regard to changes, and how good the algorithm is at catching those changes. How you build the images might also have an impact, e.g. reproducible builds will probably improve the results.
Unfortunately I do not have any numbers to share, but generally the tests we have done have shown very good results, with savings of up to 70-90 %. But as mentioned, it depends on the use-case.
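If you want a rough feel for this on your own images before running a full test, one quick sanity check is to compare two consecutive rootfs images block by block. The sketch below is *not* how the delta update feature computes its deltas (that uses a proper binary-diff algorithm and will typically do better); it just gives a crude upper-bound estimate of how much of the image actually changed between builds. The image paths are placeholders.

```python
#!/usr/bin/env python3
"""Rough upper-bound estimate of how much of a rootfs image changed
between two builds, by comparing fixed-size blocks. This is NOT the
actual delta algorithm -- just a quick back-of-the-envelope check."""

import hashlib
import sys

BLOCK_SIZE = 4096  # compare in 4 KiB blocks


def block_hashes(path):
    """Return a list of SHA-256 digests, one per BLOCK_SIZE block."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).digest())
    return hashes


def main(old_image, new_image):
    old = block_hashes(old_image)
    new = block_hashes(new_image)
    common = min(len(old), len(new))
    changed = sum(1 for i in range(common) if old[i] != new[i])
    changed += abs(len(old) - len(new))  # blocks present in only one image
    total = max(len(old), len(new))
    print(f"{changed}/{total} blocks differ "
          f"(~{changed * BLOCK_SIZE / 1024 / 1024:.1f} MiB of changed data)")


if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])  # e.g. rootfs-old.ext4 rootfs-new.ext4
```

If the changed fraction is small and fairly stable across builds (reproducible builds help a lot here), delta updates are likely to give you large savings.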
If I only change a single file and its size is 10 MB, will the delta update be only 10 MB?
In many cases it would probably be less than 10 MB, since the difference is calculated at the binary level, so only the parts of the file that actually changed contribute to the delta.
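As a toy illustration of why the delta is usually smaller than the changed file itself, the snippet below creates a 10 MB buffer, modifies a 1 MB region of it, and counts how much differs at the block level. A real binary-diff algorithm works differently, so this only shows the idea, not the numbers you would actually get.

```python
import hashlib
import os

BLOCK = 4096


def changed_bytes(a: bytes, b: bytes) -> int:
    """Count bytes in blocks that differ between two equal-length buffers."""
    diff = 0
    for i in range(0, len(a), BLOCK):
        if a[i:i + BLOCK] != b[i:i + BLOCK]:
            diff += BLOCK
    return diff


old = os.urandom(10 * 1024 * 1024)  # "original" 10 MB file
new = bytearray(old)
# overwrite a 1 MB region, simulating a change inside the file
new[2 * 1024 * 1024:3 * 1024 * 1024] = os.urandom(1024 * 1024)

print(f"changed: ~{changed_bytes(old, bytes(new)) / 1024 / 1024:.1f} MB "
      f"out of {len(old) / 1024 / 1024:.0f} MB")
```

So even though the file is 10 MB, only the changed portion (about 1 MB in this example) ends up contributing to the delta.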
It should be fairly easy to test by following:
You can get access to the above by creating an account,
Thank you for your answer. We are actually already running on the Starter plan, but due to the size of the update it is not viable with the basic solution. So does it mean we have to change plans to get access to the feature and test it?
Our image is built through a cloud build, and the sstate cache is backed up to save time. There is no manual intervention in the build process.
You can create a “test” account to get access and try it. We have reworked the “trial” account recently, and now you get access to all the Mender features for a certain period for evaluation.