Self-hosted Mender - error during device decommission

This is the memory occupation for each Docker container:

CONTAINER ID        NAME                                      CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
8271b9813742        menderproduction_mender-api-gateway_1     0.12%               3.477MiB / 991.9MiB   0.35%               64.6MB / 90.3MB     99.3MB / 306MB      5
222850e7955c        menderproduction_mender-device-auth_1     0.55%               8.008MiB / 991.9MiB   0.81%               27.7MB / 18.2MB     91.3MB / 0B         6
88d0d3153d89        menderproduction_mender-conductor_1       1.76%               111.5MiB / 991.9MiB   11.25%              2.7kB / 3.26kB      0B / 0B             46
52fb18aa747a        menderproduction_mender-useradm_1         0.54%               7.906MiB / 991.9MiB   0.80%               26.8MB / 15.5MB     91.5MB / 0B         6
9e5d0a269169        menderproduction_mender-inventory_1       0.56%               7.09MiB / 991.9MiB    0.71%               11.2MB / 7.84MB     80MB / 0B           6
4f36b3ca7f89        menderproduction_mender-deployments_1     0.52%               4.59MiB / 991.9MiB    0.46%               6.22MB / 5.67MB     126MB / 0B          6
543e949b71bf        menderproduction_storage-proxy_1          0.00%               1.031MiB / 991.9MiB   0.10%               253kB / 37.2kB      24.1MB / 0B         2
73f9137bf553        menderproduction_mender-mongo_1           0.60%               14.7MiB / 991.9MiB    1.48%               32.6MB / 40.2MB     284MB / 897MB       43
297735564d25        menderproduction_mender-redis_1           0.13%               1020KiB / 991.9MiB    0.10%               11.5MB / 4.7MB      16.1MB / 12.3kB     5
1483f14a242b        menderproduction_mender-elasticsearch_1   0.00%               3.98MiB / 128MiB      3.11%               180B / 0B           0B / 0B             3
a27d2c1a5d2e        menderproduction_minio_1                  0.00%               6.004MiB / 991.9MiB   0.61%               252kB / 22kB        213MB / 0B          8
c722e452bc35        menderproduction_mender-gui_1             0.00%               328KiB / 991.9MiB     0.03%               438kB / 42MB        161MB / 20.8MB      1
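
For reference, a per-container snapshot like the one above can be captured with the standard Docker CLI; the command below is generic Docker, not anything Mender-specific:

docker stats --no-stream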

I have 120 MB free, but Elasticsearch continuously restarts.

I have upgraded my cloud server to 2 GB of RAM.
Now everything works.
However, Elasticsearch is taking nearly all of the RAM…
It would be useful to understand where to change the maximum RAM of the JVM.

CONTAINER ID        NAME                                      CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
8271b9813742        menderproduction_mender-api-gateway_1     0.00%               2.34MiB / 1.953GiB    0.12%               4.18MB / 5.33MB     616MB / 1.65MB      5
222850e7955c        menderproduction_mender-device-auth_1     0.49%               6.762MiB / 1.953GiB   0.34%               610kB / 430kB       449MB / 7.09MB      5
88d0d3153d89        menderproduction_mender-conductor_1       3.00%               30.92MiB / 1.953GiB   1.55%               574kB / 1.07MB      788MB / 5.43MB      62
52fb18aa747a        menderproduction_mender-useradm_1         0.28%               2.602MiB / 1.953GiB   0.13%               766kB / 382kB       420MB / 2.49MB      5
9e5d0a269169        menderproduction_mender-inventory_1       0.46%               4.348MiB / 1.953GiB   0.22%               173kB / 108kB       358MB / 3.54MB      4
4f36b3ca7f89        menderproduction_mender-deployments_1     0.37%               1.168MiB / 1.953GiB   0.06%               166kB / 99.1kB      428MB / 1.45MB      6
543e949b71bf        menderproduction_storage-proxy_1          0.00%               628KiB / 1.953GiB     0.03%               8.37kB / 3.3kB      20.1MB / 0B         2
73f9137bf553        menderproduction_mender-mongo_1           0.47%               9.867MiB / 1.953GiB   0.49%               603kB / 669kB       817MB / 12.3MB      41
297735564d25        menderproduction_mender-redis_1           0.15%               1004KiB / 1.953GiB    0.05%               826kB / 356kB       58.1MB / 6.4MB      5
1483f14a242b        menderproduction_mender-elasticsearch_1   89.63%              1.559GiB / 1.953GiB   79.83%              1.06kB / 312B       603MB / 17MB        16
a27d2c1a5d2e        menderproduction_minio_1                  0.00%               848KiB / 1.953GiB     0.04%               19.2kB / 5.36kB     374MB / 0B          7
c722e452bc35        menderproduction_mender-gui_1             0.00%               408KiB / 1.953GiB     0.02%               30.7kB / 3.59MB     88MB / 19.9MB       1

Thank you again

@zumiani
you can also try modifying the Xms and Xmx Java options with something like the following:

diff --git a/production/prod.yml b/production/prod.yml
index 56a40fd..7f644b6 100644
--- a/production/prod.yml
+++ b/production/prod.yml
@@ -131,6 +131,8 @@ services:
         volumes:
             - mender-elasticsearch-db:/usr/share/elasticsearch/data:rw
             - ./conductor/elasticsearch/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
+        environment:
+            - ES_JAVA_OPTS=-Xms0m -Xmx256m
     mender-redis:
         volumes:
             - mender-redis-db:/var/lib/redis:rw

Elasticsearch's behaviour has not been tested with such a memory limitation, but considering that the load on ES in this use case is minimal, I suppose it can work normally, so you can try it if you want.
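
To apply a change like this, the Elasticsearch container needs to be recreated. Assuming the standard Mender production layout, where the production directory provides a run helper that wraps docker-compose with the right compose files, something along these lines should work (a sketch, not verified here):

cd production
./run up -d mender-elasticsearch

If your setup does not use the run helper, the equivalent is docker-compose up -d mender-elasticsearch with the same -f compose-file arguments you normally pass.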

Thanks @0lmi, this solves the issue for mender-elasticsearch. I am using Mender server v2.2.
Can you provide a benchmark metric for Elasticsearch that would help to understand how much memory should be allotted to it based on need?

I think the question about Elasticsearch benchmarks is better addressed to the Elasticsearch community.

I am using the patch provided above while running the Mender server on an Amazon EC2 instance with 1 GB of RAM. I see the RAM usage going up over time despite the fix.

@0lmi what could have possibly gone wrong?

It is supposed to go up a little bit. Are you out of memory?

Almost on the verge. It reached 800 MB+ when the provided minimum and maximum heap values were 10 MB and 150 MB respectively. In some instances the Mender UI stopped responding properly because it was unable to process requests, while an active deployment was in progress for a single device.
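
For context, those heap values would presumably correspond to an environment entry in prod.yml along the lines of the following (my reconstruction, not the poster's exact line):

        environment:
            - ES_JAVA_OPTS=-Xms10m -Xmx150m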

Temporary solution:
I have temporarily tried to solve it using a swap partition. The RAM usage starts at 700 MB+ and later settles into the 500-600 MB range over time as the swap is utilized. The occupied swap size is about 160 MB.
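
For anyone who wants to reproduce the swap approach, a swap file can be set up with standard Linux commands roughly like this (the 1 GB size is illustrative, not taken from the thread):

sudo fallocate -l 1G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file persistent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab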

Here’s the link to a screenshot of the htop output.

Let me know what you think can be done. @0lmi

If you want to optimise Elasticsearch's memory usage, you can contact the Elasticsearch community with questions about minimal heap size and memory usage.

Your temporary solution should work fine in this case because the Mender services don't generate significant load on Elasticsearch; it's an important component, but not an intensively used one. Alternatively, you can just increase memory to 2 GB as @zumiani did.

I will try doing that. Thank you for the help @0lmi