MongoDB pod stopped

I need some help with an issue I’m facing. I deployed the server to GCP by following the tutorial a few days ago. The only change I made was to use Google Cloud Storage instead of MinIO. Today the MongoDB pod failed, and I found the following logs:

{"t":{"$date":"2022-10-31T12:50:52.663+00:00"},"s":"I",  "c":"STORAGE",  "id":22430,   "ctx":"WTCheckpointThread","msg":"WiredTiger message","attr":{"message":"[1667220652:663673][1:0x7f4e9c24c700], WT_SESSION.checkpoint: [WT_VERB_CHECKPOINT_PROGRESS] saving checkpoint snapshot min: 99854, snapshot max: 99854 snapshot count: 0, oldest timestamp: (1667220640, 1) , meta checkpoint timestamp: (1667220645, 1) base write gen: 261156"}}
{"t":{"$date":"2022-10-31T12:50:57.373+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"","connectionId":80023,"connectionCount":51}}
{"t":{"$date":"2022-10-31T12:50:57.379+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn80023","msg":"client metadata","attr":{"remote":"","client":"conn80023","doc":{"driver":{"name":"nodejs|mongosh","version":"4.4.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.133+"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.4.0|1.3.1","application":{"name":"mongosh 1.3.1"}}}}
{"t":{"$date":"2022-10-31T12:50:57.391+00:00"},"s":"I",  "c":"NETWORK",  "id":22943,   "ctx":"listener","msg":"Connection accepted","attr":{"remote":"","connectionId":80024,"connectionCount":52}}
{"t":{"$date":"2022-10-31T12:50:57.393+00:00"},"s":"I",  "c":"NETWORK",  "id":51800,   "ctx":"conn80024","msg":"client metadata","attr":{"remote":"","client":"conn80024","doc":{"driver":{"name":"nodejs|mongosh","version":"4.4.0"},"os":{"type":"Linux","name":"linux","architecture":"x64","version":"5.10.133+"},"platform":"Node.js v14.19.1, LE (unified)","version":"4.4.0|1.3.1","application":{"name":"mongosh 1.3.1"}}}}
{"t":{"$date":"2022-10-31T12:50:57.461+00:00"},"s":"I",  "c":"ACCESS",   "id":20249,   "ctx":"conn80024","msg":"Authentication failed","attr":{"mechanism":"SCRAM-SHA-256","speculative":true,"principalName":"root","authenticationDatabase":"admin","remote":"","extraInfo":{},"error":"AuthenticationFailed: SCRAM authentication failed, storedKey mismatch"}}



It looks like the MongoDB credentials configured on the server do not match the connection string configured for the Mender deployment.
Assuming you have followed the production installation using Helm, could you compare the output of the following commands:

kubectl get secret mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d && echo
kubectl get secret mongodb-common -o jsonpath="{.data.MONGO_URL}" | base64 -d

Example output:


Can you verify that the password in the two secrets matches for your deployment?
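If you prefer to script the comparison, here is a minimal sketch. The `extract_pw` helper is my own addition, and the commented usage assumes the secret names (`mongodb`, `mongodb-common`) from the production tutorial; it will not handle passwords that themselves contain `:` or `@`.

```shell
# extract_pw: pull the password out of a connection string of the form
# mongodb://<user>:<password>@<host>[:<port>]/...
extract_pw() {
  printf '%s' "$1" | sed -E 's#^mongodb://[^:/]+:([^@]+)@.*#\1#'
}

# Usage against the cluster (secret names as in the production tutorial):
#   ROOT_PW=$(kubectl get secret mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
#   MONGO_URL=$(kubectl get secret mongodb-common -o jsonpath="{.data.MONGO_URL}" | base64 -d)
#   [ "$ROOT_PW" = "$(extract_pw "$MONGO_URL")" ] && echo "match" || echo "MISMATCH"
```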

Thank you for responding. Yeah, they don’t match. That’s strange because the UI was working for a few days.

If I modify the password in the MongoDB connection string using:

helm upgrade mender mender/mender -f values.yaml

Will that solve it?
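For context, the part of values.yaml I would change looks roughly like this (the key path is assumed from the Mender chart’s global MongoDB settings, and the password is a placeholder):

```yaml
# values.yaml fragment -- key path global.mongodb.URL is assumed from
# the Mender chart; replace <root-password> with the value stored in
# the mongodb secret.
global:
  mongodb:
    URL: "mongodb://root:<root-password>@mongodb:27017"
```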

Yes, that should solve it. You might need to recreate the pods afterwards to load the new connection string:

for deploy in $(kubectl get deploy -l '' -o name); do kubectl rollout restart "$deploy"; done

Note that if you re-apply the MongoDB instructions, you also generate a new password.

@alfrunes updating the connection string didn’t work. What could I do to get the passwords synchronized and the MongoDB pod up again?

@alfrunes very sorry to keep dragging this post out. I modified the YAML file from the tutorial to make the passwords match, deleted all releases and PVCs, and then reinstalled them in the same order as the tutorial. However, all the pods are ready again except for mongodb-0, and there are no errors in the pod’s log.