Kubernetes setup failing: secret "mongodb-common" not found

I have been trying to set up the open-source Mender server according to the instructions for version 3.5. As the MongoDB chart v11.2.0 is no longer available, I used version 12.1.31, as listed in the README of the mender-helm GitHub repository. Setup all appears to go fine, but when I attempt to install Mender 3.5.0 the helm command just hangs. Looking with kubectl I can see a pod "index-reporting" in status "CreateContainerConfigError". The error reported in its description is

  Normal   Scheduled  76s                default-scheduler  Successfully assigned default/index-reporting-2s9g6 to pool-8ygyp6lie-q6uq2
  Warning  Failed     22s (x6 over 75s)  kubelet            Error: secret "mongodb-common" not found
  Normal   Pulled     22s (x6 over 75s)  kubelet            Container image "docker.io/mendersoftware/deployments:mender-3.5.0" already present on machine
  Warning  Failed     22s (x6 over 75s)  kubelet            Error: secret "mongodb-common" not found
  Normal   Pulled     7s (x7 over 75s)   kubelet            Container image "docker.io/mendersoftware/deviceauth:mender-3.5.0" already present on machine

The description is referring to the image "docker.io/mendersoftware/deviceauth:mender-3.5.0".
I have tried a few versions of MongoDB with no effect. I am using credentials for an S3 bucket on AWS, so I have not installed MinIO, but I cannot see how that could affect this.
The mender-3.5.0.yml file, produced as per the instructions, is shown below:

global:
  enterprise: false
  mongodb:
    URL: "mongodb://root:<rootpass>@mongodb-0.mongodb-headless.default.svc.cluster.local:27017,mongodb-1.mongodb-headless.default.svc.cluster.local:27017"
  nats:
    URL: "nats://nats:4222"
  s3:
    AWS_URI: "https://<bucket-name>.s3.eu-west-1.amazonaws.com"
    AWS_BUCKET: "<bucket-name>"
    AWS_REGION: "eu-west-1"
    AWS_ACCESS_KEY_ID: "<keyid>"
    AWS_SECRET_ACCESS_KEY: "<access key>"
    AWS_FORCE_PATH_STYLE: "false"
  url: "https://mender.example.com"

api_gateway:
  env:
    SSL: false

device_auth:
  certs:
    key: |-
              <KEY>
useradm:
  certs:
    key: | 
            <KEY>

Any ideas or suggestions on how to proceed would be much appreciated.


As an update to this: I found that the job "index-reporting" runs before the template that creates the "mongodb-common" secret. I do not know enough about Kubernetes and Helm to know why that is, but I am working around it by creating the secret myself first and modifying the helm package to remove the template that creates it (a sketch of the pre-creation step is below). This is not really a fix though.
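A sketch of that pre-creation step, assuming the chart repository is added as mender and the secret template inside the chart is named secret-mongodb-common.yaml:

helm template mender mender/mender --version 3.5.0 -f mender-3.5.0.yml \
  --show-only templates/secret-mongodb-common.yaml | kubectl apply -f -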

The hack mentioned in the previous post got a server up and running, but only just. I suspect there may be some issue with the database, as I now have three pods that are constantly crashing: "create-artifact-worker", "workflows-server" and "workflows-worker". The logs from each of these three services are identical:

time="2023-03-09T13:29:01Z" level=info msg="migrating workflows" file=migrations.go func=mongo.Migrate line=38
time="2023-03-09T13:29:01Z" level=info msg="migration to version 1.0.0 skipped" db=workflows file=migrator_simple.go func="migrate.(*SimpleMigrator).Apply" line=125
time="2023-03-09T13:29:01Z" level=info msg="DB migrated to version 1.0.0" db=workflows file=migrator_simple.go func="migrate.(*SimpleMigrator).Apply" line=140
2023/03/09 13:29:06 context deadline exceeded

The interface is active and I can log in, but without these services running the page JavaScript reports 502 errors, and a test upload of an artifact just returns an HTTP 413 (too large).

Hello @rlaybourn

Thank you for the report, and for using Mender. We will take a close look at it.
In the meantime, could you check whether you have NATS up and running? You should see something like:

# kubectl get pods | grep nats
nats-0                                     3/3     Running            12 (4m19s ago)   14d
nats-box-845d4d68f5-w4g97                  1/1     Running            4 (4m19s ago)    14d

If not, you can use the following to install and deploy it:

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
helm install nats nats/nats --version 0.15.1 --set "nats.image=nats:2.7.4-alpine" --set "nats.jetstream.enabled=true"

In case you already have it, you can remove it with helm uninstall nats and repeat the install command. After NATS is up and running, just delete the pods that are misbehaving (kubectl delete pod pod_name_from_kubectl_get_pods).

best regards,
peter

Thanks for the response. I found a solution to the problem in this other ticket: Mender on K8S problems. I needed to add the correct storage class in the nats.yml for the cluster I am using; in this case I am testing the server setup on minikube, so I needed storageClassName: "standard". I now have a server mostly running, but I am currently unable to upload any artifacts to this test setup. The error reported back from the interface when I try is a 413.
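For reference, the relevant part of my nats.yml ended up looking roughly like this (the nesting assumes the 0.15.x chart layout shown elsewhere in this thread; the storage class must be one that exists in the cluster, and "standard" is the minikube default):

nats:
  jetstream:
    enabled: true
    fileStorage:
      enabled: true
      size: "2Gi"
      storageClassName: "standard"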

OK, the 413 size error was from me misconfiguring my ingress.
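In case anyone else hits this: assuming the ingress is handled by ingress-nginx (e.g. the minikube ingress addon), the 413 comes from the default request body size limit, which can be raised or disabled with an annotation on the Mender ingress, for example:

metadata:
  annotations:
    # "0" disables the limit; an explicit size such as "512m" also works
    nginx.ingress.kubernetes.io/proxy-body-size: "0"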

I am facing the same problem. In order to get past the index-reporting-abdef CreateContainerConfigError, I rendered the secret-mongodb-common.yaml locally:

helm template secret-mongodb-common.yaml ./mender-3.5.1.tgz -f mender-values.yaml > secret-mongodb.yaml

and then applied it:

kubectl apply -f secret-mongodb.yaml
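After applying it, the secret should be visible before retrying the install:

kubectl get secret mongodb-common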

UPDATE: Not unexpectedly, a bunch of the pods are now in CrashLoopBackOff state, but that’s for another day.

But given that I am trying to do a fully automated installation with Terraform, this is rather annoying.

I’ll also note that the 3.5.1 helm chart has not been published yet and the 3.5 docs still refer to the obsolete 11.2.0 mongodb helm chart.

I adjusted the versions of the charts of the various services and solved that problem by using Mender 3.4. However, I am now facing another problem where mender-deployments is in CrashLoopBackOff. To be honest, the documentation is a mess; I tried several Mender versions starting from 3.1.

Here are the commands I used. It's not a fully automated install script.

curl -sfL https://get.k3s.io | sh -s - --write-kubeconfig-mode 644 --data-dir /mnt
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Install Helm
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash

# Install Cert-Manager
export CERT_MANAGER_CHART_VERSION="v1.10.0"
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --version $CERT_MANAGER_CHART_VERSION \
  --namespace kube-system \
  --set installCRDs=true

# Provide an Issuer for Let's Encrypt
export LETSENCRYPT_SERVER_URL="https://acme-v02.api.letsencrypt.org/directory"
export LETSENCRYPT_EMAIL="<email>"

cat >issuer-letsencrypt.yml <<EOF
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: ${LETSENCRYPT_SERVER_URL}
    email: ${LETSENCRYPT_EMAIL}
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress: {}
EOF
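
# Optional: wait for the cert-manager webhook (default chart naming assumed) to be
# available, so that applying the Issuer below does not race the webhook
kubectl -n kube-system wait --for=condition=Available deployment/cert-manager-webhook --timeout=180s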

# Apply the Issuer
kubectl apply -f issuer-letsencrypt.yml



# Install MongoDB
# Install password generator
sudo apt install pwgen

export MONGODB_ROOT_PASSWORD=$(pwgen 32 1)
export MONGODB_REPLICA_SET_KEY=$(pwgen 32 1)
export MONGODB_CHART_VERSION="12.1.31"
export MONGODB_TAG="5.0.10-debian-11-r7"

cat >mongodb.yml <<EOF
architecture: "replicaset"
replicaCount: 1
arbiter:
  enabled: true
auth:
  rootPassword: ${MONGODB_ROOT_PASSWORD}
  replicaSetKey: ${MONGODB_REPLICA_SET_KEY}
readinessProbe:
  timeoutSeconds: 20
image:
  tag: "${MONGODB_TAG}"
persistence:
  size: "4Gi"
EOF

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm upgrade --install mongodb bitnami/mongodb --version $MONGODB_CHART_VERSION -f mongodb.yml 
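
# Optional: wait for the MongoDB statefulset (named after the release) to finish
# rolling out before taking the connection string below
kubectl rollout status statefulset/mongodb --timeout=300s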

# Get connection string
echo mongodb://root:${MONGODB_ROOT_PASSWORD}@mongodb-0.mongodb-headless.default.svc.cluster.local:27017,mongodb-1.mongodb-headless.default.svc.cluster.local:27017

# Install NATS, message broker
export NATS_IMAGE="nats:2.7.4-alpine"
export NATS_CHART_VERSION="0.15.1"

cat >nats.yml <<EOF
cluster:
  enabled: true
  replicas: 2
nats:
  image: "${NATS_IMAGE}"
  jetstream:
    enabled: true
    memStorage:
      enabled: true
      size: "1Gi"

    fileStorage:
      enabled: true
      size: "2Gi"
      storageDirectory: /data/
EOF

helm repo add nats https://nats-io.github.io/k8s/helm/charts/
helm repo update
helm install nats nats/nats --version $NATS_CHART_VERSION -f nats.yml

# Install MinIO, storage interface provider
export MINIO_TAG="RELEASE.2021-06-17T00-10-46Z"
export MINIO_CHART_VERSION="4.1.7"
export MINIO_ACCESS_KEY=$(pwgen 32 1)
export MINIO_SECRET_KEY=$(pwgen 32 1)

cat >minio-operator.yml <<EOF
tenants: {}
EOF

helm repo add minio https://operator.min.io/
helm repo update
helm install minio-operator minio/minio-operator --version $MINIO_CHART_VERSION -f minio-operator.yml


cat >minio.yml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: minio-creds-secret
type: Opaque
data:
  accesskey: $(echo -n $MINIO_ACCESS_KEY | base64)
  secretkey: $(echo -n $MINIO_SECRET_KEY | base64)
---
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: minio
  labels:
    app: minio
spec:
  image: minio/minio:${MINIO_TAG}
  credsSecret:
    name: minio-creds-secret
  pools:
    - servers: 2
      volumesPerServer: 2
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
          storageClassName: "local-path"
  mountPath: /export
  requestAutoCert: false
EOF

kubectl apply -f minio.yml
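
# Optional: the operator creates the tenant pods asynchronously; check the Tenant
# and its pods before continuing (resource names follow the manifest above)
kubectl get tenant minio
kubectl get pods | grep minio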

# Create ingress for minio where artifacts will be uploaded
export MINIO_DOMAIN_NAME="<domain>"

cat >minio-ingress.yml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minio-ingress
  annotations:
    cert-manager.io/issuer: "letsencrypt"
spec:
  tls:
  - hosts:
    - ${MINIO_DOMAIN_NAME}
    secretName: minio-ingress-tls
  rules:
  - host: "${MINIO_DOMAIN_NAME}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: minio
            port:
              number: 80
EOF

kubectl apply -f minio-ingress.yml

# Mender deployment
# Need two keys: one for device authentication (device_auth) and one for user administration (useradm)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 | openssl rsa -out device_auth.key
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:3072 | openssl rsa -out useradm.key

helm repo add mender https://charts.mender.io
helm repo update

export MENDER_SERVER_DOMAIN="<domain>"
export MENDER_SERVER_URL="https://${MENDER_SERVER_DOMAIN}"
export MENDER_VERSION="3.4.0"

cat >mender-${MENDER_VERSION}.yml <<EOF
global:
  enterprise: false
  mongodb:
    URL: "mongodb://root:${MONGODB_ROOT_PASSWORD}@mongodb-0.mongodb-headless.default.svc.cluster.local:27017,mongodb-1.mongodb-headless.default.svc.cluster.local:27017"
  nats:
    URL: "nats://nats:4222"
  url: "${MENDER_SERVER_URL}"

api_gateway:
  env:
    SSL: false

device_auth:
  certs:
    key: |-
$(cat device_auth.key | sed -e 's/^/      /g')

useradm:
  certs:
    key: |-
$(cat useradm.key | sed -e 's/^/      /g')
EOF

helm upgrade --install mender mender/mender --version $MENDER_VERSION -f mender-$MENDER_VERSION.yml


cat >mender-ingress.yml <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mender-ingress
  annotations:
    cert-manager.io/issuer: "letsencrypt"
spec:
  tls:
  - hosts:
    - ${MENDER_SERVER_DOMAIN}
    secretName: mender-ingress-tls
  rules:
  - host: "${MENDER_SERVER_DOMAIN}"
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mender-api-gateway
            port:
              number: 80
EOF

kubectl apply -f mender-ingress.yml

# Create a new user
export USER_EMAIL="<mail>"
export USER_PASSWORD="<password>"
USERADM_POD=$(kubectl get pod -l 'app.kubernetes.io/name=useradm' -o name | head -1)
kubectl exec $USERADM_POD -- useradm create-user --username "${USER_EMAIL}" --password "${USER_PASSWORD}"

I was able to get the helm chart to at least install (still CrashLoopBackOff issues) with the following:

diff --git a/mender/values.yaml b/mender/values.yaml
index bcb805f..12cc2ab 100644
--- a/mender/values.yaml
+++ b/mender/values.yaml
@@ -39,7 +39,7 @@ global:
 
 dataMigration:
   reindexReporting:
-    enable: true
+    enable: false
     annotations: {}
     backoffLimit: 5
     affinity: {}

And then:

make package
helm install mender ./mender-3.5.1.tgz --namespace mender -f mender-values.yaml

Hi @moto-timo,
thanks for your report. We updated the Mender Helm Chart v3.5.0 today with a fix for the secret creation.


@mister_kanister thank you for sharing your complete steps.

The minio-operator chart has disappeared from the minio Helm repository:

$ helm install \
  --namespace minio-operator \
  --create-namespace \
  minio-operator minio/operator
Error: INSTALLATION FAILED: chart "operator" matching  not found in minio index. (try 'helm repo update'): no chart name found

If you want to install the 4.1.7 version of the minio-operator helm chart as shown in the documentation, you can use:

$ helm repo add minio-github https://raw.githubusercontent.com/minio/operator/v4.1.3/
$ helm search repo minio-github
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                    
minio-github/minio-operator     4.1.7           v4.1.3          A Helm chart for MinIO Operator
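
With that repository in place, the operator can then be installed from it, e.g. reusing the minio-operator.yml values file from the script earlier in this thread:

$ helm install minio-operator minio-github/minio-operator --version 4.1.7 -f minio-operator.yml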