Hi, I’ve hosted a Mender server on AWS EC2 which seems to function properly, and I have an Ubuntu 24 machine on which I installed the Mender client (v5) through apt install mender-client4. The client registers with my Mender server without problems, but when I tried to deploy an artifact, the client reported the error below:
Jun 30 08:01:14 konnexthome mender-update[810890]: record_id=18 severity=info time="2025-Jun-30 08:01:14.570687" name="Global" msg="Sending status update to server"
Jun 30 08:01:14 konnexthome mender-update[810890]: record_id=19 severity=error time="2025-Jun-30 08:01:14.860295" name="Global" msg="Request to push status data failed: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
Jun 30 08:01:14 konnexthome mender-update[810890]: record_id=20 severity=error time="2025-Jun-30 08:01:14.860475" name="Global" msg="Could not send deployment status: bad version: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=21 severity=info time="2025-Jun-30 08:01:15.142817" name="Global" msg="Installing artifact..."
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=22 severity=info time="2025-Jun-30 08:01:15.197851" name="Global" msg="Sending status update to server"
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=23 severity=error time="2025-Jun-30 08:01:15.200322" name="Global" msg="Request to push status data failed: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=24 severity=error time="2025-Jun-30 08:01:15.200493" name="Global" msg="Could not send deployment status: bad version: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
I’ve tried the same client installation process and artifact on hosted Mender, and there the artifact deploys properly without this error. The server version is up to date (5.0.1), and it was installed using the k3s process. I’m kinda stuck on this, so could you please provide me with some advice on this issue? Really appreciate it.
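For reference, this is how I checked the installed client and its services (unit names as shipped by the mender-client4 package):

# confirm the installed client version
mender-update --version
# both daemons should be active; mender-updated reaches the server through
# a local proxy run by mender-authd (hence the 127.0.0.1 URLs in the log above)
systemctl status mender-updated mender-authd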
I tried downgrading my Mender client to v3.5 and the BAD VERSION issue didn’t occur. I could see a related issue in the release notes: “Fix download failure to always do a proper cancellation and cleanup of internal HTTP structures to avoid breaking future HTTP requests. Fixes bad_version error. (MEN-7810)”. Is it possible that the issue still exists?
It does not occur with hosted Mender or mender-client v3.5, but it persists across Mender client v4 and v5. The mender-updated service keeps reporting errors when communicating with the mender-authd service.
The Mender server version is v4.0.1, the latest version I suppose.
The Helm chart version is v3.18.3.
I’ve actually disabled the storage_proxy feature. When the feature is enabled, artifacts could not be properly downloaded from my Mender website. (All artifacts are stored in S3.)
As for the logs, I could not find much helpful information (I pulled them as shown below). Is there anything I should pay special attention to? Thanks a lot for your assistance.
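This is roughly how I collected the client-side logs around the failure window (the two unit names come from the mender-client4 package):

# follow both client daemons with precise timestamps around the failure above
journalctl -u mender-updated -u mender-authd --since "2025-06-30 08:00:00" -o short-precise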
You mentioned the Helm chart version v3.18.3, but I guess that’s the version of Helm itself as a package. Could you please check which Mender Helm chart version you installed?
helm ls
For a complete list you can check:
helm search repo mender/mender --versions
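(If the Mender chart repository isn’t configured yet, you’d first need something like this; the repository URL is from the Mender docs, as I recall:)

# register the Mender chart repository and refresh the local index
helm repo add mender https://charts.mender.io
helm repo update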
As for:
“I’ve actually disabled the storage_proxy feature. When the feature is enabled, artifacts could not be properly downloaded from my Mender website. (All artifacts are stored in S3.)”
This raises a warning for me: can you please share your Helm chart values file, after removing all the sensitive secrets?
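(If it’s easier, helm can dump exactly the user-supplied values of the installed release; release name and namespace as reported by helm ls:)

# export the values that were applied on top of the chart defaults
helm get values mender -n default > mender-values-applied.yaml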
ubuntu@ip-172-31-7-3:~$ helm ls
NAME    NAMESPACE  REVISION  UPDATED                                  STATUS    CHART         APP VERSION
mender  default    7         2025-07-01 07:18:50.835959514 +0000 UTC  deployed  mender-6.6.1  v4.0.1
And here is my mender-values.yaml file. It closely follows the default and recommended setup from the server installation section of the Mender docs, except that I’ve disabled storage_proxy to make artifact downloads work.
Moreover, I’ve attached a middleware with some Traefik configuration so that I can upload artifacts larger than 300 MB. This can be seen in the ingress annotation traefik.ingress.kubernetes.io/router.middlewares (the middleware itself is sketched after the values file below).
ingress:
  enabled: true
  annotations:
    cert-manager.io/issuer: "letsencrypt"
    traefik.ingress.kubernetes.io/router.middlewares: "default-large-upload@kubernetescrd"
  ingressClassName: traefik
  path: /
  hosts:
    - ${COMPANY_DOMAIN}
  tls:
    # this secret must exist, or it can be created by a working cert-manager instance
    - secretName: mender-ingress-tls
      hosts:
        - ${COMPANY_DOMAIN}

global:
  s3:
    AWS_URI: "https://s3.ap-southeast-2.amazonaws.com"
    AWS_BUCKET: "${S3_BUCKET_NAME}"
    AWS_REGION: "ap-southeast-2"
    AWS_ACCESS_KEY_ID: ""
    AWS_SECRET_ACCESS_KEY: ""
    AWS_FORCE_PATH_STYLE: "false"
  url: "${MY_MENDER_WEBSITE_URL}"

api_gateway:
  storage_proxy:
    enabled: false
    url: "https://s3.ap-southeast-2.amazonaws.com"
    customRule: "PathRegexp(`^/konnext-mender`)"
@nicktech the thing is that it should work with the storage_proxy enabled as well: it would simply route requests from https://${COMPANY_DOMAIN}/konnext-mender to "https://s3.ap-southeast-2.amazonaws.com" as the origin. Which issues did you find with the storage proxy enabled?
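You can probe that routing directly with something like the following (the object path here is hypothetical):

# HEAD a request through the proxy path; the response headers reveal whether
# it reached S3 (e.g. Server: AmazonS3) or was answered by the gateway itself
curl -sI "https://${COMPANY_DOMAIN}/konnext-mender/some-artifact-object"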
I’m not sure here... I need to reproduce the S3 and storage proxy issue.
Anyway, this could have been a possible lead on your specific client issue, but now I can see no relation if you’re using plain S3. I’m not sure I can help you more than this on your case, sorry.
Hello @nicktech, just making sure I understood the problem. Are you saying that after disabling storage_proxy, the client no longer errors with:
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=23 severity=error time="2025-Jun-30 08:01:15.200322" name="Global" msg="Request to push status data failed: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
Jun 30 08:01:15 konnexthome mender-update[810890]: record_id=24 severity=error time="2025-Jun-30 08:01:15.200493" name="Global" msg="Could not send deployment status: bad version: PUT http://127.0.0.1:39193/api/devices/v1/deployments/device/deployments/d65af14a-681c-443f-9868-ceba3b4864cd/status: "
I don’t see the relationship between downloading the artifact and the device not being able to report its status to the server. Or am I missing something?
Assuming these are not related, I have created an internal bug report, MEN-8554.
Hi, these were two different problems. After disabling the storage proxy, I can download artifacts normally from the self-hosted Mender website. The current problem is that my Mender client cannot properly interact with the Mender server due to the “bad version” HTTP error.
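In case it helps reproduce this, running the updater interactively with debug logging should show the full exchange with the local mender-authd proxy (I believe the client accepts a --log-level option, but please verify with mender-update --help on your version):

# stop the background service, then run the update daemon in the foreground
sudo systemctl stop mender-updated
sudo mender-update --log-level debug daemon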
Exact same issue here. I would appreciate any help fixing this; the deployment keeps failing due to this issue:
2025-07-12 00:56:55.601 +0000 UTC info: Sending status update to server
2025-07-12 00:56:55.604 +0000 UTC error: Request to push status data failed: PUT http://127.0.0.1:40663/api/devices/v1/deployments/device/deployments/13764df3-554b-445b-a994-b58c4191daa0/status:
2025-07-12 00:56:55.604 +0000 UTC error: Could not send deployment status: bad version: PUT http://127.0.0.1:40663/api/devices/v1/deployments/device/deployments/13764df3-554b-445b-a994-b58c4191daa0/status:
2025-07-12 00:56:55.613 +0000 UTC info: Running State Script: /var/lib/mender/scripts/ArtifactInstall_Enter_00_fw-env-fix
2025-07-12 00:56:55.85 +0000 UTC info: Sending status update to server
2025-07-12 00:56:55.861 +0000 UTC error: Request to push status data failed: PUT http://127.0.0.1:40663/api/devices/v1/deployments/device/deployments/13764df3-554b-445b-a994-b58c4191daa0/status:
2025-07-12 00:56:55.862 +0000 UTC error: Could not send deployment status: bad version: PUT http://127.0.0.1:40663/api/devices/v1/deployments/device/deployments/13764df3-554b-445b-a994-b58c4191daa0/status: