Hi to all! I’m now trying to set up a Mender Server. Everything seems fine except the download of a new artifact on the production system.
I’m using the latest integration version, 2.7.0, which I guess stopped using port 9000, so all traffic is now routed through port 443. As the tutorial says, I’m using the run script to manage the containers.
I have uploaded all the artifacts I want correctly, but on download I receive some errors; the mender-deployments_1 process shows:
"mender-deployments_1 | time="2021-05-17T17:04:57Z" level=error msg="error reaching artifact storage service: SerializationError: failed to decode REST XML response\n\tstatus code: 200, request id: \ncaused by: XML syntax error
on line 8: element <link> closed by </head>" file=response_helpers.go func=rest_utils.restErrWithLogMsg line=110 request_id=b964b0ad-8b6e-40fe-88cd-f5fe8e171120"
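For context: that SerializationError means the deployments service expected S3-style XML from the storage backend but got an HTML page instead (an HTML <head> contains <link> elements, hence "element <link> closed by </head>"), so the request was most likely answered by the web UI rather than MinIO. A hedged way to see what the storage endpoint actually serves (mydomain.it and the default bucket name mender-artifact-storage are assumptions here; substitute your own values):

# Prints the Content-Type of the storage endpoint's response.
curl -sk -o /dev/null -w 'Content-Type: %{content_type}\n' \
    "https://mydomain.it/mender-artifact-storage/"
# application/xml (even an AccessDenied error) means MinIO answered;
# text/html means the request was routed to the web UI instead.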
Every other service is up and healthy.
I’m using the production environment; in the demo environment the problem does not happen.
I’ve set up the repo as the official documentation for 2.7.0 describes. What could be causing this? Thanks to all!
We still use port 9000 for the storage proxy even in version 2.7.0. The error message indicates that somehow you are not running a storage proxy. Can you post your prod.yml file and any other relevant bits?
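One hedged way to confirm the storage backend itself is alive on port 9000 inside the compose network (the network name menderproduction_mender is a guess based on the default production setup; list yours with docker network ls):

# MinIO ships a health endpoint; a 200 here means the backend is
# reachable on port 9000 from inside the Docker network.
docker run --rm --network menderproduction_mender curlimages/curl \
    -s -o /dev/null -w '%{http_code}\n' http://minio:9000/minio/health/live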
Hi drewmoseley, thanks a lot for the quick response! Mender is fantastic!
OK, so that means, I guess, that port 9000 is now simply not exposed on the host, but used internally inside the Docker network, right?
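One hedged way to check that from the host, assuming the default service names:

# A mapping like 0.0.0.0:9000->9000/tcp would mean the port is published;
# no such mapping (or no output) means 9000 stays internal to the network.
docker ps --format '{{.Names}}: {{.Ports}}' | grep 9000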
This is the prod.yml I’m using. What other bits could be relevant?
Minio logs?
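Assuming the stock production setup, where the ./run script wraps docker-compose, they can be tailed with:

# last 100 lines of the MinIO container's logs
./run logs --tail=100 minio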
version: '2.1'
services:

    mender-workflows-server:
        command: server --automigrate

    mender-workflows-worker:
        command: worker --automigrate --excluded-workflows generate_artifact

    mender-create-artifact-worker:
        command: --automigrate

    mender-useradm:
        command: server --automigrate
        volumes:
            - ./production/keys-generated/keys/useradm/private.key:/etc/useradm/rsa/private.pem:ro
        logging:
            options:
                max-file: "10"
                max-size: "50m"

    mender-device-auth:
        command: server --automigrate
        volumes:
            - ./production/keys-generated/keys/deviceauth/private.key:/etc/deviceauth/rsa/private.pem:ro
        logging:
            options:
                max-file: "10"
                max-size: "50m"

    mender-inventory:
        command: server --automigrate
        logging:
            options:
                max-file: "10"
                max-size: "50m"

    mender-api-gateway:
        ports:
            # list of ports API gateway is made available on
            - "443:443"
        networks:
            mender:
                aliases:
                    # mender-api-gateway is a proxy to storage
                    # and has to use exactly the same name as devices
                    # and the deployments service will;
                    #
                    # if devices and deployments will access storage
                    # using https://s3.acme.org:9000, then
                    # set this to https://s3.acme.org:9000
                    - mydomain.it
        command:
            - --accesslog=true
            - --providers.file.filename=/config/tls.toml
            - --providers.docker=true
            - --providers.docker.exposedbydefault=false
            - --entrypoints.http.address=:80
            - --entrypoints.https.address=:443
            - --entryPoints.https.transport.respondingTimeouts.idleTimeout=7200
            - --entryPoints.https.transport.respondingTimeouts.readTimeout=7200
            - --entryPoints.https.transport.respondingTimeouts.writeTimeout=7200
            - --entrypoints.http.http.redirections.entryPoint.to=https
            - --entrypoints.http.http.redirections.entryPoint.scheme=https
        volumes:
            - ./tls.toml:/config/tls.toml
            - ./production/keys-generated/certs/api-gateway/cert.crt:/certs/cert.crt:ro
            - ./production/keys-generated/certs/api-gateway/private.key:/certs/private.key:ro
            - ./production/keys-generated/certs/storage-proxy/cert.crt:/certs/s3.docker.mender.io.crt
            - ./production/keys-generated/certs/storage-proxy/private.key:/certs/s3.docker.mender.io.key
        logging:
            options:
                max-file: "10"
                max-size: "50m"
        environment:
            ALLOWED_HOSTS: mydomain.it

    mender-deployments:
        command: server --automigrate
        volumes:
            - ./production/keys-generated/certs/storage-proxy/cert.crt:/etc/ssl/certs/s3.docker.mender.io.crt:ro
        environment:
            STORAGE_BACKEND_CERT: /etc/ssl/certs/s3.docker.mender.io.crt
            # access key, the same value as MINIO_ACCESS_KEY
            DEPLOYMENTS_AWS_AUTH_KEY: mender-deployments
            # secret, the same value as MINIO_SECRET_KEY
            DEPLOYMENTS_AWS_AUTH_SECRET: im1Eikohkahw6Loh
            # deployments service uses signed URLs, hence it needs to access
            # storage-proxy using exactly the same name as devices will; if
            # devices will access storage using https://s3.acme.org:9000, then
            # set this to https://s3.acme.org:9000
            DEPLOYMENTS_AWS_URI: https://mydomain.it
        logging:
            options:
                max-file: "10"
                max-size: "50m"

    minio:
        environment:
            # access key
            MINIO_ACCESS_KEY: mender-deployments
            # secret
            MINIO_SECRET_KEY: im1Eikohkahw6Loh
        volumes:
            # mounts a docker volume named `mender-artifacts` as /export directory
            - mender-artifacts:/export:rw

    mender-mongo:
        volumes:
            - mender-db:/data/db:rw

volumes:
    # mender artifacts storage
    mender-artifacts:
        external:
            # use external volume created manually
            name: mender-artifacts
    # mongo service database
    mender-db:
        external:
            # use external volume created manually
            name: mender-db
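A note on the comments in this file: because the deployments service hands out pre-signed URLs, the host in DEPLOYMENTS_AWS_URI, the gateway’s network alias, and the name devices use to reach storage must all be exactly the same string. A quick way to eyeball the relevant lines at once (the prod.yml file name and path are assumed here):

# All hits should agree on the same host (mydomain.it in this config).
grep -nE 'DEPLOYMENTS_AWS_URI|ALLOWED_HOSTS|mydomain' prod.yml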
Sorry for not being clear. Yes, I was using the 2.7.x branch. I was in the process of trying the downgrade… and found that there were some updates in the 2.7.x branch. So I tried them, and the result is that the storage proxy now works flawlessly, but I cannot access the /ui; I now get a Gateway Timeout on /ui.
But the deployment on the devices is OK; they were able to download the artifacts.
I do not see anything wrong in the logs about the UI (but I did not check every line).
I will try to find any differences between the versions (both at the config and container level).
At first look (I rebased my config on top of the 2.7.x branch), there is no significant difference.
I suspect this will boil down to some misconfiguration of the proxy, which is now correct for storage, but not for the UI.
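If it helps: a Gateway Timeout on /ui usually means Traefik matched the UI route but could not reach the backend behind it. Assuming the stock service names from the integration repo and the production run script, two quick checks:

# Is the GUI container actually up?
./run ps mender-gui
# The access log (enabled via --accesslog=true above) should show the
# /ui requests and what happened to them.
./run logs --tail=50 mender-api-gateway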
Hi Peter, thank you for the support!
Yes, I’m sending you a private message as soon as I can with the link to download the artifact.
In the meantime, I tried version 2.7.x, but the same error happens there too.
Version 2.5, instead, works fine, and for my use case that’s OK! The thing is, I would like to test mender-connect and the D-Bus integration!
@AlessioDavoli I see, so it is not an artifact problem. It would seem that the deployment requests are not going to storage, but to the web UI. Could you share your docker ps output?
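For example, a compact listing that is easy to paste into the thread (plain docker ps works just as well):

# names, status and published ports of all running containers
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'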