Proxy to S3 results in signature issues

Hi there,

In a setup where AWS S3 cannot be reached directly, an nginx forward proxy is provided so that artifacts can still be downloaded from AWS even though it is blocked.

Let’s say the nginx server is reachable at its own address, and the bucket at its usual S3 address.

The Mender server is set up in Kubernetes with the Helm chart, and the configuration value s3.AWS_EXTERNAL_URI is set to the nginx proxy address.

This works: the Mender server generates S3 bucket URLs for the artifacts that start with the proxy address.

However, when accessing such a URL, a signature error is thrown. This happens because of the presigning process: the signature depends on the URL, including its host. The download URL is therefore signed for the proxy host, but when the proxy forwards the request it rewrites the start of the URL to the S3 address, which invalidates the signature.
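The effect can be illustrated with a toy version of the signing step (a deliberately simplified sketch, not the real SigV4 algorithm; the host names and key are made up):

```python
# Minimal sketch of why rewriting the host breaks a presigned URL:
# the signature covers a "canonical request" that includes the Host
# header, so any host change invalidates it. (Simplified: real AWS
# SigV4 also involves a derived signing key, credential scope, and
# more canonical fields.)
import hashlib
import hmac

def sign(secret_key: str, host: str, path: str) -> str:
    # The canonical request pins the host the client will contact.
    canonical_request = f"GET\n{path}\nhost:{host}\n"
    return hmac.new(secret_key.encode(),
                    canonical_request.encode(),
                    hashlib.sha256).hexdigest()

# Signed for the nginx proxy host...
sig_proxy = sign("secret", "my-nginx-proxy.example.com", "/my-bucket/artifact.mender")
# ...but verified against the S3 host after the proxy rewrites the URL.
sig_s3 = sign("secret", "my-bucket.s3.amazonaws.com", "/my-bucket/artifact.mender")

print(sig_proxy != sig_s3)  # → True: the signatures no longer match
```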

The code also mentions a ProxyUri configuration option; however, there appears to be no corresponding Helm value.

Is there a solution to this problem? Otherwise, the only workaround that comes to mind is to make the bucket public and then either hope that public artifact URLs are not signed, or strip the signature parameters from the URL when forwarding in the proxy.
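That second workaround could be sketched like this (illustrative Python with a made-up URL; in a real deployment the rewrite would have to happen in the nginx proxy configuration, and it only helps if the bucket accepts unsigned requests):

```python
# Sketch: drop the SigV4 signature parameters from a presigned URL
# before forwarding, keeping any unrelated query parameters.
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def strip_sigv4_params(url: str) -> str:
    parts = urlsplit(url)
    # Presigned URLs carry the signature in X-Amz-* query parameters.
    kept = [(k, v) for k, v in parse_qsl(parts.query)
            if not k.lower().startswith("x-amz-")]
    return urlunsplit(parts._replace(query=urlencode(kept)))

url = ("https://my-bucket.s3.amazonaws.com/artifact.mender"
       "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Signature=abc123&version=1")
print(strip_sigv4_params(url))
# → https://my-bucket.s3.amazonaws.com/artifact.mender?version=1
```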


Hello @Dominik-Hstein,
have you tried the new api_gateway.storage_proxy.url option in Helm chart v5.5.0? It should cover your use case.


I’m not sure whether I’m using it correctly.
Should the AWS external URI stay the same? And which value should the proxy URL be set to — the AWS one or the nginx one?
Edit: it seems the external one is used for both.
I’ll try that.

Hi, the proxy URL should be the nginx one, and the external URL should be the S3 bucket FQDN, with the region specified as well. For example:
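A sketch of what that might look like in the Helm values (the key layout is an assumption based on the option names mentioned in this thread — s3.AWS_EXTERNAL_URI and api_gateway.storage_proxy.url; the bucket, region, and proxy names are placeholders):

```yaml
# Hypothetical values sketch; verify key paths against the chart's values.yaml.
global:
  s3:
    # External URL: the S3 bucket FQDN, with the region specified
    AWS_EXTERNAL_URI: "https://my-bucket-name.s3.eu-central-1.amazonaws.com"

api_gateway:
  storage_proxy:
    enabled: true
    # Proxy URL: the nginx forward proxy in front of S3
    url: "https://my-nginx-proxy.example.com"
```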

It’s working now with the following configuration:

```yaml
        value: "https://my-mender-domain"

    hostname: my-mender-domain
    path: /my-bucket-name
```

The result is that Mender produces URLs like https://my-mender-domain/my-bucket-name/... with a correct signature.

Hi @Dominik-Hstein, thanks for letting us know!
I’m going to update the documentation.