podman --> Local RedHat Quay - http: no Location header in response


RimBlock

Active Member
Hi,

I have a RedHat Quay server set up and I am getting an error when pushing to it.

The setup was completed following the RedHat guide here and went smoothly apart from the SSL key integration (a permission error), which is now resolved.

Whilst I managed to test a push to Quay prior to implementing SSL, trying to do so after SSL is in place results in an 'Error: writing blob: determining upload URL: http: no Location header in response' error. After spending a couple of days bouncing around Google and getting nowhere, I am hoping someone here may have an idea on how to move forward.

Quay server details
- Admin user: quayadmin
- Install dir: /opt/quay
- Storage dir: /opt/quay/storage (lv mount point - 1TB)
- Free system space: /:180GB free, /home: 398GB free, /opt/quay/storage: just under 1TB free
- Quay running with SSL and accessible via remote web browser.

Quay containers
[quay-user@ocp-quay config]$ sudo podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3f7468ea9b2a registry.redhat.io/rhel8/redis-6:1-110 run-redis 14 hours ago Up 14 hours 0.0.0.0:6379->6379/tcp redis
846047a70b4a registry.redhat.io/rhel8/postgresql-13:latest run-postgresql 14 hours ago Up 14 hours 0.0.0.0:5432->5432/tcp postgresql-quay
63ce8b483828 registry.redhat.io/quay/quay-rhel8:v3.15.0 registry 3 hours ago Up 3 hours 0.0.0.0:80->8080/tcp, 0.0.0.0:443->8443/tcp, 7443/tcp, 9091/tcp, 55443/tcp quay

Quay container started with
sudo podman run -d --rm -p 80:8080 -p 443:8443 --name=quay -v $QUAY/config:/conf/stack:Z -v $QUAY/storage:/datastorage:Z registry.redhat.io/quay/quay-rhel8:v3.15.0

DOCKER_CONFIG=/home/quay-user/.docker/

auth file
/home/quay-user/.docker/config.json

Auth file contents (auth tokens removed)

[quay-user@ocp-quay .docker]$ jq -S . config.json
{
  "auths": {
    "cloud.openshift.com": {
      "auth": "<auth token removed>",
      "email": "<email removed>"
    },
    "ocp-quay.home.arpa": {
      "auth": "<auth token removed>",
      "email": ""
    },
    "quay.io": {
      "auth": "<auth token removed>",
      "email": "<email removed>"
    },
    "registry.connect.redhat.com": {
      "auth": "<auth token removed>",
      "email": "<email removed>"
    },
    "registry.redhat.io": {
      "auth": "<auth token removed>",
      "email": "<email removed>"
    }
  }
}


Directory: /opt/quay/storage/registry/uploads

[quay-user@ocp-quay uploads]$ pwd
/opt/quay/storage/registry/uploads
[quay-user@ocp-quay uploads]$ ls -l
total 0
-rw-r--r-- 1 1001 root 0 Jul 16 11:43 003d5ed5-c908-4355-af71-7f387b781fb9
-rw-r--r-- 1 1001 root 0 Jul 16 12:32 01a9668d-993d-4d3b-86b2-8232c7403c6a
-rw-r--r-- 1 1001 root 0 Jul 16 13:15 026d0889-8cc3-4866-8881-223cb6f79a79
-rw-r--r-- 1 1001 root 0 Jul 16 14:06 02787e50-6244-47f5-829f-be1ce0139f4f
-rw-r--r-- 1 1001 root 0 Jul 16 15:22 0357bd42-747e-4c83-855b-14f401ff6558
-rw-r--r-- 1 1001 root 0 Jul 16 15:04 036999f5-acd4-4cf6-ae11-3dc9bf0fe6de
-rw-r--r-- 1 1001 root 0 Jul 16 14:06 03cd5181-4457-4bcb-83b9-659f19dd9e1b

Podman Push
sudo podman --log-level=debug push ocp-quay.home.arpa/busybox/busybox:test

Podman Output
INFO[0000] podman filtering at log level debug
DEBU[0000] Called push.PersistentPreRunE(podman --log-level=debug push ocp-quay.home.arpa/busybox/busybox:test)
DEBU[0000] Using conmon: "/usr/bin/conmon"
INFO[0000] Using sqlite as database backend
DEBU[0000] Using graph driver overlay
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Using transient store: false
DEBU[0000] [graphdriver] trying provided driver "overlay"
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that overlay is supported
DEBU[0000] Cached value indicated that metacopy is being used
DEBU[0000] Cached value indicated that native-diff is not being used
INFO[0000] Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled
DEBU[0000] backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=true
DEBU[0000] Initializing event backend journald
DEBU[0000] Configured OCI runtime kata initialization failed: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] Configured OCI runtime youki initialization failed: no valid executable found for OCI runtime youki: invalid argument
DEBU[0000] Configured OCI runtime ocijail initialization failed: no valid executable found for OCI runtime ocijail: invalid argument
DEBU[0000] Configured OCI runtime crun-vm initialization failed: no valid executable found for OCI runtime crun-vm: invalid argument
DEBU[0000] Configured OCI runtime runc initialization failed: no valid executable found for OCI runtime runc: invalid argument
DEBU[0000] Configured OCI runtime runj initialization failed: no valid executable found for OCI runtime runj: invalid argument
DEBU[0000] Configured OCI runtime crun-wasm initialization failed: no valid executable found for OCI runtime crun-wasm: invalid argument
DEBU[0000] Configured OCI runtime runsc initialization failed: no valid executable found for OCI runtime runsc: invalid argument
DEBU[0000] Configured OCI runtime krun initialization failed: no valid executable found for OCI runtime krun: invalid argument
DEBU[0000] Using OCI runtime "/usr/bin/crun"
INFO[0000] Setting parallel job count to 13
DEBU[0000] Looking up image "ocp-quay.home.arpa/busybox/busybox:test" in local containers storage
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Trying "ocp-quay.home.arpa/busybox/busybox:test" ...
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d"
DEBU[0000] Found image "ocp-quay.home.arpa/busybox/busybox:test" as "ocp-quay.home.arpa/busybox/busybox:test" in local containers storage
DEBU[0000] Found image "ocp-quay.home.arpa/busybox/busybox:test" as "ocp-quay.home.arpa/busybox/busybox:test" in local containers storage ([overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d)
DEBU[0000] exporting opaque data as blob "sha256:6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d"
DEBU[0000] Pushing image ocp-quay.home.arpa/busybox/busybox:test to ocp-quay.home.arpa/busybox/busybox:test
DEBU[0000] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0000] Copying source image [overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d to destination image //ocp-quay.home.arpa/busybox/busybox:test
DEBU[0000] Using registries.d directory /etc/containers/registries.d
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/000-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/001-rhel-shortnames.conf"
DEBU[0000] Loading registries configuration "/etc/containers/registries.conf.d/002-rhel-shortnames-overrides.conf"
DEBU[0000] Found credentials for ocp-quay.home.arpa/busybox/busybox in credential helper containers-auth.json in file /run/containers/0/auth.json
DEBU[0000] No signature storage configuration found for ocp-quay.home.arpa/busybox/busybox:test, using built-in default file:///var/lib/containers/sigstore
DEBU[0000] Looking for TLS certificates and private keys in /etc/containers/certs.d/ocp-quay.home.arpa
DEBU[0000] crt: /etc/containers/certs.d/ocp-quay.home.arpa/ca.crt
DEBU[0000] cert: /etc/containers/certs.d/ocp-quay.home.arpa/client.cert
DEBU[0000] key: /etc/containers/certs.d/ocp-quay.home.arpa/client.key
DEBU[0000] Using SQLite blob info cache at /var/lib/containers/cache/blob-info-cache-v1.sqlite
DEBU[0000] IsRunningImageAllowed for image containers-storage:[overlay@/var/lib/containers/storage]@6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d
DEBU[0000] Using transport "containers-storage" policy section ""
DEBU[0000] Requirement 0: allowed
DEBU[0000] Overall: allowed
DEBU[0000] exporting opaque data as blob "sha256:6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d"
Getting image source signatures
DEBU[0000] Manifest has MIME type application/vnd.oci.image.manifest.v1+json, ordered candidate list [application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.v1+prettyjws, application/vnd.oci.image.index.v1+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.docker.distribution.manifest.v1+json]
DEBU[0000] ... will first try using the original manifest unmodified
DEBU[0000] Checking if we can reuse blob sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0: general substitution = true, compression for MIME type "application/vnd.oci.image.layer.v1.tar" = true
DEBU[0000] Checking /v2/busybox/busybox/blobs/sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0
DEBU[0000] GET https://ocp-quay.home.arpa/v2/
DEBU[0000] Ping https://ocp-quay.home.arpa/v2/ status 401
DEBU[0000] GET http://ocp-quay.home.arpa/v2/auth?a.../busybox:pull,push&service=ocp-quay.home.arpa
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://ocp-quay.home.arpa/v2/busyb...42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0
DEBU[0000] ... not present
DEBU[0000] Trying to reuse blob with cached digest sha256:05b9fdbb044a7ecdb5ec3cb0fac492d667ef57df990d3b0254c8c800d5c31eb0 compressed with gzip in destination repo ocp-quay.home.arpa/quayadmin/busybox
DEBU[0000] Checking /v2/quayadmin/busybox/blobs/sha256:05b9fdbb044a7ecdb5ec3cb0fac492d667ef57df990d3b0254c8c800d5c31eb0
DEBU[0000] GET http://ocp-quay.home.arpa/v2/auth?a...admin/busybox:pull&service=ocp-quay.home.arpa
DEBU[0000] Increasing token expiration to: 60 seconds
DEBU[0000] HEAD https://ocp-quay.home.arpa/v2/quaya...c3cb0fac492d667ef57df990d3b0254c8c800d5c31eb0
DEBU[0000] ... not present
DEBU[0000] Trying to reuse blob with cached digest sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0 in destination repo with no location match, checking current repo
DEBU[0000] ... Already tried the primary destination
DEBU[0000] Trying to reuse blob with cached digest sha256:90b9666d4aed1893ff122f238948dfd5e8efdcf6c444fe92371ea0f01750bf8c compressed with gzip with no location match, checking current repo
DEBU[0000] Checking /v2/busybox/busybox/blobs/sha256:90b9666d4aed1893ff122f238948dfd5e8efdcf6c444fe92371ea0f01750bf8c
DEBU[0000] GET http://ocp-quay.home.arpa/v2/auth?a...sybox/busybox:pull&service=ocp-quay.home.arpa
DEBU[0001] Increasing token expiration to: 60 seconds
DEBU[0001] HEAD https://ocp-quay.home.arpa/v2/busyb...22f238948dfd5e8efdcf6c444fe92371ea0f01750bf8c
DEBU[0001] ... not present
DEBU[0001] exporting filesystem layer "65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0" without compression for blob "sha256:65014c70e84b6817fac42bb201ec5c1ea460a8da246cac0e481f5c9a9491eac0"
Copying blob 65014c70e84b [--------------------------------------] 0.0b / 4.3MiB | 0.0 b/s
DEBU[0001] No compression detected
DEBU[0001] Compressing blob on the fly
DEBU[0001] Uploading /v2/busybox/busybox/blobs/uploads/
DEBU[0001] POST https://ocp-quay.home.arpa/v2/busybox/busybox/blobs/uploads/
Copying blob 65014c70e84b done |
DEBU[0001] Looking up image "ocp-quay.home.arpa/busybox/busybox:test" in local containers storage
DEBU[0001] Normalized platform linux/amd64 to {amd64 linux [] }
DEBU[0001] Trying "ocp-quay.home.arpa/busybox/busybox:test" ...
DEBU[0001] parsed reference into "[overlay@/var/lib/containers/storage+/run/containers/storage:overlay.mountopt=nodev,metacopy=on]@6d3e4188a38af91b0c1577b9e88c53368926b2fe0e1fb985d6e8a70040520c4d"
DEBU[0001] Found image "ocp-quay.home.arpa/busybox/busybox:test" as "ocp-quay.home.arpa/busybox/busybox:test" in local containers storage
Error: writing blob: determining upload URL: http: no Location header in response
DEBU[0001] Shutting down engines
INFO[0001] Received shutdown.Stop(), terminating! PID=92200
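For reference, the failing step can be exercised outside podman. Per the OCI distribution spec, the `POST /v2/<name>/blobs/uploads/` that podman logs just before the error should be answered with `202 Accepted` plus a `Location` header pointing at the upload session; the error means that header never arrived. A minimal sketch for checking this by hand (the token request and credentials shown are assumptions, not something confirmed above):

```shell
# Helper: succeed only if the HTTP response headers on stdin contain a
# Location header (header names are case-insensitive per RFC 9110).
has_location_header() {
    grep -qi '^location:'
}

# Live use against the registry from this thread (credentials assumed):
#   TOKEN=$(curl -sk -u quayadmin:PASSWORD \
#     'https://ocp-quay.home.arpa/v2/auth?service=ocp-quay.home.arpa&scope=repository:busybox/busybox:pull,push' \
#     | jq -r .token)
#   curl -sk -D - -o /dev/null -X POST \
#     -H "Authorization: Bearer $TOKEN" \
#     https://ocp-quay.home.arpa/v2/busybox/busybox/blobs/uploads/ \
#     | has_location_header && echo 'Location present' || echo 'no Location header'
```

If the headers that come back look like they are from something other than Quay (a proxy error page, a redirect to the web UI), that points at the SSL/port wiring rather than at podman.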

Has anyone come across this issue, or have any ideas on how to resolve it?

Many thanks.
 

luckylinux

Well-Known Member
No idea about Quay at all (I thought it was a service hosted directly by RedHat; I didn't even know you could deploy it yourself), but a few things.

Given that you have a home.arpa domain name (as opposed to a public domain name that you could self-host in e.g. your homelab with Let's Encrypt certificates), you very likely have to ensure that the certificates are installed in several different places.

I know for sure that e.g. Firefox, Chrome and curl all use different locations for self-signed certificates, so you need to do the setup for each of them.
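For podman specifically, the expected location is governed by containers-certs.d(5): the CA goes in /etc/containers/certs.d/<registry-hostname>/ca.crt, which matches the paths in the debug output below. A small sketch with the directory root made configurable so it can be dry-run (the hostname is the one from this thread; the certificate filename is an assumption):

```shell
# Install a registry CA certificate in the containers-certs.d(5) layout.
#   $1 = certs.d root (normally /etc/containers/certs.d)
#   $2 = registry hostname
#   $3 = path to the CA certificate file
install_registry_ca() {
    mkdir -p "$1/$2" && cp "$3" "$1/$2/ca.crt"
}

# Real use (run as root):
#   install_registry_ca /etc/containers/certs.d ocp-quay.home.arpa ssl.cert
# For curl and other tools using the system store (RHEL-family paths):
#   cp ssl.cert /etc/pki/ca-trust/source/anchors/ && update-ca-trust
```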

Could you get the overall system to work, e.g. the web GUI?

These parts of the tutorial should be relevant:



Or even the steps before that:


I see in your debug output that you appear to have some certificates in place, so that should be fine provided they are actually used:
Code:
DEBU[0000] Looking for TLS certificates and private keys in /etc/containers/certs.d/ocp-quay.home.arpa
DEBU[0000] crt: /etc/containers/certs.d/ocp-quay.home.arpa/ca.crt
DEBU[0000] cert: /etc/containers/certs.d/ocp-quay.home.arpa/client.cert
DEBU[0000] key: /etc/containers/certs.d/ocp-quay.home.arpa/client.key
I found this bug report from 2019 mentioning 'http: no Location header in response':


Apparently running the command with --format=docker provided some more information; in that case the image turned out to be too big with respect to the server / reverse-proxy configuration:
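If a reverse proxy does end up in front of the registry, one classic cause of truncated blob uploads (and of missing Location headers, since the proxy answers instead of Quay) is nginx's default 1 MiB request-body limit. A hedged fragment, assuming an nginx proxy in front of the registry, which is not something this thread confirms is in use:

```nginx
# Assumed nginx fragment for a registry virtual host; the server name is
# from this thread, everything else is a generic registry-behind-nginx setup.
server {
    server_name ocp-quay.home.arpa;

    # Registry blobs can be multi-GiB; 0 disables the 1 MiB default limit.
    client_max_body_size 0;

    location /v2/ {
        proxy_pass http://127.0.0.1:8080;
        # Preserve Host so the registry generates correct Location headers.
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```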


Otherwise maybe you can try running Quay behind traefik / caddy / nginx (and feed them the self-signed certificates), although that could create more problems than it solves.

I usually run everything behind a reverse proxy, but that's with Let's Encrypt certificates.

You can of course do the same in a traefik dynamic configuration file /etc/traefik/dynamic/certificates.yml:
Code:
tls:
  certificates:
    - certFile: /certificates/MYDOMAIN.TLD/fullchain.pem
      keyFile: /certificates/MYDOMAIN.TLD/privkey.pem
Or with caddy and a Caddyfile:
Code:
# Example and Guide
# https://caddyserver.com/docs/caddyfile/options

# General Options
{
    # (Optional) Debug Mode
    debug

    # Disable Admin API
    admin off

    # TLS Options
    # (Optional) Disable Certificates Management (only if SSL/TLS Certificates are managed by certbot or other external Tools)
    auto_https disable_certs

    # (Optional) Default SNI
    # default_sni {$APPLICATION_HOSTNAME}
}

localhost {
    reverse_proxy /api/* localhost:9001
}

# (Optional) Only if SSL/TLS Certificates are managed by certbot or other external Tools and Custom Logging is required
{$APPLICATION_HOSTNAME} {
    tls /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_CERT_FILE:fullchain.pem} /certificates/{$APPLICATION_CERTIFICATE_DOMAIN}/{$APPLICATION_CERTIFICATE_KEY_FILE:privkey.pem}
 
    log {
    output file /var/log/{$APPLICATION_HOSTNAME}/access.json {
        roll_size 100MiB
            roll_keep 5000
            roll_keep_for 720h
            roll_uncompressed
    }
 
        format json
    }

    reverse_proxy http://[::1]:{$APPLICATION_PORT}
}
I run IPv6-only, so you might need to replace [::1] with 127.0.0.1. Of course quay and traefik/caddy need to be in the same pod :) . And with a reverse proxy, you need to make sure that the reverse_proxy directive points to the UNENCRYPTED application endpoint: the service is exposed through the reverse proxy, which terminates the TLS connection, not by the application anymore. I think there is a way to get TLS all the way through as well, but it's more complicated, as the reverse proxy would then need to trust the application certificate, you would need another pair of certificates, etc.

But I feel your pain. Even with Let's Encrypt certificates, running the official registry image together with docker.io/cesanta/docker_auth:latest was quite a PITA: many things and pitfalls along the way, even after triple-checking every configuration.

Not saying it's a bad system (it's quite good actually), but there are many pitfalls, especially around reverse-proxy configuration, redirects, etc.

The last thing to keep in mind with TLS is SNI. That's the "hello" part of the handshake where the client specifies the server hostname (Server Name Indication), although that usually matters for TCP/UDP over TLS (layer 4, transport layer), NOT for HTTP over TLS [HTTPS] (layer 7, application layer):


But since your error mentions a header, I have a feeling it's not that.
The best way to troubleshoot is usually to run the equivalent command via curl, as in curl -L -vvv, which gives very verbose output. For certificate verification that can work somewhat; otherwise openssl s_client -connect <server_address>:<port> -servername <server_name> should do it.
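To avoid eyeballing the whole s_client dump, the verification verdict sits on a single line; a tiny wrapper (the hostname is the thread's, the parsing is generic):

```shell
# Print only openssl s_client's certificate-verification verdict line
# ("Verify return code: 0 (ok)" on success) from its output on stdin.
tls_verify_result() {
    grep -i 'Verify return code:'
}

# Live use:
#   openssl s_client -connect ocp-quay.home.arpa:443 \
#       -servername ocp-quay.home.arpa </dev/null 2>/dev/null | tls_verify_result
```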

EDIT 1: There might actually be more up-to-date documents at Manage Project Quay compared to the RedHat version.

Please check those out too!
 