SWAG Dashboard 504 Gateway Time-out

I installed SWAG on a freshly installed Ubuntu VM. SWAG itself works so far: the subdomains arrive and are forwarded to the right place. The firewall on Ubuntu is disabled.
Now I have installed the SWAG Dashboard and cannot reach it internally via port 81; the error ‘504 Gateway Time-out’ occurs. Externally, the connection is rejected because of the access rules in dashboard.subdomain.conf. If I deactivate the access rules, I also get the ‘504 Gateway Time-out’ error externally. I also tried mapping a different port for the dashboard, unfortunately without success.

The following versions are installed:

  • OS/docker info (Ubuntu 22.04.3 LTS, Docker version 24.0.7)
  • Hardware (Synology DS723+)

The configurations and logs are stored on Pastebin:

  • dashboard.subdomain.conf → link
  • docker-compose.yml → link

As a new user I can only post 2 links. If necessary, I can provide the links for error.log and access.log.

Does anyone know this problem or have a solution for it?

Best Regards
Thomas


In the meantime, I found the following entries in SWAG’s Docker logs:

**** Applying the SWAG dashboard mod... ****
libmaxminddb
**** libmaxminddb already installed, skipping ****
**** goaccess already installed, skipping ****
**** libmaxminddb already installed, skipping ****
chown: changing ownership of '/dashboard/logs': Read-only file system
chown: changing ownership of '/dashboard/logs/cloud-init.log': Read-only file system
chown: changing ownership of '/dashboard/logs/installer': Read-only file system
chown: changing ownership of '/dashboard/logs/installer/subiquity-client-debug.log.2344': Read-only file system
chown: changing ownership of '/dashboard/logs/installer/cloud-init.log': Read-only file system
chown: changing ownership of '/dashboard/logs/installer/installer-journal.txt': Read-only file system
**** Permissions could not be set. This is probably because your volume mounts are remote or read-only. ****
**** The app may not work properly and we will not provide support for it. ****
**** Applied the SWAG dashboard mod ****
chown: changing ownership of '/dashboard/logs/installer/media-info': Read-only file system
chown: changing ownership of '/dashboard/logs/installer/subiquity-server-info.log.2019': Read-only file system
chown: changing ownership of '/dashboard/logs/installer/subiquity-client-info.log.2344': Read-only file system

Is there a connection here to the Ubuntu file system?

my first guess is that the issue is related to synology ACLs which tend to cause issues for most syno users, though with you running a vm i wouldn’t think this is the case.

What is the filesystem being used on the ubuntu system, and how did you install docker? What is the output of snap list | grep docker?
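If it helps, these are roughly the commands I'd run to gather all of that in one go (a sketch; adjust to your setup):

df -T /                                            # filesystem type of the root mount
apt list --installed 2>/dev/null | grep -i docker  # shows whether docker came in via apt, and which package
snap list | grep docker                            # empty (or an error) if no snap docker is present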

@driz Thank you very much for your help.
I have found that the errors mentioned in my previous post do not occur if I deactivate the mapped volumes.

    volumes:
      - /opt/docker/swag:/config:rw
      - /opt/docker/swag/fail2ban/fail2ban.sqlite3:/dashboard/fail2ban.sqlite3:ro
      - /var/log:/dashboard/logs:ro

But the dashboard still cannot be reached via the mapped port 81; the timeout mentioned above always occurs.
OK, here is the requested information:

Filesystem: ext4
Docker: the official docker-ce package is installed: docker-ce 5:24.0.7-1~ubuntu.22.04~jammy amd64 (Docker: the open-source application container engine)
snap list | grep docker: No snaps are installed yet. Try ‘snap install hello-world’.

I set up this Ubuntu VM specifically for SWAG, so I am free to change things here.

a small note: you say official docker version, but you didn’t install via the official instructions from docker, you installed from the ubuntu repo. This shouldn’t cause a problem, but our recommendation is to always install via the official instructions (see the sketch below).
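for reference, the official route boils down to roughly this on Ubuntu (paraphrased from docs.docker.com; double-check against the current docs before running):

# add Docker's official GPG key and apt repository
sudo apt-get update
sudo apt-get install -y ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $VERSION_CODENAME) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# install the engine and the compose plugin from Docker's repo
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin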

it’s very good you didn’t have the snap version, that causes a ton of issues. What is the filesystem on the host?

Yes, of course; it is the official docker-ce package for Ubuntu. The file system is as follows:

Filesystem                        Type  1K-blocks    Used Available Use% Mounted on
tmpfs                             tmpfs    201040    1156    199884   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv ext4   14339080 9172048   4416852  68% /
tmpfs                             tmpfs   1005196       0   1005196   0% /dev/shm
tmpfs                             tmpfs      5120       0      5120   0% /run/lock
/dev/sda2                         ext4    1992552  131772   1739540   8% /boot
tmpfs                             tmpfs    201036       4    201032   1% /run/user/1000

ok filesystem is fine too.
I’ll pass this on to some others to see if they have any ideas; there is always a chance that the cpu in the syno simply can’t keep up and the caching takes longer than the timeout value.

Please remove these

      - /opt/docker/swag/fail2ban/fail2ban.sqlite3:/dashboard/fail2ban.sqlite3:ro
      - /var/log:/dashboard/logs:ro

The only bind mount should be

      - /opt/docker/swag:/config:rw

I can confirm that only the config mapping is enabled; all other mappings are disabled. However, the dashboard still cannot be loaded: timeout.

version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag:latest
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - URL=example.com
      - VALIDATION=dns
      - SUBDOMAINS=wildcard
      - DNSPLUGIN=ovh
      - DOCKER_MODS=linuxserver/mods:swag-dashboard
    volumes:
      - /opt/docker/swag:/config
    ports:
      - 443:443
      - 80:80
      - 81:81 # Dashboard
    restart: unless-stopped
    networks:
      swag_bridge:

networks:
  swag_bridge:
    external: true
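For reference, the dashboard can be tested from the VM itself like this (a sketch; localhost:81 assumes the port mapping above, and the commands are run in the directory holding the compose file):

# recreate the container with the trimmed compose file
docker compose up -d --force-recreate swag

# test the dashboard port locally, bypassing DNS and the access rules
curl -sS -m 10 -o /dev/null -w "%{http_code}\n" http://localhost:81

# check the container logs for errors around the request
docker logs swag --tail 50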

this potentially could’ve been caused by huge logs. The latest swag PR resolves an issue with logrotate and may correct this problem. Can you pull :latest for swag and report back?
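i.e. roughly this, assuming the compose file shown above:

# pull the newest swag image and recreate the container with it
docker compose pull swag
docker compose up -d swag

# optionally clean up the superseded image afterwards
docker image prune -f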

I consistently experience very slow load times with the dashboard mod and intermittently get the ‘504 Gateway Time-out’ error.

Is there any resolution or workaround if this is a result of large log files?

It might be a result of using watchtower or constantly recreating the container, which deletes the cache files.
You can delete the logs, and if it happens again, try to investigate what’s spamming your reverse proxy and causing huge logs.
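A minimal sketch for that cleanup and investigation, assuming SWAG’s /config is bind-mounted at /opt/docker/swag as in the compose file above:

# see which of SWAG's nginx logs have ballooned
du -sh /opt/docker/swag/log/nginx/*

# truncate the access log in place rather than deleting it
: > /opt/docker/swag/log/nginx/access.log

# if it grows again, list the top client IPs hammering the proxy
awk '{print $1}' /opt/docker/swag/log/nginx/access.log | sort | uniq -c | sort -rn | head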