SWAG container exits and doesn't restart

Background:
I picked up an old server and started a Linux project a few months ago. I’ve been learning a ton and having a lot of fun. I’m using the server to host a number of sites (still in development, because I’m also learning website development). Searching online, I learned about SSL/TLS certificates, nginx, docker, and so on.

Using what I learned, I set up swag to handle certificates and the reverse proxy using docker-compose. I added my other sites to the same docker-compose.yaml file. After some struggles, I was able to get everything working the way I wanted: a docker container and a stub site for each of my sites, all accessible from any browser.

The first time I had to power off the server, when it came back up the docker containers did not all start up correctly. I was getting an “Address already in use” error on the swag container. I googled for solutions and found this answer suggesting finding and killing the existing processes bound to the ports in question (80 and 443 for me) so that the docker container (the swag container in my case) could bind to them. The processes I found using ports 80 and 443 were all “docker-proxy”. I killed them and was able to restart the swag container successfully without any errors. However, every time I restart or reboot the server, I must go through the same steps.

Also, even if I don’t power cycle the server, the docker containers seem to get restarted, and sometimes the swag container doesn’t restart.

For example, as a test I shut down all my containers, edited the docker-compose file to only start swag and one website, and rebooted the server.

When I logged back in no containers were running. Good.

I killed docker-proxy (x4) and started the two containers.

thomas@server:~$ date
Fri 21 Jan 2022 09:40:13 PM UTC
thomas@server:~$ docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
thomas@server:~$ sudo lsof -i -P -n
COMMAND     PID            USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
[...]
docker-pr  2712            root    4u  IPv4  48142      0t0  TCP *:443 (LISTEN)
docker-pr  2718            root    4u  IPv6  44127      0t0  TCP *:443 (LISTEN)
docker-pr  2734            root    4u  IPv4  43233      0t0  TCP *:80 (LISTEN)
docker-pr  2740            root    4u  IPv6  48149      0t0  TCP *:80 (LISTEN)
[...]
thomas@server:~$ sudo kill 2712
thomas@server:~$ sudo kill 2718
thomas@server:~$ sudo kill 2734
thomas@server:~$ sudo kill 2740
thomas@server:~$ docker-compose up -d
Creating network "sites_default" with the default driver
Creating swag           ... done
Creating turbobutterfly ... done
thomas@server:~$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED              STATUS              PORTS                                      NAMES
d170fc643e20   ghcr.io/linuxserver/swag   "/init"                  About a minute ago   Up About a minute   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   swag
457b00fc0a8b   turbobutterfly             "/docker-entrypoint.…"   About a minute ago   Up About a minute   80/tcp                                     turbobutterfly
thomas@server:~$ date
Fri 21 Jan 2022 09:42:54 PM UTC
thomas@server:~$

Then I sat back and watched, checking in periodically.

thomas@server:~$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED       STATUS       PORTS                                      NAMES
d170fc643e20   ghcr.io/linuxserver/swag   "/init"                  5 hours ago   Up 5 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   swag
457b00fc0a8b   turbobutterfly             "/docker-entrypoint.…"   5 hours ago   Up 5 hours   80/tcp                                     turbobutterfly

It has been up for five hours, seems okay.

thomas@server:~$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED       STATUS       PORTS                                      NAMES
d170fc643e20   ghcr.io/linuxserver/swag   "/init"                  8 hours ago   Up 3 hours   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   swag
457b00fc0a8b   turbobutterfly             "/docker-entrypoint.…"   8 hours ago   Up 3 hours   80/tcp                                     turbobutterfly

It seems to have restarted 3 hours ago, and came back up okay.

thomas@zapdos:/home/sites$ docker ps -a
CONTAINER ID   IMAGE                      COMMAND                  CREATED        STATUS                     PORTS                                      NAMES
d170fc643e20   ghcr.io/linuxserver/swag   "/init"                  19 hours ago   Exited (255) 3 hours ago   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   swag
457b00fc0a8b   turbobutterfly             "/docker-entrypoint.…"   19 hours ago   Up 3 hours                 80/tcp                                     turbobutterfly

I checked again in the morning. It seems to have shut down 3 hours ago. The website container restarted, but swag didn’t.
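When a container shows Exited (255) like this, `docker inspect` on the host usually says more than `docker logs`. As a rough guide (these exit-code meanings are the usual conventions, not specific to swag — and the `swag` name comes from the compose file in this post):

```shell
#!/bin/sh
# Rough meanings of common container exit codes (conventions, not guarantees)
explain_exit() {
  case "$1" in
    0)   echo "clean exit" ;;
    137) echo "SIGKILL (128+9): often the OOM killer or a stop timeout" ;;
    143) echo "SIGTERM (128+15): stopped or restarted by the daemon" ;;
    255) echo "daemon-side failure, e.g. exit state lost across a dockerd restart" ;;
    *)   echo "application-defined exit code $1" ;;
  esac
}

# On the affected host you would feed it the real value, e.g.:
#   explain_exit "$(docker inspect swag --format '{{.State.ExitCode}}')"
explain_exit 255
```

The 255 case in particular often points at the daemon rather than the container itself, which fits the symptoms described here.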

This has been typical of what happens every time.

My questions:

  • Why is docker restarting my containers?

I’ve checked the logs (using docker logs) but I don’t see anything useful. For the swag container, the logs show the start-up procedure, but nothing on shutdown. For the website container, it shows it receiving a SIGTERM signal, but not why or from where.

Could it be something flaky with my hardware or my OS? Are there other logs somewhere to help identify possible issues? (I’m using an older PowerEdge that I got for free, and Ubuntu 20.04 Server.)
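On a systemd-based Ubuntu install, the daemon’s own log (not the containers’) is where restarts tend to show up: `journalctl -u docker.service`. A sketch of filtering a saved copy of that log for restart-related lines — the pattern strings are assumptions based on typical dockerd messages (“Processing signal 'terminated'” on shutdown, “Starting up” on start):

```shell
#!/bin/sh
# Filter a saved daemon log for lines that explain restarts.
restart_events() {
  grep -Ei "terminated|starting up|shutting down|restart" "$1"
}

# On the host (needs systemd):
#   journalctl -u docker.service -b --no-pager > /tmp/dockerd.log
#   restart_events /tmp/dockerd.log
```

If the daemon log shows clean shutdown/startup pairs at the restart times, the host and containers are probably fine and something is deliberately bouncing dockerd.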

  • When docker restarts my containers, why does swag sometimes not restart?

  • What can I do so that the containers successfully restart automatically when the server reboots (i.e., without conflicting with docker-proxy on ports 80 and 443)?

I would be very grateful for any assistance. This has been keeping me from moving forward on my projects for a few months.

If you need other information to help understand what’s wrong, please let me know.

Please share your swag docker run or docker-compose snippet.

version: "3.7"

services:

  turbobutterfly:
    image: turbobutterfly
    container_name: turbobutterfly
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /media/shared/Sites/turbobutterfly/config:/config
    restart: unless-stopped

  swag:
    image: ghcr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - URL=turbobutterfly.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - DNSPLUGIN=cloudflare
    volumes:
      - /home/sites/swag/config:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

And the Dockerfile for turbobutterfly is:

FROM nginx:latest
COPY ./default.conf /etc/nginx/conf.d/default.conf
RUN mkdir -p /config

default.conf is:

server {
    listen       80;
    listen  [::]:80;
    server_name  localhost;

    location / {
        root   /config/html;
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /config/html;
    }
}

I see no reason why it would not restart, and the fact that the ports are in use implies that it is — unless you are running another container using ports 80 and 443. You have restart: unless-stopped, so as long as you don’t down or stop the container before rebooting, it should restart (barring the previously mentioned caveat).

You should verify all your compose statements for the other containers and ensure you haven’t created any conflicts. I see no issues at all with your swag container in what you’ve shown.

@driz thank you for your response. I’m not ignoring you, I’m trying to work with what you gave me. It was helpful to know you don’t see anything wrong with my setup.

Are you saying that the docker-proxy processes are created to work with the swag container?

Then it seems that when the swag container is restarted, the docker-proxy processes are not cleaned up, and they end up competing with the swag container for the ports so that it cannot start. Does that sound right?

I’ve tried a few other tests and I’ve learned a little bit more.

From some posts in other forums I have learned where some of the system logs are. From them I learned that when the docker containers are restarted, it’s just docker restarting, not the OS rebooting — I manually rebooted the OS and the log entries were different. Specifically, /var/log/kern.log showed nothing significant when the containers were restarted, but an extensive series of entries during a reboot.

Others in other forums said they had similar problems (random restarts) when their RAM was faulty. I ran a memory test and no faults were reported.

I guess my next task is to find out why docker is restarting my containers.

sigh.

docker-proxy works with any container; it’s the thing proxying the container’s published ports to the host. swag will be one of them, yes. AFAIK any bridged container with published ports will show docker-proxy in netstat and ss.
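To make that concrete: with the default userland-proxy setting, dockerd starts one docker-proxy per published port per address family (IPv4 and IPv6). A tiny sketch of the arithmetic, which matches the four processes killed earlier in the thread:

```shell
#!/bin/sh
# One docker-proxy per published port per address family (IPv4 + IPv6),
# assuming the default userland-proxy configuration.
expected_proxies() {
  ports=$1
  families=${2:-2}   # default: IPv4 and IPv6
  echo $(( ports * families ))
}

# swag publishes 80 and 443, hence the four docker-proxy listeners
# seen in the lsof output above:
expected_proxies 2
```

So four docker-proxy processes on 80/443 is normal while the swag container is up; they should only count as conflicts if they outlive the container they belong to.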

In terms of uncontrolled restarts or failure to restart, I can’t say; you perhaps have a host issue, or maybe an outdated version of docker and/or docker-compose…

@driz, thank you for your feedback. It has been helpful.

I’ve done a lot of digging and I think I’ve found the problem, and the solution. It’s a little embarrassing. My excuse is that I’m still kind of new at this.

TL;DR:

It seems I had docker installed via both apt-get and snap. Snap auto-refreshes its packages whenever an update is published, and that is what was repeatedly and “randomly” restarting docker along with all my containers. I completely uninstalled docker from both methods and re-installed using only apt-get. It has been up for over two days now without any issues.

Long Story:

When I was first trying to set up docker (and swag) I put my “config” folders into a new root folder I created, “/sites”. I kept getting access denied errors. Folks online said I was using the wrong version of docker, that I should uninstall it and install the official version using the official steps found here.

I did so, and it worked, until the next time I rebooted my server and I got the access denied errors again. I eventually relented and moved /sites to /home/sites and the access denied errors went away.

As I mentioned in my original post, I set up a bunch of sites to access through swag. I had about a half-dozen sites, some of which had two containers, one for front-end, one for data, so I had 13 containers altogether. They included WordPress, NextCloud, and some personal web projects I will be working on.

I seem to recall one of the canned sites (I don’t recall which one) had instructions for making sure docker was installed properly first, so I followed them. Looking back, I think those instructions were for installing docker using snap. I say this because this week, while digging, I found both the official apt-get installation and the snap installation.

Apparently, snap automatically updates whenever there is an update. This is what was repeatedly and “randomly” restarting docker along with all my containers. I discovered the snap changes command and the update times corresponded to the container restart times. I completely uninstalled docker from both methods and re-installed using only apt-get (using the official steps linked above). It has been up for over two days now without any issues.
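The correlation step can be sketched like this, assuming the usual `snap changes` table layout (ID, Status, Spawn, Ready, Summary); the helper just filters a saved copy of that listing for docker refreshes, whose Spawn/Ready timestamps were what lined up with the container restarts:

```shell
#!/bin/sh
# Filter a saved "snap changes" listing for refreshes of the docker snap.
docker_refreshes() {
  grep -Ei "refresh.*docker|docker.*refresh" "$1"
}

# On the host:
#   snap changes > /tmp/changes.txt
#   docker_refreshes /tmp/changes.txt
```

The cleanup itself would be `sudo snap remove docker`, then removing any apt copies and reinstalling from Docker’s official repository per the official steps linked above — I’m not reproducing those here since they change over time.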

I suspect, but I don’t know for sure, that this dual install was also the cause of swag and docker-proxy fighting for port access. If something else causes docker to unexpectedly restart my containers, I guess I’ll see.

Yeah, just the other day a blog article showed up on my newsfeed with the exact same issue.

Docker via snap has other issues as well and we don’t recommend it. For best results, you should install docker from the official docker repos (not even from your distro’s repo).

This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.