Unable to use SWAG subdomains with DuckDNS; subfolders work fine

I’m running the latest SWAG image along with various other containers from LSIO and elsewhere. When I configure SWAG to use subdomains (e.g. nzbget.mydomain.duckdns.org), I get the “Welcome to your SWAG instance” landing page. When I configure it to use subfolders (e.g. www.mydomain.duckdns.org/nzbget), it works fine. This happens for every container. It wouldn’t be much of an issue overall, but there doesn’t seem to be a subfolder option for unifi-controller (only a subdomain one), and that’s one I’d like to use to avoid the annoying self-signed-cert warning in Chrome. I also prefer the subdomain approach in general. I’m hoping someone can give me some guidance on how to address this.
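In case it matters, I enabled the subdomain proxy confs the usual way, by copying the bundled samples inside the SWAG config volume and restarting the container. Roughly like this (host path taken from my volume mapping below):

    cp /home/XXXXXXX/swag/nginx/proxy-confs/nzbget.subdomain.conf.sample \
       /home/XXXXXXX/swag/nginx/proxy-confs/nzbget.subdomain.conf
    docker restart swag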

Here’s my docker-compose for SWAG:

services:
    swag:
        container_name: swag
        cap_add:
            - NET_ADMIN
        environment:
            - PUID=1000
            - PGID=1000
            - TZ=America/Los_Angeles
            - URL=XXXXXXX.duckdns.org
            - SUBDOMAINS=wildcard
            - VALIDATION=duckdns
            - DNSPLUGIN=cloudflare
            - DUCKDNSTOKEN=XXXXXXXX
            - EMAIL=XXXXXXX
            - ONLY_SUBDOMAINS=false
            - STAGING=false
        ports:
            - '443:443'
        volumes:
            - '/home/XXXXXXX/swag:/config'
            - '/etc/localtime:/etc/localtime:ro'
        restart: unless-stopped
        image: ghcr.io/linuxserver/swag

Thanks!

please share your nzbget.subdomain.conf contents

you can also take a look at the logs by running:

    tail -f /home/XXXXXXX/swag/log/nginx/access.log /home/XXXXXXX/swag/log/nginx/error.log

while that is running, try to access nzbget via the subdomain and share any errors you see in the tail session.

Sure, unmodified from the included .sample file

# make sure that your dns has a cname set for nzbget

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name nzbget.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nzbget;
        set $upstream_port 6789;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/nzbget)?(/[^\/:]*:[^\/:]*)?/jsonrpc {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nzbget;
        set $upstream_port 6789;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/nzbget)?(/[^\/:]*:[^\/]*)?/jsonprpc {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nzbget;
        set $upstream_port 6789;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }

    location ~ (/nzbget)?(/[^\/:]*:[^\/]*)?/xmlrpc {
        include /config/nginx/proxy.conf;
        include /config/nginx/resolver.conf;
        set $upstream_app nzbget;
        set $upstream_port 6789;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;

    }
}

Scratch that, most of them (radarr, sonarr, lidarr, nzbget, qbittorrent, plex) started working when I switched browsers. Must have been a caching issue or something.

However, the unifi-controller still doesn’t want to work. According to the unifi-controller.subdomain.conf.sample file, I need to make sure I’m not using a base-URL for the unifi-controller. I don’t think I’m doing that, but how do I know for sure? Here’s my compose code for the unifi-controller:

services:
    unifi-controller:
        container_name: unifi-controller
        environment:
            - PUID=1000
            - PGID=1000
            - MEM_LIMIT=1024M
        ports:
            - '3478:3478/udp'
            - '10001:10001/udp'
            - '8080:8080'
            - '8443:8443'
            - '1900:1900/udp'
            - '8843:8843'
            - '8880:8880'
            - '6790:6789'
            - '5514:5514/udp'
        volumes:
            - '/home/mchampion/unifi:/config'
        restart: unless-stopped
        image: ghcr.io/linuxserver/unifi-controller

I think I remapped host port 6789 to 6790 because it had been giving me a conflict before, but other than that it should be unmodified.
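(If I’m reading the sample proxy conf right, that host-side remap shouldn’t matter to SWAG anyway, since the subdomain conf talks to the container on its internal port, something like this:)

    # from unifi-controller.subdomain.conf.sample, as I understand it
    set $upstream_app unifi-controller;
    set $upstream_port 8443;
    set $upstream_proto https;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;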

you would know if you set a base_url.

please share the unifi-controller.subdomain.conf file you’re using. also, what error, if any, do you get when attempting to access this subdomain?

It works, oversight on my part. I was using unifi-controller.mydomain.duckdns.org rather than unifi.mydomain.duckdns.org. I thought the subdomain always matched the .conf file name, but this one is different.

the file just needs to keep the .subdomain.conf suffix; ‘subdomain’ in the filename is not a placeholder for the actual subdomain.

What I mean is that most of the file names match the subdomain they serve, but the unifi-controller one does not. For example, the sonarr file is named ‘sonarr.subdomain.conf’ and it points at the subdomain sonarr.*, and the nzbget file is named ‘nzbget.subdomain.conf’ and points at nzbget.*. For the unifi-controller, however, the file is named ‘unifi-controller.subdomain.conf’ but it points at unifi.* rather than unifi-controller.*, which is what I didn’t notice.
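In other words, it’s the server_name line inside the conf that decides which subdomain the proxy answers on, not the file name. The unifi one looks roughly like this:

    # unifi-controller.subdomain.conf
    server {
        listen 443 ssl;
        listen [::]:443 ssl;

        server_name unifi.*;   # the actual subdomain, despite the file name
        ...
    }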

Once I had everything working as desired with DuckDNS, I tried to move it all over to a paid domain for a cleaner URL. I purchased a domain through Google Domains and set up a free CloudFlare account for DNS management, then changed the nameservers at Google Domains to the CloudFlare ones as CloudFlare requires. Now I’m back to the situation where most of the subdomains (sonarr, radarr, lidarr, qbittorrent, nzbget, nzbhydra, plex) work with the new domain, but others (unifi, tautulli) do not. This is my first time dealing with DNS, so I may well have something set up wrong, but I can’t figure out why only some subdomains would work if that were the case.

I have only two DNS records in CloudFlare at this point: an A record that points my domain name at my home IP, and a CNAME that CloudFlare auto-populated from my Google Domains entry when I set it up. I don’t have CNAMEs for the individual subdomains, since the SWAG documentation says the CloudFlare setup supports wildcards. I’ll keep trying to troubleshoot, but I’m happy to take suggestions from anyone who understands this stuff better than I do.
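For reference, this is roughly what I’m working with and what I plan to check next (names and IPs below are placeholders, not the real values):

    # current Cloudflare records (placeholders):
    #   mydomain.com        A      <home IP>
    #   <auto-added name>   CNAME  <Google Domains target>   (added by Cloudflare)
    #
    # next step: compare DNS resolution for a working vs. a non-working subdomain
    dig +short sonarr.mydomain.com
    dig +short unifi.mydomain.com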