A bit of help with IP restriction with letsencrypt / dnsmasq

Hello all. I have moved everything I’m hosting to a single docker-compose file and using LSIO images where available.

The letsencrypt with the built-in proxy-confs is pretty brilliant, and it’s working fabulously except for one thing.

I want to proxy Synology’s Disk Station Manager as well as phpadmin, but I want them accessible ONLY via LAN and not over WAN.

Here is what I have done to try and achieve this.

In my router (I re-flashed DD-WRT just for dnsmasq), for the additional dnsmasq options I have:
address=/.domain.com/192.168.1.2 # IP of Synology NAS

Ports 80 and 443 are forwarded to 192.168.1.2 as well.

nginx runs on ports 80 / 443 (I’m only using 80 because I’m too lazy to type https://, and that lets me auto-redirect HTTP to HTTPS; a sketch of that redirect is included after the location block below). Below is the relevant location block from the proxy conf (I am aware that the resolver / upstream assignment aren’t necessary; I just didn’t bother to remove them when I copied one of the existing files to make this one):

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        deny 192.168.1.1;
        allow 192.168.0.0/16;
        allow 127.0.0.1;
        deny all;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_dsm dsm;
        proxy_pass http://192.168.1.2:5000;
    }
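
For reference, the HTTP-to-HTTPS redirect I mentioned above is just a plain port-80 server block. I believe the image’s default site conf already handles this, so the following is only a rough sketch of the idea rather than my exact config:

server {
    listen 80;
    listen [::]:80;

    server_name _;

    # bounce anything that arrives over plain HTTP to the HTTPS version of the same URL
    return 301 https://$host$request_uri;
}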

What I am seeing is that everything works as it should within the LAN; an nslookup on the hostname resolves to 192.168.1.2 as expected.

The problem is that it seems reachable outside the LAN as well; to test it, I turned WiFi off on my cell phone and hit the URL, but it still comes up.

I’m guessing I am just missing something in the allow / deny block above, but I’ve tried a few different things without much success. At first, I tried to only allow 192.168.1.0/24, but when I did that I couldn’t reach it at all.

Has anyone done this successfully?

I use the following in the “server” block and it works as intended:

allow 192.168.0.0/16;
deny all;

I made the change and restarted the letsencrypt container, and there’s no change.

I’m guessing it’s something network related, but traceroutes from my phone don’t return anything useful (probably * * * over and over, since it just says No Response until it times out), and that’s whether I hit one of the “publicly served” URLs or the two private ones. Probably a firewall somewhere between my phone and the server is dropping the ICMP / UDP packets.

Without a route to see, all I could think to do was clear the storage / cache of the Chrome app, but that didn’t make a difference either. I don’t know if the site is cached somewhere or what but that’s all I can think of. Here is the full server block for reference.

server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name dsm.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /login;

        allow 192.168.0.0/16;
        deny all;

        include /config/nginx/proxy.conf;
        # resolver 127.0.0.11 valid=30s;
        # set $upstream_dsm dsm;
        proxy_pass http://192.168.1.2:5000;
    }
}

I use this on all of my containers except 3; if I turn off WiFi on my cell and browse to (in your case) dsm.domain.com, I get a 403, which is the expected result.

traceroute just uses ICMP/UDP probes to explore the hops on a route; it’s not going to tell you whether you’re blocking access (via TCP) to a webserver like nginx.

For example, here is one of my confs:

location / {
    allow 192.168.0.0/16;
    allow <my ipv6 subnet>;
    allow fe80::/16;
    allow 172.20.0.0/16;
    deny all;
    ...
}
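
If you end up adding the same rules to a bunch of proxy confs, one option (just a suggestion; the filename here is made up) is to keep them in a single file and include it from each conf, so there is only one place to edit:

# /config/nginx/lan-only.conf
allow 192.168.0.0/16;
allow fe80::/16;
deny all;

and then in each conf:

location / {
    include /config/nginx/lan-only.conf;
    ...
}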

It’s been a few months, and since I had my Nextcloud shit the bed during an update and had to fix that, I thought I would give this another look.

So as near as I can tell, the nginx allow / deny block works as it should if I do this:

    allow 192.168.0.0/16;
    allow 172.19.0.0/16;
    deny all;

But the rub is that the client IP nginx sees is always the Docker bridge gateway. Here is the last line from the nginx access log:

172.19.0.1 - - [14/Mar/2020:23:32:22 -0400] "GET /favicon.ico HTTP/2.0" 403 175 "https://dsm.<domain>.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.132 Safari/537.36"

Here’s a snippet from a docker network inspect on that bridged network:

    "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },

As a consequence, if I include the allow 172.19.0.0/16; line, I can connect, but from both inside and outside my network. If I comment that line out, I cannot connect even from inside the network.

So the problem seems to be that the actual client IP is not getting forwarded to nginx properly (or at least nginx is not recognizing it as such). I am not sure whether this will always be the case with bridged networking or whether there is something I need to correct in the proxy configuration.
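
One thing I might try next, just to confirm what nginx itself is seeing, is a debug log format in the http context (a sketch; the log path is a guess based on where the container keeps its other logs) that records the connection address next to any forwarded header:

# log the raw connection address alongside whatever X-Forwarded-For contains (if anything)
log_format clientdebug '$remote_addr xff="$http_x_forwarded_for" "$request" $status';
access_log /config/log/nginx/clientdebug.log clientdebug;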

I haven’t tweaked the basic nginx config (or if I have, I don’t remember doing so). Here are the proxy_set_header calls from proxy.conf:

proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Ssl on;
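
As far as I understand it, those headers only describe the client to the upstream (DSM in this case); nginx’s own allow / deny checks go against $remote_addr. The real_ip module can overwrite $remote_addr from a forwarded header, but since nothing sits in front of this nginx to set such a header, I don’t think it applies to my setup - noting it here only for completeness (a sketch, not something I am actually running):

# only meaningful if a trusted proxy in front of nginx sets X-Forwarded-For
set_real_ip_from 172.19.0.0/16;
real_ip_header X-Forwarded-For;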

The bridged network is pretty simple. Since I needed all of the containers on the same network for the proxy-confs you guys have set up to work, I just did it like this in the docker-compose:

networks:
  default:
    name: docker_network