SWAG - Docker on macOS real IP problem

Hello,

I have just recently switched to SWAG, or rather, I am wanting to. I have set it all up using Cloudflare DNS validation and a ZeroSSL wildcard certificate for various Docker containers (e.g. Bitwarden).
One of my main reasons for switching to SWAG is the fail2ban integration. However, my current problem is that when I look at my access.log, all of my requests are coming from 192.168.65.1, which, if I am correct, is a default Docker IP.
I am running Docker on an M1 Mac, and the domain is Cloudflare proxied. I am using the standard sample configs that came with SWAG.
Generally everything is working and behaving as expected; I just can't implement fail2ban due to not seeing the real IPs in the access.log file.

Does anyone have an idea how to get the real IPs forwarded?

Your host's firewall will be blocking the real IP; we tend to see this on retail NAS units, and you can solve it via Exposing the client IPs to Docker containers in Synology NAS
No idea if this will work on macOS, but it should point you in the right direction.
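As a side note: since the domain is Cloudflare proxied, even once the NAT issue is solved the access.log would show Cloudflare edge IPs rather than visitor IPs unless nginx is told to trust Cloudflare's CF-Connecting-IP header. A minimal sketch using the standard nginx http_realip_module (the include path below is hypothetical, and only one of Cloudflare's published ranges is shown; the full published list would need to be trusted):

# hypothetical include, e.g. /config/nginx/cf-real-ip.conf
set_real_ip_from 173.245.48.0/20;   # repeat for every published Cloudflare range
real_ip_header CF-Connecting-IP;    # take the visitor IP from Cloudflare's header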

Thank you, that could very well be the case. I am wondering, however, how I would achieve the same thing on the Mac, since it does not seem to use iptables but rather pf.

http://man.openbsd.org/pfctl

https://krypted.com/mac-security/a-cheat-sheet-for-using-pf-in-os-x-lion-and-up/

You'd have to read up on that yourself; it could just be a by-product of using macOS as a Docker host.
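For what it's worth, a few pfctl invocations are enough to see what pf is currently doing; these only read state, and whether pf is even involved in Docker Desktop's NAT on macOS is an open question here:

sudo pfctl -s info    # show pf status and counters
sudo pfctl -s rules   # show the loaded filter rules
sudo pfctl -s nat     # show the loaded NAT rules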

I know…
I will report back if I am able to figure it out. Maybe eventually I'll have to move back to a native Linux server; then I would not have some of these problems, I suppose.
The Mac Mini as a server has been pretty good and allows for some things that I need a Mac for, but for Docker and the reverse proxy it does not seem ideal. Any idea whether I would have the same issues if I ran my Docker stuff inside a virtual machine on the Mac? I have an Ubuntu 22.04 Multipass instance running on it already and could move my Docker stuff over to it. But it is also just bridged to the host network, so it may have the same issues. However, I have Pi-hole running on it and it sees the actual client IPs from the internal network.

I gave up on pfctl; it is too much work learning it for now, so I moved my SWAG container over to my NAS and went with what was in your link. However, after moving, I am getting:

Unable to retrieve EAB credentials from ZeroSSL. Check the outgoing connections to api.zerossl.com and dns. Sleeping.

Any idea why? Or what to do?

That error would imply your container isn’t getting any connection to the ZeroSSL servers.
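A quick way to confirm that from inside the container, assuming curl and the BusyBox nslookup are present in the image, would be:

docker exec swag nslookup api.zerossl.com          # does DNS resolve inside the container?
docker exec swag curl -sI https://api.zerossl.com  # can the container reach the API over HTTPS?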

So it could be a firewall issue? I didn't think of that. I was worried it would be something in the config files that I overlooked.

I wouldn't know if it's a configuration issue, as you haven't really posted any info about how you've deployed the container. 99% of the time, if you're getting timeouts like that, it is something external or something you've set up on the host.

Yes, that is fair. For documentation purposes I can post the container config. Since all of my other containers work, I am pretty certain it is a firewall issue: I had the container create a new bridge network for itself, I haven't added that IP range to the firewall yet, and by default I block everything that isn't explicitly allowed.

---
version: "2.1"
services:
  swag:
    image: lscr.io/linuxserver/swag
    container_name: swag
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=$UID
      - PGID=$GID
      - TZ=America/Chicago
      - URL=my-url.com
      - SUBDOMAINS=wildcard
      - VALIDATION=dns
      - CERTPROVIDER=zerossl #optional
      - DNSPLUGIN=cloudflare #optional
      - EMAIL=email@zerossl.com #optional
    volumes:
      - ~/volume1/docker/swag/config:/config
    ports:
      - 4423:443
      - 8880:80 #optional
    restart: unless-stopped

I set up the cloudflare.ini with the right credentials (API key with DNS privileges).
And I port-forwarded 443 and 80 to the respective IP:4423 and IP:8880, so I am pretty sure it is the firewall.
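Regarding the bridge network mentioned above, the subnet that needs to be allowed through the firewall can be read straight from Docker; the network name swag_default is an assumption, as compose normally names the network after the project directory:

docker network ls   # find the network compose created for the stack
docker network inspect swag_default --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'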

You don't want to use ~ within paths. Docker runs as root, but the apps within run as the UID/GID you set, so the folders would get created in root's home dir while you're expecting them to be in the user's home dir. Best to use the full path.
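For example, on a Synology where the share lives under /volume1 (the exact path is an assumption), the volume mapping would look like this:

    volumes:
      - /volume1/docker/swag/config:/config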

I can't see anything wrong with the compose, mind. Do you have anything like Pi-hole/AdGuard running?

I have Pi-hole running, but not on the VLAN that the Synology and the Mac are on. Basically, I limit Pi-hole to one specific VLAN, or to whoever manually configures it as their DNS.

Regarding the ~, that is a good point, thanks. I do know that it is going to the right place, though, since when I moved the existing key files it started complaining that they were missing.
I will still change it to the full path. Thanks for that.

I will check the firewall in a few minutes and report back.

OK, with the firewall adjusted it seems to get a connection, but it throws this warning:

nginx: [warn] "ssl_stapling" ignored, host not found in OCSP responder "zerossl.ocsp.sectigo.com" in the certificate "/config/keys/cert.crt"

Could this be due to moving the config files to a different machine? How would I solve that?

It's just a warning; are you experiencing an issue? (You can read about ssl_stapling here: ocsp - nginx: ssl_stapling_verify: What exactly is being verified? - Server Fault)
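For reference, that warning usually just means nginx could not resolve the OCSP responder hostname when it loaded the config, which points at DNS rather than at the moved files. A hedged sketch of the relevant directives; the file location is an assumption, and 127.0.0.11 is Docker's embedded DNS on user-defined networks:

# e.g. in /config/nginx/ssl.conf (location is an assumption)
resolver 127.0.0.11 valid=30s;   # let nginx resolve zerossl.ocsp.sectigo.com at runtime
ssl_stapling on;
ssl_stapling_verify on;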

My container says it is going to sleep after it, so it does not seem to like the warning.

Share the full container log starting with our ASCII logo. What error do you get when you attempt to reach SWAG's page?

Generally this error means the container can't reach that host. It can be caused by network ad blockers, etc.
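A quick check along those lines, assuming the BusyBox nslookup is present in the image, is to resolve the responder host both inside the container and on the host, and see whether an upstream blocker answers with 0.0.0.0:

docker exec swag nslookup zerossl.ocsp.sectigo.com
nslookup zerossl.ocsp.sectigo.com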