I have just recently switched to SWAG, or rather I am in the process of switching. I have set it all up using Cloudflare DNS validation and a ZeroSSL wildcard certificate for various Docker containers (e.g. Bitwarden).
One of my main reasons for switching to SWAG is the fail2ban integration. However, my current problem is that when I look at my access.log, all of my requests are coming from 192.168.65.1, which, if I am correct, is a default Docker IP.
I am running Docker on an M1 Mac, and the domain is Cloudflare-proxied. I am using the standard sample configs that came with SWAG.
Generally everything is working and behaving as expected; I just can't implement fail2ban because the real IPs never show up in the access.log file.
Anyone have an idea on how to get the real IPs forwarded?
Your host's firewall will be rewriting the real IP; we tend to see this on retail NAS units, and you can solve it via "Exposing the client IPs to Docker containers in Synology NAS".
No idea if this will work on macOS, but it should point you in the right direction.
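For context, the gist of that approach is to DNAT incoming traffic straight to the container, bypassing Docker's userland proxy (which is what masquerades the source address). A rough sketch only; the container IP below is a placeholder you'd look up with `docker inspect`:

```shell
# Untested sketch: DNAT ports 80/443 directly to the container so the
# client's source address survives. 172.17.0.2 is a placeholder -- get the
# real container IP from `docker inspect <container>`.
iptables -t nat -A PREROUTING -p tcp --dport 80  -j DNAT --to-destination 172.17.0.2:80
iptables -t nat -A PREROUTING -p tcp --dport 443 -j DNAT --to-destination 172.17.0.2:443
```

These rules need root and won't persist across reboots unless saved; the linked guide covers the NAS-specific details.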
Thank you, that may well be it. I am wondering, however, how I would achieve the same thing on macOS, since it does not seem to use iptables but rather pf.
I know…
Will report back if I am able to figure it out. Maybe eventually I'll have to move back to a native Linux server; then I would not have some of these problems, I suppose.
The Mac mini as a server has been pretty good and allows for some things that I need a Mac for, but for Docker and the reverse proxy it does not seem ideal. Any idea whether I would have the same issues if I ran my Docker stuff inside a virtual machine on the Mac? I already have an Ubuntu 22.04 Multipass instance running on it, and I could move my Docker stuff over to it. But it is also just bridged to the host network, so it may have the same issues. Then again, I have Pi-hole running on it and it sees the actual client IPs from the internal network.
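One thing I've seen suggested for a Linux VM is a macvlan network, which gives the container its own address on the LAN so client IPs arrive unmodified. Untested on my setup; the subnet, gateway, and interface name below are placeholders for my network:

```shell
# Untested sketch: create a macvlan network so the container gets its own
# LAN IP. Adjust subnet/gateway/parent to match the actual network; "eth0"
# and "lan_macvlan" are placeholder names.
docker network create -d macvlan \
  --subnet=192.168.1.0/24 \
  --gateway=192.168.1.1 \
  -o parent=eth0 \
  lan_macvlan
# then start the container with: --network lan_macvlan --ip 192.168.1.50
```

One known caveat I'd have to check: with macvlan the host itself usually can't reach the container directly.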
I gave up on pfctl; it is too much work to learn for now. So I moved my SWAG container over to my NAS and went with what was in your link. However, after moving I am getting:
Unable to retrieve EAB credentials from ZeroSSL. Check the outgoing connections to api.zerossl.com and dns. Sleeping.
Wouldn't know if it's a configuration issue, as you haven't really posted any info about how you've deployed the container. 99% of the time, if you're getting timeouts like that, it's something external or something you've set up on the host.
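A quick way to rule DNS and outbound connectivity in or out is to test from inside the container itself. The container name "swag" here is an assumption; use whatever yours is called:

```shell
# Check DNS resolution from inside the container (name "swag" assumed)
docker exec swag nslookup api.zerossl.com
# Check outbound HTTPS reachability to the ZeroSSL API
docker exec swag curl -sSI https://api.zerossl.com
```

If the nslookup fails, it's DNS; if it resolves but curl hangs, it's the firewall or routing.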
Yes, that's fine. For documentation purposes I can post the container config if you want. Since all of my other containers work, I am pretty certain it is a firewall issue: I had the container create a new bridge network for itself and haven't added that IP range to the firewall yet, and by default I block everything that isn't explicitly allowed.
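To grab the new bridge's subnet for the firewall rule, this should work (the network name "swag_default" is a guess; `docker network ls` shows the real one):

```shell
# Print the subnet of the container's bridge network so it can be
# whitelisted in the firewall. "swag_default" is a placeholder name.
docker network inspect swag_default \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
```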
I set up the cloudflare.ini with the right credentials (an API key with DNS privileges).
And I port-forwarded 443 and 80 to IP:4423 and IP:8880 respectively. So I'm pretty sure it is the firewall.
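For reference, that mapping would look like this in the compose file (service name and image are assumptions about my setup):

```yaml
services:
  swag:
    image: lscr.io/linuxserver/swag
    ports:
      - "4423:443"   # router forwards WAN 443 -> NAS 4423 -> container 443
      - "8880:80"    # router forwards WAN 80  -> NAS 8880 -> container 80
```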
You don't want to use ~ within paths; Docker runs as root, but the apps within run as the configured UID/GID.
So the folders would get created in root's home dir, but you're expecting them to be in the user's home dir. Best to use the full path.
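i.e. something like this, where the absolute path is just an example to swap in for wherever your config actually lives:

```yaml
services:
  swag:
    volumes:
      # - ~/swag/config:/config               # avoid: ~ may resolve under root's home
      - /volume1/docker/swag/config:/config   # example absolute path -- adjust to yours
```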
Can't see anything wrong with the compose, mind. Do you have anything like Pi-hole/AdGuard running?
I have Pi-hole running, but not on the VLAN that the Synology and the Mac are on. Basically I limit Pi-hole to one specific VLAN, or to devices that manually configure it as their DNS.
Regarding the ~, that is a good catch, thanks. I do know that it is going to the right place though, since when I moved the existing key files it started complaining that they were missing. I will still change it to the full path.
I will check the firewall in a few minutes and report back.