WireGuard server with port forwarding help

Up until now, I’ve been following LSIO’s excellent guide on getting their qBittorrent container working (with port forwarding) alongside their WireGuard container, using Mullvad as the VPN provider.

Now, some of you might have heard that Mullvad is ending their port forwarding feature.
One way around this is to switch to another provider that still offers port forwarding, at least until that provider drops the feature too because of abuse. It feels like a cat-and-mouse game at this point, so I’d rather run my own VPS as my VPN.
This thread isn’t about that choice, though; it’s about completing LSIO’s guide with the missing piece: what needs to be done on the server (VPS) side to get the whole thing running.

On the client side, which is what LSIO’s guide focuses on, there’s theoretically not much to change except the WireGuard key and the VPS IP, and maybe the forwarded port if you pick a different one than the guide’s.

So far, I can get everything running except qBittorrent’s connectable status. Downloads work, peers download from me, and my reported IP is the VPS’s. All good, except qBittorrent shows the port as not connectable: I get the little “fire” icon instead of the green planet I had when using Mullvad.
I’m not sure what config I’m missing or have misconfigured on the server side.

Server side

I’m using LSIO’s WireGuard and Nginx containers on a Debian 11 host.
Nginx redirects traffic arriving on port 58787 to the server’s WireGuard container.
Ports 58787/tcp and 51820/udp are open in the VPS provider’s firewall as well as in UFW.
I created a “stream.conf.d” directory in Nginx’s config folder, with a “qb.conf” file inside it.
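
For reference, the UFW side of that looks something like this (a sketch; the provider-side firewall is managed separately in their panel):

```shell
# open the WireGuard listen port and the forwarded torrent port
sudo ufw allow 51820/udp
sudo ufw allow 58787/tcp
sudo ufw status verbose   # confirm both rules show as ALLOW
```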
Created a custom docker network:

docker network create --subnet=172.30.26.0/24 wireguard_nw

Here’s the docker-compose file (VPS IP changed to 55.55.55.55 for obvious reasons).
I left out SYS_MODULE because the container reports on startup that it isn’t needed.

docker-compose.yml

---
version: "2.1"
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
      - SERVERURL=55.55.55.55
      - SERVERPORT=51820
      - PEERS=1
      - PEERDNS=1.1.1.1
      - INTERNAL_SUBNET=10.13.13.0
      - INTERFACE=eth0
      - ALLOWEDIPS=0.0.0.0/0
      - PERSISTENTKEEPALIVE_PEERS=all
      - LOG_CONFS=false
    volumes:
      - /home/admin/docker_data/wireguard/config:/config
      - /lib/modules:/lib/modules
    networks:
      default:
        ipv4_address: 172.30.26.10
    ports:
      - 51820:51820/udp
      - 58787:58787
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
  nginx:
    image: lscr.io/linuxserver/nginx:latest
    container_name: nginx
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /home/admin/docker_data/nginx/config:/config
    restart: unless-stopped
    network_mode: "service:wireguard"
    depends_on:
      - wireguard

networks:
    default:
      name: wireguard_nw
      external: true

WireGuard’s generated wg0.conf

[Interface]
Address = 10.13.13.1
ListenPort = 51820
PrivateKey = <PrivateKey redacted>
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -A FORWARD -o %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth+ -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -D FORWARD -o %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth+ -j MASQUERADE

[Peer]
# peer1
PublicKey = <PublicKey redacted>
PresharedKey = <PresharedKey redacted>
AllowedIPs = 10.13.13.2/32
PersistentKeepalive = 25
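
One thing I suspect might be missing on the server side is a rule that actually pushes inbound 58787 traffic down the tunnel to the peer. This is my guess at the missing piece, not something from the guide; a sketch of what I’d append to the [Interface] PostUp/PostDown, assuming the peer keeps 10.13.13.2:

```ini
PostUp = iptables -t nat -A PREROUTING -p tcp --dport 58787 -j DNAT --to-destination 10.13.13.2; iptables -A FORWARD -p tcp -d 10.13.13.2 --dport 58787 -j ACCEPT
PostDown = iptables -t nat -D PREROUTING -p tcp --dport 58787 -j DNAT --to-destination 10.13.13.2; iptables -D FORWARD -p tcp -d 10.13.13.2 --dport 58787 -j ACCEPT
```

These would be added on top of the existing FORWARD/MASQUERADE rules shown above, not replace them.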

qb.conf in /home/admin/docker_data/nginx/config/nginx/stream.conf.d/

stream {
    server {
        listen 58787;
        proxy_pass 172.30.26.10:58787;
    }
}
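
After editing the stream config, I validate and reload nginx in place like this (standard nginx CLI, nothing LSIO-specific as far as I can tell):

```shell
docker exec nginx nginx -t        # syntax-check the config
docker exec nginx nginx -s reload # reload without restarting the container
```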

nginx.conf in /home/admin/docker_data/nginx/config/nginx/

## Version 2023/04/13 - Changelog: https://github.com/linuxserver/docker-baseimage-alpine-nginx/commits/master/root/defaults/nginx/nginx.conf.sample

### Based on alpine defaults
# https://git.alpinelinux.org/aports/tree/main/nginx/nginx.conf?h=3.15-stable

user abc;

# Set number of worker processes automatically based on number of CPU cores.
include /config/nginx/worker_processes.conf;

# Enables the use of JIT for regular expressions to speed-up their processing.
pcre_jit on;

# Configures default error logger.
error_log /config/log/nginx/error.log;

# Includes files with directives to load dynamic modules.
include /etc/nginx/modules/*.conf;

# Include files with config snippets into the root context.
#include /etc/nginx/conf.d/*.conf;

# Include streams
include /config/nginx/stream.conf.d/*.conf;

events {
    # The maximum number of simultaneous connections that can be opened by
    # a worker process.
    worker_connections 1024;
}

http {
    # Includes mapping of file name extensions to MIME types of responses
    # and defines the default type.
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # Name servers used to resolve names of upstream servers into addresses.
    # It's also needed when using tcpsocket and udpsocket in Lua modules.
    #resolver 1.1.1.1 1.0.0.1 2606:4700:4700::1111 2606:4700:4700::1001;
    include /config/nginx/resolver.conf;

    # Don't tell nginx version to the clients. Default is 'on'.
    server_tokens off;

    # Specifies the maximum accepted body size of a client request, as
    # indicated by the request header Content-Length. If the stated content
    # length is greater than this size, then the client receives the HTTP
    # error code 413. Set to 0 to disable. Default is '1m'.
    client_max_body_size 0;

    # Sendfile copies data between one FD and other from within the kernel,
    # which is more efficient than read() + write(). Default is off.
    sendfile on;

    # Causes nginx to attempt to send its HTTP response head in one packet,
    # instead of using partial frames. Default is 'off'.
    tcp_nopush on;

    # all ssl related config moved to ssl.conf
    # included in server blocks where listen 443 is defined

    # Enable gzipping of responses.
    #gzip on;

    # Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
    gzip_vary on;

    # Helper variable for proxying websockets.
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }

    # Sets the path, format, and configuration for a buffered log write.
    access_log /config/log/nginx/access.log;

    # Includes virtual hosts configs.
    include /etc/nginx/http.d/*.conf;
    include /config/nginx/site-confs/*.conf;
}

daemon off;
pid /run/nginx.pid;

Client side

Basically using LSIO’s guide.
Host is Debian 11 as well.
Created a custom docker network:

docker network create --subnet 172.20.0.0/24 wgnet

I also created a root-owned directory named “init_scripts” in the qBittorrent data directory, so that qBittorrent runs a “reroute.sh” script on container start, as the guide suggests, to keep the webui reachable.
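
If I remember right, LSIO’s custom init scripts are skipped unless they’re owned by root, so I set ownership and permissions like this (paths from my setup above):

```shell
sudo chown root:root /home/admin/docker_data/qbittorrent/init_scripts/reroute.sh
sudo chmod 755 /home/admin/docker_data/qbittorrent/init_scripts/reroute.sh
```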

As on the server, I don’t have SYS_MODULE in the docker-compose, as the container tells me on start that it isn’t needed.

docker-compose.yml

services:
  wireguard:
    image: lscr.io/linuxserver/wireguard
    container_name: wireguard
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - /home/admin/docker_data/wireguard:/config
      - /lib/modules:/lib/modules
    networks:
      default:
        ipv4_address: 172.20.0.50
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    container_name: qbittorrent
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - WEBUI_PORT=8080
    volumes:
      - /home/admin/docker_data/qbittorrent/config:/config
      - /home/admin/downloads:/data/downloads
      - /home/admin/docker_data/qbittorrent/init_scripts:/custom-cont-init.d:ro
    networks:
      default:
        ipv4_address: 172.20.0.8
    ports:
      - 8080:8080
    restart: unless-stopped
networks:
  default:
    name: wgnet
    external: true

wg0.conf

[Interface]
Address = 10.13.13.2
PrivateKey = <PrivateKey redacted>
DNS = 1.1.1.1
PostUp = iptables -t nat -A POSTROUTING -o wg+ -j MASQUERADE; iptables -t nat -A PREROUTING -p tcp --dport 58787 -j DNAT --to-destination 172.20.0.8:58787
PreDown = iptables -t nat -D POSTROUTING -o wg+ -j MASQUERADE; iptables -t nat -D PREROUTING -p tcp --dport 58787 -j DNAT --to-destination 172.20.0.8:58787

[Peer]
PublicKey = <PublicKey redacted>
PresharedKey = <PresharedKey redacted>
Endpoint = 55.55.55.55:51820
AllowedIPs = 0.0.0.0/0
PersistentKeepalive = 25

reroute.sh in /home/admin/docker_data/qbittorrent/init_scripts/

#!/bin/sh

ip route del default
ip route add default via 172.20.0.50
ip route add 192.168.1.0/24 via 172.20.0.1
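
Once the container is up, the rerouting can be sanity-checked from inside the qBittorrent container (wget is my guess at an available tool in the image, and ipinfo.io is just one of many IP-echo services):

```shell
docker exec qbittorrent ip route show                  # default should point at the WireGuard container
docker exec qbittorrent wget -qO- https://ipinfo.io/ip # should print the VPS IP
```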

Recap

Everything else works great. qBittorrent reports its IP as 55.55.55.55 and it downloads and uploads; it’s just the connection status icon in its webui that won’t show as connectable.
I ran a few netcat commands to test whether port 58787 is accessible, and it seems to be, all the way through.
That said, the port being accessible doesn’t mean data is actually flowing through it, and I don’t really know how to test that without a torrent client.

From an external device:

nc -v 55.55.55.55 58787
Connection to 55.55.55.55 58787 port [tcp/*] succeeded!

From within Nginx container on the VPS:

docker exec nginx nc -v 172.30.26.10 58787
Connection to 172.30.26.10 58787 port [tcp/*] succeeded!

From within Wireguard container on the VPS:

docker exec wireguard nc -v 10.13.13.2 58787
Connection to 10.13.13.2 58787 port [tcp/*] succeeded!

From within Wireguard container here at home:

docker exec wireguard nc -v 172.20.0.8 58787
Connection to 172.20.0.8 58787 port [tcp/*] succeeded!
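
One way I can think of to test whether data (and not just a connection) makes it through the whole chain without a torrent client: temporarily change qBittorrent’s incoming port in the webui so 58787 is free, run a listener in its place, and pipe some text through from outside. Whatever is typed on one end should appear on the other:

```shell
# inside the qbittorrent container (after freeing up 58787):
docker exec -it qbittorrent nc -l -p 58787

# then, from an external device:
echo "hello through the tunnel" | nc 55.55.55.55 58787
```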