Today's WireGuard update broke WireGuard connectivity

Today's WireGuard update broke connectivity for me… I had been running this for more than a year without any issues until today…

Running linuxserver/wireguard in server mode on a headless Raspberry Pi 4, with the latest updates and upgrades to both the OS and Docker.

Looking via Portainer, there is no longer an IP or port showing after pulling and 'up'-ing. If I manually connect it to the WireGuard default network, the port and IP do show, but it still won't connect to my Android phone.

Logs are full of this…
2022-07-10T10:30:06.809957236Z s6-svscan: warning: executing into .s6-svscan/crash
2022-07-10T10:30:06.811545927Z s6-svscan crashed. Killing everything and exiting.
2022-07-10T10:30:06.813376301Z s6-supervise s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
2022-07-10T10:30:06.813448819Z s6-linux-init-shutdownd: fatal: unable to iopause: Operation not permitted
2022-07-10T10:30:06.814127814Z s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
2022-07-10T10:31:08.353599647Z s6-svscan: warning: unable to iopause: Operation not permitted
2022-07-10T10:31:08.353858793Z s6-svscan: warning: executing into .s6-svscan/crash
2022-07-10T10:31:08.355319745Z s6-svscan crashed. Killing everything and exiting.
2022-07-10T10:31:08.356818066Z s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
2022-07-10T10:32:09.829965137Z s6-svscan: warning: unable to iopause: Operation not permitted
2022-07-10T10:32:09.830084525Z s6-svscan: warning: executing into .s6-svscan/crash
2022-07-10T10:32:09.831513125Z s6-svscan crashed. Killing everything and exiting.
2022-07-10T10:32:09.833540628Z s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
2022-07-10T10:33:11.309762219Z s6-svscan: warning: unable to iopause: Operation not permitted
2022-07-10T10:33:11.309877773Z s6-svscan: warning: executing into .s6-svscan/crash
2022-07-10T10:33:11.311667259Z s6-svscan crashed. Killing everything and exiting.
2022-07-10T10:33:11.313236432Z s6-linux-init-hpr: fatal: unable to reboot(): Operation not permitted
2022-07-10T10:34:12.810435073Z s6-svscan: warning: unable to iopause: Operation not permitted

Any hints or help much appreciated!

Please post your docker-compose and the startup logs so we can investigate further. In the meantime, roll back to a previous image.
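A hedged sketch of what "roll back to a previous image" means in compose terms: pin a specific tag instead of :latest. The tag below is a placeholder, not a real version; substitute one from the image's tags page on the registry.

```yaml
# docker-compose.yml fragment -- pin a known-good tag instead of :latest.
# "<previous-tag>" is a placeholder; use a real tag from the registry.
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:<previous-tag>
```

After editing, a docker-compose pull followed by docker-compose up -d recreates the container on the older image.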

Thank you for the reply.

Docker-compose is this and has not changed since the original install…

version: '3.3'
services:
  wireguard:
    container_name: wireguard
    image: ghcr.io/linuxserver/wireguard
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SERVERURL=fxxxxxxxx.org
      - SERVERPORT=51820
      - PEERS=SGA20,Phantom
      - PEERDNS=auto
      - INTERNAL_SUBNET=10.0.0.0
    ports:
      - 51820:51820/udp
    volumes:
      - type: bind
        source: ./config/
        target: /config/
      - type: bind
        source: /lib/modules
        target: /lib/modules
    restart: unless-stopped
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1

Forgive me, but I do not know where the 'startup logs' are - can you help with a location or command for this, please?

Looking at Portainer, I believe WireGuard is no longer joining a network on startup. But even after manually joining a network in Portainer there is still no functionality - I can see the phone sending data to the WireGuard server, as the tx value keeps incrementing, but rx remains at 0B the whole time, so no response for some reason.
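One server-side check for this symptom (an editor's diagnostic sketch, not a step from the thread; it assumes the container is running and named wireguard) is to ask WireGuard itself for its handshake and transfer state:

```shell
# Show server-side tunnel state: latest handshake per peer and
# transfer counters. A phone whose rx stays at 0B usually pairs
# with no recent handshake showing here for that peer.
docker exec wireguard wg show
```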

Thank you

It seems that if I join the WireGuard network manually using Portainer, the IP and port details appear in Portainer. But a minute later they are mysteriously gone again… Groan…

As a quick note, we do not support or recommend deploying containers using Portainer. That said, container logs can be accessed via your CLI by typing docker logs wireguard. You will know it's the full logs if it begins with our ASCII logo. You may need to restart the container to get the full logs.

I would suggest trying to reproduce the issue using docker-compose (NOT Portainer's incomplete docker-compose) or the docker CLI to rule out third-party software issues. I was unable to reproduce the issue myself using the latest container and this compose:

version: "2.1"
services:
  wireguard:
    image: lscr.io/linuxserver/wireguard:latest
    container_name: wireguard
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/London
      - SERVERURL=<removed>
      - SERVERPORT=51820
      - PEERS=SGA20,Phantom
      - PEERDNS=auto
      - INTERNAL_SUBNET=10.0.0.0
    volumes:
      - /tmp/wg:/config
      - /lib/modules:/lib/modules
    ports:
      - 51820:51820/udp
    sysctls:
      - net.ipv4.conf.all.src_valid_mark=1
    restart: unless-stopped

Note that if your actual LAN uses 10.0.0.0 you will cause a conflict. In my case, I do not use 10.0.0.0/24 and thus have no issue with this subnet.
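To illustrate the point (the values here are examples only, not from the thread): if a LAN did live in 10.0.0.x, the tunnel could be moved to a non-overlapping private range via the INTERNAL_SUBNET variable already used in the compose above:

```yaml
# Compose environment fragment -- pick a tunnel subnet that does not
# overlap the LAN. 10.13.13.0 is purely an example value.
    environment:
      - INTERNAL_SUBNET=10.13.13.0
```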

Thank you for the response.

I did not use Portainer for the install, just as a quick graphical way to see what is going on, so that is not the issue here. docker-compose is the method used for the install and all updates.

My LAN is on 192.168.x.x, so no conflict there, and it has been working for over a year now and nothing has changed (at least I have not changed anything)…

I will try your compose file and also get the logs tomorrow and see if that helps at all. Research seems to indicate that s6-svscan is part of the container's init/supervision system and may be behind the errors in the logs I posted before. But that is just a best guess based on what I have seen so far…

Onwards and upwards!

Thanks :slight_smile:

I ran 'docker container restart wireguard' and then 'docker logs wireguard', but the result is the same repeating logs as shown in the first post. I don't see any logo, so I will try your compose tomorrow as it is getting late now.

Thank you for your time looking into this, it is appreciated very much.

YAY! - I managed to solve it! (with a lot of help from ‘the internet’)…

Here are the steps I took to get this working again…

sudo nano /etc/apt/sources.list
- add the line:
deb http://deb.debian.org/debian buster-backports main

Then copy and paste the text below; it should just run.

sudo bash
gpg --keyserver keyserver.ubuntu.com --recv-keys 04EE7237B7D453EC
gpg --keyserver keyserver.ubuntu.com --recv-keys 648ACFD622F3D138

gpg --export 04EE7237B7D453EC | sudo apt-key add -
gpg --export 648ACFD622F3D138 | sudo apt-key add -
exit

Then:

sudo apt update
sudo apt install libseccomp2/buster-backports
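As a sanity check (my addition, not a step from the original post), you can confirm which libseccomp2 version is now installed. The underlying issue is that the container's new s6-overlay makes syscalls that an old host libseccomp2 does not recognise, so Docker's seccomp filter denies them with "Operation not permitted":

```shell
# Print the installed libseccomp2 version on a Debian-based host.
# After the backports install this should be newer than the stock
# Buster version (exact numbers depend on the backports archive).
dpkg-query -W -f='${Package} ${Version}\n' libseccomp2
```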

Pull and Up the container and… The log now looks like this…

 docker logs wireguard
s6-rc: info: service s6rc-oneshot-runner: starting
s6-rc: info: service s6rc-oneshot-runner successfully started
s6-rc: info: service fix-attrs: starting
s6-rc: info: service 00-legacy: starting
s6-rc: info: service 00-legacy successfully started
s6-rc: info: service fix-attrs successfully started
s6-rc: info: service legacy-cont-init: starting
cont-init: info: running /etc/cont-init.d/01-envfile
cont-init: info: /etc/cont-init.d/01-envfile exited 0
cont-init: info: running /etc/cont-init.d/01-migrations
[migrations] started
[migrations] no migrations found
cont-init: info: /etc/cont-init.d/01-migrations exited 0
cont-init: info: running /etc/cont-init.d/02-tamper-check
cont-init: info: /etc/cont-init.d/02-tamper-check exited 0
cont-init: info: running /etc/cont-init.d/10-adduser

-------------------------------------
          _         ()
         | |  ___   _    __
         | | / __| | |  /  \
         | | \__ \ | | | () |
         |_| |___/ |_|  \__/


Brought to you by linuxserver.io
-------------------------------------

To support the app dev(s) visit:
WireGuard: https://www.wireguard.com/donations/

To support LSIO projects visit:
https://www.linuxserver.io/donate/
-------------------------------------
GID/UID
-------------------------------------

User uid:    1000
User gid:    1000
-------------------------------------

cont-init: info: /etc/cont-init.d/10-adduser exited 0
cont-init: info: running /etc/cont-init.d/30-module
Uname info: Linux 90de815223a7 5.10.103-v7l+ #1529 SMP Tue Mar 8 12:24:00 GMT 2022 armv7l armv7l armv7l GNU/Linux
**** It seems the wireguard module is already active. Skipping kernel header install and module compilation. ****
cont-init: info: /etc/cont-init.d/30-module exited 0
cont-init: info: running /etc/cont-init.d/40-confs
**** Server mode is selected ****
**** External server address is set to <REMOVED> ****
**** External server port is set to 51820. Make sure that port is properly forwarded to port 51820 inside this container ****
**** Internal subnet is set to 10.0.0.0 ****
**** AllowedIPs for peers 0.0.0.0/0, ::/0 ****
**** PEERDNS var is either not set or is set to "auto", setting peer DNS to 10.0.0.1 to use wireguard docker host's DNS. ****
**** Server mode is selected ****
**** No changes to parameters. Existing configs are used. ****
cont-init: info: /etc/cont-init.d/40-confs exited 0
cont-init: info: running /etc/cont-init.d/90-custom-folders
cont-init: info: /etc/cont-init.d/90-custom-folders exited 0
cont-init: info: running /etc/cont-init.d/99-custom-scripts
[custom-init] no custom files found exiting...
cont-init: info: /etc/cont-init.d/99-custom-scripts exited 0
s6-rc: info: service legacy-cont-init successfully started
s6-rc: info: service legacy-services: starting
services-up: info: copying legacy longrun coredns (no readiness notification)
services-up: info: copying legacy longrun wireguard (no readiness notification)
s6-rc: info: service legacy-services successfully started
s6-rc: info: service 99-ci-service-check: starting
[ls.io-init] done.
s6-rc: info: service 99-ci-service-check successfully started
[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.0.0.1 dev wg0
[#] ip link set mtu 1420 up dev wg0
.:53
CoreDNS-1.9.3
linux/arm, go1.18.2, 45b0a11
[#] ip -4 route add 10.0.0.3/32 dev wg0
[#] ip -4 route add 10.0.0.2/32 dev wg0
[#] iptables -A FORWARD -i wg0 -j ACCEPT; iptables -A FORWARD -o wg0 -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

All working again now, even with the original docker-compose I use!

Thanks for your help on this. Maybe this will help others if they get this problem!

Glad you solved it, but if you had shown us the logs like we requested, you'd most likely have seen this link printed in there, which would tell you how to fix it: