Let's Encrypt, Nextcloud, MariaDB, Google Domains problems

Newbie to LSIO, and self-hosting in general.

I should preface this by saying that until yesterday I had an instance of LSIO’s Nextcloud running (without letsencrypt or mariadb). DNS was correctly configured, ports were forwarded, and life was good, except that I THINK you can’t use NC’s password app with self-signed certs. Otherwise, I would have been content with the way it was. (I completely erased all files, including the config and data directories, for this older, simpler instance.)

OK, I’m following the guide here: https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/

My redacted docker-compose:

version: "2"
services:
  nextcloud:
    image: linuxserver/nextcloud
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=America/Los_Angeles
    volumes:
      - /REDACTED/config/nextcloud:/config
      - /REDACTED/nextcloud:/data
    depends_on:
      - mariadb
    restart: unless-stopped
  mariadb:
    image: linuxserver/mariadb
    container_name: mariadb
    environment:
      - PUID=1000
      - PGID=1001
      - MYSQL_ROOT_PASSWORD=REDACTED
      - TZ=America/Los_Angeles
    volumes:
      - /REDACTED/config/mariadb:/config
    restart: unless-stopped
  letsencrypt:
    image: linuxserver/letsencrypt
    container_name: letsencrypt
    cap_add:
      - NET_ADMIN
    environment:
      - PUID=1000
      - PGID=1001
      - TZ=America/Los_Angeles
      - URL=REDACTED.COM
      - SUBDOMAINS=nextcloud
      - VALIDATION=http
      - ONLY_SUBDOMAINS=true
      - EMAIL=REDACTED@EXAMPLE.COM
    volumes:
      - /REDACTED/config/letsencrypt:/config
    ports:
      - 443:443
      - 80:80
    restart: unless-stopped

The logs for nextcloud and mariadb look good, no errors. However, when I check the logs on letsencrypt (this is partial, focused on the problem, and redacted):

http-01 challenge for nextcloud.redacted.com
http-01 challenge for redacted.com
Waiting for verification...
Challenge failed for domain nextcloud.redacted.com
Challenge failed for domain redacted.com
http-01 challenge for nextcloud.redacted.com
http-01 challenge for redacted.com
Cleaning up challenges
Some challenges have failed.
IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: nextcloud.redacted.com
   Type:   connection
   Detail: Fetching
   http://nextcloud.redacted.com/.well-known/acme-challenge/Dunno if this is sensitive, so redacted:
   Connection refused

   Domain: redacted.com
   Type:   connection
   Detail: Fetching
   http://redacted.com/.well-known/acme-challenge/redacted:
   Connection refused

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A/AAAA record(s) for that domain
   contain(s) the right IP address. Additionally, please check that
   your computer has a publicly routable IP address and that no
   firewalls are preventing the server from communicating with the
   client. If you're using the webroot plugin, you should also verify
   that you are serving files from the webroot path you provided.
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container

Running lsof shows that ports 443 and 80 are open on the host. However, if I point my browser at the local server IP address (or my external DNS name, or IP address), I don’t see NC. (Note that this did let me access NC before I attempted to use mariadb/letsencrypt.)
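For reference, the same check can be scripted; this is just a sketch (filter_listen is a made-up helper name, and it expects the output of ss -tln on stdin):

```shell
# filter_listen: given `ss -tln` output on stdin, report whether
# anything is bound to the given TCP port on the host.
filter_listen() {
  local port=$1
  awk -v p=":${port}\$" '$4 ~ p { found = 1 } END { print (found ? "listening" : "not listening") }'
}

# Typical use on the docker host (needs iproute2's ss):
#   ss -tln | filter_listen 80
#   ss -tln | filter_listen 443
```

Note that “listening on the host” only proves the docker proxy grabbed the port; it says nothing about whether the NAT rules actually reach the container.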

I am certain that port forwarding is set correctly. I haven’t changed it since starting my attempt to use letsencrypt/mariadb with NC. (and once again, it worked before)

About google domains:
This could be where [one of] my problem[s] is. The guide says that you’re supposed to make a CNAME point to an A record, which in turn points to your IP address. I made an A record called ‘@’ that points to my IP address, and then a CNAME that points to that A record. Was ‘@’ the right thing to enter there? Google doesn’t really explain what @ means. OK, it looks like I did this right according to this: https://my.bluehost.com/hosting/help/whats-an-a-record

So, if letsencrypt is misconfigured and giving up, should I still be able to access nextcloud through the local LAN IP address? (I just realized nginx probably complicates this.) Maybe a better question is: if letsencrypt fails, is the whole thing inaccessible?

I know this is a lot of text to dig through. I appreciate any help you can give. :slight_smile:

PS: I copied nextcloud.subdomain.conf.sample to nextcloud.subdomain.conf, just in case that’s a question that might be asked of me.

PPS: Please let me know if I accidentally posted sensitive information.

EDIT: I just noticed that my nextcloud “data” dir contains nothing but nextcloud.log, and that it is 60k of errors saying stuff like:

"level":3,"time":"2020-05-14T00:05:00+00:00","remoteAddr":"","user":"--","app":"cron","method":"","url":"--","message":{"Exception":"Exception","Message":"Not installed","Code":0,"Trace":[{"file":"/config/www/nextcloud/lib/base.php","line":651,"function":"checkInstalled","class":"OC","type":"::","args":[]},{"file":"/config/www/nextcloud/lib/base.php","line":1089,"function":"init","class":"OC","type":"::","args":[]},{"file":"/config/www/nextcloud/cron.php","line":42,"args":["/config/www/nextcloud/lib/base.php"],"function":"require_once"}],"File":"/config/www/nextcloud/lib/base.php","Line":282,"CustomMessage":"--"},"userAgent":"--","version":""}

I thought that might reveal something. It also suggests to me that I don’t have a single problem but multiple problems. It’s not just letsencrypt that’s misconfigured; it’s also nextcloud.

The @ record with your IP and a CNAME pointing to it is fine. You can verify with nslookup nextcloud.<yourdomain.com> 8.8.8.8; you should see your public IP. Since you’re failing to get a cert, nginx in the letsencrypt container isn’t starting. I realize you’ve stated that you’re certain port forwarding is correct, but this is most likely where your issue is.

When you test from your mobile phone on mobile data, if you’re not seeing the nginx page or nextcloud, it means the traffic isn’t making it from your router to your docker host IP (99% of the time). There’s also a chance your provider blocks port 80, which would prevent http validation from working.

I would suggest taking a look at our troubleshooting blog post here:

It sounds like nextcloud itself was working and didn’t stop until you swapped to letsencrypt. You didn’t mention whether you were testing externally (this is how you should be testing), but just because port 443 is allowed in by your ISP doesn’t mean 80 is (80 is commonly blocked), and since 80 is required for you to get your certs, it’s very critical.


I didn’t know about nslookup. Thanks for pointing that out! It does show my local IP address. That’s more convenient than using whois (which is what I did before).

About port forwarding: Like I said, I’m sure that I’ve forwarded the ports correctly, and I specifically tested that my NC instance was working outside of my home (as mentioned above, I was running NC without letsencrypt until recently). It was working for months, actually; I specifically remember accessing it from my gym’s wifi (before it closed) and while on wireless at a park. I’ve been poking holes in my openwrt routers for many years now, including when I was running a jabber server (maybe I’m not that new to self-hosting). I guess it’s possible that my service provider blocks one of those ports, and that NC was able to work with just the other one (I’m not sure which one that would be). I’ll test it out somehow; that troubleshooting guide looks like a promising path to follow… just not today. I’m supposed to be doing stuff right now. :slight_smile:

Remember, you need to test port 80 (http validation). You probably tested 443 before, which may have worked perfectly, but without 80 you can’t do http validation on letsencrypt, which means nginx in letsencrypt (the reverse-proxy portion) never starts.
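A browser isn’t required for this test. As a sketch (check_port is a made-up helper; yourdomain.com is a placeholder), bash’s built-in /dev/tcp can probe a port from an outside network such as a phone hotspot:

```shell
# check_port: report whether a TCP connection to host:port succeeds.
# Uses bash's /dev/tcp pseudo-device, so it requires bash, not plain sh.
check_port() {
  local host=$1 port=$2
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# Run these from outside your LAN (placeholder domain):
#   check_port yourdomain.com 80    # must be "open" for http validation
#   check_port yourdomain.com 443
```

If 443 is open but 80 is closed from outside, that points at the ISP or router, not at docker.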

let us know if you need more help, we’re always around (here and on discord)

Hmm, I couldn’t wait, so I fired up that test instance of nginx, and the results were surprising. I can’t connect to the instance via redacted.com, which does suggest a port forwarding problem. However, I also pointed my browser at the internal IP address on port 80, and I get the message “unable to connect”. This is strange, because in another tab I’m connected to my instance of deluge at that same IP address.

What does that mean? I checked my iptables rules, and they all seem to be generated by docker. I haven’t actually configured a firewall on that machine (I don’t know iptables, and would have to use something like Uncomplicated Firewall). Maybe I’ll play around with the ports… it’s the only thing I can think of.

I would say you may want to visit us on discord, but keep in mind there are a few parts.

If you can’t reach nginx using the testing setup internally, that sounds like either iptables is blocking you (you can show me iptables -vnL in pastebin if you want) or you didn’t expose the ports. I’m assuming you followed the nginx section of that guide to a T and port exposing isn’t the issue, so it’s likely iptables, which I can help you with. Generally speaking (I could be wrong here), I thought iptables on most distros came wide open to allow all traffic.

Since you mention deluge is working: assuming it’s our container, it runs in host mode, so it won’t need the NAT rules that letsencrypt does.

What is your host os and version?

OK, I’m on discord. I see someone who has a similar screen name, it says they are playing a game in their status. Not sure if that’s you or not.

“or you didn’t expose the ports”
By “expose the ports”, do you mean port forwarding through the router? When I tested internally and couldn’t connect, wouldn’t that rule out exposing ports as a cause? I mean, in that specific instance?

“What is your host os and version?”
Manjaro. I know that it’s unusual to use a rolling release distro as a server, but that’s what I did.

“I’m assuming you followed the nginx section of that guide to a T and port exposing isn’t the issue”

Hmm, are you talking about the nginx example on this page? I think I followed that to a T (it didn’t say that I should edit any nginx files). The main guide, however, I could have messed up. I didn’t do any editing of the nginx .conf file (nextcloud.subdomain.conf). That file doesn’t say it will listen on port 80, just port 443. That might be the problem. But does that explain why the nginx troubleshooting example doesn’t work either? Not even locally? I don’t think it had the reader edit any config other than the docker-compose file.

My iptables rules: https://pastebin.com/rDsZyc9F

Most of the LSIO team won’t be around right now; for about half of us it’s 11pm, and for the other half it’s like 6am or something.

When I say expose the ports, I mean map them in docker-compose, i.e. 443:443 and 80:80. Since you mentioned you couldn’t reach it internally on your LAN, the router wouldn’t come into play when visiting http://

The reason your host matters is that docker depends on iptables, and things like firewalld, nfw, and ufw can cause problems. I’m not personally familiar with the newer distributions, so I would need to google to see if there are known issues with manjaro, docker, and iptables.

As long as you followed the troubleshooting guide to a T, that’s fine. Since you couldn’t see the site on port 80, even if you did make a mistake on the letsencrypt side it probably wouldn’t have worked, so we’ll want to dig into this. In terms of listening on port 80: the subdomain.conf only listens on 443; in site-confs/default there is a 301 section for port 80 that sends everything to https (that’s why we’re running it, right!). The real reason you need port 80 to work is that you’re using http validation, which relies on it. Once we can prove out basic nginx locally, we can move on to troubleshooting externally, but I imagine that if we fix local, your external will work fine.

For your iptables, the only thing jumping out at me is that in your DOCKER chain, 443 and 80 are being NAT’d to a non-docker IP (192.168.48.2), unless you created your own bridge network and just happened to define the subnet as 192.168.48.x. If you look at port 8112, it’s going to 172.18.x.x, which is a typical subnet for a docker network. We’ll want to look at it more.
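For readers following along, those DNAT lines can be pulled out of the iptables output mechanically. This is a sketch (dnat_targets is a made-up helper; feed it iptables -t nat -vnL DOCKER output on stdin):

```shell
# dnat_targets: given `iptables -t nat -vnL DOCKER` output on stdin,
# print "host port -> container IP" for each DNAT rule, making it easy
# to spot a published port NAT'd to an address outside docker's subnets.
dnat_targets() {
  sed -n 's/.*dpt:\([0-9]\+\) to:\([0-9.]\+\):[0-9]\+.*/host port \1 -> container \2/p'
}

# Healthy example from this thread: 8112 going to a 172.18.x.x bridge address.
# The suspicious case: 80 and 443 going to 192.168.48.2 instead.
```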

I’m out for the evening, but you’ll see discord get active here in probably 5 or so hours.

I have a separate docker-compose file running on that same host. It’s a home media setup with radarr, sonarr, deluge, and openvpn. My best guess is that those rules have something to do with that separate docker-compose file. That other docker-compose file doesn’t use ports 443 or 80 (not explicitly, anyway). It makes a network that is used for both openvpn and deluge. Here is the most relevant section (redacted):

version: '3.4'
services:

  vpn:
    container_name: vpn
    image: dperson/openvpn-client:latest
    cap_add:
      - net_admin # required to modify network interfaces
    restart: unless-stopped
    volumes:
      - /dev/net:/dev/net:z # tun device
      - /redacted:/vpn # OpenVPN configuration
    security_opt:
      - label:disable
    ports:
      - 8112:8112 # port for deluge web UI to be reachable from local network
    command: '-f "" -r 192.168.1.0/24' # enable firewall and route local network traffic

  deluge:
    container_name: deluge
    image: linuxserver/deluge:latest
    restart: unless-stopped
    network_mode: service:vpn # run on the vpn network
    environment:
      - PUID=${PUID} # default user id, defined in .env
      - PGID=${PGID} # default group id, defined in .env
      - TZ=${TZ} # timezone, defined in .env
    volumes:
      - /redacted:/downloads # downloads folder
      - /redacted/config/deluge:/config # config files

It honestly hadn’t occurred to me before this that it might be causing all the problems. So, I shut it down with docker-compose down, then re-ran that nginx test/example. I still could not connect to it, either internally or externally. Also, when that other docker-compose stack was down, my iptables rules were simpler and shorter; there was no mention of 192.168.48.2.

With the deluge stack down, can you show me what your iptables looks like?
Can you also run docker inspect nginx | grep IPAd and show me the output?

We were pretty busy on discord today, but I didn’t see you pop in.

IPTables without any containers running: https://pastebin.com/7TiWA9Df

docker inspect nginx | grep IPAd
"SecondaryIPAddresses": null,
"IPAddress": "",
"IPAddress": "192.168.96.2",

That .2 address is my desktop (this computer I’m using right now). I have no idea why that would appear in my server.

Well, to be honest, I’m not sure how you got this to happen. You can see in iptables that nothing is NATting 80 or 443 to your container, but the fact that your nginx docker container is duplicating your desktop’s IP… pretty weird, man, but this is definitely the issue.

At this point I’m pretty much lost on what is going on. I would definitely encourage you to get on discord (people are on right now, but it’ll die out shortly) and get some more live help.

I’ve contacted you in the new-members channel on discord. I have a different nick there that the server won’t let me change. I’ve also attempted to PM you, but that doesn’t work either.

BTW, I wasn’t looking closely enough at the IP address. It is NOT the address of anything on my network. Sorry about the confusion.

For anyone who ends up following this thread: the user came into discord for live help. They ended up updating their system and things started working. There may be additional things they did, but the above config snippets were correct; this was some weird host-system behavior.

Solved thanks to @driz and others in Discord.

On other forums, I’d edit the OP to reflect that the problem has been solved, but it seems like you can’t edit old posts after some period of time, or maybe after someone else replies to them. So, I just marked driz’s last post as the solution.


This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.