Linuxserver/diskover help - can't access web interface

(The nginx label is wrong, but I was forced to pick a label and there isn't one for diskover — very few are available, actually.)

I’ve never used Diskover. It looks cool. I’d like to check it out. I’m TRYING to check it out.

I used the posted docker-compose file from Docker Hub on my production home server. I changed port 80 to something else since it's already in use, and edited all the volume paths to point to my local persistent storage. End result: HTTP ERROR 500 when trying to access the mapped "80" port.

So, I set up a brand-new Ubuntu 20.04 Server LTS VM to test whether it was just me or not, as this has been driving me crazy. I installed Portainer via Docker, then installed linuxserver/diskover using the default docker-compose file found on Docker Hub, with all default ports and only the volume paths changed to local persistent storage. I figured this is as generic/virgin as it gets and it would work. Nope. Same problem(s).

RESULTS:
IP:80 - "502 Bad Gateway" or "HTTP ERROR 500"
IP:9181 - RQ dashboard displays
IP:9999 - "ERR_INVALID_HTTP_RESPONSE"
(I'm guessing 9999 is not supposed to be interactive, as I see "Invalid JSON from …" in the logs.)

Meanwhile the elasticsearch container keeps stopping. I see the logs saying:
[2020-08-28T00:59:42,195][INFO ][o.e.b.BootstrapChecks ] [0ZdlJdz] bound or publishing to a non-loopback address, enforcing bootstrap checks
ERROR: [1] bootstrap checks failed
[1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Yes, I've seen the note on the page: "ElasticSearch also requires a sysctl setting on the host machine to run properly. Running sysctl -w vm.max_map_count=262144 will solve this issue. To make this setting persistent through reboots, set this value in /etc/sysctl.conf." But I don't understand why this isn't configured in the container by default. Normally it'd be no big deal and I'd just do as told, but the container keeps stopping before I can get a bash prompt on it.
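For reference, vm.max_map_count is a host kernel setting, not something the container can change — which is why the image can't ship it pre-configured. A minimal host-side check (assumes a Linux host with the standard /proc path; run it on the host, not inside the container):

```python
# Check whether the host kernel setting meets Elasticsearch's bootstrap check.
REQUIRED = 262144  # minimum value from the Elasticsearch error message

with open("/proc/sys/vm/max_map_count") as f:
    current = int(f.read())

status = "OK" if current >= REQUIRED else f"too low, need at least {REQUIRED}"
print(f"vm.max_map_count = {current} ({status})")

# To raise it, on the host as root (not in the container):
#   sysctl -w vm.max_map_count=262144
# and add "vm.max_map_count=262144" to /etc/sysctl.conf to persist across reboots.
```

After raising it on the host, the elasticsearch container should pass the bootstrap check on its next restart.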

I've chmod 777'd all the data directories.
I've changed the PUID and PGID variables to 0 for all the containers…
…and it made no difference.
I've searched on Discourse here.
I've searched the documentation.

I’m stuck!
I want pretty pictures and graphs in my web browser. :slight_smile:
What am I doing wrong?

You don’t have to choose a label when posting.

Always post how you deployed the container, or else we can't see what you have done and it's all guessing.

You should consider not using Portainer, as there is no love for it around here. It usually causes issues, so it's better to use docker compose.

If Elasticsearch and Redis are not running, diskover will not work, if I remember correctly. Follow the readme and do what it says and it should work.
We can't set sysctl values from the container, so you have to do it yourself on the host.

Deployed using docker-compose (as stated):

version: '2'
services:
  diskover:
    image: linuxserver/diskover
    container_name: diskover
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/Los_Angeles
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - ES_HOST=elasticsearch
      - ES_PORT=9200
      - ES_USER=elastic
      - ES_PASS=changeme
      - RUN_ON_START=true
      - USE_CRON=true
    volumes:
      - /data/docker/diskover/config:/config
      - /data/docker/diskover/data:/data
    ports:
      - 80:80
      - 9181:9181
      - 9999:9999
    mem_limit: 4096m
    restart: unless-stopped
    depends_on:
      - elasticsearch
      - redis
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    volumes:
      - /data/docker/diskover/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
  redis:
    container_name: redis
    image: redis:alpine
    volumes:
      - /data/docker/diskover/redis/data:/data

and the 2nd docker-compose file, as stated:

version: '2'
services:
  diskover:
    image: linuxserver/diskover
    container_name: diskover
    environment:
      - PUID=0
      - PGID=0
      - TZ=America/Los_Angeles
      - REDIS_HOST=redis
      - REDIS_PORT=6379
      - ES_HOST=elasticsearch
      - ES_PORT=9200
      - ES_USER=elastic
      - ES_PASS=changeme
      - RUN_ON_START=true
      - USE_CRON=true
    volumes:
      - /data/docker/diskover/config:/config
      - /data/docker/diskover/data:/data
    ports:
      - 80:80
      - 9181:9181
      - 9999:9999
    mem_limit: 4096m
    restart: unless-stopped
    depends_on:
      - elasticsearch
      - redis
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    volumes:
      - /data/docker/diskover/elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
      - PUID=0
      - PGID=0
    ulimits:
      memlock:
        soft: -1
        hard: -1
  redis:
    container_name: redis
    image: redis:alpine
    volumes:
      - /data/docker/diskover/redis/data:/data
    environment:
      - PUID=0
      - PGID=0

I have almost the exact same problem, except for the elasticsearch part: all 3 containers run for me, but I still can't get to the web interface.

I get a server error 500. Same if I actually enter the diskover container with "docker exec -ti diskover bash" and try "wget http://localhost:80".
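One way to narrow this kind of thing down is to check which TCP ports are actually answering at all. A minimal sketch; the hosts and ports below are assumptions based on the compose files in this thread — from the host you only see published ports, so to test the container-to-container names (redis, elasticsearch) you'd run it inside the diskover container instead (e.g. via docker exec, if the image ships python3) and swap in those hostnames:

```python
import socket

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

if __name__ == "__main__":
    # Published ports assumed from the compose files above; adjust to yours.
    for name, port in [("diskover-web", 80), ("rq-dashboard", 9181),
                       ("elasticsearch", 9200)]:
        state = "open" if port_open("127.0.0.1", port) else "CLOSED"
        print(f"{name:14} :{port} -> {state}")
```

A 500 with the port open, as here, points past connectivity and at the PHP app itself — which is where the nginx error log below comes in.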

Here is my compose file in case it matters. As you can see, I tried to access it via traefik as a reverse proxy, as well as by mapping port 80 to 3080 and accessing it directly, but both ways give server error 500.

version: '2'

services:

  diskover:
    image: linuxserver/diskover:latest
    container_name: diskover
    hostname: diskover
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Europe/Berlin
      - REDIS_HOST=diskover_redis
      - REDIS_PORT=6379
      - ES_HOST=diskover_elasticsearch
      - ES_PORT=9200
      - ES_USER=elastic
      - ES_PASS=secretpassword
      - RUN_ON_START=true
      - USE_CRON=true
    volumes:
      - ./config:/config
      - ./data:/data
      - /sixer/tmp:/data/sixer/tmp
    ports:
      - 3080:80
      - 9181:9181
      - 9999:9999
    networks:
      - traefik
      - diskover
    mem_limit: 4096m
    restart: "no"
    depends_on:
      - diskover_elasticsearch
      - diskover_redis
    labels:
      - "traefik.enable=true"
      - "traefik.docker.network=traefik"
      - "traefik.http.routers.diskover.tls=true"
      - "traefik.http.routers.diskover.entrypoints=https"
      - "traefik.http.routers.diskover.middlewares=secHeaders@file"
      - "traefik.http.routers.diskover.rule=Host(`diskover.my.tld`)"
      - "traefik.http.routers.diskover.service=diskover"
      - "traefik.http.services.diskover.loadbalancer.server.port=80"

  diskover_elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.9
    container_name: diskover_elasticsearch
    hostname: diskover_elasticsearch
    restart: "no"
    networks:
      - diskover
    volumes:
      - ./elasticsearch/data:/usr/share/elasticsearch/data
    environment:
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
    ulimits:
      memlock:
        soft: -1
        hard: -1
        
  diskover_redis:
    image: redis:alpine
    container_name: diskover_redis
    hostname: diskover_redis
    restart: "no"
    networks:
      - diskover
    volumes:
      - ./redis:/data


networks: 
    traefik: 
        external: 
            name: traefik 
    diskover: 
        external:
            name: diskover

I went through all logs I can find and the only thing I found is this but I am unsure what I can do about it. Any hints?

cat config/log/nginx/error.log

2020/09/17 10:21:23 [error] 381#381: *4 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Elasticsearch\Common\Exceptions\BadRequest400Exception in /app/diskover-web/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php:630
Stack trace:
#0 /app/diskover-web/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(293): Elasticsearch\Connections\Connection->process4xxError(Array, Array, Array)
#1 /app/diskover-web/vendor/react/promise/src/FulfilledPromise.php(28): Elasticsearch\Connections\Connection->Elasticsearch\Connections\{closure}(Array)
#2 /app/diskover-web/vendor/guzzlehttp/ringphp/src/Future/CompletedFutureValue.php(55): React\Promise\FulfilledPromise->then(Object(Closure), NULL, NULL)
#3 /app/diskover-web/vendor/guzzlehttp/ringphp/src/Core.php(341): GuzzleHttp\Ring\Future\CompletedFutureValue->then(Object(Closure), NULL, NULL)
#4 /app/diskover-web/vendor/elasticsearch/elasticsearch/src/Elasticsearch/Connections/Connection.php(314): GuzzleHttp\Ring\Core::proxy(Object(GuzzleHttp\Ring\Future\" while reading response header from upstream, client: 192.168.178.141, server: _, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "192.168.178.140:3080"
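A BadRequest400Exception means Elasticsearch answered but rejected the query, so connectivity is fine; one thing worth checking is what is actually answering on that port (a different ES version than the pinned 5.6.9, or another service squatting on :9200, would also explain a 400). A small sketch to confirm, assuming the default :9200 mapping and no auth in the way:

```python
import json
import urllib.request

def es_info(url: str):
    """Fetch the Elasticsearch root document, which includes the version.

    Returns the parsed JSON dict, or None if the node is unreachable."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return json.load(resp)
    except OSError:  # includes urllib.error.URLError
        return None

if __name__ == "__main__":
    # :9200 is the default Elasticsearch port from the compose files above.
    info = es_info("http://localhost:9200")
    if info is None:
        print("Elasticsearch is not answering on :9200")
    else:
        print("ES version:", info.get("version", {}).get("number"))
```

If x-pack security is active on the 5.6.9 image, the request may come back 401 instead; in that case add the elastic user's credentials to the URL or headers.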

Did you get this working? I’m having something completely different happening to me. The crawlers don’t seem to be starting or sending any data to be parsed.

I did not.

Disappointed by the lack of community support.