Ombi currently needs read and write access to the index.html file (ClientApp/dist/index.html); this works around an unsupported scenario with Angular. Ombi will read and rewrite that file every time it starts up.
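If startup fails at this step, a quick sanity check is whether the account running Ombi can actually read and write that file. A minimal sketch (the real path is the ClientApp/dist/index.html mentioned above; a temp file stands in for it here so the snippet runs anywhere):

```shell
# Sketch: verify read/write access the way Ombi needs it.
# Point INDEX at your actual ClientApp/dist/index.html to test for real;
# the mktemp stand-in just keeps this demo self-contained.
INDEX=$(mktemp)
if [ -r "$INDEX" ] && [ -w "$INDEX" ]; then
  echo "index.html is readable and writable"
else
  echo "index.html is NOT accessible" >&2
fi
rm -f "$INDEX"
```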
uh… I'm a bit confused as to why you’re asking along this particular line, given this is the linuxserver.io forum “container support” topic: we’re dealing with the linuxserver.io Ombi docker container. The /opt/ombi folder structure lives on a docker volume created and maintained by the image… it’s not a share. The only things shared from the host into Ombi are clearly specified in the docker-compose.yml details provided earlier: the mounts at /config and /le-ssl. As such, the host hardware/OS has nothing to do with /opt/ombi/* whatsoever.
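For reference, the relevant shape of the compose service is roughly this (a sketch only; the host-side paths and PUID/PGID values below are placeholders, not my actual compose file — the point is that only /config and /le-ssl cross the host boundary):

```yaml
# Sketch of the relevant compose fragment; host paths are placeholders.
services:
  ombi:
    image: lscr.io/linuxserver/ombi
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /share/docker/mediasvcs/ombi/config:/config   # host mount
      - /share/docker/mediasvcs/le-ssl:/le-ssl        # host mount
    # /opt/ombi lives inside the image/volume layers, not on the host
```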
That said, I’m running on an 8-core Intel i7 QNAP TVS-1282T NAS with 86 TB RAID-6.
The reason I’m asking is so I can get an understanding of your setup and try to replicate the issue my end. We have users running our containers on weird and wonderful setups so having some sort of basic info on what host you’re running goes a long way.
So there’s a difference in our deployment of the container, then.
Both our docker-compose YAMLs have PUID=1000… but the deployed docker volume’s directory permissions on /opt/ombi/* are showing UID 1001, not the “abc” user’s UID 1000.
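A quick way to confirm a mismatch like this from the host side is to compare file ownership against the PUID you passed in. A minimal Python sketch (the demo checks a temp file we just created, so it runs anywhere; on the NAS you’d point it at the docker volume’s /opt/ombi files and compare against 1000):

```python
import os
import tempfile

def owned_by(path: str, expected_uid: int) -> bool:
    """True if the file at `path` is owned by `expected_uid` (no symlink follow)."""
    return os.stat(path, follow_symlinks=False).st_uid == expected_uid

# Demo: a file we just created is owned by our own UID.
# On the NAS, swap f.name for a file under the volume's /opt/ombi
# and os.getuid() for the PUID you set (1000).
with tempfile.NamedTemporaryFile() as f:
    assert owned_by(f.name, os.getuid())
    print("ownership matches expected UID")
```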
I’ll try a full drop of my containers, delete the ombi images, and recreate again…
@j0nnymoe, thanks for your help and patience so far…
Well, I completely uninstalled QNAP Container Station, re-created my docker user (UID 1000) and verified no quotas, manually verified all docker/container-station package and data directories were removed, then rebooted and re-installed Container Station.
The three container packages that use lchown all failed to deploy upon docker-compose pull with the notorious “disk quota exceeded” error. Every other container works fine.
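For context on what’s actually failing: during image extraction, docker’s tar handler calls lchown() on each layer entry to restore ownership, and a filesystem quota problem surfaces as EDQUOT (“disk quota exceeded”) from that call. A minimal Python illustration of the syscall itself — it succeeds here because chown-to-self is always permitted; on the broken NAS the same call was the one returning the quota error:

```python
import os
import tempfile

# Sketch: the syscall docker's layer extraction uses. lchown() changes
# ownership of the path itself (never following symlinks), which is why
# the pull errors name lchown rather than chown.
with tempfile.NamedTemporaryFile() as f:
    os.lchown(f.name, os.getuid(), os.getgid())  # no-op chown to self
    print("lchown succeeded")
```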
Evidently something about how QNAP sets up docker has changed and broken since I last pulled these containers; earlier versions had been working for YEARS prior.
May have to go hunting for the offending chown in the radarr, ombi, etc. packages and see if that’s something that changed recently (probably not?) or, more likely, if it’s yet-another QNAP feature [read: bug] in the v4.x firmware or the Container Station package’s docker config.
Yet, as @j0nnymoe has tested, this is working on QNAP firmware v5.x… just as it HAD been working for me on v4.x for a very long time.
QNAP firmware v 220.127.116.111 (2022-01-28)
Container Station v.18.104.22.168 (2022-02-13)
# docker version
Client:
 API version:   1.41
 Go version:    go1.13.15
 Git commit:    50b64c4
 Built:         Tue Oct 26 07:03:45 2021
Server:
 API version:   1.41 (minimum version 1.12)
 Go version:    go1.13.15
 Git commit:    f180ce8
 Built:         Tue Oct 26 07:05:57 2021
[/share/docker/mediasvcs] # docker-compose pull
Pulling authelia ... done
Pulling swag ... done
Pulling sabnzbd ... done
Pulling grafana ... done
Pulling radarr ... extracting (100.0%)
Pulling sonarr ... done
Pulling lidarr ... extracting (100.0%)
Pulling lazylibrarian ... done
Pulling ombi ... extracting (100.0%)
Pulling heimdall ... done
ERROR: for radarr failed to register layer: Error processing tar file(exit status 1): lchown /app/radarr/bin/System.Reflection.Metadata.dll: disk quota exceeded
ERROR: for ombi failed to register layer: Error processing tar file(exit status 1): lchown /opt/ombi/Ombi: disk quota exceeded
ERROR: for lidarr failed to register layer: Error processing tar file(exit status 1): lchown /app/lidarr/bin/System.Private.Xml.dll: disk quota exceeded
ERROR: failed to register layer: Error processing tar file(exit status 1): lchown /app/lidarr/bin/System.Private.Xml.dll: disk quota exceeded
@j0nnymoe, thank you very much for the time you’ve spent on this with the testing, explanations, and for the container station guide. I really appreciate it.
Everything I’ve done jibes with the process you’ve documented in the guide, though the names, paths, and UID/GID are different in my env. ALMOST tempted to upgrade to QNAP firmware v5, but given the challenges many have had with that, I’m not yet willing to swap one potential fix for a host of other risks, up to and including possibly having to significantly extend a maintenance outage to restore 86 TB of data from backups.
I should note that all of this was working just fine when I first set it up and for numerous pull updates until recently. This leads me to two courses of investigation…
Investigation with QNAP. I’ll be opening a support ticket (for all the good that usually does), and have also posted a “me-too” reply with details on this qnap forum post. That qnap post isn’t mine. I’m not the first to have this precise issue, yet no solution is provided.
Have the radarr, lidarr, and ombi containers always performed a chown as a non-privileged user, or is this a relatively new change to the containers’ code? Only the containers that do this are failing. May go hunting through GitHub if I can find the time and the QNAP support route fails to be a fruitful endeavor.
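If/when I go hunting, the search itself is straightforward; a sketch, assuming the linuxserver image repos have been cloned locally (the demo builds a stand-in file so the command shape is clear — the path and script contents below are made up, not the images’ actual scripts):

```shell
# Sketch: locate ownership changes in the images' init scripts.
# Real usage would be e.g.:  grep -rn 'chown' docker-ombi/ docker-radarr/
# Demo against a stand-in tree:
mkdir -p demo/root/etc/s6-overlay
printf 'chown -R abc:abc /opt/ombi\n' > demo/root/etc/s6-overlay/run
grep -rn 'chown' demo
rm -rf demo
```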
Quotas were evidently on and in an inconsistent state, even though:
- Control-Panel > Privileges > Quota showed “Enable quotas for all users” as UNCHECKED.
- User profile details (Control-Panel > Privileges > Users: click the “edit account profile” glyph for each user row) displayed the quota section with 3 selectable options and the “no limit” option selected, instead of simply “Quota: disabled”.
Whatever state this left the actual filesystems in with regard to quotas was sufficient to cause problems when the docker engine attempted to execute a chown on files in a docker volume overlay. The issue, however, was not sufficient to cause problems with pulling images and writing them to disk in the first place.
Why the partially and inconsistently configured quotas caused problems with chown by a named non-root user within the context of a docker volume overlay, but not via a command shell on the NAS directly, remains a mystery.
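For completeness, the direct-shell control test amounts to something like this, run as the same non-root user. Chowning a file you own to your own uid:gid is a permitted no-op on Linux, so the command only fails if the filesystem itself objects (e.g. with “disk quota exceeded”):

```shell
# Sketch: chown a file we own, to ourselves, from the NAS shell.
# Succeeds on a healthy filesystem; EDQUOT here would implicate quotas.
f=$(mktemp)
chown "$(id -u):$(id -g)" "$f" && echo "direct chown ok"
rm -f "$f"
```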
Fixed it by:
1. Enabled quotas for all users, applying the following setting:
   Control-Panel > Privileges > Quota: [CHECKED] Enable quotas for all users, Quota size on disk: 2048GB
2. Waited, refreshing the control panel until quotas showed up on that screen.
3. Individually disabled quotas for each user profile, one at a time:
   Control-Panel > Privileges > Users: click the “edit account profile” glyph for each user row, set to no limit.
4. Turned quotas off for all users:
   Control-Panel > Privileges > Quota: [UNCHECKED] Enable quotas for all users
5. Checked individual user profile quota details to verify “Quota: disabled” instead of “Quota: (selected) no limit”.