I have a really bad “server” (just a laptop) that runs Fedora Server and uses Docker Compose to host Jellyfin. It has been very annoying to update (the web GUI for Fedora doesn’t even work half the time) and it’s generally a pain to manage. I’m redoing my entire setup, so I’ll be getting a NAS to store all of my media. However, I still want to host apps like Nextcloud and Jellyfin, and I’ll probably just use the NAS as storage for those apps.
Should I:
- use CasaOS, Yunohost, or a different easy-to-use server OS
- stick with Fedora server
- use a different distro
If I should use a conventional server distro (Fedora, Debian, Ubuntu), suggestions for management GUIs, easy-to-use Docker management GUIs, and ways to set up file sharing (Samba configuration seems like a pain) are greatly appreciated.
(side note: I use Docker bind mounts, and they let me update my Jellyfin content over SFTP (or whatever the SSH-based file transfer protocol is called). Is there a point in me switching to volumes? I haven’t taken my container down manually since I first started it up.)
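For context, the difference between what I have now (a bind mount) and a named volume in a compose file looks roughly like this; the paths and names below are just placeholders, not my actual config:

```
services:
  jellyfin:
    image: jellyfin/jellyfin
    volumes:
      # bind mount: host path on the left, container path on the right
      - /srv/media:/media
      # named volume: Docker stores it under /var/lib/docker/volumes/<name>/_data
      - jellyfin-config:/config

volumes:
  jellyfin-config:
```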
If you’re willing to do VMs, look into something like Proxmox and run things as VMs; it makes managing them and trying new things easy. For the OS that’s running services, keep it simple and stick with Debian, and if you need a GUI for Docker, use Portainer or Yacht. For sharing the media files from the NAS to the PC, make an NFS mount entry in your fstab file so it mounts on boot. Do be advised, however, that if for some reason your NAS is down while Jellyfin is doing a scan, it will be unable to find any media files and will start cleaning out metadata, since without the NFS mount no media technically exists.
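As a rough sketch (the NAS address and paths here are placeholders for whatever your setup uses), the fstab entry would look something like:

```
# /etc/fstab
192.168.1.50:/export/media  /mnt/media  nfs  defaults,_netdev,nofail  0  0
```

The nofail option keeps the box booting even if the NAS is unreachable, but as said above, Jellyfin will still see an empty library if it scans while the mount is missing.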
Honestly, for your use case it would be worth keeping things simple and just running Jellyfin on your NAS if possible; just make sure you get one with a decent Intel CPU so you can use Quick Sync for transcoding.
CasaOS is great for very simple (basic!) container management and easy creation of basic shares, but it doesn’t offer any tools for RAID management and is single-user oriented, so it doesn’t have any access control built into its Samba shares. These features might come in the future though.
I don’t know Yunohost, but I’d recommend OpenMediaVault. It’s Debian-based and offers tools for managing RAID, Docker, shares, users, access control lists, updates, and much more.
It’s actually amazing and in active development.
If your laptop is bad because it is slow I would try to stay away from GUI tools. Do as much as possible (ideally everything) from the command line. A server is usually supposed to be tucked away somewhere without a monitor or keyboard and mouse attached to it. Happily doing its job and only that.
There are lots of web-based GUIs. These are accessed from a separate device on the same network (or over the internet) and use very few resources on the server, since all the rendering is done in the client’s web browser.
I’m guessing you want an all-in-one server setup for NAS duties and services?
UnRAID is probably the simplest from a management point of view for storage and docker.
If you’d prefer something free, then OpenMediaVault works great. It can handle storage (Linux MD-RAID, BTRFS, ZFS, or mergerfs + SnapRAID) and compute tasks like VMs and Docker/Docker Compose, all from a web interface. The only problem I’ve encountered with OMV is clicking through configuration changes too fast and getting ‘stuck’ in a loop of applying conflicting changes. As long as you wait a second or two after hitting OK/apply, you’re good.
I use TrueNAS SCALE myself with docker and other services running in systemd-nspawn containers. I have a separate Intel NUC running Proxmox.
I use https://github.com/azukaar/Cosmos-Server on Ubuntu and really like it, seems to take care of reverse proxies and stuff for any new services you add. I’m running on the lowest-spec Hetzner auction I could find, but even so it’s a pretty beastly server with an i7 6700 or something, and 128GB of RAM. I’ve got nextcloud and a bunch of other services running and I rarely go above 10% resource utilisation.
Huh. Never heard of Cosmos before! Sounds pretty good, what’s the catch?
I guess that I haven’t read the source code to make sure there’s nothing malicious there? I’m kind of a scrub, which is why I decided to give this thing a go in the first place. I say “seems to take care of reverse proxies and stuff” because I haven’t checked at all to make sure any of that’s working, and I’ve done no pentesting either. It’s not that I can’t figure out how to manually configure Proxmox or whatever; I’m just usually too tired to put in the concerted effort, so Cosmos has allowed me to get things up and running quickly without having to learn too much more than I already knew beforehand.
Also, Cosmos does take care of basically everything by itself, but when I first set it up (many patches ago now) there was some issue with the way it assigned UIDs in containers so that the root user in some containerised apps couldn’t see the data even though it was in directories that were correctly bound to the container. I had to enlist a friend with more experience to help me troubleshoot that. So, defaults are usually fine but it’s happy to let you shoot yourself in the foot if you don’t really know what you’re doing.
I guess the latter, practical points were what I was asking about. I’ve tried Yunohost, I’ve tried Podman + Cockpit… They work fine, but there’s always a hurdle to just having a steady home server running. But if some initial wonkiness is the worst Cosmos has to offer, I’ll give it a go!
Interesting. Do you know how Cosmos handles storage and especially RAID?
I think it leaves that up to the host OS; I’m just using SMB network shares and bind-mounting directories from them directly into the containers that need access.
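Concretely that just means mounting the share on the host and binding the directory into the container; something like this in fstab (server, share, and credentials file are placeholders), with cifs-utils installed:

```
# /etc/fstab
//nas.local/media  /mnt/media  cifs  credentials=/etc/samba/cred,uid=1000,gid=1000,_netdev,nofail  0  0
```

and then a plain `- /mnt/media:/media` entry in the container’s volumes.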
There is a lot here but I think the most important thing is that docker containers should always be disposable. Don’t put any data into the container ever.
All of your data and configuration should live in volumes; mapping local disk into the container is all you really need.
By doing this, updating any given Docker container is as easy as pulling the newest tagged version of the image. If you are using Docker and not Podman, you can use tools like Watchtower to do this automatically.
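With that in place, an update is usually just (assuming Compose v2 syntax):

```
docker compose pull
docker compose up -d
```

Compose recreates the container from the new image, and your data stays put in the volumes.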
As for what distro, it depends on your goals. Do you want to learn and improve your skills? Stick with Fedora or Rocky or Debian or openSUSE. I recommend learning the command line as you go, but if you want a nice UI openSUSE has Yast which is a very robust tool.
If you want to just have a home NAS but don’t want to learn that’s a different question. In this case if you’re getting a proprietary NAS anyway you could just get one that supports docker (like synology) and kill 2 birds with 1 stone.
“Don’t put any data into the container ever. All of your data and configuration should be done in volumes.” So are bind mounts to the host filesystem bad? And am I able to access Docker volumes through SFTP/SSH?
You should use volumes over bind mounts. You just move your media into the volume location on the local host and it will show up in Docker. You should never need to SSH or SFTP into the container.
Is a Docker volume accessible like a folder if I SSH/SFTP into the host machine, not the container?
Yes. The left side of the : in the volume mapping is the path on the host, and you can see that directory on the host. The right side of the : is where that directory is mounted inside the Docker container.
All you need to do is to interact with the directory on the host.
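If you do end up with a named volume and want to find where it lives on the host, something like this will show it (the volume name is just an example):

```
docker volume inspect --format '{{ .Mountpoint }}' jellyfin-config
# typically prints /var/lib/docker/volumes/jellyfin-config/_data
```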
Acronyms, initialisms, abbreviations, and other phrases which expand to something larger that I’ve seen in this thread:
- NAS: Network-Attached Storage
- NUC: Next Unit of Computing, brand of Intel small computers
- RAID: Redundant Array of Independent Disks, for mass storage
- SBC: Single-Board Computer
- SFTP: Secure File Transfer Protocol, for encrypted file transfer over SSH
- SMB: Server Message Block, protocol for file and printer sharing; Windows-native
- SSH: Secure Shell, for remote terminal access
- ZFS: Solaris/Linux filesystem focusing on data integrity
I’m an old man when it comes to major changes. If it’s salvageable then maybe stick with what you’ve got. Have you used Lazydocker or Watchtower?
Lazydocker should give you a more reliable interface (a TUI over SSH, not a GUI).
Watchtower (aims to) update your containers for you so you don’t have to go through this pain in the first place :)
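If you go the Watchtower route, it’s just another container with access to the Docker socket; roughly this (the interval is up to you):

```
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --cleanup --interval 86400
```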
Personally, I run my Nextcloud and Jellyfin servers on NixOS with auto updates on. It’s been chugging along great!
I vote for CasaOS based on the videos I’ve seen of it. I haven’t actually done any self hosting stuff myself, yet.
Laptops are cool servers because they have a built-in battery that usually lasts at least an hour, especially with the screen off. You don’t have to worry about UPS batteries that give you less than 10 minutes and have a horrible beeping sound.
I’ve used CasaOS for a while and it works great. Super easy to update everything, and setup is simple.
I use Portainer (also a Docker container) to manage all my containers, even though sometimes I still have to use the terminal. A few Portainer alternatives also exist, but I forgot their names. You can also set up Watchtower for automatic updates if that’s something for you. I’m running Debian 11 and have never tried CasaOS or similar, but I don’t think it matters what OS you use if it’s working. I have OMV installed on the same machine for NAS features.
I personally use Dockge instead of Portainer. Portainer is great, but it’s way overkill for my setup; Dockge is much simpler.
Regarding management UIs I’m a fan of Cockpit (https://github.com/cockpit-project/cockpit https://cockpit-project.org/)
Regarding management UIs for docker I believe most use either portainer (https://github.com/portainer/portainer https://www.portainer.io/) or dockge (https://github.com/louislam/dockge https://dockge.kuma.pet/).
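For reference, getting Portainer CE running is basically one container; this roughly follows their quick-start (double-check the current docs for the exact ports), and Dockge follows a similar pattern with an extra stacks directory:

```
docker volume create portainer_data
docker run -d --name portainer --restart=always \
  -p 9443:9443 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```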
Regarding Samba, most NAS devices simplify it a lot, but it isn’t that complicated to do on Fedora either, and once you’ve got it set up it’s not gonna need a lot of tinkering. (https://docs.fedoraproject.org/en-US/quick-docs/samba/)
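For what it’s worth, a minimal Samba share on Fedora comes down to a short config block plus a few commands; a sketch (share name, path, and user are placeholders):

```
# /etc/samba/smb.conf
[media]
    path = /srv/media
    read only = no
    valid users = youruser

# then, as root:
#   smbpasswd -a youruser
#   systemctl enable --now smb
#   firewall-cmd --permanent --add-service=samba && firewall-cmd --reload
```

On Fedora you may also need to set the SELinux context on the shared directory (samba_share_t) if SELinux is enforcing.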
Whether you invest in a NAS or not, I recommend you invest in a USB disk large enough to act as a backup for the storage disks. That’s not an investment for later but one you want right away. And do make certain it takes actual backups, not just replicas of the data. A popular option is Borg Backup (https://github.com/borgbackup/borg https://www.borgbackup.org/).
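A minimal Borg workflow is roughly this (the repo path and source directory are placeholders; check the docs for the encryption and pruning options that fit you):

```
borg init --encryption=repokey /mnt/backupdisk/borg-repo
borg create --stats /mnt/backupdisk/borg-repo::{now} /srv/data
borg prune --keep-daily=7 --keep-weekly=4 /mnt/backupdisk/borg-repo
```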
If I went for a NAS, I would Borg Backup the laptop to the NAS and then use the NAS’s own backup software to back up to the USB disk.

Do you use Sonarr, Radarr, Jackett, and qBittorrent with this setup?
It would completely automate everything.
Although the mentioned tools are great, I don’t know how your comment applies to OP’s problem. They’re asking for a tool to manage their server, not for a tool to automate their torrenting.
Saltbox.
Ansible-configured Ubuntu-based server rollout, with tag-based installation already available for the apps you listed.
Great docs & support Discord; simple YAML configs for quick setup changes.
Single command updating entire server, Portainer for Docker management, rclone for NAS fusemount (cloudplow for syncing), & lots of additional software available to add to your setup.
Highly recommend.
Ever considered DietPi?!
DietPi is an extremely lightweight Debian OS, highly optimised for minimal CPU and RAM resource usage […]
I like DietPi. It’s just a few homelab scripts on top of a stripped-down Debian image, designed to reduce resource usage for homelabs while giving you some utilities for installing popular homelab software through its own “software repo” (custom scripts for installing and configuring projects so they’re a lot easier to get running than normal).
I run mine on a Raspberry Pi 4B, but you can use x86 or other SBCs if you like.
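If you want to poke at it, the catalogue is driven by the dietpi-software tool; from a shell it’s roughly this (exact software IDs vary by release, so check the list first):

```
dietpi-software list             # browse the catalogue and note the ID you want
dietpi-software install <ID>     # installs and configures it via DietPi's scripts
```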