One thing I’ve learned over the years is that the way I did something while learning is usually not the best way of doing it, and I later go back and redo everything now that I’ve gained experience and knowledge. I’ve decided to ditch Google Photos, and was lucky enough to snag a free rackmount 2017 server from work with an i7 installed and two 6 TB drives on the way. But now comes the hard part: deciding what software I’ll end up learning on and, hopefully, living with. First and foremost I want a photo backup service, and I’ve debated between Immich and Xpenology. I also know that I want to run Pi-hole, and I’d really like to self-host my own website documenting my projects, even if no one will ever look at it.
If you had to start from the beginning, which OS, which container manager, and which containers would you build on? I would love recommendations from those who walked, so that I can run.
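For concreteness, here’s the scale of what’s being asked: something like Pi-hole is only a short Docker Compose file. A minimal sketch, assuming the stock pihole/pihole image (the timezone, password, and host port are placeholders, and environment variable names change between Pi-hole releases, so check the image docs):

```yaml
# Minimal Pi-hole sketch; values are placeholders, not a tested config.
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"       # web UI, remapped off 80 to leave it free for a reverse proxy
    environment:
      TZ: America/New_York  # placeholder timezone
      WEBPASSWORD: changeme # placeholder admin password (older image releases)
    volumes:
      - ./etc-pihole:/etc/pihole
      - ./etc-dnsmasq.d:/etc/dnsmasq.d
    restart: unless-stopped
```

Immich, by contrast, ships as a multi-container Compose stack (server, database, cache), so start from the official compose file rather than writing one by hand.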
Never buy into a platform. They tend to waste a lot of my time on version upgrades that have no point except to change the whole platform, and the things you like to use get deprecated. Or they never get to where you want them to be. Your vision is not the maintainer’s vision. Learn to roll your own… everything.
Platform like what?
I already apply these rules myself, but these are the five major things I emphasize to everyone:

- Don’t overcomplicate things. You don’t need Proxmox on every machine “just in case”; sometimes a system can be single-purpose. Just using Debian is often good enough, and if you need a single VM later, you can do that on any distro. This goes for adding services, too. Docker makes it very easy to spin things up to play with, but you should also know when to put things down. Don’t get carried away, or you’ll just make more work for yourself and end up slacking off and/or giving up.
- Don’t put all your eggs in one basket if you can avoid it. For instance, something like Home Assistant should run on its own system, and if you rely heavily on your NAS, your NAS should be a discrete system. You will eventually break something and not have the time or energy to fix it immediately. Anything you truly rely on should be resilient enough that your tinkering doesn’t leave you high and dry.
- Be careful who you let in. First, anybody with access to your systems is a potential liability to your security, so you must choose your tenants carefully. Second, if others come to rely on your systems, that drastically reduces your window to tinker unless you have a dedicated testbench. Sharing your projects with others is fun and good experience, but it must be done cautiously and with properly set expectations. You don’t want to be on the receiving end of an angry phone call because you took Nextcloud down while playing around.
- Document when it’s fresh in your mind, not later. In fact, most of the time you should document it before you do it; if things don’t go according to plan, make minor adjustments, and update the docs when things change. What you think is redundant info today might save your ass tomorrow.
- Don’t rely on anything you don’t understand. If something works and you don’t know how or why it works on at least a basic level, don’t simply accept it and move on. Figure it out. Don’t just copy and paste; don’t just buy a solution. If you don’t know it, you don’t control it.
I’d get my Active Directory and local DNS situated before setting up a bunch of virtual machines.
Currently doing that. Migrating from a Raspberry Pi 4.
Here is the plan:
Use the ASRock N100DC-ITX motherboard + CPU for low power usage but enough power for a snappy web server. Add 32 GB of RAM and a 2 TB M.2 SSD, plus 4 TB + 7 TB of external USB storage.
Install Windows 11 as the host OS and use Windows Subsystem for Linux with Ubuntu for any web services, installing those using Docker inside Cosmos. This way the device can be used to play simple 2D games via Steam yet still serve Plex and web services such as BookStack and Kavita.
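As a sketch of how the Cosmos piece could look inside the WSL2 Ubuntu distro (the image name follows the upstream Cosmos docs, the config path is a placeholder; verify both against the project’s documentation):

```yaml
# Hypothetical Compose definition for Cosmos; verify image and flags upstream.
services:
  cosmos:
    image: azukaar/cosmos-server:latest
    network_mode: host                             # Cosmos manages its own ports and reverse proxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets Cosmos create and manage the other containers
      - ./cosmos-config:/config                    # placeholder config path
    restart: unless-stopped
```

One caveat with this plan: WSL2’s default NAT networking means LAN clients can’t reach the services directly, so you’d also need Windows port-forwarding rules or WSL’s mirrored networking mode.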
Started from a mini computer with 4 GB of RAM and a USB NAS. It barely supported one stream.
Now I have a rackmount server, and I’m overjoyed when I actually use 40 GB of the 256 GB of RAM.
Deployment is handled via Gitea runners, mirrored on a separate laptop running a fork of Gitea. I get a notification when it completes.
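For anyone who hasn’t seen it, Gitea runners pick up GitHub-style workflow files from the repo. A hypothetical deploy-on-push sketch (the runner label, and the assumption that a compose file sits at the repo root, are mine, not the poster’s):

```yaml
# .gitea/workflows/deploy.yaml, a hypothetical deploy-on-push workflow.
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: homelab             # placeholder runner label
    steps:
      - uses: actions/checkout@v4
      - name: Redeploy the stack # assumes a compose file at the repo root
        run: docker compose up -d --pull always --remove-orphans
```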
Do differently? … I do have the new server I just got, with 384 GB of RAM, just waiting for me to find out.
Only change I’d make is to run Debian on my server instead of Ubuntu. I’d still run everything in Docker Compose rather than something else, though I might consider something like k3s.
Getting the server ready for hosting data was a bit complicated, so I liked someone’s suggestion of putting everything in an Ansible playbook. I’d consider doing that.
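As a rough idea of what the start of that playbook could look like (the host group is a placeholder, and Debian’s own docker.io package is assumed here for simplicity rather than Docker’s upstream repo):

```yaml
# Hypothetical first tasks of a server-setup playbook.
- hosts: homelab                # placeholder inventory group
  become: true
  tasks:
    - name: Install Docker from the Debian repositories
      ansible.builtin.apt:
        name: docker.io
        state: present
        update_cache: true

    - name: Ensure the Docker daemon is enabled and running
      ansible.builtin.systemd:
        name: docker
        enabled: true
        state: started
```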
I’ve made a lot of “interesting” decisions; some things were silly, some things worked out, and some things I needed to fix.
I’ve been self-hosting for a long time, since before commodity virtualization (let alone containers). As technology improves, one must adapt, and that often involves rebuilding, migrating, and retiring software.
My only regret is the data I’ve lost from insufficient backups. Everything else is part of the fun, part of the experience.
Documentation.
Document how my drives are set up (like, really… I don’t remember how they’re configured XD. I only know I haven’t run out of space yet, so everything is going to the correct mount.)
Keep a proper list of which process uses which port.
Use containers from the start.
That’s all I can think of atm.
Do all my network configuration with Ansible, Terraform, or something similar.
And I wouldn’t document anything; instead, I’d automate everything.
I’m going to go against the grain and say use K8s from the beginning, or maybe, more broadly, GitOps. I now use a mix of NUCs and Pis with Talos Linux and deploy with Argo CD. My entire stack is run from Git repos. For me that’s really easy, and it makes running my lab and self-hosted services a lot less work.
Though the learning experience of going from VM management to Docker, to Compose, to K8s, to GitOps has been valuable.
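To give a flavour of what “the entire stack in Git” means in practice, each app typically becomes a small Argo CD Application manifest pointing at a path in a repo (the repo URL and paths below are hypothetical):

```yaml
# Hypothetical Argo CD Application: sync whatever lives in the repo's pihole/ folder.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: pihole
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/homelab/apps.git  # placeholder repo
    path: pihole
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
    namespace: pihole
  syncPolicy:
    automated:
      prune: true    # delete cluster resources that were removed from Git
      selfHeal: true # revert manual drift back to the Git state
```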
Do you have a guide you followed to get started? I’d love to try to go in that same direction.
Probably wouldn’t use Apache2.
I want to use Caddy and reverse proxy everything from there, but I can’t bring myself to do it.
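For what it’s worth, the jump is smaller than it feels. A hedged sketch of running Caddy under Compose (the domain, upstream service, and port are placeholders; the Caddyfile contents are shown as comments so everything stays in one snippet):

```yaml
# Hypothetical Compose service for Caddy as the front-door reverse proxy.
# The mounted Caddyfile only needs something like:
#   photos.example.com {
#       reverse_proxy immich-server:2283
#   }
# and Caddy obtains and renews TLS certificates automatically.
services:
  caddy:
    image: caddy:latest
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      - caddy_data:/data   # persists certificates across container restarts
volumes:
  caddy_data:
```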
I did start over recently - a Dell R730 salvaged from work, 64 cores, 256 GB RAM.
Took the guts and moved them to a Machinist X99 motherboard in a Rosewill server case so I could put in silent fans and have room for 15 drives.
The Proxmox hypervisor boots from 2x 2 TB NVMe drives.
Bought 6x 16 TB drives and an HBA to run them, then did HBA passthrough to a virtual machine running TrueNAS, which mounts everything as a Samba share.
Multiple other Ubuntu VMs run Docker Compose. One VM runs utilities and the *arrs; a second runs Plex so that my media playback isn’t affected by the utility load. These VMs mount the TrueNAS Samba shares for file processing, so they boot and run on NVMe while the media sits on spinning iron.
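If you want those share mounts reproducible, one option is a single Ansible task per VM (the hostname, share, and credentials file below are made up, and cifs-utils needs to be installed in the guest):

```yaml
# Hypothetical task: mount the TrueNAS SMB share inside an Ubuntu VM.
- name: Mount the TrueNAS media share
  ansible.posix.mount:
    src: //truenas.lan/media   # placeholder NAS hostname and share
    path: /mnt/media
    fstype: cifs
    opts: credentials=/etc/smb-credentials,uid=1000,gid=1000
    state: mounted             # mounts now and persists via fstab
```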
Also have an LXC container running AdGuard Home for DNS-based adblock/malware protection for the entire house, and a Raspberry Pi as a second DNS server.
Several Windows VMs are used for work (I connect to customer environments, so I spin up a separate VM for each customer to keep their environments isolated).
Not much - it’s been a pretty organic learning journey.
Very much a crawl > walk > run thing. Can’t necessarily jump straight to the end.
I’m happy with OpenMediaVault for my purposes, but I probably would’ve taken the time to research and use ZFS instead of ext4.
I started with OMV and a NUC connected to a two-drive TerraMaster DAS. This was a mistake. I ended up building a box with room for 12 drives, running Debian with SnapRAID and MergerFS, and using Duplicati to keep all my YAML and config folders backed up.