Lol - not quite. It sounds like a lot, but all of this runs on a couple of HP DL360s, a handful of Raspberry Pis, a nettop box, and a couple of consumer NASes.
I often look at lists of things I can host and think to myself, "do I need this?" Huge lists of services like this one bring that curiosity right back. Do folks actually interact with all these services regularly? Honest question, no shade intended.
> Do folks actually interact with all these services regularly?
In my case, yep. I believe in as much separation between services as possible, so each service essentially resides on its own docker host, whether physical or Linux container.
That said, some of my services are stacks of multiple containers. For example, my DNS service is a pair of Pi-hole DNS servers, each running its own Pi-hole container, but each also running containers for a Cloudflare tunnel and telemetry export to Prometheus.
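In compose form, each of those Pi-hole stacks looks roughly like this (image tags, ports, and tokens are illustrative, not my exact config):

```yaml
services:
  pihole:
    image: pihole/pihole:latest
    ports:
      - "53:53/tcp"
      - "53:53/udp"
      - "8080:80/tcp"     # web admin UI
    environment:
      TZ: "Pacific/Auckland"

  # Cloudflare tunnel for remote access (token comes from the CF dashboard)
  cloudflared:
    image: cloudflare/cloudflared:latest
    command: tunnel run
    environment:
      TUNNEL_TOKEN: "${TUNNEL_TOKEN}"

  # Exposes Pi-hole stats on :9617 for Prometheus to scrape
  exporter:
    image: ekofr/pihole-exporter:latest
    environment:
      PIHOLE_HOSTNAME: pihole
      PIHOLE_API_TOKEN: "${PIHOLE_API_TOKEN}"
    ports:
      - "9617:9617"
```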
Immich has a stack of 6 containers, Piped a stack of 5. So, out of the 66 containers (that aren't Portainer agents or Watchtower), it probably condenses down to around half that number (e.g. the 25 docker hosts I have, plus a handful of others).
My starting point (with this incarnation of my homelab) was my Asrock ION330 nettop box. Then I discovered Raspberry Pis. Then I decided I needed a couple of HP DL360s. RIP my power bill.
How do people get to so many Docker containers before moving to Kubernetes? I only have 76 containers across 68 pods and that's far too much for me to manage in Docker.
Honestly, anything not mission critical (network/internet and home automation, mainly) gets auto-updated by Watchtower. I have Watchtower set to pull the latest images of everything on a weekly basis, with specific containers set to monitor-only. Every Saturday morning, I check the Slack channel for notifications of containers that need a controlled update.
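A Watchtower setup along those lines is mostly environment variables; a sketch (the schedule, hook URL, and example container are illustrative):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      # 6-field cron: 6am every Saturday
      WATCHTOWER_SCHEDULE: "0 0 6 * * 6"
      WATCHTOWER_NOTIFICATIONS: slack
      WATCHTOWER_NOTIFICATION_SLACK_HOOK_URL: "${SLACK_HOOK_URL}"

  # Mission-critical containers get the monitor-only label, so Watchtower
  # sends a notification but never auto-updates them
  homeassistant:
    image: ghcr.io/home-assistant/home-assistant:stable
    labels:
      - "com.centurylinklabs.watchtower.monitor-only=true"
```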
Not really doing much Docker, but a lot of LXC - everything scripted with Ansible. I define basic container metadata in a YAML file parsed by a custom inventory plugin, and that's sufficient for deploying a container before provisioning inside it.
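The YAML in question is just per-container metadata; a hypothetical schema (the field names here are mine for illustration - a real custom plugin would define its own):

```yaml
# lxc_inventory.yml - consumed by a custom Ansible inventory plugin, which
# turns each entry into an inventory host plus the vars needed to create it
containers:
  - name: dns1
    template: debian-12-standard
    node: pve1
    cores: 1
    memory_mb: 512
    groups: [dns, infra]

  - name: gitea
    template: alpine-3.19-default
    node: pve2
    cores: 2
    memory_mb: 2048
    groups: [git, web]
```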
Squoosh (Image converter, runs fully in browser but I like hosting it anyway)
Paperless-ng (Document management)
CryptPad (Secure E2EE office collaboration)
Immich (Google Photos replacement)
Audiobookplayer (Audiobook player)
Calibre (Ebook management)
NextCloud (Don't honestly use this one much these days)
VaultWarden (Password/2FA/PassKey management)
Memos (Like Google Keep)
typehere (A simple scratchpad that stores in browser memory)
librechat (Kind of like chatgpt except self-hosted and able to use your own models/api keys)
Stable Diffusion (AI image generator)
JellyFin (Video streaming)
Matrix (E2EE Secure Chat provider)
IRC (oldschool chat service)
FireFlyIII (finance management)
ActualBudget (another finance thing)
TimeTagger (Time tracking/invoicing)
Firefox Sync (Use my own server to handle syncing between browsers)
LibreSpeed (A few instances, for speed testing my connection to the servers)
Probably others I can't think of right now
Most of these I use at least regularly, quite a few I use constantly.
I can't imagine living without Searxng, VaultWarden, Immich, JellyFin, and CryptPad.
I also wouldn't want to go back to using the free ad-supported services out there for things like memos, kutt, and lenpaste.
Also librechat I think is underappreciated. Even just using it for GPT with an api key is infinitely better for your privacy than using the free chatgpt service that collects/owns all your data.
But it's also great for using gpt4 to generate an image prompt, sending it through a prompt refiner, and then sending it to Stable Diffusion to generate an image, all via a single self-hosted interface.
You've got like a whole DC's worth of stuff. I've downscaled the hardware in my server a lot, but it's still just a single Threadripper 2970WX with 128 GB RAM, 50 TB of ZFS storage, and 50 TB of cloud-based object storage in a midtower case. I have like 20 containers running; one is a Caddy webserver which acts as a reverse proxy for all the others.
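The Caddy side of that is pleasantly small - one site block per service, and Caddy handles the TLS certificates automatically. A sketch with illustrative hostnames and ports:

```
jellyfin.example.com {
    reverse_proxy jellyfin:8096
}

immich.example.com {
    reverse_proxy immich-server:2283
}
```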
I love to do things to excess as much as the next geek, but I could never find a reason to run as much as you have.
I hear ya, the answer to "why?" is usually "because I can" 😂
About 8 months ago I had 20x HDDs and 8x NVME drives in my server, totaling 187 TB across three ZFS pools. I could write to the largest pool (2 RAIDZ1 striped vdevs, 6 drives wide) at 250 MB/sec and read from it at over a GB/sec and that was from spinning rust with NVME "special devices".
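For reference, that largest pool's layout would be created roughly like this (device names illustrative):

```shell
# 12 spinning disks as two striped RAIDZ1 vdevs (6 wide each), with an
# NVMe mirror as the "special" vdev holding metadata and small blocks
zpool create tank \
  raidz1 sda sdb sdc sdd sde sdf \
  raidz1 sdg sdh sdi sdj sdk sdl \
  special mirror nvme0n1 nvme1n1
```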
What was I doing with all of this? Pirating movies and TV shows and running a media server for my friends and family.
I believe they changed some of their licensing in the fallout from their IPO. Just worth noting for the selfhosting crowd. I know Terraform is being forked entirely, but I'm unfamiliar with the specifics beyond that.
My day job is a lot of kube/openshift, so Nomad is refreshing. The template blocks are amazing and mean that much of what Helm gave me isn't required. Parameterized jobs are the best once you find a good use case for them!
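A sketch of both features together - the image, service names, and meta keys are illustrative:

```hcl
job "thumbnailer" {
  type = "batch"

  # Dispatch on demand: nomad job dispatch -meta input_url=... thumbnailer
  parameterized {
    payload       = "forbidden"
    meta_required = ["input_url"]
  }

  group "work" {
    task "render" {
      driver = "docker"
      config {
        image = "example/thumbnailer:latest"
      }

      # template blocks cover much of what Helm templating provides:
      # render env files/configs from dispatch meta and service discovery
      template {
        destination = "local/app.env"
        env         = true
        data        = <<EOT
INPUT_URL={{ env "NOMAD_META_input_url" }}
DB_ADDR={{ range service "postgres" }}{{ .Address }}:{{ .Port }}{{ end }}
EOT
      }
    }
  }
}
```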
What I'm more interested in is: what do you self-host that needs so many Docker containers?
Well, lots of services are stacks of containers - Immich has 6 containers and Piped has 5, for example - so it's easy for the container count to get up there.
Other "services" are groups of containers/hosts to provide a complete capability - Home Assistant; esphome; Node-RED, for example. Then there's just the stuff that, due to my desire for loose coupling, are spread across multiple docker hosts/containers - 5 x Sonarr/Radarr instances, for example.
Good question. According to my UPS, I'm pulling about 173W for everything except my pair of HP DL360s. Those each have a couple of 480W PSUs in them, but they're nowhere near running at full tilt, so I can't be sure. I really should get some power measurement going...
A single SFF desktop setup in a Node306. 2700x, 32 GB RAM, Arc A380, some WD reds.
Homeassistant & associated packages for esphome and Zwave stuff
Jellyfin
*arr suite + transmission
yacht
uptimekuma
paperless
immich
authelia with OIDC SSO for containers where possible
traefik for reverse proxy
Nextcloud
valheim server
boinc in the winter
syncthing for phone sync
more services for keeping up the others
Soon a pihole to come.
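The authelia + traefik pairing above is mostly just container labels; a sketch (hostnames and the ForwardAuth URL are illustrative, following Authelia's Traefik integration pattern):

```yaml
services:
  whoami:
    image: traefik/whoami
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.whoami.rule=Host(`whoami.example.com`)"
      # Route requests through Authelia before they reach the service
      - "traefik.http.routers.whoami.middlewares=authelia@docker"

  authelia:
    image: authelia/authelia
    labels:
      # ForwardAuth middleware: Traefik asks Authelia to verify each request
      - "traefik.http.middlewares.authelia.forwardauth.address=http://authelia:9091/api/verify?rd=https://auth.example.com"
      - "traefik.http.middlewares.authelia.forwardauth.trustForwardHeader=true"
      - "traefik.http.middlewares.authelia.forwardauth.authResponseHeaders=Remote-User,Remote-Groups,Remote-Email"
```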
I want to expand my smart home setup. My project this spring is integrating my smart gas and electric meters into Home Assistant. We are completely stripping the house, so I am wiring up everything with KNX, with a few Z-Wave devices where needed. Greatly expanding the smart(ish) home.
I also have to set up a proper network. Right now I am using my Proximus Internet Box from the ISP which admittedly is pretty customizable.
No, I (respectfully) disagree... When I had a tower PC under my desk, I upped Boinc to use ~50% idle CPU (from memory... might've been more) and that would just keep the chill off my office so that I didn't need to heat it (unless it was really cold).
In the Summer I would drop Boinc down to ~25% as it was getting too hot in there.
Well, consider that going from a 40W idle system to 80-100W is a >100% increase in power draw.
In Belgium we pay €0.30 per kWh, so running the entire year at an 80W average is roughly a €150 difference versus idling the entire year. That's real money - a third the cost of a lawnmower, or a month of groceries.
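Spelled out (assuming a flat €0.30/kWh tariff), the arithmetic lands right around that range:

```python
# Annual electricity cost of a constant load at the Belgian tariff above
HOURS_PER_YEAR = 365 * 24   # 8760
RATE = 0.30                 # EUR per kWh

def annual_cost_eur(watts):
    return watts / 1000 * HOURS_PER_YEAR * RATE

idle = annual_cost_eur(40)
low, high = annual_cost_eur(80), annual_cost_eur(100)
# Idle cost, then the extra cost over idle at 80 W and at 100 W averages
print(round(idle), round(low - idle), round(high - idle))  # prints: 105 105 158
```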
But in the winter it is a 80-100W small heater that can keep a local area a degree or so warmer.
When you start paying your own power bill it really adds up. I wish I had gone for an intel NUC sometimes.
Currently 3 physical boxes, down from 4 and aiming for 2. It pretty well comes down to a hypervisor and a NAS, plus the regular aux gear like a switch and modem. They're big boxes though, with about 35 TB storage, 0.5 TB RAM, and 72 cores between them - so lots of room to make imaginary computers in.
Right now my goal is reducing the power footprint. Kill-a-watt places the whole set at 650 watts today and I should knock about 150 off when I get the other box virtualized.
Nice - have you got anything set up to monitor power consumption? I've got a few of those "smart" plugs running on Tuya (localised through Home Assistant), but I'm not 100% convinced of their accuracy just yet...
Just the Kill-A-Watt plug that the main power block is attached to. The servers (R730XD & R820) have stats visible via iDRAC to break out for those, but nothing that shows a dashboard or such.
I have a very modest 7 docker containers on a vm on my gaming rig and I have a raspberry pi for my DNS server. Honestly my setup is quite scuffed (in comparison to yours), but it does what I need it to do
DMVPN to remote locations like a desk switch at work and family member houses
Sophos UTM
Active Directory for my home computers
hybrid sync to MS Entra (Azure Active Directory) with Entra Connect
hybrid Exchange on Premise and Exchange online
Active Directory for management network
Security Onion VMs for IDS
Network monitoring like Elastiflow, PRTG
Docker, gitlab, OpenSalt / Saltstack
Trellix ePO for AV
Nessus vuln scanners
Team Awareness Kit (TAK) server
Active Directory Certificate Services
Home media applications
These things are mostly to maintain familiarity and documentation development. I write off the cost of electricity as continuing education and professional development. More enterprise than some enterprises.
Most likely it's the sum of (cores × GHz) for each processor across all servers? While that kind of makes sense, it reads as a much higher clock speed than I'm used to seeing.
I have a single quad-socket E5-4640 server; I think in terms of having 4 processors with 8 cores at a 2.4 GHz base each - I don't regularly (or ever, for that matter) think in terms of having 76.8 GHz.
360 G8s should be single or dual-socket E5 v2 processors. I can't really math right now (insufficient caffeine), but I can't make the numbers work for one box, so I'd imagine that figure is aggregated across all three systems, not per system?
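The arithmetic, with an illustrative CPU choice for the G8s (the E5-2670 v2 is my assumption, not a confirmed spec):

```python
# "Aggregate GHz" here is just the sum over processors of cores x base clock.

# Quad-socket E5-4640 box: 4 sockets x 8 cores x 2.4 GHz each
quad_4640 = 4 * 8 * 2.4
print(round(quad_4640, 1))   # 76.8 GHz, matching the figure above

# A DL360p G8 is dual-socket; with (say) 10-core 2.5 GHz E5-2670 v2s:
per_box = 2 * 10 * 2.5       # 50.0 GHz per box
print(per_box * 3)           # 150.0 GHz aggregated across three boxes
```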
Love it! I'm gonna grab a 32RU rack soon. Got most of my stuff in a small ~14RU wall cabinet right now. I was originally aiming for low power everything - RasPis, etc. But I've since bought a couple DL360s, and you just can't beat the sheer grunt factor, especially when paired with Proxmox.
2 Raspberry Pi 4 with a few services running (some directly, some via docker): pihole, pialert, gitlab plantuml, munin, restic rest server, jupyter instance, airsonic-advanced.
And an old synology NAS which serves as document and media server
I've pared mine down a lot. The biggest hurdle for me has been storage.
It used to be five 2U servers running a Ceph cluster, but that got to be expensive and unruly.
Now it's mainly a small half-depth Supermicro for my firewall, another half-depth Supermicro for Home Assistant, a 2U Dell for Unraid, and a small NAS.
Unraid houses Plex and the *arrs. Along with a handful of other useful services like immich.
I do colo a 1U HP though, which houses my PBX, web server, UniFi controller, Jira server, Nextcloud, email, and a bunch of other services that I run.
Now, I've got a lot of spare hardware though: 7 Dell 1U servers, 2 Dell 2U, a Supermicro 3U, an HP 2U, and a bunch of thin clients that I might turn into replacements for my Rokus.
I've got an old Dell Poweredge tower server with dual 6-Core Xeons, 128 GB Ram, and 21 TB combined Raid 5 storage.
10 VMs
Veeam Backups
All behind a Mikrotik RB3011
I run one service per VM because I like being able to nuke the whole thing without bringing down any other services.
You can get some good hardware on eBay if you know what you're looking at. The HDDs and SSDs cost more than the server did. Electricity probably runs about $16/mo.
Biggest problem I've got coming up is what I'm going to do for backups once I exceed Veeam Community Edition's 10-VM limit.
The three most important VMs are Jellyfin (the whole family uses it every day), Paperless-ngx (I use it every day), and Jitsi (the kids use it to video call Grandma and Grandpa).
Most of the other stuff is non-essential.
I am currently migrating from a dedicated docker host to a proxmox host with multiple LXC containers.
old host - 23 docker containers, 128GB system drive, 4TB data drive
backup server - 1 docker container, 1TB disk
proxmox - 3 LXC containers, one of which has 3 docker containers. 500GB system drive, 4TB media drive (not LVM)
The plan is to migrate the loads on the old host to the proxmox host. I also have another 4TB drive coming with the intent of setting up a RAID with 2 of the 4TB drives.