Lemmings, what's your self hosted server power usage?
I'll just come out and say it: 50W. I know, I know: an order of magnitude more than what's actually needed to host websites, a media center, and an image gallery.
But it is a computer I had on-hand and which would be turned on a quarter of the day anyway. And these 50W also warm my home, although this is less efficient than the heat pump, of course.
Mine is roughly 300 watts, much of which comes from using an old computer as a NAS separate from my main server.
However, I put the whole thing in the basement next to my heat pump water heater, which sucks the heat out of the air and puts it into my water, so I am ameliorating the expense by at least recapturing some of the waste heat.
~600W.
2 machines:
Dell R730 with 8 disks, running multiple Minecraft servers.
Supermicro with 16 disks in RAID 10, running multiple VMs for various functions.
All on a 6 kVA UPS (overkill, I know).
Good timing for this thread. I just finished consolidating 2 computers worth of fun into 1 newer computer that can do it all. I sold my wife on the idea with electricity as the reasoning.
In the end, it uses 30 watts less, which is not as much as I had hoped. That's about $5 a month.
180 watts with an i5-13400, 9 spinning disks, 1 M.2 SSD, no extra GPU, a 24-port switch (powering 3 APs), modem, Mikrotik router, and a large UPS. I wonder if the UPS draws any power trickle-charging the batteries.
As soon as you have a requirement for large, reliable storage, you're into small-desktop territory with a few HDDs, at which point it's more efficient to just use the small PC and ditch the RPi.
5W vs 50W is an annual difference of about 400 kWh, or roughly 150 kg CO2e if that's your metric. Either way, it's not a huge cost for most people capable of running a 24/7 home lab.
If you start thinking about the costs, whether cash or GHG, of manufacturing an RPi or other dedicated low-power server, plus the energy to run HDDs at 5-10W each and other accessories, the picture gets pretty complicated. Power is one aspect, and it's really easy to measure objectively, but that also makes it easy to fetishize.
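The annual figures above can be sketched as a quick back-of-the-envelope calculation (the grid carbon intensity of ~0.38 kg CO2e/kWh is my assumption; real values vary a lot by region):

```python
# Back-of-the-envelope: annual gap between a 5 W and a 50 W always-on server.
HOURS_PER_YEAR = 24 * 365  # 8760

def annual_kwh(watts: float) -> float:
    """Convert a constant draw in watts to kWh per year."""
    return watts * HOURS_PER_YEAR / 1000

gap_kwh = annual_kwh(50) - annual_kwh(5)
# Assumed grid intensity of ~0.38 kg CO2e/kWh; varies widely by region.
gap_co2e = gap_kwh * 0.38
print(f"{gap_kwh:.0f} kWh/year, {gap_co2e:.0f} kg CO2e/year")  # ~394 kWh, ~150 kg
```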
Yeah, my house is wired with copper, and 10 gig over copper uses a lot of power. It doesn't really help that the newer, slightly less power-hungry 48-port 10 gig switches cost thousands of dollars. I'm using 100-150ish watts per 10 gig switch so I can buy the switch for under 500 bucks, instead of using 60-100 watts and paying $2-5k per switch...
I would like to expand my storage, but I don't have any available SATA ports, and I believe adding an HBA would increase the idle draw by about 8W. I might just upgrade the SSDs and split the storage between the HDDs and SSDs.
My current setup uses ~180W, which is a lot, but WAY better than my previous one, which was ~600W. Power is cheap where I live, so I'm not too worried about it.
180W homelab:
N6005 fanless mini PC running pfSense
MikroTik CRS310-8G+2S+IN switch
TP-Link AP225 access point
Server running proxmox w/ AMD 5900X, RTX 3080, 128GB ECC RAM, LSI-9208i w/2x10TB drives, and dual SFP+ NIC
600W homelab:
Aruba 24-port PoE gigabit switch w/ 4xSFP+ ports
Dell R720xd fully kitted out w/ 12x 6TB drives, 2x 512GB SSD, 2x 32GB SD cards, 100-something GB RAM, 2x whatever the best CPU was for that unit
Dell R710 w/ 6x 6TB drives, 1x 256GB SSD, 100-something GB RAM, 2x whatever the best CPU was for that unit.
I've been thinking about upgrading because the CPU isn't that fast, the RAM ain't that much, and I want to add a few more HDDs.
I've seen a pretty interesting Lenovo P520 with 64GB RAM, a CPU that's 3x as fast, and room for 6 HDDs for €350, but the power consumption I've seen reported online (80W) isn't that appealing with European electricity prices.
The services I use most here are Paperless and Piped.
Mealie will be added to that list as soon as the upstream PR lands which might be later this evening.
My Immich module is almost ready to go but the Immich app has a major bug preventing me from using it properly, so that's on hold for now.
I do want to set up Jellyfin in the not-too-distant future. The machine should handle that just fine with its iGPU, as Intel's Quick Sync is quite good, and I probably won't even need transcoding in most cases anyway.
I probably won't be able to avoid setting up Nextcloud for much longer. I haven't looked into it much, but I already know it's a beast. What I primarily want from it is calendar and contact synchronisation, but I'd also like the ability to share files or documents with mere mortals such as my SO or family.
The NixOS module hopefully abstracts away most of the complexity here but still...
Depends where you draw the line for the home lab. I'm drawing 160W at the moment, but that includes a dedicated CCTV PC (running Proxmox in a cluster) and a PoE switch. I don't really consider the CCTV part of the home lab; the alternative would be an off-the-shelf box, and no one would count that.
The 160W also includes a 24-port switch (I'm only using 8 ports) and the FTTP equipment, plus the overhead from the UPS. So the actual homelab server is probably about 80-100W, I guess. But even then it runs my router using OPNsense, so I don't have a separate router box to power. It also serves as my "cloud" storage, so I'm not saving watts, but I'm saving cost there.
I could get the power down quite a bit by changing the 6 HDD for 2 mirrored HDD, but the cost of large enough disks means it'd be years before it paid for itself, so I'm sticking with 6 small disks for now.
I've thought about trimming things down and going lower powered, but it all comes back to storage and needing the large storage online all the time.
Plus I consider 100W a big saving when, before this, I ran a dual-Xeon Dell R710 which used around 225W under the same workload.
That's energy, not power. If that's the energy consumption per hour, then that's 120W, which is high but not outrageous for a full-size computer with 6 disks.
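The conversion being made here, sketched with a hypothetical meter reading (the 0.12 kWh figure is just an illustration, not from the thread):

```python
# Average power (W) from an energy reading (kWh) over some time span (hours).
def avg_power_watts(kwh: float, hours: float) -> float:
    """kWh / hours gives kW; multiply by 1000 for watts."""
    return kwh / hours * 1000

# A hypothetical reading of 0.12 kWh: per hour it means 120 W on average,
# but the same number per day would only mean 5 W.
print(f"{avg_power_watts(0.12, 1):.0f} W")   # per-hour reading
print(f"{avg_power_watts(0.12, 24):.0f} W")  # per-day reading
```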
RPi 4B - 6.4W max (more like 5W in real-world usage)
CPU case fan - 1.4W
2x SSD - ~6W each
13.8W to ~18W depending on what the SSDs are pulling, I guess. I use it as an *arr seedbox and Plex server (up to 1080p H.264 works flawlessly!) as well as Nextcloud.
~120W with an old server motherboard and 6 spinning drives (42TB of storage overall).
Currently running Nextcloud, Home Assistant, Gitea, Matrix, Jellyfin, Lemmy, Mastodon, Vaultwarden, and a bunch of other smaller stuff, alongside storing a few months' worth of surveillance footage, so ~$12/month in power certainly ain't a bad deal versus paying for hosted versions of even a fraction of those services.
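That ~$12/month at ~120W implies an electricity rate of roughly $0.14/kWh; a minimal sketch of the cost math (the rate is back-calculated, not stated outright):

```python
# Monthly electricity cost of a constant draw.
def monthly_cost(watts: float, rate_per_kwh: float, hours: float = 730) -> float:
    """730 h is an average month (8760 h / 12)."""
    return watts / 1000 * hours * rate_per_kwh

# ~120 W at an assumed ~$0.137/kWh comes out to about $12/month.
print(f"${monthly_cost(120, 0.137):.2f}")
```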
I have looked at the ROI of getting more efficient kit and discovered that going for something like a low-idle-power-draw NUC or thin client plus a disk enclosure has a payback period on the order of multiple years.
Based on that information, I've instead put that money towards lower hanging fruit in the form of upgrading older inefficient appliances and adding multi-zone temperature control for power savings.
The energy savings I've been able to make, based on long-term energy-use data collected via Home Assistant, have more than offset all of the electricity I've ever used to power the system itself.
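The payback-period reasoning can be sketched like this (the $400 price, 80 W saving, and $0.14/kWh rate are all assumed illustration values, not figures from this thread):

```python
# Years until a more efficient box pays for itself in electricity savings.
def payback_years(upfront: float, watts_saved: float, rate_per_kwh: float) -> float:
    annual_saving = watts_saved / 1000 * 8760 * rate_per_kwh  # kWh/year * $/kWh
    return upfront / annual_saving

# Assumed example: $400 for a NUC + disk enclosure that saves 80 W at $0.14/kWh.
print(f"{payback_years(400, 80, 0.14):.1f} years")  # multiple years, as noted above
```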
Don't have anything spectacular performance-wise, but my late-2012 i7 Mac Mini Server is reporting ~14W (with my services running and downloads happening), and I've seen bursts up to 30W. Not too bad for a 12-year-old Mac running Homebridge, 2 Navidrome instances, Jellyfin, nginx, Transmission, and SMB (looking into Nextcloud to replace that).
About 150W total; trying to bring it down since electricity here is pretty expensive.
4 machines: two 4th-gen i5s, one 6th-gen NUC (I have two more but haven't set them up yet), and one HP thin client. Also two UPSes and 3 cameras (previously four, but one was accidentally damaged).
Hosting Home Assistant, Zabbix, Palworld, SMB, Transmission, Plex and a bunch of other misc stuff.
Kind of contemplating moving everything to a 10th-gen NUC, but thinking a Ryzen-based mini PC might be a better option.
My main issue is that I'm not shutting down my Pi-hole, Home Assistant, NAS, etc. just to plug something like this in, and then 24h or so later shut them all down again to retrieve it. That said, I basically have a collection of Pis (passively cooled and thus silent) and a Synology DiskStation, so the power use is pretty low.