Just spent a good week installing my home server. Time to pause, look back at what I've set up, and ask for your help/suggestions, as I'm wondering if my configuration below is a good approach or just a uselessly convoluted one.
I have a Proxmox instance with 3 VLANs:
Management (192.168.1.x): used by the Proxmox host; can access all the other VLANs
Servarr (192.168.100.x): every arr-related app + Jellyfin (all LXCs). All outbound connectivity goes via VPN. Can't access any other VLAN
myCloud (192.168.200.x): WIP, but basically planning to have things like Nextcloud, Immich, Paperless, etc.
The original idea was to allow external access via a Cloudflare tunnel, but I finally decided to switch back to Tailscale for "myCloud" access (as I expect to share this with fewer than 5 accounts). So:
myCloud now has Tailscale running on it.
myCloud can now access the Servarr VLAN
As a consequence of choosing Tailscale, I now need a DNS server to resolve mydomain.com:
Servarr now hosts Pi-hole as a DNS server reachable across all VLANs
On top of all that, I have yet another VLAN for my Raspberry Pi running Vaultwarden, reachable only via my personal Tailscale account.
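(For what it's worth, here's a minimal sketch of how the isolation could be sanity-checked from inside a Servarr LXC. The IPs and ports below are placeholders in my ranges, not actual hosts.)

```python
# Hypothetical sketch: run from an LXC on the Servarr VLAN to confirm it
# really cannot reach the Management and myCloud VLANs. The addresses are
# placeholders from the post's addressing scheme, not real hosts.
import socket

CHECKS = [
    ("Management gateway", "192.168.1.1", 22, False),      # should be blocked
    ("myCloud host",       "192.168.200.10", 443, False),  # should be blocked
    ("Pi-hole DNS",        "192.168.100.53", 53, True),    # should be reachable
]

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, host, port, expected in CHECKS:
    ok = tcp_reachable(host, port)
    status = "OK" if ok == expected else "UNEXPECTED"
    print(f"{status}: {name} {host}:{port} reachable={ok} expected={expected}")
```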
I'm open to restart things from scratch (it's fun), so let me know.
Also wondering if using LXCs is better than Docker, especially when it comes to updates and longer-term maintenance.
Let's start with the conceptual piece, which you don't need to follow explicitly, but it's a decent reference. It's more about how far from or close to enterprise practice you want to be, whether it's for fun, for practice, whatever. From an enterprise perspective, you'll typically have:
Management - The only things that belong here are network devices. Router, switches, etc., but notably not servers. This also - from a general security perspective - should not be the default network, i.e. the VLAN you get an IP from if you randomly connect to a port. In the enterprise, this is so people don't see the VLAN the switches are on; at home it's just... good practice at best.
Services - servers you trust go here. Ones that provide local services and aren't accessed from the outside (except via VPN). This could be auth services, a wiki, automation tools - Proxmox servers, as an example, would go here. You may also put a jump service here - a single VM with access to management and logging, so that this becomes the only logical means of access (with a physical port on the router or a locked-up switch as the physical one, for example).
Workstations - trusted, user-accessible devices go here. Your desktop, laptop, etc. If an SSID is associated with this network, it's typically hidden.
IoT (no internet) - IoT devices you don't fully trust (as in, pretty much any IoT device) get put into a VLAN without internet access. Prevents them from phoning home; usually these are closed-off devices. Again, not a requirement for home use, but for personal privacy not a bad choice. This became a problem more recently in the enterprise, with people buying more consumer-ish stuff at the direction of someone in the C-suite who wants their office to behave like their home office. Also put your TV here. Seriously, don't give smart TVs internet access. Block them and plug in a Linux box or something.
Guest - where everyone else goes: a guest VLAN with internet-only access. Ideally each guest gets NATed and they can't see each other either. If you want these guests to be able to cast/AirPlay/etc. though, you also would want/have:
Media Services VLAN - this is where you would have a Chromecast, Apple TV, Mersive Solstice Pod, AirParrot, Crestron AirMedia, etc. So your receiving device lives here, in the one VLAN where you allow some Bonjour, and you allow mDNS forwarding from Workstations and Guest so both can hit this network (and its devices). Security risk. Manageable, but a risk.
There are MANY variations and unique versions of this. This is more or less a typical enterprise layout with some home media uses mixed in.
Now for structure purposes, you basically would have:
Management is inaccessible by anything not a switch, with the exception of that one locked down physical port and maybe that jump VM.
Each VLAN has its own gateway, and each gateway is also the NTP and DNS server for its VLAN. You then have the upstream DNS provider be your Pi-hole (or Technitium, or a Pi-hole container, etc. - I have 3 that I use so something is always up. DNS is not priority-ordered, that's a different conversation, but having multiple is a good idea). Now your devices get their DNS from the gateway IP, and that gets it from your Pi-hole, so you don't need to expose your Pi-hole to those devices directly (there's a quick resolver-check sketch after this list).
Guest has access to NOTHING. Just internet. Maybe the media services if you're into that sort of thing.
IoT can be accessed from other VLANs, but should have access to nothing else. As in, a connection initiated from outside (workstation, services, etc.) passes; one initiated from inside (a Roomba) you block.
The rest generally doesn't need too much security; it's more device management - workstations, servers, etc.
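Here's that resolver-check sketch: a minimal example of verifying the gateway -> Pi-hole chain. The IPs and domain are placeholders, and it assumes the dnspython package is installed.

```python
# Minimal sketch of checking the gateway -> Pi-hole DNS chain described above.
# The IPs and domain are placeholders; requires the dnspython package.
import dns.resolver

GATEWAY_DNS = "192.168.200.1"   # what clients on that VLAN get handed via DHCP
PIHOLE_DNS = "192.168.100.53"   # the upstream the gateway forwards to
NAME = "mydomain.com"

def resolve_with(server: str, name: str) -> set[str]:
    """Resolve `name` against a specific DNS server and return the A records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    return {rr.to_text() for rr in resolver.resolve(name, "A")}

via_gateway = resolve_with(GATEWAY_DNS, NAME)
via_pihole = resolve_with(PIHOLE_DNS, NAME)
print("gateway :", via_gateway)
print("pi-hole :", via_pihole)
print("match   :", via_gateway == via_pihole)
```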
OK, so those are the generics; let's go back to yours.
192.168.1.x - sounds like the default to me. Risky to use for Proxmox and network management on a VLAN that generic endpoints will land in. If you have a different one as the default - great! Ignore this. If it's management, I'd move Proxmox into the 200 range instead.
192.168.100.x - solid choice to group up your externally facing, riskier stuff and funnel it all through one connection. I'd make sure that when that connection goes down, everything else loses connectivity - confirm that the kill switch works (quick leak-check sketch below). Bind their network interfaces to the virtual network that goes to your VPN connection (I'm assuming a Docker container here).
192.168.200.x - yup, logical group, makes sense to do. I'd probably put your hypervisor here.
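On the kill-switch point, here's that leak-check sketch, meant to run from inside one of the Servarr containers/LXCs. The ISP address is a placeholder for your real WAN IP, and api.ipify.org is just a public "echo my IP" service.

```python
# Rough sketch: run inside a Servarr LXC/container to check for VPN leaks.
# ISP_IP is a placeholder for your real WAN address; api.ipify.org echoes
# back the public IP your traffic appears to come from.
import urllib.request

ISP_IP = "203.0.113.45"  # placeholder: your actual ISP-assigned WAN IP

try:
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        egress_ip = resp.read().decode().strip()
except OSError as exc:
    # No route out at all: with the VPN down, this is the kill switch working.
    print(f"No egress ({exc}) - if the VPN is down, the kill switch is doing its job")
else:
    if egress_ip == ISP_IP:
        print(f"LEAK: traffic is leaving via the ISP address {egress_ip}")
    else:
        print(f"OK: egress IP is {egress_ip} (VPN exit, not the ISP address)")
```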
Now, LXC vs Docker... I'd call that mostly preference. I prefer LXC. I also keep things at a stable version and upgrade when needed, not automatically. If you want automated, your best bet is Docker. If you want rock stable and don't mind manual updates, LXC is great. You can automate some of it with Ansible and the like, but that can be a lot to set up for minimal need. YMMV.
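If you want something lighter than a full Ansible setup, even a small wrapper around Proxmox's `pct exec` on the host covers basic package updates. A sketch, with made-up container IDs and assuming Debian-based LXCs:

```python
# Minimal sketch: run apt upgrades inside a list of Debian/Ubuntu-based LXCs
# from the Proxmox host using `pct exec`. Container IDs below are made up.
import subprocess

CONTAINERS = [101, 102, 103]  # placeholder VMIDs for the LXCs to update

def pct_exec(vmid: int, command: list[str]) -> None:
    """Run a command inside an LXC via Proxmox's pct tool."""
    subprocess.run(["pct", "exec", str(vmid), "--"] + command, check=True)

for vmid in CONTAINERS:
    print(f"Updating container {vmid}...")
    pct_exec(vmid, ["apt-get", "update"])
    pct_exec(vmid, ["apt-get", "-y", "dist-upgrade"])
```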
Anything I build from source (honestly, most of what I do) goes in an LXC. Anything where I take someone else's image (rare, but it happens) is Docker. I have a local git repo I keep synced to projects on Codeberg, GitHub, and the like, so my setups are all set to build from that local repo. That makes sure I've got the latest if something is taken down, but also gives me a local spot to make changes, test, etc. for anything I may push back upstream.
Hope that helps!
Edit: Forgot to talk security!
OK first off, figure out your threat model. Where would threats come from? How serious would they be? What risks are worth taking, which are not?
Security is an ogre (onion) - it's got layers. For example, I have zero concern with region blocking: no one legitimate is hitting my network from China, so I'm not allowing some random there to try and get in.
What I am concerned about is user credentialing for access - one login for all services, MFA is a hard requirement, and I don't do text/email as MFA - that's baby-town-frolics levels of security; I don't like it.
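For context on what that leaves: authenticator-app codes are just TOTP. A minimal sketch of what the app and the server each compute, using the pyotp library (an assumed dependency, pip install pyotp):

```python
# Minimal TOTP sketch using the pyotp library (assumed installed).
# This is what authenticator apps compute, as opposed to SMS/email codes.
import pyotp

secret = pyotp.random_base32()  # shared once at enrollment (e.g. via QR code)
totp = pyotp.TOTP(secret)

print("current code:", totp.now())           # what the authenticator app shows
print("verifies:", totp.verify(totp.now()))  # what the server checks
```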
Best way to think of it is a row of bikes. A thief is going to come by and steal one. Which one will they go for?
Do you need to have 7 bike locks and encase the whole thing in concrete? Or do you just need to be enough of a pain in the ass (U-lock, braided steel cable or chain looped through the wheel and frame) that the thief goes for the other bike (the one with a $5 cable lock you can pop open with a Bic pen) instead?
I don't think there is anything wildly wrong with it, but it seems like you're doing all of this at the router, unless you have dedicated switches for each VLAN?
VLAN is not a security feature, it's a logical separation of IP segments. Maybe I'm missing your intention here, but just setting different IP spaces on VLANs and then bridging them doesn't help your security, it just complicates your network.
Not OP, but logical separation plus firewall rules is a needed first step for security. They already mentioned in the post that one VLAN has dedicated outbound (via VPN only) and doesn't have access to their .200 network.
Physical switches per VLAN are completely unnecessary - not needing them is exactly why VLANs are used rather than plain subnets.
Yes, the idea is to make it easier to isolate things and configure firewall rules, and to try to protect the more sensitive data.
(i.e. I don't care much if people can access my ISOs ;)
However, at the end of the day they are all on the same Proxmox host.
You can't use the same subnet on different VLANs if you ever intend for both of them to reach the internet.
In that case you'd need a second router, which just defeats the purpose.
Not saying physical switches are needed for security, which is why I was asking for clarification. Doing all of this on one router doesn't make much sense without physical separation though - that's my point. If the router gets owned, they have access to all the networks anyway. If the idea is just traffic direction and shaping, then I'm confused about why the Pi-hole is bridged across VLANs.