I've never gotten the obsession with patch panels and small patch cables on server racks. It seems to make more sense to just mount the switches on the back of the rack and run the cables directly; cables leading off the rack can then go straight from the switch (appropriately labeled).
I do totally get it for building wiring, where the patch panel is used to bulk terminate cables going elsewhere into fixed 2-post racks, and the patch cords take care of the higher level concern of assigning switch ports. But doing it on a home server rack just feels like cargo cult copying that pattern for not much gain.
I did this before the pandemic for a "real" K8s environment (hindsight is 20/20, but this rocked for what I needed then).
Basically super inexpensive and tiny i7-4600U laptops w/o displays that I upgraded to 16GB of RAM and 256GB SSDs. I still run a smaller fleet for different services and testing - both standalone and as part of a Proxmox cluster.
I donated to Mr. Chromebox for years, super awesome work.
Excellent choice. Have several of them too, with either i5-7500T or i7-7700T, with small 65W or 90W power bricks. With 32GB RAM each. With that even the 7500s are more than enough for my browsing needs. Didn't see the need to rack them, though. I've put four of them on the back of my desk edgewise, under one of my screens, with an old router-thing running OpenWRT on top of them, connected with short patch cables on the back, out of sight. Stable and cat-proof.
Even with the powersave governor pushing them down to 800MHz they stay snappy, and rarely go above 1.2GHz, except when I compile stuff or do logic simulations. But I have other, more modern stuff for that. OTOH with things like distcc/Icecream they can be useful helpers.
Edit: Suspend to RAM/wakeup (via whatever mechanism, even a key combo or a special key on a modern keyboard), WOL, and NetBoot/PXE work every single time.
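For anyone curious, the WOL part needs nothing more than a 102-byte UDP "magic packet" - 6 bytes of 0xFF followed by the target MAC repeated 16 times. A minimal Python sketch (the MAC shown is a placeholder, and broadcast address/port may need adjusting for your network):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    # Magic packet layout: 6 bytes of 0xFF, then the target MAC 16 times.
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Typically sent as a UDP broadcast to port 9 (or 7); the NIC only
    # inspects the payload, so the port matters little in practice.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))

# send_wol("aa:bb:cc:dd:ee:ff")  # placeholder MAC of the machine to wake
```

The target machine needs WOL enabled in the BIOS/UEFI and usually in the OS too (e.g. via ethtool on Linux).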
4K video? No problem.
Hackintosh? Check. But why? There's QEMU. (Am not a Mac person anyways)
Genode? Check. Much more interesting running native/bare metal.
The quality of the BIOS/UEFI is phenomenal, like Thinkpad legendary.
In 2010 I built a small HTPC[1] in an Antec ISK300-150[2]. Started out with a 500GB HDD + 8GB RAM, then replaced it with a 2TB SSD, then added another 4TB SSD and another 8GB of RAM recently. Started out TV-connected, but for years now it's been running Ubuntu headlessly in my basement, hosting containers, automations, TimeMachines, etc. Not sure if you'd call it a homelab. Wonder what people do with their multi-PC racks that can't be done with one small machine like this? (besides running LLMs of course)
I picked up an ASRock NUC BOX-225H (I wanted the 255H, but it was out of stock), which might be a better alternative depending on storage needs and sensitivity to price.
I believe the BIOS is American Megatrends. It's the 'good' kind of BIOS, a text-only menu system. My system is headless right now so I can't dive into it.
I asked that in a half-joking way. But the look of it, or even the brand wasn't what I meant to say. Let me explain.
'Home-lab' can mean many different things. When you put some common OS on a thing and then run it headless 24/7 to fiddle with virtualization/containerization/clustering mostly on top of that OS, the 'quality' of the BIOS/UEFI doesn't really matter.
It's used during initial setup, and that's it. Maybe some tuning, but one doesn't interact with it that often.
This changes when one uses that thing to throw anything at it that was ever made for AMD64. Exotic stuff like Genode, for instance. Though that also is a question of hardware and driver support.
This continues with the implementation of ACPI, leading to the cleanest boot logs ever, with no errors at all. It goes on with all sorts of netbooting/PXE, be it as a client or server. Other niceties are suspend to RAM and wake-on-LAN, reliably working every single time.
This is amplified by having several of them, using at least some of them not 24/7 with the same setup, but changing everything, suspending them, or shutting them down, using them only on demand (via WOL/magic packet), testing failover/HA, and whatnot else.
Again, without interacting with the BIOS/UEFI, it just has to work with everything behind the scenes.
That's what I meant to say in the context of home-lab. It needs to be able to flawlessly work with all sorts of ever changing stuff.
Have a few Tinys from when I worked on OpenStack. They are now used as a router, a desktop, and a thin client for a much more recent Tiny that runs a few VMs/containers for the homelab. Easy to get spare parts for and reliable. Do check if they accept internal (mini)PCIe devices if you want to add a network adapter, as some are whitelisted to only take certain certified wifi devices.