Friday, December 19, 2025

What I am running at home

Since we moved from Brooklyn to Wilmington, I have enough room to start setting up a real home lab.   

Side note: I have had my R710 since 2018 (thank you, TechMikeNY), but it hasn't had a real home where I could actually run it as a server for over three years.  I initially purchased it when Magenic was getting deep into a Pivotal Cloud Foundry partnership so I could run CF locally.  I overbought (a high-end Dell PowerEdge R710 with 2x 2.93GHz X5670 6-core CPUs, 144GB RAM, and 6x 2TB drives), but it was something I had wanted to do for a while, and I was able to run it in the Magenic offices in Manhattan.  I got a 1/4 rack on wheels and had a great time with it until Magenic closed the office and I had to bring it home to a Manhattan apartment.

I have a 1Gbps Verizon FiOS connection running in, a semi-finished basement, and a full-sized room for my office.  With a door!  Anyway, let me do a quick inventory, and I will try to come back and talk about each entry in more detail later.  Sharing is caring, and I do want to brag a bit about what I have going on.

First, I chose Proxmox as my base system.  I know VMware has ESXi as a free hypervisor and Microsoft still has Hyper-V Server 2019 available, but I wanted something open source as well as free, with a great UI.  I looked at Unraid, as it has perpetual licensing and looks great, but I decided Proxmox VE was a better fit.

Here are the different computers I have running as part of my Proxmox cluster:

  • Dell PowerEdge R710
  • Dell XPS 17 L702X
  • Dell Precision M4800
  • HP Omen 40L
  • Dell Alienware Aurora R5

I started out just running on the R710 (The Beast), but I thought it would be fun to try clustering and, admittedly, things got a little bit out of hand.  The services that I am currently hosting are:

I have some Windows 11 and openSUSE desktops running in VMs, and I use Veeam to back up our desktops to OpenMediaVault.
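
For anyone curious what "trying clustering" actually involves, Proxmox VE builds clusters with its pvecm tool.  Here is a rough sketch of how the nodes get joined up; the cluster name and IP address are placeholders, not my actual setup:

```shell
# On the first node, create the cluster (run once):
pvecm create homelab

# On each additional node, join by pointing at an existing member's IP:
pvecm add 192.168.1.10

# From any node, verify membership and quorum:
pvecm status
pvecm nodes
```

One thing worth knowing before you start: clusters want an odd number of votes for quorum, so with an even node count you may want a QDevice or an extra node.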

I have not moved to Ceph storage to enable live VM migrations yet, as it is honestly a bit daunting and I still have a lot of work to do with my standard services.
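
In the meantime, Proxmox can still move VMs between nodes without shared storage by copying the local disks over.  A sketch, assuming a VM with ID 100 and a target node named "beast" (both placeholders):

```shell
# Live migration the easy way (requires shared storage such as Ceph):
qm migrate 100 beast --online

# Without shared storage, migrate while copying the local disks across:
qm migrate 100 beast --online --with-local-disks
```

The local-disk copy takes a while on big virtual disks, which is a big part of why Ceph is on the to-do list.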

I will try to get back into blogging more and will dig into the individual pieces as I do.  For example, I have Ollama running locally on my desktop, in a VM with PCI passthrough, and on The Beast using raw memory and CPU.  The VM and local instances are constrained by the memory on the video cards I have, but the raw CPU-and-memory instance is slow.
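
The PCI passthrough setup is a post of its own, but the broad strokes on a Proxmox host look something like this; the PCI address and VM ID below are examples, not my real configuration:

```shell
# 1) Enable the IOMMU in the kernel command line (Intel example), then reboot:
#      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on"

# 2) Load the vfio modules by adding these lines to /etc/modules:
#      vfio
#      vfio_iommu_type1
#      vfio_pci

# 3) Find the GPU's PCI address:
lspci -nn | grep -i nvidia

# 4) Attach the device at 01:00 to VM 101 as a PCIe device:
qm set 101 --hostpci0 01:00,pcie=1
```

Once the VM sees the card, the guest needs the normal vendor drivers, and Ollama picks up the GPU from there.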

Thanks for your interest!