DumbyLand Host - KVM Rebuild 2019

Welcome to my first blog post in too damn long! I have been working on my home lab for a few months now. I previously had things built up on an ESXi environment, but then disaster struck. As this blog is evidence of, I'm apt to drop out of things for a while at a time, and I did the same with my server. I booted it up one day… and everything was terribly broken. I spent two full days wiping it clean and rebuilding a KVM box from scratch, with some good Graylog, Suricata, and SDN virtualization fun!

I learned a whole lot and am very happy with how things have worked out. I'd love to build out a full-scale, seriously powerful server based on this design. Before some recent news, that was going to become a reality soon, but savings will have to be reallocated for the moment. I hope you enjoy this brief overview of what my home lab consists of; I will be releasing more specific technical guides here in the near future. I've built PLENTY on this server given the specs I'm working with, and I'll be writing up some of the cool stuff I've done with Graylog, as well as my existing simulation lab with log collection details and all that good stuff.

I’ve got a whole lot of detail about my build below. Thanks for stopping by!

Host Details

Supermicro E300-8D
Intel® Xeon® processor D-1518
2.2 GHz, 4 cores / 8 threads

/dev/sda - 1TB - Spinning Disk
/dev/sdb - 500GB NVMe SSD
/dev/sdc - 500GB SATA SSD

Host OS: Debian 9
SDN: Open vSwitch
Hypervisor: KVM + libvirt

This has served as an awesome little testbed server. I definitely need to bump up its RAM a bit, especially now that DDR4 has gotten a lot cheaper since I first installed what it's got. It's a small little guy; I've had it running for a couple months in my living room and I don't even realize it's there.

Perfect mobile or testbed server. I'm excited to eventually build a full-size rig for home and use this as a rapid development box, trying out all sorts of fun virtualization things in BSD and whatnot.

Physical NIC / IP Addresses

eno1: Virtual Tap - OPNsense [Gateway - DHCP]
eno2: Virtual Tap - OPNsense [GuestDevices - /]
eno3: - DHCP (Virtual IP - Connected to local switch -> GuestDevices)
eno4: - DHCP (Home LAN)
eno5: unallocated
eno6: unallocated

Primary Network Areas

This is a really interesting setup, and I really like it so far. Even the host OS's primary network runs through the virtualized network. I have a virtual router connected to a physical NIC, and that is attached to my home network. It gets assigned a DHCP address by my home router, and that becomes the WAN gateway for my virtual LAN - this is the HLAN. If it were mobile and I attached it to a business or other external network, I'd consider it an ELAN connection.

I have a second physical NIC on my server that is attached to my virtual router as well. This acts as my "Guest Devices" network - a PLAN. One intent of this server is to let multiple people connect, laptops and all, so I have a physical network with a switch and an AP. I connect my laptop to the AP, which is connected to the physical switch. That switch then connects to the virtual router on the "Guest Devices" network described in the next section. This is my primary physical bridge into the virtualized network environment.

IPMI was a lifesaver. I'm a complete noob when I do this stuff and messed up a bunch of things while figuring out networking, plus some niche Linux networking screwups. I locked myself out many times and panicked while scrambling for cables to hook my server up to a monitor… such a silly situation to have in 2019, but there I was. That was when I learned how IPMI worked, and it was incredibly useful!

Software Defined Network

Core Router

OPNsense VM

The real magic was in how simple Open vSwitch was to set up. It still required a fair bit of tinkering, but I got there. I had initially been playing with Linux bridging, but it felt limited and/or overly complex for things like port mirroring for network monitoring, which is a primary purpose of this setup.

I switched to OVS and transferred a good bit of knowledge from Linux bridging, but OVS felt a lot more true to physical switching. You create a bridge and attach ports to it, and those ports are then very simple to attach to virtual machines. It's easy to manage all of this, pull detailed information, and match things up, which matters since you cross a lot of transparent bridges in virtual networking like this.
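As a sketch of how little it takes, here's roughly what creating the bridges looks like. The bridge names match the server0/client0 bridges used elsewhere in this setup; this is an illustration, not my exact provisioning script.

```shell
# Sketch: create OVS bridges, assuming Open vSwitch is installed.
# Exit quietly if ovs-vsctl isn't available on this machine.
command -v ovs-vsctl >/dev/null 2>&1 || { echo "ovs-vsctl not found; skipping"; exit 0; }

ovs-vsctl --may-exist add-br server0   # server LAN bridge
ovs-vsctl --may-exist add-br client0   # client LAN bridge
ovs-vsctl show                         # list bridges and their attached ports
```

libvirt then attaches each VM's tap device to the right bridge automatically when the guest's interface XML uses `<virtualport type='openvswitch'/>`, so you rarely add VM ports by hand.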

The router VM is also a pretty fun setup. I decided to run an OPNsense router to manage a few primary networks.

My network concept was very basic. I mainly wanted three realms – servers, clients, and guest devices. I created SDN LAN networks which are managed by my OPNsense router VM. I'll go over all of these and also recap a little of what we covered earlier.

There's definitely some wonky stuff when it comes to routing for my hypervisor host, but overall this setup works wonderfully and reliably, without any significant issues that I can remember.

I can do some serious firewalling and virtual routing stuff with this setup, and have played around with it a lot. I’ll cover more of that in my DumbyLand lab setup since it’s more directly related to how my VMs are setup and used.

Debian Linux auto-provision

VM Creation

I previously used ESXi on this box, and while it was a good experience, I'm so much happier with how KVM has been working out so far. Native CLI access is fantastic, and being able to throw up a simple Linux VM for some of the graphical stuff is easy enough with a minimal BunsenLabs build.

The libvirt tools felt very organic for management and creation. The scriptability is great, and even though these are babby's first scripts, I was able to make some cool ones that made my life a whole lot easier.

virt-manager is the GUI interface for remote console access and everything good like that.

CLI control is my favorite. It's super convenient to use virsh to control all aspects of VM operation. My install scripts are CLI-based. For Windows, I start in the CLI and finish in the GUI console, but that's just the nature of Windows.
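For flavor, the day-to-day virsh loop looks something like this (using the Graylog guest from my list below as the example):

```shell
# Everyday virsh usage; exit quietly if libvirt isn't installed here.
command -v virsh >/dev/null 2>&1 || { echo "virsh not found; skipping"; exit 0; }

virsh list --all        # show every defined VM and its state
virsh start Graylog     # boot a VM by name
virsh console Graylog   # attach to its serial console (Ctrl+] to detach)
virsh shutdown Graylog  # request a clean ACPI shutdown
```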

I have a script that, starting from a Debian base, walks you through a simple menu and then kicks off a fully-automated install.
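Under the hood, that kind of unattended Debian install boils down to a single virt-install invocation with an injected preseed file. A sketch, where the VM name, sizes, installer URL, and preseed.cfg are all placeholders rather than my actual values:

```shell
# Sketch of an unattended Debian install via virt-install + preseed.
# All names/sizes/URLs are illustrative. Skip if virt-install is missing.
command -v virt-install >/dev/null 2>&1 || { echo "virt-install not found; skipping"; exit 0; }

virt-install \
  --name debian-test \
  --memory 2048 --vcpus 2 \
  --disk size=10 \
  --network bridge=server0,virtualport_type=openvswitch \
  --graphics none \
  --os-variant debian9 \
  --location http://deb.debian.org/debian/dists/stretch/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0"
```

The menu part of my script mostly just fills in those flags; the preseed file answers every installer prompt so the VM comes up without any interaction.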

Guest OS

Core-Router - 2GB
Graylog - 8GB
AttackerVM - 2GB

Test-IDS - 4GB
WinServer-1 - 4GB
WindowsClient1 - 1GB
AdminMachine - 4GB
VictimClient - 2GB
NetworkServices - 1GB

Core-Router - OPNSense

As mentioned before, my core virtual router is OPNsense. It's been awesome, and I'm excited to keep using it and learning more about it. Not much to say beyond the networking aspect; it runs just fine as a virtual machine.

Log Collection / SIEM - Graylog

I am doing full IDS and log collection across this environment. One of the most significant resource allocations is my Graylog instance with 8GB of RAM. I've tweaked this machine a lot, but so far it has been pretty space-effective for the sparse storage I've dedicated to it. There's a lot of JVM heap space stuff that has required some tuning.
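For reference, on a Debian-packaged Graylog install that heap tuning lives in /etc/default/graylog-server. A sketch of the relevant fragment; the 2g figure is purely illustrative for an 8GB VM that also hosts other JVM-hungry services, not a recommendation:

```shell
# /etc/default/graylog-server - JVM options for the Graylog server process.
# -Xms/-Xmx pin the initial and maximum heap; values here are illustrative.
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g -XX:+UseG1GC"
```

Keep in mind Elasticsearch runs its own JVM with its own heap settings, so the two have to share whatever RAM the VM has.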

I pump a fair bit of network traffic to it from all the machines sending logs, and so far I haven't had any problems with the reliability of communications or storage.

IDS - Suricata / Open vSwitch Port Mirroring

It's very straightforward to mirror traffic on an OVS bridge to another vport on that same bridge. I created a VM with three virtual interfaces, attached two of them to the server0 bridge and the other to the client0 bridge. One of the server0 interfaces is a management port and provides standard LAN access. The other two receive port mirrors from their respective bridges. Suricata is configured to listen on those two interfaces and sniffs them for IDS detection. This all works great!
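The mirror itself boils down to a single ovs-vsctl transaction per bridge, following the standard OVS mirroring pattern. A sketch, where tap-ids0 is a hypothetical name for the IDS VM's sniffing vport on server0:

```shell
# Mirror all server0 traffic to the IDS VM's vport; skip without OVS.
# "tap-ids0" is a placeholder for the real tap device name.
command -v ovs-vsctl >/dev/null 2>&1 || { echo "ovs-vsctl not found; skipping"; exit 0; }

ovs-vsctl \
  -- --id=@ids get Port tap-ids0 \
  -- --id=@m create Mirror name=server0-mirror select-all=true output-port=@ids \
  -- set Bridge server0 mirrors=@m
```

Repeat the same transaction against client0 with its mirror vport, and Suricata just listens on both interfaces in promiscuous mode.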