Secure Dumby


Ratiocinative Deceit - Dumby's Ramblings

(Aug 25) DumbyLand Host - KVM Rebuild 2019

Welcome to my first blog post in too damn long! I have been working on my home lab for a few months now. I previously had things built up on an ESXi environment, but then disaster struck. As this blog is evidence of, I'm apt to drop out of things for a while at a time, and I did the same with my server. I booted it up one day... and everything was terribly broken. I spent two full days wiping it clean and rebuilding it as a KVM box from scratch, with some good Graylog, Suricata, and SDN virtualization fun!

I learned a whole lot and am very happy with how things have worked out. I'd love to build out a full-scale, seriously powerful server based on this design. Until some recent news, that was going to be a reality soon, but the savings will have to be reallocated for the moment. I hope you enjoy this brief overview of what my home lab consists of. I will be releasing more specifically technical guides here in the near future. I've built PLENTY on this server given the specs I'm working with, and I'll be writing up some of the cool stuff I've built with Graylog, as well as my existing simulation lab, with log collection details and all that good stuff.

Thanks!

Host Details

Supermicro E300-8D

Intel® Xeon® processor D-1518

2.2 GHz x 4c8t

32GB DDR4 RAM


/dev/sda - 1TB - Spinning Disk

/dev/sdb - 500GB NVMe SSD

/dev/sdc - 500GB SATA SSD



Host OS: Debian 9

SDN: OpenVSwitch

Hypervisor: KVM + libvirt

This has served as an awesome little testbed server. I definitely need to bump up its RAM a bit, especially now that DDR4 has gotten a lot cheaper since I first installed what it's got. It's a small little guy; I've had it running for a couple of months in my living room and I don't even realize it's there.

It's a perfect mobile or testbed server. I'm excited to eventually build a full-size rig for home and use this one as a rapid development box, playing with all sorts of fun virtualization things in BSD and whatnot.

Physical NIC / IP Addresses

eno1: Virtual Tap - OpnSense [Gateway - DHCP]

eno2: Virtual Tap - OpnSense [GuestDevices - 192.168.180.1 / 192.168.80.1]

eno3: 192.168.180.11 - DHCP (Virtual IP - Connected to local switch -> GuestDevices)

eno4: 192.168.1.138 - DHCP (Home LAN)

eno5: unallocated

eno6: unallocated

Primary Network Areas

  • [HLAN/ELAN] Home LAN / External WAN Gateway
    • Connected to home LAN as ISP
  • [PLAN] Physical LAN
    • Physical Guest Devices
      • Wi-Fi Network
      • Physical Switch
  • Virtual LAN
    • Virtual Machine internal networks
  • Home LAN Backup
  • IPMI
    • Emergency Maintenance

This is a really interesting setup, and I really like it so far. Even the host OS's primary network runs through the virtual infrastructure. I have a virtual router connected to a physical NIC, and that is attached to my home network. It gets assigned a DHCP address by my home router, and that becomes the WAN gateway for my virtual LAN - this is the HLAN. If it were mobile and I attached it to a business or other external network, I'd consider it an ELAN connection.

I have a 2nd physical NIC on my server that is attached to my virtual router as well. This acts as my "Guest Devices" network - a PLAN. One intent for this server is to let multiple people connect with their laptops, so I have a physical network with a switch and an AP. I connect my laptop to this AP, which is connected to the physical switch. That switch then connects to the virtual router on a "Guest Devices" network described in the next section. This is my primary physical bridge into the virtualized network environment.

IPMI was a lifesaver. I'm a complete noob at this stuff and messed up a bunch of things while figuring out networking, along with some niche Linux networking screwups. I locked myself out many times and panicked while struggling to find cables to hook my server up to a monitor… such a silly situation to have in 2019, but there I was. That was when I learned how IPMI worked, and it was incredibly useful!
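For anyone in the same boat, a Supermicro BMC can be driven over the network with ipmitool. A few commands along the lines of what saved me (the IP address and credentials here are placeholders, not my actual setup):

```shell
# Query power state via the BMC (host/user/pass are placeholders)
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme chassis status

# Serial-over-LAN console: a monitor-free rescue when you break networking
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme sol activate

# Power-cycle a wedged box
ipmitool -I lanplus -H 192.168.1.50 -U ADMIN -P changeme chassis power cycle
```

The SOL console in particular means you never need to hunt for a VGA cable again.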

Software Defined Network

OpenVSwitch

  • 2 Virtual Bridges
    • server0
    • client0
  • Port Mirroring
    • m0
    • m1

Core Router

OpnSense VM

  • 4 Interfaces
    • ServerLAN
    • ClientLAN
    • GuestDevices
      • Connected to PLAN
    • WAN
      • Connected to HLAN
  • Firewall
  • VPN

The real magic was in how simple OpenVSwitch was to set up. It required some tinkering, but I got there. I had initially been playing with Linux bridging, but it felt limited and/or overly complex for things like port mirroring for network monitoring, which is a primary purpose of this setup.

I switched to OVS, and while a good bit of my Linux bridging knowledge transferred over, OVS felt a lot more true to physical switching. You create a bridge, and attach ports to it. Those ports are then very simple to attach to virtual machines. It's easy to manage all of this, get detailed information, and match things up, since you cross a lot of transparent layers in virtual networking like this.
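As a sketch of that workflow (the bridge names are the ones from this post; the libvirt XML fragment is the standard way to attach a domain to an OVS bridge, not my exact config):

```shell
# Create the two bridges used in this lab
ovs-vsctl add-br server0
ovs-vsctl add-br client0

# libvirt attaches VM tap ports automatically when the domain XML
# marks the bridge as OVS:
#   <interface type='bridge'>
#     <source bridge='server0'/>
#     <virtualport type='openvswitch'/>
#   </interface>

# Verify bridges and their attached ports
ovs-vsctl show
```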

The router VM is also a pretty fun setup. I decided to run an OPNSense router which would manage a few primary networks.

My network concept was very basic. I mainly wanted 3 realms -- servers, clients, and guest devices. I created SDN LAN networks which are managed by my OPNSense router VM. I’ll go over all of these and also recap a little of what we covered earlier. 

  • ClientLAN
    • Windows clients, attacker VM, administration machines. Anything that an actual user would sit at and use.
  • ExternalAccess
    • I also had a very useful need to access the server remotely. I created an OpenVPN server via OPNSense and assigned access to a few clients.
  • GuestDevices
    • This is the network for physical devices that I want to connect into the virtual environment. I also loop the hypervisor host back into this so it gets an IP address in the virtual environment. While everything else has been near perfect, that host loopback has been super unstable for some strange reason, so I don't like to rely on it.
  • ServerLAN
    • Graylog
    • Network to house all appliances, network services, and Windows servers.
  • WAN
    • This is connected to my home network, gets a DHCP address, and is the primary Internet gateway for all of the virtual machines, the hypervisor host, and the guest devices

There’s definitely some wonky stuff when it comes to routing for my hypervisor host, but overall this setup works wonderfully and reliably without any significant issues I can remember. 

I can do some serious firewalling and virtual routing stuff with this setup, and have played around with it a lot. I’ll cover more of that in my DumbyLand lab setup since it’s more directly related to how my VMs are setup and used.

Hypervisor

KVM

LibVirt

Scripts

vm-create

Automation

Debian Linux auto-provision

VM Creation

I previously used ESXi on this box, and while it was a good experience, I'm *so* much happier with how KVM has been working out so far. Native CLI access is fantastic, and being able to throw up a simple Linux VM for some of the graphical stuff is easy enough with a minimal BunsenLabs build.

Libvirt tools felt very organic for management and creation. The scriptability is great, and even though these are baby's first scripts, I was able to make some cool ones that made my life a whole lot easier.

virt-manager is the GUI interface for remote console access and everything good like that.

CLI control is my favorite. It's super convenient to use virsh to control all aspects of VM operation. My install scripts are CLI-based. For Windows, I start in the CLI and finish in the GUI console, but that's just the nature of Windows.
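The day-to-day virsh commands I mean look something like this (domain names are examples pulled from my VM list):

```shell
virsh list --all               # every defined VM and its current state
virsh start WinServer-1        # boot a VM
virsh console NetworkServices  # attach to a serial console (Linux guests)
virsh shutdown WinServer-1     # clean ACPI shutdown
virsh dominfo Graylog          # CPU / memory / state summary
```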

I also have a fully-automated provisioning script for Debian: you go through a simple menu, and it kicks off a completely hands-free install.
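The general shape of that, assuming a preseed-based Debian netinstall under virt-install (the flags, names, and paths here are illustrative, not my exact script):

```shell
# Hands-free Debian 9 guest install via preseed injection
virt-install \
  --name test-vm \
  --memory 2048 --vcpus 2 \
  --disk pool=default,size=20 \
  --network bridge=server0,virtualport_type=openvswitch \
  --location http://deb.debian.org/debian/dists/stretch/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical console=ttyS0" \
  --graphics none
```

With `--graphics none` and the serial console kernel args, the whole install scrolls by in the terminal and the menu script only has to template out `preseed.cfg`.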

Guest OS

Infrastructure

Core-Router - 2GB

Graylog - 8 GB

AttackerVM - 2GB

Lab

Test-IDS - 4GB

WinServer-1 - 4GB

WindowsClient1 - 1GB

AdminMachine - 4GB

VictimClient - 2GB

NetworkServices - 1GB

Core-Router - OPNSense

As mentioned before, my core virtual router is OPNSense. It's been awesome, and I am excited to continue using it and learning more about it. Not much to say about it aside from the networking aspect; it's just fine as a virtual machine.

Log Collection / SIEM - Graylog

I am doing full IDS and log collection across this environment. One of the most significant resource allocations is for my Graylog instance, at 8GB RAM. I've tweaked this machine a lot, but so far it has been pretty efficient with the sparse storage I've dedicated to it. There's a lot of JVM heap space tuning that it has required.
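For reference, the heap knobs live in the service defaults on Debian (the values below are examples of the kind of tuning involved, not a recommendation):

```shell
# /etc/default/graylog-server -- raise the Graylog server's JVM heap
GRAYLOG_SERVER_JAVA_OPTS="-Xms2g -Xmx2g -XX:+UseG1GC"

# Elasticsearch (Graylog's storage backend) has its own separate heap,
# set in /etc/elasticsearch/jvm.options:
#   -Xms2g
#   -Xmx2g
```

Both JVMs compete for the same 8GB, which is where most of the tuning time goes.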

I pump a bit of network traffic to it from all the machines sending logs and so far it seems like I haven’t had any problems with reliability of communications or storage. 

IDS - Suricata / OpenVSwitch Port Mirroring

It's very straightforward to mirror traffic on an OVS bridge to another vport on that same bridge. I created a VM with 3 virtual interfaces, attached 2 of them to the server0 bridge, and the other to the client0 bridge. One of the server0 interfaces is a management port and provides standard LAN access. The other 2 each get a port mirror from their respective bridge. Suricata is configured to listen on those 2 interfaces and sniffs them for IDS detection. This all works great!
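The OVS side of that boils down to one compound ovs-vsctl call per bridge (the tap port name `vnet2` is a placeholder for whichever port libvirt gave the IDS VM's sniffing interface):

```shell
# Mirror all traffic on server0 out the IDS VM's tap port
ovs-vsctl \
  -- --id=@ids get Port vnet2 \
  -- --id=@m create Mirror name=m0 select-all=true output-port=@ids \
  -- set Bridge server0 mirrors=@m

# Confirm the mirror was created
ovs-vsctl list Mirror m0
```

Repeat with the client0 bridge and its tap port for the second mirror (m1), and Suricata sees both networks.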

Last Edited: 2019-08-25