
(May 18) DumbyLand Beginning - Server Build & Configuration

The last few weeks have been extremely difficult. Let alone the last year... I haven't done nearly as good a job as I would have liked of keeping up with my blog, but I'm not short on topics to discuss or work to share! There was a stimulus that kicked me into high-gear project mode. Given a choice, I wouldn't ever choose to go through it again. With the passing of our hacker brother d3c4f, I hit that high-gear mode. I gotta step up, I gotta do more, and I gotta contribute back to the community I hold so dear.

To this end... I introduce...

A home lab is a crucial tool for the modern IT professional, and especially so in the security field. By nature, security is cross-discipline and requires a diverse skillset. Unless you've got 10 years of IT experience, it's highly unlikely that you can cover all the disciplines directly relevant to your specific position or specialty. This is a fundamental issue both for those experienced in the field looking to expand their teams, and for those new to the field finding their way in.

I know this first-hand. I've got a solid 8 years of professional experience, and I'd say about 6 of those were strictly IT. In that time, I never got to administer or experience an enterprise environment. I had little exposure to hardware or complex configurations. I hadn't had a reason to understand a full-blown virtualized environment. Beyond a home network, some small IT shop deployments, and personal projects, my systems administration skills are quite weak! I've read Windows logs, but I don't know exactly what stimulus leads to which logs -- being able to directly simulate a user failing to log in, or brute-forcing a Windows admin account, is something I am in dire need of at this stage of my career.

So let's get to the meat!

The Purpose

I needed a lab. I know that with my personality and skill level, I would only ever fully utilize such a lab if I were able to take it around and share my experience, working with others to accomplish and review the work. Portability was key! Naturally, the market solution to that is the NUC. Extremely capable, extremely transportable. 

Recon Infosec created a mobile DFIR lab and went this route: 3x Skull Canyon NUC, plus 1 more machine I imagine is their primary network interface. The NUC is a sleek package with passive cooling, but packs an i7 and 32GB of DDR4 RAM. Quite a powerhouse, and you can slap 3 into a nice little case ready to go.

And doing what I do best, I leaned on others for further information.

And, naturally, some jerk got me salivating for something I had no clue even existed! Thanks @M3atShi3ld, you jabroni. He casually threw this out there... and I instantly went into a tailspin thinking of all the possibilities -- of every possible pro/con I could wrap my head around.

Come 05/02, and I'm speeding home from work after receiving delivery confirmation of a very sexy box.

There's an E300-8D in that box. So why did I choose this box?

A few things led me to it.

  1. NUC
    1. High price, RAM-limited. The Skull Canyon NUC are interesting, but max out at 32GB RAM. While they have an i7, I've heard that for the vast majority of people, simple labs like this hardly touch the CPU, so that's essentially a wasted resource
    2. Given that, scaling up significantly means multiple boxes, and the cost adds up quite quickly
    3. With more boxes comes more management. Recon Infosec was already up to 3 of these NUC for their lab. That's a lot of hefty management, which is probably cool to implement and good learning, but not something I wanted the hassle or cost of at the moment
    4. Runs DDR4
    5. Passively cooled. Again, I have no clue what the real resource impact will be on these boxes, but if it does end up chewing CPU, I imagine they get quite hot
    6. I heard from multiple people that reliability on the NUC was not great
  2. Supermicro Embedded E300-8D
    1. Truly enterprise-grade gear, which I've never had the pleasure of interacting with
    2. Xeon processor. I figure for the kinds of loads it will see, it's better suited than the i7. I have no technical backing for this claim.
    3. There are other Supermicro embedded server options which I probably should have researched a little more. This was a slight snap judgement, mainly because I was excited to get something significant and hefty
    4. 6 onboard NICs. Again, I can't fully justify this -- with fancy enough configuration and automation, I imagine a 4-port server would be no big deal to utilize -- but I really like the scope and vision that having this capability brings me
    5. 128GB RAM max on this box. I can go to 64GB RAM without dipping into extreme ECC price territory like we're seeing with DDR4 at the moment
    6. Honestly, the biggest downside right now is RAM price. I only opted for 32GB now, hoping that by the time I scale to 64GB it will be better, and by the time I'm going to 128GB there won't be a whole lot of financial constraints on my needs
    7. Has M.2, mSATA, and a 2.5" bay for storage. I'm not exactly sure what options are available on the NUC, but I figure they may max out with just an M.2
    8. Fans are surprisingly quiet. I've dealt with some blade servers and R410s and whatnot. This is NO jet engine like those! It spins up loud on boot, but soon rounds down to a very reasonable volume. I keep it in my living room while hacking away at it and it hasn't bothered me so far.
    9. SFP+ ports! Tons of network capacity for teaming, and huge ability to share lots of data or connect to big Internet pipes if I ever need to
  3. Price Comparison
    1. The Skull Canyon NUC is $520 on Amazon at time of writing
    2. E300-8D is $650 on Amazon
    3. RAM is exactly the same for both (up to 64GB)
      1. So, assuming that CPU doesn't bottleneck and I scale up to 64GB RAM (which I assuredly will), I'll have effectively saved myself $400 or so versus buying NUCs, on base hardware alone -- since I'd need two of them to reach 64GB (rough math after this list)
      2. That's not to mention savings on the physical networking infrastructure, since I have so much on-board networking capability
      3. Potential to be able to scale up to 128GB RAM if I don't hit any other bottlenecks
      4. Uses DDR4, so if I do go to ECC it will be modern gen RAM that is re-usable or tradeable elsewhere
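Rough math on that savings claim, using the prices above: reaching 64GB of RAM takes two NUC at 2 x $520 = $1040, versus $650 for a single E300-8D. That's about $390 saved on base hardware alone, before counting the switches and cabling the onboard NICs replace.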

Unboxing

It's smol. Quite smol. 

Breaking Into the Chassis

Initial access was quite straight-forward. RTFM. Remove 2 screws, slide the cover plate back. Expose all the juicy innards!



Starting the Build

Let's break into all this fancy new hardware. 

Timetec Hynix IC 32GB Kit (2x16GB) DDR4 2400 MHz PC4-19200 Non-ECC Unbuffered 1.2V CL16 2Rx8 Dual Rank 288 Pin UDIMM
Samsung 860 EVO 500GB mSATA Internal SSD (MZ-M6E500BW)
WD Black 1TB Performance Mobile Hard Disk Drive - 7200 RPM
Samsung 860 EVO 500GB M.2 SATA Internal SSD (MZ-N6E500BW)

Total Build Cost: ~$1300

Start with the RAM, since you can't screw that up. Take the little wins before you end up in wars of attrition over stupid things like moving a single stand-off so you can use the mSATA drive.

So about that. Out-of-the-box, I'm not sure what they intend for you to use the stand-off for. It's right in the path of the mSATA PCI slot, and the manual indicates you should move the stand-off to secure the mSATA drive. Okay, whatever.

No. Not just whatever. I'm going to save anyone else who builds this a lot of time: tear it down -- all apart. You will be removing the motherboard to move this standoff. Just heed my lessons.

Like any good hacker, I had it physically disassembled and in pieces within an hour of being hands-on with it. 

Next up were the M.2 drive and the 2.5" drive. I wanted at least one drive with a good amount of cheap, long-term storage. Originally I had read there was a 4 TB drive, and ordered it... but it was a 3.5" drive. Ugh. The WD Black was a straightforward, super cheap option, but you could go with another SSD or maybe find a bigger 2.5" drive.


First Boot and Hypervisor


I decided that since I had a free license thanks to .edu hookups, I'd go with "real boy" virtualization and head straight to ESXi 6.5. So let's get it going! Now this part should be really easy if you're not a complete idiot like me. I struggled for way too long, through too many USB drives and ISO images, before I finally got the right god damn ESXi installer image onto a USB drive. Plugged it in, and waited for the first satisfactory boot!
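If you want to skip my flailing: on Linux, the raw write looks like this -- a sketch, and hedged, since some ESXi installer ISOs only boot reliably when written with a dedicated tool like Rufus instead. The ISO name and /dev/sdX are stand-ins; triple-check the device with lsblk, because this overwrites the whole thing.

dd if=VMware-VMvisor-Installer-6.5.0.x86_64.iso of=/dev/sdX bs=4M && sync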

First ESXi boot!

My first connection to it was hooking an Ethernet cable up to a router I had lying around, and then my PC up to the router. This was my first method of connecting to the management port on the server, and it worked this way for a little while until I discovered direct connect, thanks to a little magic called Automatic MDI Crossover -- more on that later!

For some reason, I had a deep-seated need to change the hostname first thing. So I went and did that!

I set it by going to the following setting:

Networking > Default TCP/IP stack > Edit Settings
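If you'd rather do it from the ESXi shell, the equivalent is a one-liner -- a sketch, with "dumbyland" as my stand-in hostname:

esxcli system hostname set --host=dumbyland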

Networking!

At first, I did something really dumb by screwing around with hardware passthru, due to me being really misled in my thinking. I eventually turned back out of that rabbit hole. It took a full system reset, but I was able to get back in and go back to configuring it.

https://docs.vmware.com/en/VMware-vSphere/6.5/vsphere-esxi-vcenter-server-65-networking-guide.pdf

Now for the proper setup --

Set a static IP on the primary management NIC -- vmnic0. I chose 192.168.88.50
Set a static IP on the Ethernet interface on your computer (a USB adapter in the case of my laptop) -- 192.168.88.100
Connect Ethernet cable from computer to vmnic0
Access the web interface from computer -- https://192.168.88.50
Appreciate the great success!
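The server-side half of that can also be done from the ESXi shell -- a sketch, using vmk0 (the default management VMkernel interface) and the addresses above:

esxcli network ip interface ipv4 set -i vmk0 -t static -I 192.168.88.50 -N 255.255.255.0

And the computer side, if it happens to be Linux (eth0 is a stand-in for whatever your adapter enumerates as):

ip addr add 192.168.88.100/24 dev eth0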

With all this set up, management is that much better. Now you only need a regular old Ethernet cable and a machine to plug in, right up to the server!

But this only covers management networking. Given that my ultimate purpose is to travel this thing around, and share in the experience... I need a way to connect up to the internal networks, whatever they may become. There's also questions here of how exactly I'll bridge those connections, and how I'll hook people up to it on the outside. Lots to learn!

I also have some dreams of it being used in situations where I can utilize it as a drop-in IR box, or something fun like that. The box has a lot of physical networking capability... and let's cover all that!

So, out of the box it has 6 physical NICs on the motherboard.

vmnic0 - Already assigned to management for ESXi
vmnic1 - I wanted to use this port to provide an Internet uplink for the box
vmnic2 - And this one for an external, physical LAN
vmnic3 - Gaming (No plans yet, figure I could eventually use it for mobile LAN server)
vmnic4 - IDS
vmnic5 - Unallocated

At this point, I'm not precisely sure what I wanna do with the rest of this, but have some general ideas. Let's start to implement at least some of those ideas on the box itself! We'll now dig into vSwitch networking within ESXi.

I have a general idea what to do, and this is my playground. Let's go for it!

We're going to create a series of vSwitches to represent different networks... obviously. 1 for VMs, 1 for physical access, 1 for Internet/"Gateway", etc. When you create a vSwitch, you can also designate a physical uplink port.

There is a default switch -- vSwitch0 that gets created with vmnic0 set as the physical uplink -- that's how the management network works! So let's create a vSwitch called "VM Access", because I need an interface into the server for physical devices to get in. I wanted to save the 2nd NIC for gateway since it's more of a "management" type service, so we'll use the 3rd physical NIC for that switch.

Name: "VM Access"
Uplink: vmnic2
Port Group: "Access VMs"

And that's that! Now, you've assigned physical NICs to virtual switches... but you still need to attach VMs to those virtual networks! VMs connect to vSwitches in ESXi through "Port Groups". You create a new Port Group and link it directly to a vSwitch. Let's create a couple more vSwitches, plus some port groups to use for the VMs (there's a scriptable version of all this after the listings below). I did not have the best naming conventions here by any means -- do better than me!

Name: "Gateway"
Uplink: vmnic1
Port Group: "Internet Access"

Name: "VM Network"
Uplink: NONE
Port Group: "VM Connection"

Name: "vSwitch0"
Uplink: vmnic0
Port Group: "Management Network"

We've got the physical NICs as figured out as they can be for now -- let's get the virtual side up and going!

Core Router

Now, as I mentioned before, I wasn't entirely sure how this was all going to work. I had figured that a router VM of some kind would be needed, and that sounded fun enough to play with anyway! I downloaded a copy of OPNsense and set up a VM for it.

We're going to initially have 3 interfaces for this VM:

Internet Access - Provide Internet gateway for the router
VM Connection - Provide VMs with a connection to the router
Access VMs - Provide the physical LAN access to the router

This next bit was a lot of fun. It took me 2 laptops, a few afternoons, and some free time at work to completely figure this all out... but boy did I learn and do a lot to get there!

With console access via the ESXi web client, I was connected to the OPNsense VM. There are very limited options from this point, and I figured that it would be straightforward enough to set up a few interfaces. Boy was I so, so wrong, and it caused me a lot of grief. To be fair... the problem I was experiencing was *extremely* basic, and nobody that I asked for help had any for me, so HAH!

I went ahead and configured the 3 interfaces as I saw them. During basic configuration, you set up 2 interfaces -- WAN and LAN. There is also a 3rd, simply called OPT1 (which I later renamed to VM1, as indicated above). Neat little tidbit about OPT1 and LAN: LAN gets rules by default to allow traffic. OPT1 has zero rules to allow traffic by default, and you can't do jack shit from it until you configure firewall rules on it. The router was configured to provide DHCP on both interfaces.

I did not know about the firewall rules.

I also did not realize how painful it is to swap back and forth from static IP on 1 physical NIC, over to DHCP on the other physical NIC, in vain attempts to figure out what in the fuck you screwed up on the configuration.

My laptop connected up and got DHCP -- everything looked right. IP was good, subnet good, gateway good. Could not ping the router. Could not access the webconfig. WHAT THE FUCK! I know it's pretty common that firewalls/routers won't answer ping by default, so that made sense. But no SSH, and no webconfig... annoying and weird!

Here's my Saturday: hacking away with my primary laptop on my left knee and my Chromebook on the right -- both without Ethernet ports, and me struggling to rotate through my stock of 4 sort-of misbehaving USB-Ethernet adapters. It was a great time! I ended up using a LANTurtle and a simple Monoprice adapter I had laying around. The LANTurtle was great and worked reliably, except it has its own layer 3 abstraction that made the troubleshooting *really* hard to pin down and be really sure of myself.

So, here I am. Swapping cables in/out, adapters in/out, swapping back and forth between ESXi management and the VM Access network. Using tcpdump from the router console, I can see pings and web requests arriving from my laptop, so I know the traffic is reaching the router -- but SOMETHING IS BLOCKING SHIT. I'm going nuts, absolutely convinced that it's a VMware firewalling issue, searching everywhere and asking people for advice.
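For reference, the sort of thing I was running on the router console -- a sketch; the interface name depends on how OPNsense mapped the virtual NICs (with e1000 virtual NICs they typically show up as em0, em1, and so on):

tcpdump -ni em1 icmp or tcp port 443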

I finally broke down and went with a new tactical route -- from the virtual side. During a free moment I had uploaded a few Windows ISOs and created a Win7 VM. I booted it up, connected to the VM Network vSwitch, and accessed it via the web console. DHCP connects, I get an IP address on the 192.168.1.x range (the LAN interface on the router) and... AND... IT PINGS THE FUCKING ROUTER. AND CONNECTS TO THE WEB CONFIG OHMAHGAWDTHEFUQ.

Now another REALLY fun bit... This was Win7, with old ass IE being the only available browser. The OPNsense webconfig doesn't work for a god damn thing AT ALLLLLLLLLLLL in that version of IE, naturally.

Another REALLY fun thing about ESXi is that there is no good way to upload files to the VMs. No drag'n'drop like I'm so used to with type 2 hypervisors. The best method I found was to create a data ISO, upload that to ESXi, and then connect it to the VM as a disk drive. Mount the ISO, and pull off the data.
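If you've got a Linux machine handy, you can build that kind of data ISO for free with genisoimage -- a sketch, with the file names as stand-ins:

genisoimage -o transfer.iso -J -r ~/files-for-vm/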

After a few flip-flops back and forth with insane ideas of what was wrong, I did the work: signed up for a trial of Daemon Tools Lite for their data ISO creation app, loaded an ISO up with Chrome (remember to use the OFFLINE installer if you haven't yet figured out WAN, like me at this point), and got that onto my desktop VM.

Chrome opened up the OPNsense webconfig beautifully! Now I'm in business. Oh yes. First thing I did was switch the LAN and OPT assignments for the interfaces.

Back on my laptop... I CAN PING THE GATEWAY AHHHHHHH. And I can access the webconfiAAAAAAAAAAAAAAAAAAAAHHHHHHHHHHHHHHHHHHHHHHHHH

AHHHHHHHHHHHHHHHHHHHH.

Ahem. 

Now like any good systems administrator, first thing I did was go into the firewall, and add ANY*ANY rules to all the interfaces. Back at the Win7 VM, I can ping the gateway. I can ping my laptop. My laptop can ping the gateway, it can ping my VM. Everybody loves everybody. Everything is good in the world. 
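Each rule looked roughly like this under Firewall > Rules, repeated per interface -- lab convenience only, obviously not something to leave on a network you care about:

Action: Pass
Protocol: any
Source: any
Destination: any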

*Note: I did keep notes of the stupid security things I did. I honestly kinda like this as a retrospective of how certain mistakes get made!

Portability!

Here we'll set up some good portability for this mighty fella. I already had a nice, padded case from Harbor Freight that I was using for my sadly abandoned SDR gear. Now repurposed for real use, this package is coming along smoothly!

Another big part of that is getting the hell away from those damned USB adapters. I made a decision to go extremely basic with my initial rollout of network gear. 

1 TP-Link gigabit 8-port switch
1 ZyXEL 802.11n Wi-Fi AP [NWA1121-NI]

I'm not entirely sold on either bit of this gear, but so far they have served me well! Connect the first and third Ethernet ports on the server up to the switch, and then the AP to the switch. The cabling is a mess, and ideally I'll do PoE to an extremely small AP, but I couldn't find any of the Mikrotik MiniAP available, and I didn't want to fully invest in a PoE switch when I wasn't sure what feature set and expandability I'd require.

Who knows, maybe I'll get an itch or financial support to go crazy and use the onboard SFP+ ports! Those are 10-gigabit uplinks, which means all sorts of applicability for being dropped into all sorts of environments. The gear for them just isn't quite... consumer/hobbyist researcher friendly.

ANYWAYS.

AP configuration was pretty straightforward. Set an SSID, connect up to the SSID, and it gets passed through to the switch. The switch passes it to the 3rd physical NIC. That gets passed to the core router VM. Now I just connect it up, make sure the VM is running, and I'm ready to roll via a simple Wifi connection!!! This should support quite a few clients with plenty of speed to access VMs, hassle-free. Hooray!

One final convenience -- accessing the ESXi management from Wifi. Another entry for the "security todo" list: I created a new port group -- "Management - From Wifi" -- and attached it to the "VM Access" vSwitch.

The ESXi host itself gets its management network interfaces via "VMkernel NICs". The default one is attached to the default vSwitch. This one, we're going to attach to the "VM Access" vSwitch, as just mentioned. Now the ESXi host is reachable via the vmnic0 physical NIC for management at 192.168.88.50, and over the Wifi/physical LAN at 192.168.1.3. If I'm ever concerned about locking things down -- a red vs blue situation, or just being around more untrusted/unsavory people -- that interface can simply be disabled or hardened. Now I can access the VMs, and the ESXi host, from the comfort of my laptop simply connected to a Wifi connection.
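From the shell, that second management interface looks something like this -- a sketch; vmk1 is just the next free VMkernel NIC name, and the addresses match the ones above:

esxcli network ip interface add -i vmk1 -p "Management - From Wifi"
esxcli network ip interface ipv4 set -i vmk1 -t static -I 192.168.1.3 -N 255.255.255.0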

Let The Gateway... OPEN!

HOME STRETCH OF ALL THE FUNDAMENTAL BULLSHIT!

What a journey, am I right? My final goal, to have really "figured it all out" as far as the basic infrastructure goes, was an Internet gateway connection. The reason this has been so difficult is that all the networks around me are cobbled together with scotch tape and hope. I'm basically all Wifi at home due to poorly located coaxial drops.

I found a way to deal with this via my handy dandy Pineapple Mk 5! It's a very capable and convenient device to keep around as an awesome wireless bridge or general router/gateway. I connected up to it and hooked it up to my home wifi. Then I ran a cable from its Ethernet port to the server's 2nd physical NIC -- vmnic1, Gateway! IT'S FINALLY HAPPENING!

Checked out the OPNsense webconfig, and what do you know... WAN IP ACHIEVED!

Connected to the "DumbyLand - VM Access" SSID, I can ping google.com, and I can access the ESXi web client, and I can ping my Win7 VM! I can access my online Docs! My VM can access the Internet! IT'S ALL WOOOOORKKKIIIINNNNNGGGGG!!!!

THE FUTURE

I've already messed around with a couple ideas for labs. With all of this figured out, the rest should be figuring out the VM environments themselves! I can bridge the worlds, and provide the wealth of the Internet to them. 

For now, I think that's a fucking great start.

I love ya, d3c4f. We all do! And we'll sure miss you. You've inspired many, and will always continue to do so.

Last Edited: 2018-05-18