Matrix - Server Build

Oh, how foolish, yet direly uninspired I was but just 10 days ago.

A boy wandering his way through online bazaars of components, not knowing what in the world would do him right. Dreams of Epyc, the legacy of Xeon. A war of strangely-named Lakes, editions, and socket compatibility in a whirlwind of availability and compromise.

Never was anything straightforward. That is the way of the home-built server, and it’s a journey. If you’re anything like me - selecting the parts alone is a saga of disappointment and anguish, exhilaration and anger. I can research specific information with the best of them, dig deep into product sheets… but there are so many different variables to bring together. RDIMM vs LRDIMM? Oh you BETCHA we’re gonna learn about that! Selecting storage capacity - who are you really, what do you care about, do you care about things, how much will you pay for them? Let’s find out!

And why not toss in learning all about ZFS and Proxmox from scratch, all on my own! I’ve come out of this with a completely new philosophy of managing a hypervisor. I had a lot of insane and idiotic ideas at the start. But look at me now - a dumby with his very own glorious pile of iron and silicon to torture!

And I’m very proud of what I have built. In this post I’m gonna cover my purpose for building a server, a shopping guide, and the physical build of the server itself. It’s taken me a while to get around to writing this, because even this much has kept changing drastically right up to this very day.


Table of Contents

Purpose & Philosophy

Introducing - The Matrix

Purpose & Philosophy

A home server. Why do I want it? What do I hope to accomplish?

I love simulation. I am not satisfied to just read about or demo something, I want to BE it. The ultimate entertainment and learning experience for me is doing. If I want to learn how to secure a particular type of environment, I want to walk its grounds with my own 2 feet. Since those opportunities are limited, the next best thing is building simulations of those environments to explore.

The mission of my home laboratory is emulating enterprise environments. I want purpose, definition, and a philosophy for what exists and what it does. That means scale, maturity, and breadth. I want to do it real and right, the same as if I were building something professionally for work. I write documentation, build procedures, and consider projects end-to-end. I’m not interested in just throwing together a Windows red team environment with a domain controller, a couple of victim machines, and an attacker VM to run through exercises with.

I want a small business environment with processes and work happening. I want to simulate an attack, but I can’t do that before I’ve built and maintained a complete IT environment with the complex and dynamic needs that really drive decision making in the real world. I want to have the Excel spreadsheets of passwords, machines that slipped from asset management and are out-of-date, and a mile-long to-do list of security enhancements we all have that we hope to get around to. That’s the experience in real life, and what I’m simulating here.

To that end, I want to really understand this technology. I want to feel confident and competent as a Linux, network, and virtualization engineer so that when I’m working with real experts in these domains I have the appropriate level of awareness, and empathy with the difficulties and challenges of properly building and securing complex environments.

Return to Table of Contents

The All-In-One

Beyond simulating laboratory environments, this is going to be a true production server. I’ve built it with demanding performance needs in mind, and I intend to tune it to take full advantage of the hardware. It has redundancy and enterprise-grade parts intended for real work. I’m going to properly manage my home network as if it were an enterprise network, with advanced capabilities to match.

It’s going to be my core home infrastructure, as well as my laboratory. It will manage all of my networking, and run any of my home services as I migrate away from the cloud. I want it to stream media, act as a storage server, be my router, security monitoring stack, and also cover home automation.

This is a direct extension of my previous home lab environment, DumbyLand, which I built on a Supermicro micro appliance server - one I still have and love! I learned everything I know about KVM with it, and I’m going into this project with a much more mature mindset about how to architect this system thanks to that experience.

To expand my home serving capability, storage was the biggest blocker I had. Building a NAS was a necessity, especially if I wanted to migrate off of cloud services and start to centralize and own all of my data. By the time you spec out a NAS, you may realize it’s only about 25% more expensive to build an amazing server… so that’s what led me down the route of building an “all-in-one” - storage, virtualization, network appliance, and home server.

Return to Table of Contents

What about cloud?

I am a cloud cynic. I think it unlocks wonderful capabilities for organizations, and it is a necessary part of the modern enterprise toolkit. It is no panacea for IT needs, however. If you intend to fully utilize the cloud, it is a true dedicated discipline that you need to fully ingrain into the fold of your IT. Half-assing it only leads to heartburn and sad pandas.

Scalability and distribution are massively beneficial traits for a distributed mobile application, or for a service with extremely volatile usage patterns. But for standard IT infrastructure, the management headache of learning to properly scope, architect, and configure a cloud with all of its individual services is a massive undertaking. In my view, if this is your goal and you haven’t hired explicit experts in the particular cloud environment you’re onboarding to… you’re doomed.

There is an endless number of unique scenarios and gotchas in every IT environment, and cloud is no different. Knowing how to manage AWS costs is a complete discipline of its own. AWS IAM is unlike Azure IAM, and both are unlike Active Directory and other privilege management.

Cloud is expensive. While you may have a ton of flexibility over what to spin up, and can microtune to your needs, knowing how to do so effectively has been a headache in my (admittedly limited) exposure to cloud environments. I haven’t found a single cloud training/development program that gives you a decent amount of flexibility in spinning up resources to experiment with. And god forbid you forget about something, or accidentally screw up the way you suspend/terminate it, and end up with unintended costs.

Many CTFers have stories of forgetting a cloud GPU instance only to remember when they receive a bill with a comma a couple months later!

While network bandwidth is a huge benefit of cloud, my primary focus is serving myself and my home. My laboratory environment will be almost entirely self-contained and won’t have many external WAN requirements or concerns. If I wanted to host a public service that many people would consume, I’d have some problems that cloud would certainly help solve. I have gigabit at home that works very reliably, so I’ll learn how to live with that.

Bandwidth is also expensive as hell! Streaming HD media through AWS would cost a ton. Transferring data to and from devices - especially when experimenting with large datasets of log/security data - can suddenly add up to a couple hundred bucks a month!

https://twitter.com/acloudguru/status/1108746634458992640?s=20

I calculated the cost of renting a dedicated server from IBM with similar specs - 256GB of RAM, a decent CPU, etc. - and it would have cost something like $1,500 a month WITHOUT the storage!

Total, I ended up spending just over $4,000 on this build with taxes/shipping. For some context, $2,400 of that alone is just memory/storage, which you would still end up spending if you only got a basic NAS. A nice chassis on top of that would be $400, and if you wanted one capable of running a few containers as well, more like $800. I was also looking at a dedicated physical router that would have run me about $400 - but I’ve replicated that and more with my virtual network infrastructure. So, realistically, I ended up splurging about $1,000 more than a NAS would have cost to give myself a massive playground of opportunity and the ability to do damn near anything I want. For another $1,200 I can max out the memory to 512GB if I decide I want to, and that price will drop in time as DDR4 becomes cheaper.

Eventually, I also see myself getting a dedicated NAS purely for a massive lake of storage and backup purposes, but I’ll only need that when I get to the point of really pushing this build to its limits. Until then it will have PLENTY of storage to get me started on that self-hosted life.

Return to Table of Contents

Self-Hosted Life

The final word on the purpose of this build is the self-hosted part. The opportunity to build an amazing amount of infrastructure using enterprise-grade open-source software is truly mind-blowing. You can replicate damn near any of the best cloud services with competing projects, and assuming you don’t need huge diversity/availability like serving a product to a customer base, a home server is just fine for all of that!

https://github.com/awesome-selfhosted/awesome-selfhosted

Have a website or online service I find myself using, or even paying for? May as well host it myself! Want to unlock capabilities for more legitimately managing my network? There’s a ton of tools for that! Security monitoring? You bet. Want to get rid of Google Drive? Done. URL shortener, pasteboard, note sharing, privacy VPN, home automation? So much good stuff, and no more temptation to buy a Raspberry Pi for it!

Return to Table of Contents

Introducing - The Matrix

Why name it Matrix? Because it is my own self-contained virtual universe. It can run a full reality of IT services within itself, with no dependence on any external factors besides an Internet connection.

Return to Table of Contents

Hardware Specifications

CPU: Intel Xeon E5-2683 v4
* 16c/32t
* 2.1GHz
* 40MB L3
Motherboard: Supermicro MBD-X10SRA-O (LGA2011-3)
* 6x PCIe slots
* 8x DDR4 DIMM slots
* 10x SATA ports
RAM: 256GB DDR4-2666 LRDIMM (4x 64GB)
PCI Expansion:
* 4-port PCIe Gigabit Ethernet
* (2x) U.2 to PCIe adapter (x4 PCIe)
Power Supply: Gamemax GM700 700W Semi-Modular
Storage:
* Boot drive - 1TB Samsung 860 EVO
* Storage disks - (8x) Seagate IronWolf 6TB
* Caching - (2x) Intel P3600 NVMe DC SSD
Chassis: SilverStone SST-CS380B-USA
* 8x hot-swap 3.5” HDD bays
Cooling:
* CPU - Noctua NH-U9DX i4
* Chassis - (3x) Noctua NF-F12 PWM
GPU: EVGA GTX 970

Return to Table of Contents

Parts Guide

Here is where the adventure got real. I’ve wanted a beefy home server for a while, but never really dug into the dream of actually putting a full build plan together. It was only after about a week of shopping and multiple days of research that I became dedicated to actually building the server.

It’s really not an easy task these days. There are many compromises you have to make around market availability of parts, what’s hot, and what’s coming soon. I’ve looked at Epyc 2 CPUs quite a few times and was lightly familiar with the product line, but when I started shopping I chatted with a few folks on Twitter and realized that availability of the newest stuff is non-existent - quite literally, you can’t realistically find current-generation server CPUs at a price affordable to a regular ol’ consumer like me.

What ultimately made my build fall into place was Amazon, which ended up being my primary spot for components. As much as I want to cut off my support of them, you’ll see how critical they were to this build. I found a reasonably contemporary CPU at a pretty killer price, which narrowed down what I was looking for elsewhere.

Return to Table of Contents

CPU

Let’s go over CPUs a bit. There are many aspects you have to take into consideration when selecting one. First are the basics - core count, hyperthreading support, core frequency. Then the amount and types of memory it will support. The socket determines which motherboards it can fit into. Cache size is a huge consideration for performance - especially, I think, for virtualization, where the CPU is constantly juggling a huge variety of different tasks: the more cache, the better. Going into this I had no clue what counted as “OMG that’s a lot of cache” versus “oh wow that’s barely anything”. Let’s go over the information I covered in my learning and selection process.

The Intel Xeon E5 v4 family of CPUs is a mid-range enterprise line with support for advanced features and solid specs, built on the Broadwell architecture that came out in 2016. Current-gen architectures are in the range of Kaby Lake, Skylake, Coffee Lake, etc., and while it would have been cool to experiment with the more cutting-edge stuff, Broadwell is tried and true, and easily available. I originally got an E5-2620 because I have never really been limited by CPU usage, but I wanted a half-decent amount of L3 cache and such. That chip has 20MB of cache, which would have been pretty solid.

To get a better idea of the market, take for example a Xeon Silver 4210 available from Newegg. This is a $540 server CPU on the Cascade Lake architecture (released in 2019). The MSRP is $501-$511, so there’s already markup there. Taking a closer look, it has 13.75MB of L3 cache and a 2.2GHz frequency, and it uses the LGA 3647 socket.

Maybe $540 is a bit rich for your blood. A more modern-gen Xeon E3 v6 is a Kaby Lake CPU released in 2017 that costs $306. It is 4-core with hyperthreading, so 8 threads (4c/8t). Alright, so maybe you don’t really need a lot of thread support for your build. It has 8MB of cache, and that doesn’t sound terrible if you’re building a server with a far lighter load. It supports DDR4 and ECC. Cool! Maybe you’re just building a storage server with a ZFS pool that will serve a few VMs and containers… but then you look at the datasheet and realize it supports a max of 64GB RAM, which pretty quickly kills many modern server builds - I’d say 128GB is the minimum I would want to be able to scale up to. We’ll cover more of this later, but between ZFS’s appetite for RAM and the flexibility of freely allocating memory to VMs and containers, the extra headroom is so worth it.

Now let’s take a closer look at the Xeon E5-2620 v4 I initially ended up with. It cost $289 refurbished on Amazon, with Prime delivery. It has 8 physical cores plus hyperthreading, which results in 16 threads. That’s quite a bit of headroom for spreading out processing tasks, as well as perhaps some opportunity for dedicating cores to specific tasks later on. It has 20MB of L3 cache, which is far more than a lot of the other server CPUs I looked at. It runs at a 2.10 GHz base frequency and can turbo up to 3.0 GHz, which I think is plenty of runway in processing power. I’ve never really pushed CPU utilization high with my server loads before, so I wouldn’t expect frequency to matter a whole lot as long as I have threads to spread load across, with lots of cache and memory available. It supports enterprise-grade amounts of memory, with a 1.5TB max of DDR4. Initially I would have been just fine with a quad-core, so getting 8 cores was awesome. It uses the LGA2011-3 socket, which was extremely widely used and should be well-supported… but we’ll find that isn’t really the case.

Another huge consideration here is availability. I won’t lie - once I got my heart set on this, I was anxious as hell to get it in my hands, and was willing to compromise on a lot just to make sure I could get it soon. Once it looked like I’d only have to compromise lightly and could get everything timed to arrive within 7 days, it all fell into place. After landing on a CPU, I could shop for a motherboard and memory. After those three, the rest is pretty straightforward and doesn’t really impact the other components much.

I felt like I had done a pretty good job shopping around by this point, and was stoked on the solidly midrange setup I had ended up with.

However! Fantastic budget power landed in my lap. When chatting about the specs, a friend, who is also an asshole, was like “why wouldn’t you get an actually cool CPU, you loser?”. And so I landed back on Amazon, just by chance taking a poke around. Then I found an E5-2650 that would arrive in time with all the other parts… and it was WAY cheaper at $235! Wow! 12 cores, 24 threads, at a 2.2GHz frequency. It has an MSRP of $1,100… this is no lightweight chip! Because I was shopping within the same Xeon E5 v4 family, everything was LGA2011-3, so the same motherboard would work regardless.

As luck would have it, that CPU was marked as delivered the same day I got the rest of my parts. Except I never got it. I still have a claim open with FedEx that hasn’t been responded to yet because the package was marked “left at dock”, and signed by someone whose name I certainly don’t know. At my apartment complex, we have an office manager that is there through the whole day, there are package lockers that delivery people can use, and they’re allowed direct access to floors to drop off packages. It’s not consistent what they choose to do, but I’ve never lost a package here, and it’s been secure so far. I’m 100% certain that it was stolen/lost by the delivery person, so I’ve gotta figure that out.

However! ULTIMATE budget power then landed in my lap. This extended my final hardware plans out quite a while, but it was 100% worth it - I found a Xeon E5-2683 v4 CPU on Amazon for $230! WHAT! This chip’s MSRP is $1,850. It is 16-core, with 32 threads at 2.10 GHz. It has 40 (forty!) MB of L3 cache.

And that’s where we are today! I received it in the mail a couple days ago, and dropped it into my server this morning, and it’s working perfectly.

This htop output is a beautiful sight.

Return to Table of Contents

Motherboard

With one anxiety attack completed in choosing a CPU architecture to go with, the next big decision was a motherboard. I knew I needed an LGA2011-3 motherboard. Again, this is Broadwell, which, to my limited recollection, was huge back in the day for on-prem environments. Maybe I’m wrong and it’s dying out… or maybe it’s still popular and that’s why it’s so hard to find quality parts? I don’t know, but the market for server motherboards was extremely limited. I did have a solid option, but it would have been great to have at least 3-4 options to pick from.

A server motherboard is really the first criterion. The number of RAM slots alone is a huge consideration, and I really wanted 8 - with fewer slots you end up spending more on high-density memory, and desktop-style motherboards don’t offer any other real benefits. There were a few options for dual-socket motherboards, which was a really interesting consideration but definitely doesn’t fit my use case, so those were out of the equation. Looking on Newegg, I found a very solid Supermicro board. It would ship in time and had good features, so I went with it.

Supermicro MBD-X10SRA-O LGA2011-3

It has tons of PCIe expansion slots, which ended up being super important in my build, so I’m extremely happy I splurged a bit on a higher-end board. It has 10 SATA ports, which ended up being perfect for how I currently have things configured, and that’s another huge consideration when selecting a proper motherboard. If you need more than that, you’ll need a PCIe card to expand it out. I already had a pretty good idea that I was going to use 8 HDDs, because I had a case in mind that would support that, and that seemed like a really solid number of drives for the use case I have in mind.

Not much personality in the choice for this mobo, but it’s been awesome! It’s an ATX-format board, which is a considerable factor on one of the next big decisions.

Return to Table of Contents

RAM

Oh boy, RAM is a complex formula. With the CPU selected, you’ve narrowed RAM down to a DDR generation and a maximum amount. The Xeon E5 supports up to 1.5TB of DDR4, but that doesn’t do me a ton of good. With the motherboard selected, these were the constraints I had to work with.

Okay, so what is RDIMM vs LRDIMM? LRDIMM (Load-Reduced DIMM) is used to achieve higher-density memory than RDIMM (Registered DIMM) allows. To reach the full 512GB this motherboard supports, I would have to use LRDIMM. Technically LRDIMM is lower performance; in my research I figured it’s probably around a 5% difference in efficiency, which is rather negligible.

https://www.dasher.com/server-memory-rdimm-vs-lrdimm-and-when-to-use-them/

The next significant difference is cost. Maxing out RDIMM at 256GB across all 8 slots would have cost me ~$900 ($450 x 2 kits). Getting 256GB of LRDIMM cost $1,200, so it’s a fairly significant difference in price. The reason I went this route is that I would really hate for RAM to end up being the bottleneck in getting full utilization out of this hardware. A few extra bucks upfront to guarantee that I can expand to a full 512GB later, with few other ramifications, seemed like a very solid tradeoff. If you’ll never need more than 256GB, RDIMM is better performing and cheaper.
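To make the tradeoff concrete, here’s a quick back-of-napkin sketch in Python using the rough prices above - the figures are just what I was seeing while shopping, so treat them as placeholders rather than a pricing guide:

```python
# Back-of-napkin RDIMM vs LRDIMM comparison for an 8-slot board.
# Prices are the rough figures from my shopping at the time - swap in your own.

slots = 8

options = {
    # name: (GB per DIMM, rough price paid for 256GB)
    "RDIMM  (8x 32GB)": (32, 900),    # ~$450 per 4x32GB kit, two kits, all slots full
    "LRDIMM (4x 64GB)": (64, 1200),   # higher density, leaves 4 slots free for later
}

for name, (gb_per_dimm, price_256gb) in options.items():
    max_capacity = slots * gb_per_dimm      # ceiling with every slot populated
    cost_per_gb = price_256gb / 256
    print(f"{name}: ${cost_per_gb:.2f}/GB today, max {max_capacity}GB with all {slots} slots filled")
```

Roughly $300 more upfront, in exchange for keeping the 512GB ceiling on the table.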

Return to Table of Contents

Chassis

With all of those other big decisions out of the way, the rest is far more isolated and doesn’t really impact decisions on other components. Because I am going with the all-in-one route with this build, storage is a huge consideration. Many moons ago a DC801 friend recommended this Silverstone case for a NAS, and it’s always stood out to me.

The immediate draw is that it has 8 hot-swap 3.5” bays. Finding that many bays at all is pretty rare, let alone in a capable and customizable format. I was sold almost immediately after coming back to it. My motherboard is a standard ATX format, so I knew I needed a full-sized case - but this is certainly smaller than any full-tower! It has solid cable management, USB 3.0, and the 5.25” expansion bays up top.

Initially I thought I had two 2.5” SATA SSDs that I was going to use, so I currently have an expansion cage in one of the top slots that gives me 4x 2.5” hot-swap bays. It was a bit expensive ($60), and as I’ll outline later, I found out those two drives actually aren’t SATA at all, so I have no purpose for those bays other than my boot SSD. I can easily shove that anywhere else in the case, really, so I’m gonna have to decide whether this still serves any purpose or whether I can return it.

There are other interesting chassis options to go with from Silverstone if you can do micro-ATX motherboards. Otherwise, rack-mounted is probably the only way to go. If I was going to be doing a server + NAS, I likely would have bought a small rack to put in my closet.

Return to Table of Contents

Storage

Even though I pretty much had my heart set on it, it took definitively deciding on the Silverstone NAS case to settle on using 8 drives. It must be a simple equation from there, right?

HAH. HAAAHAHAHAHA. This is where you start to question who you are. What makes up the consistency of your soul. What happens if a HDD fails? Can you live with this data evaporating? Would it make you mildly upset, or would a part of you die? How much of a hoarder are you, really? Is it your data, or is your data you?

I’m learning about myself every time I think about storage. I review how much I’ve used cloud services to store stuff in the past, but it also makes me think back on all the random HDDs I’ve had over the years, full of documents, music, and other memories that would have been awesome to keep… but how much space would it really be? I’m almost maxed out on Google Drive, but that’s only like 17GB… how much of that is compressed, and how much have I NOT saved over the years because of that limit? Quite a lot, I know for sure. I’m sick of it. Archiving that legacy is becoming so important to me, and I desperately miss my collections of old data backups, screenshots, chat logs, photos, and especially music.

I love HD content, so media streaming is going to be huge. I’d love to have a collection of FLAC for any music I really care about, but also get away from Spotify entirely which would require downloading a truly massive amount of diverse music because I rely on radio functionality so much to find music for my mood. I’ve never run a Plex server before, so getting familiar with how big file sizes are, the diversity of them… I really have no clue how to properly scope for that.

Plus we have all the unrest lately, and the crazy shit of the last couple years. Archiving websites, social media feeds, and entire platforms is a reality! How cool would it be to have space to volunteer to ArchiveTeam when they’re suddenly faced with needing 50TB of space out of nowhere? It’s probably also not great to rely entirely on torrents to retain archives of videos from protests and such - better to make sure at least a few people have those datasets backed up on hard storage. I want to participate in those activism efforts, and find my own niche interests to archive and preserve.

How much am I willing to spend? The biggest cost of these builds by far is storage, because the more you get, and the more of it you actually access, the more you need it to be redundant and highly available.

How redundant? Because I don’t have any other dedicated storage to rely on, I wanted to run RAIDZ2 on this build. That gives me two drives’ worth of redundancy before I lose any data. If I get a dedicated NAS later, I would probably duplicate important data across both; that would allow a lighter amount of redundancy here and restore some of my total capacity for use.

Backup plans? This is my core infrastructure, I have a huge need for availability, reliability, and recovery. My VMs need to be redundant. Their storage needs to be redundant. Their configs, backups, snapshots, etc. all start to add up.

The simulation environment is another major component. I’m a security guy, and monitoring/analysis is a huge part of that. That requires data - and monitoring data tends to be quite verbose. Monitoring my home network and doing a lot of cool security analysis could take up to a couple of TB of space if I go all-out with full PCAP, long-term datasets (1-3 years), etc.
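For a rough sense of where “a couple of TB” comes from, here’s a hedged back-of-napkin sketch - the throughput and retention numbers are placeholders for a home network, not measurements:

```python
# Very rough PCAP storage estimate for home network monitoring.
# Every input here is a guess; the point is the shape of the math, not the answer.

avg_throughput_mbps = 0.5        # average sustained traffic across a day (homes are bursty)
retention_days = 365             # how long to keep full captures
seconds_per_day = 24 * 60 * 60

bytes_per_day = avg_throughput_mbps / 8 * 1_000_000 * seconds_per_day
total_tb = bytes_per_day * retention_days / 1e12

print(f"~{bytes_per_day / 1e9:.1f} GB/day of raw PCAP, "
      f"~{total_tb:.1f} TB for {retention_days} days of retention")
```

Bump the average throughput or the retention window and the total balloons fast, which is exactly why this bucket gets its own allocation.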

So I have redundancy, simulation needs, archive, backup, service needs, overhead for stuff I’ve forgotten, a cost concern, and then runway to expand because data size only balloons exponentially from here. Simple!

There are really only two choices when it comes to buying HDDs built for NAS purposes: Seagate IronWolf or Western Digital Red. The WD Reds were significantly more expensive, and I honestly don’t know of any meaningful difference in benefits between the two. The IronWolf drives have plenty of fine reviews.

The cost difference between capacities is negligible up to 10TB, at which point you have to start deciding whether the extra capacity is worth the jump in base cost. I did a back-of-napkin calculation that pointed to me realistically using about 30TB of storage on this server. Here are a few of the allocations I sketched out with an extremely surface-level investigation… so it will be extremely interesting to see how it all works out.

Going with RAIDZ2 for proper redundancy, and with about 30TB of space sounding right, that leaves me with 6TB drives. Doing the simple parity math, 48TB of raw space becomes 36TB of capacity in RAIDZ2. With the further overhead of ZFS and the actual formatted sizes of the disks considered, I ended up with just over 30TB available to my Proxmox host - that sounds just about perfect! Any less and I’d be skimping on allocations to resources. Any more and I’d be concerned about having thrown money into storage I don’t really need.
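Here’s that napkin math as a quick sketch, assuming 8x 6TB drives in a single RAIDZ2 vdev and ignoring the finer points of ZFS allocation overhead, which shave off a bit more in practice:

```python
# Rough RAIDZ2 capacity math for 8x 6TB drives in one vdev.
# Real pools lose a bit more to metadata, padding, and recommended free space.

drives = 8
drive_tb = 6                     # marketing terabytes (10^12 bytes)
parity_drives = 2                # RAIDZ2 survives two drive failures

raw_tb = drives * drive_tb                       # 48 TB of raw disk
usable_tb = (drives - parity_drives) * drive_tb  # 36 TB after parity
usable_tib = usable_tb * 1e12 / 2**40            # what the tools actually report

print(f"raw: {raw_tb} TB, after RAIDZ2 parity: {usable_tb} TB "
      f"(~{usable_tib:.1f} TiB before ZFS overhead)")
```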

Another consideration with ZFS is that the more storage you have, the more RAM it uses (the common rule of thumb is roughly 1GB of RAM per 1TB of pool). On top of that, the more free space the pool has, the better it performs. That’s a significant tradeoff to consider when you’re scoping out capacity, and I think I landed on a perfect middle ground.

Caching is another topic I had to consider in this. ZFS is amazing technology I’ve only barely scratched the surface of. There are so many concepts to learn, but few that are necessary to really know in order to hit the ground running with it. Proxmox has great ZFS support, so I kind of leaned on it for a lot of it. However, I did learn quite a bit of the fundamentals. One of which is caching!

https://www.ixsystems.com/blog/get-maxed-out-storage-performance-with-zfs-caching/

ZFS has a philosophy of multiple performance tiers when it comes to data availability. The reason it is such a RAM hog is the ARC (Adaptive Replacement Cache). Recently written and recently read data is kept in the ARC, which makes it a low-latency source for reads from a ZFS pool. The system looks to the ARC first when retrieving data - obviously super fast, because it’s RAM!

The ARC is balanced between MRU (Most Recently Used) and MFU (Most Frequently Used) lists, which is complex as hell. There’s all sorts of algorithms and logic behind how ZFS optimizes its caching here.

I honestly have no idea how to properly scope what ZFS use is going to look like. The workload for this server is quite unknown at this point, but I expect to really push it to the limits in experimentation. However, because it’s a virtualization host above anything else, RAM availability is important!
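Since the hypervisor needs that RAM more than the cache does, one common knob on a ZFS-on-Linux host (Proxmox included) is capping the ARC via the zfs_arc_max module parameter, typically set in /etc/modprobe.d/zfs.conf. A minimal sketch of that sizing - the 64GiB cap is just an example, not a recommendation:

```python
# Sketch: pick an ARC ceiling and print the matching modprobe option.
# zfs_arc_max is a standard OpenZFS-on-Linux module parameter, in bytes.
# The cap below is an arbitrary example for a 256GiB host, not tuning advice.

total_ram_gib = 256
arc_cap_gib = 64                 # example: leave ~192GiB free for VMs and containers

arc_max_bytes = arc_cap_gib * 2**30
print(f"options zfs zfs_arc_max={arc_max_bytes}  # cap ARC at {arc_cap_gib}GiB of {total_ram_gib}GiB")
```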

To help balance that, and to experiment and learn ZFS more deeply, there’s also the L2ARC - Level 2 ARC! It accelerates random read performance on datasets that may be bigger than what the ARC can hold - and when I think about data archiving, security datasets, and data capture/processing, I’m thinking at the TB level, so some additional caching would certainly be useful!

In addition to the L2ARC cache, ZFS also has the ZIL (ZFS Intent Log), known simply as the ‘log’. A synchronous write is when the OS writes application data to disk and must ensure it has landed on stable storage before moving on to the next system call. The data is first cached in RAM, but the acknowledgement has to wait - and if it’s waiting on a spinning HDD, that can be a lot of latency. Having a dedicated SSD for the ZIL means the write can be acknowledged as soon as it hits the SSD, and ZFS can then flush it from the ZIL to the hard disks at its own pace.

You can dedicate particular drives in the system to each of those roles - cache and log. To reiterate my philosophy with this build: a huge part of going with these advanced capabilities is to really push myself, learn these technologies better, and experiment with performance. I am NOT certain I’ll actually utilize these optimizations to their fullest, but it was a worthwhile investment to try and learn from, so I’m super happy with what I ended up with.
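For reference, attaching those roles to an existing pool is just a couple of zpool commands. The sketch below only composes and prints them - the pool name and the by-id device paths are placeholders, not what my system actually calls them:

```python
# Sketch: compose the zpool commands that attach an L2ARC (cache) and a
# ZIL/SLOG (log) device to an existing pool. Printed only, never executed.
# "tank" and both device paths are placeholders - use your real pool/devices.

import shlex

pool = "tank"
cache_dev = "/dev/disk/by-id/nvme-INTEL_P3600_cache"   # placeholder path
log_dev = "/dev/disk/by-id/nvme-INTEL_P3600_log"       # placeholder path

commands = [
    ["zpool", "add", pool, "cache", cache_dev],  # L2ARC: accelerates random reads
    ["zpool", "add", pool, "log", log_dev],      # log/SLOG: absorbs synchronous writes
]

for cmd in commands:
    print(shlex.join(cmd))
```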

There was really only one major brand to consider when shopping for new, datacenter-grade SSDs. The best alternative I found was refurbished - Intel P3600 datacenter PCIe NVMe SSDs, which were recommended specifically in a few blogs.

In my research it was recommended to use SSDs for those purposes, obviously. However! Standard consumer SSDs are notorious for not holding up to heavy load very well. It really costs to get enterprise-quality, datacenter-grade SSDs that can deal with heavy IOPS loads. But they exist! And they aren’t insanely out of reach. They certainly weren’t cheap, but $200 for datacenter-grade equipment from Intel is tempting, and I figured 1TB each of NVMe would be the perfect amount of runway for caching and log duty, since my “active” data calculations came out to probably 1-3TB max at any particular moment. I can adapt this over time as I find my needs changing.

The P3600s are AWESOME… but they presented one unique problem for me. That whole “enterprise-grade” thing comes with its own learning experience. When I was researching the P3600, not being knowledgeable in the hardware world… or smart… I foolishly looked past the obvious signs that I needed to take a closer look. The P3600 line is pretty diverse in capacities and connections: some are 2.5” drives, others are half-height PCIe cards.

I gave in to cognitive dissonance and convinced myself that these were just very durable 2.5” SATA drives, and that there were also PCIe versions that would slot into an expansion slot which, for some reason, they just didn’t list on the website. Yup, that’s definitely it.

I kept seeing that they were PCIe drives, but ignored it. It doesn’t help that they fit perfectly into a standard 2.5” hot-swap cage, and at a glance their connector looks like SATA… but it’s NOT! After I put them in and my system didn’t detect the drives, it took me a pretty disturbing amount of time to figure out why they weren’t working. I was at the point of looking up extremely niche BIOS issues like VT-x compatibility with IOMMU and NVMe drives or something. It turns out they just weren’t even connected to the system. Mega derp moment.

I then had to research what type of drives they actually were - even reaching out to Twitter at one point - to identify what the hell this port was. Eventually, in the full datasheet, I noticed the little detail “8639-compatible connector”. Googling that, I found it’s more commonly known as U.2 (SFF-8639), and there are PCIe adapters for it! I found two of them on Amazon Prime and had them delivered the next day… insane. Again, it was WEIRD as hell to me that Amazon was the best distributor for individual, niche server components.

I’m not alone in my idiocy! I figured that this was going to end up being some rather uncommon enterprise crap… and sure enough, on Wikipedia it’s described as “developed for the enterprise market and designed to be used with new PCI Express drives along with SAS and SATA drive”. That day I learned! Then I put them in the adapters, slotted them in, and was up and going.
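If you ever end up in the same “why doesn’t my system see these drives” spiral, a quick sanity check saves a lot of BIOS spelunking. A minimal sketch (plain Linux, nothing Proxmox-specific) that lists whatever NVMe controllers the kernel has actually enumerated:

```python
# Sketch: list the NVMe controllers and namespaces the kernel has enumerated.
# If your U.2 drives don't show up here, it's a connection/adapter problem,
# not an IOMMU/BIOS rabbit hole.

from pathlib import Path

sys_nvme = Path("/sys/class/nvme")
controllers = sorted(sys_nvme.glob("nvme*")) if sys_nvme.exists() else []

if not controllers:
    print("No NVMe controllers detected - check cabling and adapters first.")

for ctrl in controllers:
    model_file = ctrl / "model"
    model = model_file.read_text().strip() if model_file.exists() else "unknown"
    namespaces = sorted(p.name for p in Path("/dev").glob(f"{ctrl.name}n[0-9]"))
    print(f"{ctrl.name}: {model} -> namespaces: {', '.join(namespaces) or 'none'}")
```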

The last thing to figure out was a boot drive for the host. This holds my hypervisor’s core system - the Proxmox installation, core OS maintenance, ISO images, scripts, etc. It’s generally recommended to go with a simple thumb drive because it doesn’t need to be super fast storage, just reliable and large enough to install an OS to. However, I wanted a nice, quick, reliable drive, and the Samsung 860 EVO is the standard. I figured I may as well get the 1TB to give me enough room for its own logging and maintenance resources such as scripts and backup data. I could have saved a few bucks going simpler here.

Return to Table of Contents

PCI Expansion

My motherboard has tons of expansion capability, which is a damn good thing because I’ve used almost all of it!

Here’s what I currently have in it:
* 4-port PCIe gigabit Ethernet NIC
* (2x) U.2 to PCIe adapters carrying the Intel P3600 SSDs
* The EVGA GTX 970 for display output

Return to Table of Contents

Power Supply

There was nothing special about my decision here, really. I wanted semi-modular because I would really only be connecting CPU, motherboard, and SATA power. I was originally aiming for about 500W, but the market was pretty limited. I wanted it as efficient as possible because it would be on 24/7 and probably sit at a fairly high load. I could go with a full-sized ATX unit, which was nice. Overall, after weighing the difference in efficiency and price between nicer units and their power ratings, Bronze-certified seemed like a fair tradeoff.

I ended up just searching for semi-modular, high-efficiency, quiet units with decent reviews. I chose mine because it had a green fan, even though it’ll never ever be seen /shrug. It was 700W, which - given that I’m currently planning on leaving my 970 in and I got that higher-end CPU - might work out better in the end.

GAMEMAX GM-700 Semi-Modular PSU

Return to Table of Contents

Cooling

I think I’m over watercooling. I went with it for my desktop build because I had great aspirations of doing a super tiny gaming PC. That all went out the window and I ended up with a smallish mid-tower case with a glass side panel. Pretty much the antithesis of what I wanted, but finding a case without extreme compromise was goddamn impossible, so I went with what was solid and made me pretty happy.

Air cooling is so efficient, and cheap. I can get basically the highest-tier CPU cooler for $60. It’s from Noctua: an extremely well-engineered heatsink, beautiful, adjustable for RAM clearance - and they just make hands-down the best fans on the market, silent with huge airflow.

I went Noctua both for my CPU heatsink and for replacing all three case fans with high-performance, low-noise fans. Because this will live in my closet six feet from where I sleep, noise is a huge consideration! My old lab server had tiny 60mm fans running at jet-turbine volume levels… I really needed to replace those with higher-grade Noctua fans too.

All put together, this is what the cooling looks like! Two fans on the HDD cages, the CPU cooler, and a rear exhaust fan.

Return to Table of Contents

GPU

This wasn’t anything I planned on. Luckily, I had built a new gaming PC last year and still had my original one in the closet. I yanked the EVGA GTX 970 from it because I needed graphics output to install the core OS. Sitting on the ground with a keyboard and a USB-powered HDMI screen, I got the core OS installed and the server onto the network.

If you’re planning to build a server, I would strongly recommend making sure you have an old GPU on hand, or buying one extremely cheap on eBay or something, just to make this kind of fallback management simple.

There’s nothing quite like getting to yell at your own computer for your own mistakes.

Scientists say it is 1500% more effective at getting a response from your machine than just yelling at a monitor to get through to the cloud.

Return to Table of Contents

Total Parts & Costs List





Curious what music inspired me while writing this post?

UADA! So much Uada, amazing American black metal. Give them some love!