Despite numerous setbacks and failed attempts to bootstrap the heart of the lab, I was finally able to get things rolling over the weekend. As a quick review, I chose two SuperMicro SuperServer E300-8D machines as the core hardware due to their small footprint, quiet operation, low power consumption, and performance specs.

Each of the servers runs an Intel Xeon D-1518 processor, which has 4 cores and a 6MB cache, and draws only 35W of power. Additionally, out of the box they can house one M.2 PCI-e SSD, one mSATA SSD, and one 2.5″ SATA drive. Currently, I’m only using the M.2 slot for an NVMe SSD, but in the future I could add an mSATA drive or expand with more M.2 via a PCI-e expansion card. From a memory perspective, the E300-8D supports up to 128GB of registered ECC RAM, so I went ahead and put the full 128GB in each. Essentially, that memory ceiling is what convinced me to use the SuperMicros instead of grabbing a pair of higher-end Intel NUCs (max 32GB). Now, I could drone on about more technical specs, but pictures are often a lot more descriptive. To that end, I enlisted the help of my Star Lord POP! figure, mostly to provide scale… but also for the humor.

Star Lord, Legendary Outlaw, sizing up the SuperMicro E300-8D.

As you can see, the E300-8D has quite a small footprint overall. It’s basically 1U rack height, but in terms of length and width it’s smaller than my 14″ ThinkPad. Also, I don’t believe I’ll ever run out of ethernet ports with these machines: there are six 1GbE ports, two 10GbE SFP+ ports, and one dedicated IPMI port. Otherwise, they are pretty basic, with a VGA port and two USB ports, but they are running headless now that they are bootstrapped anyway. I could have opted for PXE boot, but for this first lab build I kept it simple and just plugged in a monitor and a keyboard (with built-in touchpad and TrackPoint), and booted from a USB drive.
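For reference, creating the boot media was nothing fancy; a minimal sketch along these lines, where the ISO filename and the USB device node are placeholders rather than my exact values:

```bash
# Write the RHEL installer ISO to a USB stick (this destroys the stick's contents).
# /dev/sdX is a placeholder -- double-check the device with `lsblk` before running.
sudo dd if=rhel-server-dvd.iso of=/dev/sdX bs=4M status=progress conv=fsync
```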

Compact layout, but plenty of power. He doesn’t know how this machine works.

Anyway, before getting to the operating system installation, I needed to install the hardware. Above is the “before” shot of the system opened up, and below is the “after” shot with all four memory slots loaded (left of the CPU/heatsink) and the M.2 NVMe installed (right of the CPU/heatsink). Of note for future growth: to the right of the NVMe are the two PCI-e expansion slots and the mSATA slot (mounted horizontally, with the white barcode sticker). You might also notice that this picture shows the Western Digital Black NVMe drive I originally attempted to use.

RAM and NVMe drive installed… without any tape.

Before moving on, I want to quickly elaborate on what exactly happened with the WD Black NVMe drives. As I said in the last post, from all indications the drives appeared to be working: they were recognized in the BIOS, the Red Hat Enterprise Linux (RHEL) installer detected them accurately, and debugging with smartctl came back with a healthy status. Yet any attempt to write out new partitions would completely fail. I even went so far as to try formatting the drives with a Windows 10 installer (an idea drawn from online research, colleague input, and the fact that WD support indicated they only really supported Windows or Mac), but that ultimately failed as well. Trust me when I say, if there was a permutation of BIOS setting, manual formatting, automated partitioning, hardware re-seating, or operating system, then I tried it. In the end, I ordered a pair of Samsung 970 EVO Plus drives, and once they were installed I was able to run through a minimal RHEL installation without a hitch. As far as I can tell, there must be some hardware or BIOS incompatibility between the SuperMicro motherboard and the WD Black NVMe drives (WDS250G2X0C, and likely the 500GB model as well).
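For anyone retracing this debugging, the checks boil down to a couple of commands; a rough sketch, assuming the drive enumerates as /dev/nvme0n1 (verify with lsblk first):

```bash
DEV=/dev/nvme0n1  # assumed device node -- yours may differ

# SMART health check: this reported a passing status even on the failing WD drives
smartctl -H "$DEV"

# Write a fresh GPT label and a single partition -- the step that consistently
# failed on the WD Black NVMe but succeeded on the Samsung 970 EVO Plus
parted --script "$DEV" mklabel gpt
parted --script "$DEV" mkpart primary xfs 1MiB 100%
```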

Two E300-8D servers powered up and networked next to the Synology DiskStation DS1618+

With all the baseline installation work completed – including setting up the static IP addresses for the three NICs I will be using in each – the final step was to re-home the boxes down to my low-tech lab rack and wire them up. Above is the final (for now) configuration: network hardware sitting on top, opportunistic use of the wire shelving for routing the ethernet cables, and the rest of the lab hardware (UPS, servers, NAS) on the second shelf. I do have a third, empty shelf on the bottom, but I may use that for storing my electronics bins. One thing I would change if I were doing this again: mount the second shelf one slot higher to leave a little more slack in the network cables. The 3′ cables I chose worked fine, but extra slack would have allowed more options for routing them.
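As a point of reference, static addressing on RHEL comes down to a handful of nmcli commands per NIC; a minimal sketch, where the connection name, addresses, and gateway are placeholders rather than my actual lab values:

```bash
# Placeholder connection name and addressing -- repeat for each NIC in use
nmcli con mod eno1 ipv4.method manual \
    ipv4.addresses 192.168.1.11/24 \
    ipv4.gateway 192.168.1.1 \
    ipv4.dns 192.168.1.1
nmcli con up eno1
```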

Close-up of the simplified network cabling

Lastly, this close-up shot of my current network setup reveals my downfall. As it turns out, having multiple VLANs on a switch (SG200) behind a router (SG300) behind another router (N600-DD) was really cool, but complicated to get working properly. Put briefly: configuring it was easy; getting all the outbound Internet routing working was problematic. In an effort to just get everything up and running, I scrapped the complexity and am simply running everything in a single default VLAN. It would probably take another full post just to explain all the network roadblocks I ran into, the discoveries I made, and my plans for the future. So, I will leave things as they are… functional, simple, and with more to come soon.