2015 Home Lab Update

It’s been a while since I put together a post about the state of the home lab.  It has been in flux for a while, due to lack of time and to waiting on the actual release of the vSphere 6 bits.


The purpose of the lab is to provide enterprise-class features and performance for testing, running actual production applications, and validating configurations, with a major focus on automation and OpenStack/vRealize Suite testing.  I also wanted to pack as much as possible into a small footprint, so as not to require a huge, or even mid-size, rack. Originally, the lab was to be migrated to my brother’s new office build, but that was put on hold so long that I couldn’t wait any longer to move it there.  I checked out other places to colo the equipment, since it is server-class infrastructure, but in my area colo space comes at a premium and wasn’t worth the expense.  Then the idea came to house it at our office; however, since the office moved across the hall, we have no air conditioning in our data closet, so nothing is running in there other than a switch and a single server.  I had no choice but to house it at my home, for now.

To get started, let’s look at the different lab types out there, the route I chose to stick with, and the reasoning behind my decisions:

1. Mobile Labs (Laptop)

  • Good
    • Small form factor
    • Mobile
    • Totally integrated within a single unit
    • Low power and heat
    • Great starter lab at low cost of entry
  • Bad
    • Constrained to the limitations of laptops (i.e. max 32GB RAM, small storage capacity)
    • Nested virtualization would be a requirement, which brings reduced performance
    • Customization required for network access/CPU virtualization tech for nested vSphere
    • Requires a Type 2 hypervisor like Fusion or Workstation at additional cost
  • Ugly
    • Management overhead
    • Constantly spinning lab components up and down
    • Many laptops do not support 32GB of RAM, which, in my opinion, is not enough memory capacity for an effective lab beyond base vSphere
    • Gets slower the more you nest
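
Nested ESXi under Fusion or Workstation typically needs a few .vmx tweaks to expose hardware virtualization to the guest. A minimal sketch (option names reflect Workstation 9+/Fusion 5+; verify against your version):

```ini
# Expose Intel VT-x/EPT to the guest so nested ESXi can run 64-bit VMs
vhv.enable = "TRUE"
# Identify the guest as ESXi so the product picks sane defaults
guestOS = "vmkernel5"
# An e1000 NIC is recognized by the ESXi installer out of the box
ethernet0.virtualDev = "e1000"
```

You also generally need to allow promiscuous mode on the virtual network so that VMs nested inside the ESXi guest can reach the outside world.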



2. Whitebox Home Lab Build (Custom build from individual components)

  • Good
    • Choice (Many choices on components)
    • Possible reuse of “desktop” components or use of your current custom workstation/pc
    • Great starter lab at low cost of entry
    • Adding capacity and functionality is easy: storage, RAM, 10GbE/FC CNA/HBA, storage controllers, etc.
  • Bad
    • Possibly no remote out-of-band MGMT if using desktop parts
  • Ugly
    • Validation of components and support from vSphere
    • May require custom driver builds for ESXi
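
When a whitebox NIC or storage controller isn’t on the HCL, the driver usually arrives as an offline bundle that can be pushed with esxcli. A rough sketch (the bundle path is a placeholder; community-built VIBs are unsigned, hence the signature override):

```shell
# Copy the offline bundle to a datastore, then install it on the host.
# --no-sig-check is required for unsigned community VIBs.
esxcli software vib install \
  -d /vmfs/volumes/datastore1/net-whitebox-offline-bundle.zip \
  --no-sig-check

# Most driver installs only take effect after a reboot.
reboot
```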


3. Small Form Factor (NUC, Brix, Apple Mac Mini)

  • Good
    • Small form factor
    • Low power and heat
    • Small footprint
    • Great WAF
    • Easy to work with and deploy
  • Bad
    • Constrained to the limitations of SFF (i.e. max 16GB RAM, small storage capacity)
    • Usually not ideal CPU choices
    • Possible customization required for network access/CPU virtualization tech for nested vSphere
    • Expensive for what you are getting
  • Ugly
    • No remote out-of-band MGMT (exception: vPro)
    • Limitations on network capacity; usually a single NIC, and you have to mod the unit to add another
    • Limited VM capacity due to the above


4. Enterprise Class (Name brand servers, Dell, Supermicro, HP, etc)

  • Good
    • Power, Power, and more power…Excellent Performance
    • Remote MGMT (Out of Band) Easier to Manage remotely
    • Supported hardware (for the most part)
    • Enterprise class features that integrate with vSphere like DPM, etc
    • Designed to run 24/7
    • Higher Density and workload capacities
  • Bad
    • Power and Heat
    • Larger Footprint, Rack may be required
    • Noisy and expensive to run
    • COST: while you can buy previous generation used fairly reasonably, sky’s the limit here
  • Ugly
    • Minimal to zero WAF
    • Cost for hardware, operational overhead, environmental
    • Requires some additional skillsets to manage Enterprise gear (minimal but could be more time consuming)
    • Slower to stand up/deploy

[Photos: the Enterprise-class lab hardware]

These are the core categories I see out there for home labs, and I have identified what I believe are the benefits and drawbacks of each.  None of them is perfect, but there are plenty of choices for everyone.

Like most, my journey was a natural progression through most of these.  I originally started with my MacBook running Fusion, and when AutoLab came out I was ecstatic; however, I outgrew it fast as my skills and interests moved to View, then to vCD, and now on to vCAC, OpenStack, and NSX. This pushed me to build two custom whitebox servers, which served my needs for a couple of years running all types of workloads.

Now, my goal is to stand up an environment that resembles, as closely as I can get, a production environment, with a focus on IaaS platforms on the latest bits that will be released shortly from VMware, PernixData, and Veeam.

I have moved to more of a “converged” platform (using that term loosely here) where I can pack as much power into as small a footprint as possible.  Here is my current environment as it sits today (pictured in the Enterprise lab section):

A. Storage


Storage is provided by a custom-built Supermicro server (the top server) with 24 x 2.5″ drive bays on a Supermicro X8SI6-F socket 1156 board with an Intel Xeon X3440 and 32GB RDIMM (4 x 8GB).  This board has served me well. It has an onboard LSI SAS2008 controller, and I have added two IBM M1015 HBAs flashed to IT mode.  I’m running the latest Nexenta Community Edition 4.x release, and my drive setup consists of the following:

  • 2 x 60GB Corsair SSDs (mirror): Nexenta install
  • 2 x 200GB Intel S3700 SSDs: write cache
  • 13 x 1TB Seagate hybrid 2.5″ drives: 6 mirror pairs and 1 hot spare
  • Purchased and shipped: 5 x Samsung 843T 480GB data center SSDs in a RAIDZ for Tier 0 storage
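
Under the hood this layout maps to a straightforward ZFS pool. A sketch of roughly what gets built (device names are placeholders, and in practice Nexenta drives all of this through its own UI):

```shell
# Main pool: six 1TB mirror pairs, one hot spare,
# and the two S3700s as a mirrored write log (SLOG)
zpool create tank \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  mirror c1t4d0 c1t5d0 \
  mirror c2t0d0 c2t1d0 \
  mirror c2t2d0 c2t3d0 \
  mirror c2t4d0 c2t5d0 \
  spare  c1t6d0 \
  log mirror c3t0d0 c3t1d0

# Tier 0: the five Samsung 843Ts in a single RAIDZ vdev
zpool create tier0 raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0
```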


Coming Soon

  • Current parts: 3 x Samsung 850 EVO 250GB SSDs
  • To Do: need to purchase more drives

This will eventually house my MGMT VMs.

EMC PX4-300d

  • File storage: 4 x 3TB WD REDs

B. Compute

Compute is served by two separate clusters, one for MGMT and one for cloud workloads, plus a single node for DR testing located at a remote site.

Intel NUC DN2820FYKH: used as the physical Domain Controller/DNS for the lab

Supermicro FAT TWIN 6026TT-HDTRF with 2 x sleds (4 x L5520s/192GB RAM/8 Intel NICs/2 x 10Gb Broadcom 1020 direct connect/2 x 200GB S3700 host cache for PernixData): MGMT Cluster

Supermicro 6026TT-BTRF with 3 x sleds (6 x L5520s/144GB RAM/12 Intel NICs/3 x 180GB Corsair SSD host cache for PernixData): Cloud Cluster

Custom whitebox server on an Asus Z8PE-D12X (2 x 5620s/48GB RAM/8 x Intel NICs/12TB RAID5 local): single host for DR testing

To Do: build the DR server and install it at the remote location

C. Network (Lab)

My network is a mix of newer equipment and older switches that need to be replaced.  I’m waiting on 10Gb to come down further in price.

Routing: Ubiquiti Edgemax Router

Switches: 2 x HP 1800-24g Switches

To Do: need to purchase 10Gb switch(es)


D. Rack

Supermicro CSE-RACK14U

To Do: need to purchase a UPS

This setup should last a while, until the next generation of converged infrastructure comes down in price.  Next round, my eyes will be on a single Dell C6220 with integrated 10Gb and a Dell 8024 10GBase-T switch, but both need to come down significantly in price before I sell off what I have and pursue that route.

I’ll be posting more on the actual build as soon as the vSphere 6 bits are released!
