vSphere Lab 2013 – CORECLOUD – Phase 1


A few months ago, I realized that I needed to revamp my lab setup to meet the additional need of hosting several VMware workloads, where I could easily deploy and tear down environments quickly and painlessly.  The core of my lab was a mixture of server- and desktop-class processors and was not standardized at all.  I had too many heterogeneous systems: some with out-of-band management, some without, some single-socket lower-memory systems, some dual-socket Xeons with larger memory, nested ESXi, and it was just messy.

Fast forward to the present and my needs have substantially changed.  Like most people's, my lab has morphed as my needs changed, from a single-socket Core-series processor running VMware Workstation through to today's setup, which I'll break down here.

The new architecture is really basic but provides everything I need to meet the specific demands of my career working for a VMware VAR.  I need to be well versed in many VMware products:

vSphere, Horizon, vCloud Director, vCenter Operations Manager, and SRM.

As most of you are aware, there are many subcomponents of the above core VMware products, but one thing stood out very clearly to me: vCloud Director would have to be the glue that ties my lab together so that I can rapidly provision and tear down vApps as needed.
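
To give a flavor of that provisioning loop, here is a rough Python sketch against the vCloud Director REST API that logs in and lists the vApps in an org.  The cell hostname, org name, credentials, and API version below are illustrative placeholders, not my actual lab values:

```python
# Sketch only: hostname, org, and credentials are made-up placeholders.
import requests

VCD = "https://vcd.lab.local"
HEADERS = {"Accept": "application/*+xml;version=5.1"}  # vCD 5.1-era API version

# Log in: vCD returns a session token in the x-vcloud-authorization header
resp = requests.post(VCD + "/api/sessions", headers=HEADERS,
                     auth=("administrator@CORECLOUD", "password"), verify=False)
resp.raise_for_status()
HEADERS["x-vcloud-authorization"] = resp.headers["x-vcloud-authorization"]

# Query the vApps visible to this user (response is XML; parse as needed)
vapps = requests.get(VCD + "/api/query?type=vApp", headers=HEADERS, verify=False)
print(vapps.text)
```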

I tore the old lab down and started from scratch.  I already had an investment in SKT 1366 Xeons, a single dual quad-core box with 48GB of memory, so I decided to keep that and build an additional SKT 1366 server from used parts from the forums over at HardOCP.  I figured, why not?  I can get the parts very reasonably and the performance will still be acceptable.  These two hosts would be the compute workload for my private cloud, CORECLOUD.

CLOUD HOSTS:

Asus Z8PE-D12

2 x Intel Xeon E5520s w/Intel BXSTS100C HSFs for SKT 1366 Xeons

48GB DDR3-1333 ECC (12 x 4GB sticks)

2 x Intel PCI-E dual-port 1GbE NICs for a total of 6 NICs per server, plus 1 dedicated 10/100 IPMI port

1 x Intel X520-DA2 per server for dual 10GbE (future connectivity to high-speed storage), for a total of 2 x 10GbE ports per server (set up for vMotion over a crossover link; see the sketch after this list)

4GB USB Key for ESXi 5.x installs
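
For the vMotion-over-crossover piece above, a quick pyVmomi sketch shows the idea of flagging the 10GbE VMkernel port for vMotion on each cloud host.  The vCenter address, credentials, host names, and the "vmk1" device are assumptions for illustration; normally I just do this in the vSphere Client:

```python
# Sketch only: vCenter address, credentials, host names, and vmk device are assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab with self-signed certs
si = SmartConnect(host="vcenter.lab.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the two CORECLOUD hosts and enable vMotion on the
# VMkernel port that sits on the 10GbE crossover link (assumed to be vmk1)
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    if host.name in ("esx01.lab.local", "esx02.lab.local"):
        host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", "vmk1")
Disconnect(si)
```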

Infrastructure Server:

Supermicro X8SI6-F (awesome board, has a 2-port LSI SAS2008 6Gb/s SAS controller onboard)

Intel Xeon X3430

32GB RDIMM (4 x 8GB) ECC DDR3 1333

5 x 500GB Western Digital Black drives & a 128GB Intel SSD for read cache

Dell-branded Intel 6-port 1GbE NIC (love this card, 6 ports on one card!) + 1 dedicated 10/100 IPMI

Storage:

Iomega PX4-300d w/4 x Seagate Barracuda SATA 6Gb/s 7.2k drives in RAID5

Nexenta with 2 x 1GbE and the LSI SAS2008 passed through; ZFS pool of two 500GB mirrors with SSD read cache and a hot spare (the hardware is my Infrastructure Server above; see the pool sketch below)
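
The ZFS layout on the Nexenta side looks something like the sketch below: two mirrored pairs of the WD Blacks, the Intel SSD as L2ARC read cache, and the fifth drive as a hot spare.  The pool name and cXtXdX device names are placeholders; in practice I built the pool through the Nexenta UI rather than a script:

```python
# Sketch only: pool name and device IDs are placeholders, not my actual layout.
import subprocess

zpool_create = [
    "zpool", "create", "corecloud",
    "mirror", "c1t0d0", "c1t1d0",   # first 500GB WD Black mirror
    "mirror", "c1t2d0", "c1t3d0",   # second 500GB WD Black mirror
    "cache", "c2t0d0",              # Intel 128GB SSD as L2ARC read cache
    "spare", "c1t4d0",              # fifth WD Black as hot spare
]
subprocess.run(zpool_create, check=True)
```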

Network:

Ubiquiti EdgeMAX EdgeRouter Lite (awesome router with enterprise features)

HP 1800-24G (great switch for its age; supports flow control, jumbo frames, etc.)

The virtualization platform of choice is VMware vSphere/vCloud Director/vCloud Networking and Security.  I have vCenter Operations Manager, Horizon View, the Cisco UCS Platform Emulator, and a nested vApp that will be used for SRM testing with a nested UberVNX appliance!

This setup has provided all the functionality I need; however, the single infrastructure node is causing some resource constraints, since I allocate as much memory as I can so that I don't run into performance problems.  Even though it's a lab, I want it to be snappy, and if I have the memory, why not!

Another key point is that the infrastructure cluster is not highly available, so I'm setting myself up for failure.  My next move is to purchase a Dell C6100.  These are gaining great popularity because they are cheap for what you get: 4 blades and 96GB of memory for $700…that's a steal for the compute power you get in a 2U form factor.

More info here:

http://www.servethehome.com/Server-detail/dell-poweredge-c6100-xs23-ty3-cloud-server-2u-4-node-8-sockets/

In Phase Two I will be implementing the C6100 and migrating my infrastructure VMs to the C6100 environment.  This will allow me to dedicate the current INF server to storage tasks, so its case will be replaced with a Supermicro 3U case with at least 16 SATA 6Gb/s hot-swap bays, along with a true Nexenta build with mirrored SSD read and write cache and better, higher-capacity drives like some Western Digital Reds.

Stay tuned!
