Lab Refresh – Storage

I’ve been debating which route to take for storage in my Lab Refresh. As I prepare for the release of vSphere 5.5, I am completely rebuilding my vCloud portion and wanted to provide a couple of service levels for storage.

When Samsung released the EVO SSDs, they brought 1TB of SSD capacity that seemed somewhat affordable at roughly $0.50/GB. Just as I was about to pull the trigger, prices rose to around $850 per drive, which left a bad taste in my mouth. I then decided to start with two tiers: Tier III storage served from my Iomega PX4-300d with 4 x 2TB Seagate NAS drives, and Tier II storage served from a custom Nexenta build.

I already reviewed the Iomega a couple of years ago; you can find that post if you look back through this blog. Today I want to focus on Nexenta and all the goodness it brings. Let’s first talk about the hardware.

My infrastructure host is built around a Supermicro X8SI6-F, a Xeon 3440, and 32GB of DDR3 ECC unbuffered memory. Not so great, right? Well, the motherboard is what stands out here: it carries an onboard LSI SAS2008 6Gb/s controller. The controller has two ports, and with SAS-to-SATA fan-out cables it can accommodate a total of eight drives (four per port). Turns out, this suits my needs just fine. If you’re looking for a cheap platform for fast storage, check this motherboard out, as I’m sure it can be had fairly cheap these days. You can see the review on one of my favorite sites here

The drive decisions were a little tricky. I wanted some capacity, but I also wanted IOPS and low latency without being hamstrung by poor disk performance, so I compromised: a set of hybrid SSD/spinning-disk drives for the pool, plus SSDs for read and write cache. Nexenta provides a great guide on optimal storage configurations for vSphere; you can find their Nexenta VMware guide here.

I want to point out now that this post is not going to walk you through a Nexenta installation; there are plenty of guides out there, and posting another one here wouldn’t benefit the community.

Within that guide, they outline configurations ranging from minimal to optimal performance:


To save on cost, I opted for the “Better” performance configuration. With that, I have the following:

4 x Seagate 2.5″ 1TB Hybrid Drives

2 x Corsair 3 60GB Mirrored Log

1 x Corsair 3 120GB L2ARC

1 x G.Skill Phoenix Pro 100GB (Leftover from older laptop) For local SSD Datastore/Host Cache/vSwap
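Under the covers Nexenta is ZFS, so the pool behind this layout can be sketched at the command line. This is only an illustration of the concept, not my exact build steps (I used the Nexenta web interface); the pool name, device names, and the choice of two mirrored pairs for the data vdevs are all assumptions:

```shell
# Hypothetical ZFS pool matching the drive roles above.
# Pool name and cXtYdZ device names are placeholders -- substitute your own.
zpool create tier2 \
  mirror c1t0d0 c1t1d0 \
  mirror c1t2d0 c1t3d0 \
  log mirror c1t4d0 c1t5d0 \
  cache c1t6d0
# mirror x2  -> 4 x 1TB hybrid drives as striped mirrored pairs (data)
# log mirror -> 2 x 60GB SSDs as a mirrored ZIL/SLOG (sync write latency)
# cache      -> 1 x 120GB SSD as L2ARC (read cache)
```

Mirroring the log devices matters: losing an unmirrored SLOG with uncommitted sync writes is one of the few ways to hurt a ZFS pool.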

Within the Nexenta interface you create your volume and change the settings based on the drives selected. You can see my dataset layout here:


From this dataset I created a share and enabled NFS on it. Yes, I love using NFS because it’s very easy and provides solid benefits over block. I live in the “block” world at work and rarely get to do NFS; I’m hoping that will change soon, and that NFSv4 will be supported in future versions of vSphere.
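For reference, sharing a ZFS folder over NFS boils down to a dataset property; the share checkbox in the Nexenta UI does effectively this (the pool and folder names below are illustrative, not my actual ones):

```shell
# Create a folder on the pool and export it over NFS (names are placeholders)
zfs create tier2/vmware
zfs set sharenfs=on tier2/vmware

# Confirm the export is active
zfs get sharenfs tier2/vmware
```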


I provided the proper permissions and set up a separate VLAN, binding a physical NIC within each host in my Cloud cluster.
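Mounting the share on each ESXi host can be scripted rather than clicked through. A sketch using the ESXi 5.x esxcli syntax follows; the IP address, export path, and datastore name are placeholders, not my actual values:

```shell
# Mount the NFS export as a datastore on this host (values are placeholders)
esxcli storage nfs add --host=192.168.10.50 \
  --share=/volumes/tier2/vmware \
  --volume-name=Tier2-NFS

# Verify the datastore is mounted
esxcli storage nfs list
```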


While each of my Cloud hosts has a dual-port 10Gb CNA that I’m using between the two for vMotion, my Nexenta box unfortunately does not. I have a feeling that when I start benchmarking, I’ll immediately see the 1Gb connection become the bottleneck. The plan is to stick with it for now until I can get another 10Gb adapter for the Nexenta box, and at the same time implement a Tier I pool based on the Samsung 1TB EVOs once they become more reasonably priced.
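Some quick napkin math shows why the 1Gb link is the suspect: line rate caps raw throughput at about 125 MB/s, which even a modest SSD-accelerated pool can saturate, while 10GbE raises that ceiling tenfold:

```shell
# Theoretical line-rate ceilings in MB/s (decimal units, ignoring protocol overhead)
echo "1GbE:  $(( 1000000000 / 8 / 1000000 )) MB/s"
echo "10GbE: $(( 10000000000 / 8 / 1000000 )) MB/s"
```

Real NFS throughput lands below these numbers once TCP/IP and NFS overhead are accounted for, which makes the gap even more noticeable.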

Here you see my Tier II datastore that will be used for my new vCloud deployment as a Silver storage pool:


I hope some of you caught the Capabilities at the bottom. Yes, Nexenta supports VAAI, which is another reason it makes such a great storage platform for vSphere!

Here is the final build list and some pics of the build:

Cooler Master HAF XB

Corsair 600W modular PSU

Supermicro X8SI6-F

Xeon 3440 w/Corsair H60

32GB Unbuffered ECC DDR3 1333

4 x Seagate 2.5″ Hybrid 1TB Drives (each includes 8GB of onboard SSD)

2 x Corsair III 60GB SSD’s

1 x Corsair III 120GB SSD

1 x G.Skill Phoenix Pro 100GB SSD

1 x Silicom 6PT 1Gb Controller (three dual-port controllers on a single card)

Corsair 4GB USB Drive for ESXi.

2 x SAS to SATA Breakout Cables





I have started some benchmarks, and as suspected, the 1Gb connection is definitely a bottleneck. Stay tuned for an official benchmark post after the 5.5 update to the lab!
