vSphere 5.0 Lab Build Part 2.0: Storage


It’s been too long since my original post in this series. Now that I’m back from our yearly DR test, let’s move on to the next topic: storage. I’ll be breaking this down into two parts as well, since there are many options to write about. There are many different approaches to a decent lab storage environment for vSphere. You can take three primary paths, all of which I’ve done, and all of which have their positives and negatives, which I’ll touch on briefly. These are my opinions based on my experience, and in the end you’ll see what I ended up with and why.

1. Roll your own NAS/SAN. There are several different pieces of software that let you build a storage server/NAS-SAN, some of which are gaining very good popularity, such as napp-it, Nexenta, and FreeNAS. All offer different options and even let you incorporate enterprise-level features such as SSD cache and thin provisioning, along with easy-to-use interfaces for block- and file-level storage. There are many benefits to rolling your own storage server, the biggest being the vast number of options, some of which I mentioned above. Depending on your performance needs, you can reuse last-generation hardware, add a hardware RAID card or HBA that is supported by the software you choose, add disks, and you’re off and running.

Two of the options that I really liked were OpenSolaris-based ZFS solutions. http://napp-it.org/

Napp-it provides a nice web GUI to manage your OpenSolaris ZFS system, with a simple, easy-to-use interface for managing and provisioning your storage. You can run it on a few different versions of OpenSolaris, but I found the OpenIndiana-based distribution to be very easy to install with a very small footprint. You can also run an “all-in-one” solution, running the storage system as a VM on a vSphere host and passing the HBA through to it via DirectPath I/O.
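Under the hood, napp-it is driving standard ZFS commands, so it helps to know what the GUI is doing for you. Here’s a minimal sketch of provisioning NFS storage for vSphere from the OpenIndiana shell, with a hypothetical pool named “tank”:

    # create a dataset for VM storage on an existing pool
    zfs create tank/vmstore
    # share it over NFS so ESXi hosts can mount it as a datastore
    zfs set sharenfs=on tank/vmstore
    # or carve out a sparse (thin-provisioned) volume for block/iSCSI use
    zfs create -s -V 500G tank/vmlun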

As you can see in the picture, my LSI card sits in one of my ESXi hosts, which gives me the option to pass the card through for direct VM access. We’ll talk more about that when we get to the vSphere 5.0 portion of the series. In the meantime, you can click on the napp-it link above and get all the info you need on their site.
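If you want to confirm the host actually sees the card before setting this up, ESXi 5.0’s shell can list the PCI devices (the passthrough toggle itself is done in the vSphere Client under Configuration > Advanced Settings):

    # from the ESXi 5.0 shell: list PCI devices and find the LSI HBA
    esxcli hardware pci list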

Nexenta is another nice interface for managing your ZFS pools. While this is primarily a paid product, they do have a Community Edition that’s fully functional but limits you to 18TB of allowed space. This wasn’t an issue at all for me, as I only had 6TB delegated to vSphere. There’s a nice ISO for Nexenta Community Edition that makes the install a breeze; you’ll be up and provisioning storage in no time. Check out the link for more details.

I will tell you that with the right hardware and disk setup, you can achieve very good IOPS from a custom-built storage box. I personally used a Core i7 920/Asus P6T Deluxe/24GB of DDR3 running ESXi 4.1 U1 with DirectPath I/O passthrough of my IBM BR10i HBA. I also ran six 1TB drives in a RAID-Z pool with a G.Skill Phoenix Pro 64GB SSD for read cache, which produced solid Iometer results (a sketch of the pool layout follows below). The biggest problem I had with this setup was the heat it generated and the electricity it consumed. I have a small office in our basement, and now that I have a little girl on the way, my home office is being turned into a nursery, so I need a cool and quiet office space for work.
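For the curious, a pool like that is only a couple of commands in ZFS. A minimal sketch with hypothetical Solaris device names (yours will differ; use format to find them):

    # six 1TB disks in a single RAID-Z vdev
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
    # add the 64GB SSD as an L2ARC read cache
    zpool add tank cache c2t6d0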

2. VSA – Virtual SAN Appliance. A Virtual SAN Appliance is basically a virtualized SAN containing all the required software, including the OS, which is usually a custom Unix/Linux build per vendor. There are several, and I mean several, VSAs out there today you can implement: some free, some requiring licenses, etc. I’ll talk about the two that I’ve used and the ones that are gaining more notice today.

  • EMC Uber VNX (NFS only) http://nickapedia.com/2011/04/08/new-uber-model-uber-vnx-nfs-v1/ – Nick Weaver, a vSpecialist with EMC, has done a phenomenal job building an easy-to-use VSA based on both the EMC Celerra, which supports block and file storage, and EMC’s new VNX line, which, in its current iteration, supports NFS only; that’s really all I use in my lab anyway. I’m not going down the path of which protocol is better; there are benefits to both, but NFS is gaining solid support from the VMware community and from NAS providers like EMC and NetApp. The Uber VNX and Celerra support deduplication, thin provisioning, compression, and replication, so you can test your SRM skills: roll out two Uber VNX VSAs and replicate between them to test multi-site failover. I still use this in my lab today because of that functionality. On top of that, EMC has great APIs that work directly with vSphere, plus plugins to manage your storage from within vCenter. Very nice indeed! I also want to mention that Nick’s site is a great source of information, though I wish he would update it more often. Mounting one of its NFS exports on an ESXi host is shown below.
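Once the VSA is up and exporting NFS, attaching it to a host takes seconds. A minimal sketch from the ESXi 5.0 shell, with a hypothetical VSA IP and export path (the same thing can be done in the vSphere Client under Configuration > Storage > Add Storage):

    # mount the VSA's NFS export as a datastore named "uber-vnx-ds"
    esxcli storage nfs add -H 192.168.1.50 -s /vnx/vmstore -v uber-vnx-ds
    # verify it mounted
    esxcli storage nfs list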

I’m going to end it here this evening. Stay tuned for Part B of the storage topic in the vSphere 5.0 lab series!
