I was originally going to post a step-by-step guide to the vSphere 4.1 lab I built last year to support my studies for the VMware Certified Professional (VCP 410) exam, but I have decided to move on and rebuild my entire lab for vSphere 5.0. There are already plenty of guides out there for a single-host environment.
As you may already be aware, ESXi 4.x allowed you to run additional ESXi guests; however, the VMs built on those nested hosts could only be 32-bit, which greatly reduced the flexibility of the environment. For example, you had to run vCenter on a separate box or in a VM under VMware Workstation. While official support for 64-bit nested VMs is not there yet, there is a work-around to enable 64-bit guest support in 5.0. This should allow you to do most tasks while running a single physical box with ESXi 5.0 and nested ESXi 5.0 guests running 64-bit VMs. We'll get to that later.
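For the impatient, here is a sketch of the work-around as it circulates in the community for ESXi 5.0. It is unsupported by VMware, so treat it as something to verify against a current guide rather than a definitive procedure. It amounts to a one-line addition to the physical host's `/etc/vmware/config`:

```shell
# Community work-around (unsupported): enable virtual hardware-assisted
# virtualization so nested ESXi 5.0 hosts can run 64-bit guests.
# Run from the physical ESXi 5.0 host's shell (SSH or local console).
echo 'vhv.allow = "TRUE"' >> /etc/vmware/config
```

No reboot of the physical host should be required, but the nested ESXi VMs typically need to be powered off and back on to pick up the setting.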
I wanted to make sure that I had enough power under the hood to make this a fully usable lab, capable of hosting multiple scenarios and testing the latest cloud infrastructure products like vShield, vCloud Director, etc. Based on that, I decided to go with a single dual-socket Intel box with 48GB of memory. I know, I know, the free version is not going to support that amount of memory. This is a pure lab environment, though, and I will be standing up and tearing down different environments weekly. I've done it so many times already that it's like second nature now anyway..lol. Along with the compute/memory capacity, I wanted to be able to access this system outside of the OS layer, whether to troubleshoot an issue or to install software, so iKVM support was a must-have. The server will sit in my basement equipment room, and I didn't want to run up and down the stairs every time something happened or some configuration was needed. I also needed this baby to stay nice and cool, keep a fairly small footprint, and maintain a bit of redundancy on the power side. I chose a case that fit my needs on all levels, except that it's a bit noisy, but I can live with that since it sits in my basement.
Local storage was not a necessity, as I ordered a new NAS, the Iomega StorCenter PX4-300d. We will get into more detail on that in the second part of this series, when we talk about storage. I will also write up the device as I test it, to see whether it meets the criteria for this environment; I'm sure it will, as it's getting good reviews around the net. ESXi itself will be installed on a 4GB Corsair USB key plugged into an internal USB port on the motherboard.
I had a pair of dual-port Intel PT1000s from my previous build that we will be leveraging in this new setup. You can't go wrong with these cards; they are fully supported with ESXi. The motherboard also has two onboard Intel gigabit NICs, for a total of six usable, fully supported 1Gb NICs.
On to the complete parts list:
All of these parts, purchased new at Newegg, would cost $2,791.
Instead, I searched around the user communities I belong to and sold some of my older gear to fund this build, and I obtained the case from work as it was being thrown away. When all was said and done, I spent about $700 out of pocket. Not bad for a powerful compute backbone.
Here is a teaser of the build in progress. Stay tuned for Part 2, when the Iomega NAS arrives.