Been a while


I’ve been meaning to dedicate more time to this blog, but I’ve been extremely busy with work and home stuff. Having a teenager and a six-month-old is a juggle, but I wouldn’t trade it for the world.

Anyway, I thought I would update y’all on what I’ve been doing the last few months, and what a busy few months it’s been.

  • Training: In order to meet the February 29th deadline for the VCP5 certification, I had to take the test before that date; I took it right at the deadline and passed. Even though I scrambled and studied for only about two weeks, I not only did better than I did on the VCP4, but I thought the test was a much better experience. So as not to give away anything that may be “out of bounds” here, I will just tell you this: it was much more focused on hands-on experience with vSphere 5. I’m glad VMware is going this route and I hope they stick to it. Too many certs are focused on a pile of facts, half of which you will forget right after the test. I know a lot of people in this industry who are extremely talented in IT infrastructure support, some of whom don’t have a single cert, but based on what I’ve seen interviewing for my Data Center Infrastructure group, I would pick them over most certified personnel in a heartbeat. Certs should be about experience, and they should be situationally focused. Enough of the “trick” questions, or insisting you do something one particular way when there’s an easier, faster way that will save time. Shout out to VMware, and also to Cisco; they’ve definitely been doing this for a while.
  • Work:
  1. I have implemented a couple more blades in our Dell M1000e chassis to finish up our server consolidation. Currently we are at about 65% virtualized, running both vSphere 4.1 U2 and Windows Server 2008 R2 Hyper-V (Hyper-V is a requirement for one of our clients; I’m not sure why, but that’s what they wanted). The blades I’ve added are M710HDs with dual six-core processors and 192GB of memory. Most of the remaining physical servers are database servers or application environments that contain databases, Oracle and SQL, hence the high memory and processor core counts. This will be our DB cluster for all databases in our master data center in Utica, NY.
  2. I implemented a new EMC VNX5500. What a beast. FAST Cache and FAST VP are really impressive, and along with the X-Blades for file it’s working great. This array will handle 600 View 4.6 desktops, the database servers mentioned above, and our file storage. It will also be instrumental when we move to the next phase of our integrated private cloud infrastructure using vCloud Director and MGMT. 8Gb Fibre Channel with SSDs for cache and the linked-clone replicas: awesome performance!
  3. I’ve migrated our View 4.6 environment from our CLARiiON CX240 to the VNX5500. What a breeze: I had both SANs connected to our vSphere environment and just did a Storage vMotion (there’s a rough scripted sketch of that kind of move below this list). It worked like a charm; the linked-clone parent VMs were migrated and the linked pools recreated with ZERO issues. It took all of a two-hour window and we were up and running.
  4. I migrated our physical Windows Server file server clusters to CIFS running on the X-Blades within our VNX5500, on 1TB NL-SAS drives. It’s working like a charm, and with that we have totally decommissioned our file servers, with the exception of one that handles our FTP and print server functions; even that one, however, has been P2V’d into our vSphere environment.
  5. On the Hyper-V side, I’ve upgraded to the latest version of SCVMM. While it works pretty well, I still think it’s quirky; sometimes the timing just seems off, and while I know it has to communicate with the host agents to complete its tasks, it still feels, well, slow. What’s funny is that SCVMM is running in a VM on our vSphere environment!
  6. We are planning to replace our current 10GbE SAN switches, moving from the HP 5800 series to the HP 5900 series to accommodate greater throughput on the back end. We will also be replacing all of our QLogic 10GbE iSCSI adapters with Intel 10GbE adapters; the failure rate we’ve had with the QLogic cards over the past year has been about 36%, which is totally unacceptable! On top of that, I’m replacing our Twinax cables with fibre and SFPs. While I believe Twinax has its place, it’s certainly not the cabling you want when you are running 96 cables from the hosts. The rack is a mess, mostly because I found that Twinax is very susceptible to failure with even the slightest bend.
  7. I have another HP P4000 SAN that’s ready to be installed for a bigger dev environment. Again, I would’ve gone with EMC, as the cost is very comparable and the P4000 doesn’t have half the feature set of the VNX, but once again the client is asking for HP because they get a really good deal on the products.

Updates to follow on that front this week, with pics, etc.
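
Side note on item 3: for anyone who would rather script that kind of move than click through it, below is a minimal pyVmomi sketch of a Storage vMotion. This is an illustration only, not the exact procedure I ran, and every name in it (the vCenter host, the VM, the datastore) is a made-up placeholder.

```python
# Minimal sketch: Storage vMotion of a VM between two arrays that are both
# presented as datastores to the same vSphere cluster. All names are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Lab only: skip certificate validation for a self-signed vCenter certificate.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.lab.local", user="administrator",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

def find_by_name(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_by_name(vim.VirtualMachine, "view-parent-win7")    # e.g. a linked-clone parent VM
new_ds = find_by_name(vim.Datastore, "VNX5500-REPLICA-01")   # datastore on the new array

# A RelocateSpec with only a target datastore set is a Storage vMotion:
# the VM stays on its current host while its files move to the new datastore.
spec = vim.vm.RelocateSpec(datastore=new_ds)
WaitForTask(vm.RelocateVM_Task(spec))
print(vm.name, "is now on", new_ds.name)

Disconnect(si)
```

The reason the CX240-to-VNX move was so painless is right there in the spec: as long as both arrays are zoned and presented to the same hosts, the VM never powers off while its files are relocated.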

  • Home Lab: The home lab has been stripped down and I’m building an internal vCloud lab. Here is my proposed diagram of the lab.
