In Part III of the series, I’ll be going over the simple post-installation configuration. We will kick it off right after the install, when you first log on to Prism. I posted a video on my YouTube channel to walk through the configuration rather than spelling it out here with a bunch of screen caps. Click to watch below:
In this segment of Nutanix Community Edition – The Ultimate Home Lab Platform, we will look at the platforms available to run Nutanix Community Edition. In my first post in this series, I briefly discussed how we package Nutanix Community Edition so that compatibility is top of mind. This allows us to get Community Edition into the hands of users with few hardware limitations.
A while ago I wrote about several of the Home Lab choices out there, all of which were focused solely on on-premises infrastructure. With the cloud becoming more popular in business, it’s also gaining popularity with home labbers like myself. I personally have an Amazon account as well as a Ravello account for those times I want to spin up something quickly and tear it down.
Let’s walk through some of the hardware platforms available today, including options in the “cloud.” As I go through these, I will post how-to guides for the Nutanix Community Edition install process, either done by me, by others, and/or outlined in our Community Edition Forums. This will help you not only choose the right platform but also see the (minimal) install effort on each one. Let’s start with the most basic.
- Intel NUC/Mac Mini/Gigabyte Brix/SFF (Small Form Factor): It’s no secret that these platforms are all the rage now. As they have matured over the last couple of years, we are able to pack much more into a smaller system that is quiet, low on power consumption, unobtrusive (basically a datacenter on your desk), and fairly reasonable in cost, all leading to great WAF (Wife Acceptance Factor) or whatever your personal preference may be. These tiny machines are great for the home office as they don’t take up too much space and are easily mobile.
The latest platform getting all the attention is Intel’s newest NUC based on the Skylake architecture, particularly the Intel Skull Canyon. These little systems pack a ton of power into a very attractive SFF package: about half the height of a standard NUC and just a bit wider, packing in a quad-core Core i7 CPU that supports VT-d, up to 32GB RAM (2 x 16GB DDR4 SODIMMs), 2 x M.2 PCI-E slots, an Intel Gigabit Ethernet controller, USB 3.1, and something we are seeing more commonly that could bring massive bandwidth with the right accessory: USB-C. Now if you fancy going with a 2.5” SSD instead, you can still go with a standard NUC that supports both an M.2 and a 2.5” SSD.
Some people are leveraging Mac Minis; however, my view at this time (and it’s entirely my opinion) is that they are not worth the cost of admission since you can now get the latest and greatest in a NUC. The decent ones were really the 2012 Mac Mini family, which had a quad-core option, and you can get 32GB of RAM in those using something like Intelligent Memory (quite expensive). These systems still go for a premium even at their age, and the latest Mac Minis don’t offer quad-core CPUs. Based on that, I can’t recommend them unless you can get one at a killer price.
Richard Arsenian, Nutanix Solutions Architect and VCDX #126 and NPX #09 (@), has put some great effort into Community Edition on the NUC. He has done some custom configs, as well as a drone project running Nutanix CE on a NUC; you can check out all the details below.
It is important to understand that with our current Community Edition package, since we use KVM device emulation, we do not support PCI-E drives at this time. That rules out PCI-E M.2 drives like the Samsung 950 Pro, as well as Intel AIC (PCI-E) form factor cards like the 750 series. You will have to leverage SATA M.2 and SATA SSDs if you want all flash; examples would be a SanDisk X400 or Samsung EVO M.2 SATA drive, or any other standard 2.5″ SATA SSD.
Single Server/Multi-Server Systems
- Single Node Bare Metal: As you recall, you can run Nutanix Community Edition in a single- or multi-node (3 or 4 max) host configuration. The single-node configuration offers no resilience because there are no other nodes to replicate data to. There are options, however, like running two single-node Community Edition clusters and using replication between the two to give you some resiliency. You could also run the single-node solution with a backup process that backs up your VMs to external storage like a NAS. One thing to note is that none of these options offer the redundancy provided by a 3- or 4-node Community Edition cluster with nondisruptive fault tolerance. Still, a single node is a great solution if you want to get started using AHV right away at a low price point. This is the installation for a single-node system running on a single Supermicro SYS-1026T-6RFT+ 1U rackmount server w/dual Intel 5620s, 96GB RAM, 2 x 480GB Samsung DC SSDs and 6 x 1TB Seagate hybrid drives.
- Multi-node Bare Metal: This is the ideal Community Edition hardware platform. It’s ideal to have 3–4 systems to leverage all the functionality of Nutanix. In multi-node bare metal configurations, we can provide all the resiliency you would expect from our production platform w/RF2 (Replication Factor 2). Node goes down, or you need to take it down for maintenance? No problem. Need to run a scale-out filer for home directories or file shares for a small Citrix VDI deployment (we’ll get more into this later)? No problem. Wanna run a nondisruptive update to Community Edition? Go for it. Multi-node bare metal means 3 or 4 nodes of physical hardware: Intel NUCs, PCs, Mac Minis, even enterprise-grade servers. Personally, I have two clusters: a PROD cluster of three Supermicro Xeon D-1541 nodes and a three-node Supermicro SYS-1026T-6RFT+ Intel 5620 DR cluster at a remote site. This gives me the best performance and resiliency and allows me to leverage most of the Nutanix feature set.
To install Nutanix Community Edition as a multi-node cluster (3–4 nodes), follow the video posted above, but make sure you DO NOT select Single Node Cluster on the install form. You must run the installer on each node. After the installer completes on each node, SSH into one of the Controller VM IPs and log in as USERNAME: nutanix PASS: nutanix/4u .
You then create the cluster using the “cluster” command:
cluster -s cvm_ip_addrs create
Replace “cvm_ip_addrs” with the IPs of the CVMs you identified during the install process on each node, separated by commas. Example:
cluster -s 10.10.10.2,10.10.10.3,10.10.10.4,10.10.10.5 create
This should also start the cluster, but you can check by running:
cluster status
Make sure all services show UP. If the services do not show “UP”, run this command:
cluster start
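Putting the steps above together, here is a minimal sketch of the multi-node bring-up. The three CVM IPs are placeholders; substitute the ones you noted during your own install:

```shell
#!/bin/sh
# Placeholder CVM IPs from the install process -- replace with your own.
CVM_IPS="10.10.10.2,10.10.10.3,10.10.10.4"

# From an SSH session on any one CVM (nutanix / nutanix/4u), the sequence is:
#   cluster -s "$CVM_IPS" create   # form the cluster across all nodes
#   cluster status                 # verify every service shows UP
#   cluster start                  # only needed if any service is not UP

# Built locally here just to illustrate the exact syntax of the create command:
CREATE_CMD="cluster -s $CVM_IPS create"
echo "$CREATE_CMD"
```

The same commands work for a 4-node cluster; you simply list four CVM IPs instead of three.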
That’s it! You should now be able to open a browser and connect to one of the CVM IPs to access the Prism UI. You can use the video posted above to go through the rest of the basic login settings (I will have further initial setup blogs later in this series).
Nested and Cloud
- ESXi: Most of you are running ESXi in your home labs. Without having to procure new hardware, you may already have the necessary resources in your ESXi lab to run Nutanix Community Edition.
- VMware Workstation: Some of us are using VMware Workstation for our Home Lab setup, maybe running on a primary PC or even a separate spare workstation.
- Ravello/Amazon: For those that do not have the resources, or have no inclination to acquire them, you can leverage Ravello/Amazon to host your Nutanix Community Edition instance.
- Nutanix Test Drive: This is by far the fastest, cheapest and easiest way to get your hands on Nutanix Community Edition. This is a free service we provide to let you get your feet wet with Nutanix Community Edition for a couple of hours. You can get access by going to Nutanix Test Drive.
For installation tutorials on ESXi, VMware Workstation and Ravello you can access the content below.
By now you should have a good handle on all the platforms that can run Nutanix Community Edition. If not, head on over to the NEXT Community to get all the info you need to get started. In Part III we’ll move beyond the basic install, get into initial configuration, and move further into the packed feature set and the simplicity of Prism! Stay tuned, more to come in the coming weeks!
Okay, okay, so I may be a bit biased because I work for Nutanix, right? Wrong. All who know me know I call it how I see it. If you have read any of my Home Lab posts, you’ll see I’ve had many iterations over the years…and I mean many. Anywhere from MacBooks running Fusion, single PCs running Workstation, all-in-ones running ESXi or Hyper-V, to enterprise-grade servers and all types of storage. While hardware is an important choice, especially due to WAF (for me at least, not speaking for all), ease of management, power and cooling, etc., once again, it boils down to the software. Sounds familiar, right?
Sorry hardware junkies, this is a software world and we all have to live in it (and I’m a hardware junkie myself). Hardware is simply a means to an end, and that end, of course, is running applications or platforms that build applications. It’s funny: in most of the posts I read across the Web, on sites such as servethehome.com (one of my favorite sites), people are not just using their home servers for “testing stuff,” they’re using them to actually run production workloads in parallel. I know what you’re thinking: “yeah, but Mike, it’s home workloads so no biggie, right?” Not really. A good portion of you, including myself, are using your home servers for storage, file servers storing your most precious data (photos, home videos, etc.), streaming software like Plex, transcoding, and, in the age of social, all things editing: photos and videos. Some of you are actually developing and building applications that could be the next big iPhone app or a killer utility that is sorely needed. You may be running containers and building portable apps, and quite possibly some of you could be the next Zuckerberg building the next big social platform or the next “big” thing that’s gonna change the world. A lot of us run our NVR software and recording storage on our home servers. If those use cases are not considered important, then quite frankly, I don’t know what is. And if that’s the case (which is what I believe), then shouldn’t your home platform be agile, robust, simple, reliable, etc.? Shouldn’t it have a lot of the attributes of a production environment running in a DC? Shouldn’t it be a PLATFORM as well, allowing you to run just about any type of workload, and be simple to set up, deploy, and manage?
Now, if you’re like me, you’re coming to the realization that life commitments don’t let us tinker as much. When I was single, getting my geek on was easy because I could do it whenever I wanted, as much as I wanted (within $ reason, of course). My first foray into real “server” “lab” platforms was my trusty Abit BP6 w/dual Celerons running at 533MHz. It seems like yesterday that was all the rage.
I know some of you remember those days, some of you from even earlier, and some of you spoiled millennials have been running your Intel Core platforms since day one in your “safe spaces.” Anyway, I was learning Microsoft Server at the time, so Windows NT with Service Pack 4 was the way to go back then. It was fun, but really the focus was how much I could overclock that bad boy; it was hardware focused, and quite frankly I wasn’t running anything that couldn’t be torn down and rebuilt.
Now it’s more common to want some of the benefits at home that you would see in the datacenter. However, on today’s most common platform (VMware ESXi), those benefits come at a price, whether that’s a VMUG Advantage subscription or being a vExpert, where licenses are good for a year and then you have to reapply. Some Home Labbers run free ESXi and then multiple nested instances of ESXi on the 60-day trial. This turns into a management nightmare where you’re spinning up and tearing down regularly. You also end up running extra instances to get additional functionality like Operations, Replication, or logging, requiring additional hardware resources. This can get expensive very, very quickly. Hyper-V is another option, but a lot of the same applies, especially around the management stack.
Let’s take a look at Nutanix Community Edition. Nutanix delivers a prepackaged version of our software that can work on a ton of hardware platforms: Mac Minis, Intel NUCs, laptops, all the way to enterprise-grade hardware. We provide a solution that removes disk controller limitations under the covers by leveraging KVM’s device emulation instead of PCI passthrough. This has the benefit of compatibility with a ton of hardware, from AHCI to RAID controllers and HBAs (though with RAID controllers it’s recommended to create single-disk RAID0 volumes presented individually). What’s key about Nutanix Community Edition is that we really don’t limit any functionality unless noted. For example, in the latest release you can leverage 99% of the feature set; a small exception is that you can’t use Prism Central right now, but that’s coming soon. You can deploy Nutanix Community Edition in single-node (no redundancy), three-node and four-node configurations.
This is only Part 1 of this series of posts on why I believe Nutanix Community Edition is the ultimate Home Lab platform. In the next posts we will address hardware choices and installation, then start getting into building VMs, move through the advanced functionality, and get into the fun stuff like ABS, AFS, and containers w/persistent storage. This is by no means a primer on our Acropolis Hypervisor, but I will show you the many use cases and things we can do with the platform. BTW, did I mention it’s free?
For more information on Nutanix Community Edition, please go here to check it out. If you don’t have hardware, you can take CE for a spin by going here which will stand up a virtual instance you can play with.
In the meantime, you can get familiar with Nutanix by checking the video below.
Stay tuned for more content coming your way!
It seems that I’m revamping my lab constantly. What I’m finding is that as I age out older Home Lab tech, I’m also rethinking the Home Lab architecture and how it could serve not just me, but the entire household, better.
My Home Lab is not a “spin up when I need it” design. It serves as a functional part of the household as well.
For basic NAS duties I have been relying on an older EMC/Lenovo PX4-300d. This device has served me very well and will continue to be a part of the lab, but it’s based on older Intel Atom/SATA II technology, so I will re-purpose it as a backup device. It’s low power and will provide enough storage to keep my most valuable data backed up.
This part of the Home Network is fully functional and provides 24/7 service. Instead of running a complex Server/SAN solution, I decided to replace it with a more robust platform that can serve multiple duties. I needed something that could not only handle production NAS duties, but also act as a DLNA server, a UBNT wireless controller, and a secondary virtualized Domain Controller (backup to my physical Intel NUC DC), and still have enough resources to spin up the occasional test VM(s) and/or services here and there. Since I now have a robust network switch with my purchase of the Dell X1052 (4 x SFP+ ports), I had one SFP+ port available, and I also wanted to be able to access the NAS from my workstation via a 10GbE direct connection. In the end I put together a great little system based on the following:
- Motherboard: SuperMicro X10SLL-F
- CPU: Intel Xeon 1230V3
- RAM: 32GB DDR3-1600 R-ECC
- AIC: Intel X520/LSI 9211-8i/Intel AIC 750 NVMe 800GB (Fast Local Storage)
- Spinners: 6x Seagate Enterprise 4TB (Raid Z2)
- SSD: 2 x Intel S3710 800GB Mirror (Fast Shared Storage)
- Cache: 2 x Samsung 843T 480GB Mirrored ZIL
- Case: SUPERMICRO CSE-825TQ-563LPB
- PSU: Supermicro 560Watt Gold
- OS: ESXi 6.x
- Third Party Software: FreeNAS w/Plugins/CentOS/Windows Server Core 2012R2
Here’s a shot of the system on my Test Bench. Build log will be coming shortly.
This system provides 13TB usable of reliable storage that can tolerate up to two disk failures. Believe me, I’ll be lucky if I ever need that much capacity. I don’t store HD movies any longer, which used to be the majority of the capacity required; I let the streaming companies do that now: VUDU, Netflix, Hulu, etc. The Intel 750 NVMe was in my workstation not doing much, so I re-purposed it for lightning-fast local VM storage. The Intel S3710s will be used as scratch space for video editing duties as I start to include more video how-tos in the blog. I’m gearing up for an entire series on Nutanix Community Edition from start to finish, showing all the features/functionality available, as well as more build logs and other fun things like gaming and a new feature I’m adding, The Armory (self-explanatory, but here’s a hint).
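For the curious, the ~13TB usable figure roughly checks out. Here’s a back-of-the-envelope sketch; it assumes RAID-Z2 simply costs two drives’ worth of capacity and ignores the exact ZFS metadata overhead, so treat it as an estimate rather than a sizing tool:

```shell
#!/bin/sh
# 6 x 4TB spinners in RAID-Z2: two drives' worth of parity, four of data.
DRIVES=6
PARITY=2
DRIVE_TB=4
RAW_TB=$(( (DRIVES - PARITY) * DRIVE_TB ))   # 16 "marketing" TB of data space
# One drive-maker TB (10^12 bytes) is ~0.909 binary TiB; integer approximation:
RAW_TIB=$(( RAW_TB * 909 / 1000 ))           # ~14 TiB before ZFS overhead
echo "data space: ${RAW_TB}TB raw, ~${RAW_TIB}TiB, roughly 13TB usable"
```

Subtract a bit more for ZFS reservations and metadata and you land right around the 13TB the system reports.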
As you can see, an all-in-one virtual host/NAS is a great solution for the majority of home users out there. The sky’s the limit here. You can build these with an array of hardware, providing the home user a ton of useful 24/7 functionality. Whether it’s hosting VMs, NAS-only duty, services, or a combination of many different use cases, the AIO is a great option for anyone.
There are a ton of resources out there but one of my favorites is:
Pictures can say a thousand words; we hear that all the time. I found this picture on the Internet, and I couldn’t think of a better representation of how customers feel today when thinking about Hybrid Cloud. I think most customers can relate to the guy walking across the trusses of the bridge: high risk, difficult migration paths and no simplicity. What good is a bridge if it’s difficult to cross or it only allows traffic one way?
The body of water, or “obstacle,” represents the challenge(s) businesses have today in moving applications from their on-prem Data Centers to the Cloud and back again. This two-way approach is ideal for any business. While there are some providers today that allow two-way application migration, these solutions force you down a single path; in other words, you are locked into their cloud and their terms, and choice is nonexistent. Let’s break down what I believe are the three paths to cloud today that do not provide the simplicity, choice, and application mobility that should be the main tenets of any Enterprise Cloud:
Disjointed Cloud: This model is sometimes associated with terms like shadow IT, where teams outside of the IT org that provides Infrastructure Services go into Amazon or Azure and swipe their credit card to get the IaaS they require to develop and/or run their applications. This is less than ideal in any organization, not only from a cost-model or security perspective; also think about the impact this has on the organization overall, with rogue teams doing their own thing without the IT organization even knowing what’s happening. This model typically falls flat when the Amazon bills start to roll in and/or there is no thought about how to migrate these applications in house for consumption, if required.
One of my favorite movies: Indiana Jones and the Temple of Doom. Anyone who has ever tried to move an application out of Amazon can relate to this! Once you cross, it’s quite a feat to get back; just ask Facebook about their Instagram migration.
All-In Cloud: This model is all or nothing. Typically, moving 100% of your applications to the cloud comes from a directive from senior leaders within an organization (CIOs, CFOs, etc.) based on a key assumption that is not quite accurate: that cloud is cheap, or cheaper than having your own Data Center, for ALL their applications. The perception is that Cloud is a panacea that will solve all their problems and eliminate the issues of legacy IT models. Don’t get me wrong, there are tremendous benefits here: life cycle management goes away, Data Center space is not needed or can be re-purposed, there’s agility to stand up services quickly, and the management interface(s) are simple. Sounds great, doesn’t it? Wait for the monthly bill. Don’t fool yourself, you are still paying for these things, and since you have opted to house everything in the cloud, worse yet, you’re paying for both predictable and unpredictable workloads. Why would you pay high costs for workloads where you fully understand the performance/capacity usage characteristics? If you compare the cost of these workloads on prem vs. cloud, you’ll discover that it’s very costly to house these applications in the cloud.
This is typically the reaction once the CIO/CFO sees the bill from an “All-In” Cloud Model!
Locked-In Hybrid Cloud: In this model we see tech giants like VMware and Microsoft offering their own Cloud Services, i.e. vCloud Air and Azure. This means that VM/application mobility is tied directly to their technologies, vSphere and Hyper-V. The expectation from these companies is that since you are already invested in their products, you’ll be more apt to consume their cloud services. Because of this, you simply lose choice. You are tied into Azure or vCloud Air, their technology and their terms. Today’s Enterprise Cloud platform should allow you to deploy your applications anywhere: any hypervisor, containers, bare metal, and/or multi-cloud offerings.
Source: Red Hat
Today, more and more, customers are looking at different hypervisors and run-times. We are seeing everything from complete migrations away from costly hypervisors, like Apple’s move to KVM, to what I see in the enterprise: migrating Tier 2 and Tier 3 workloads to KVM while keeping Tier 1 workloads on vSphere or Hyper-V. The same applies to container technologies like Docker. Customers have these choices today; however, the choices do not offer simplicity and ease of movement of applications between run-times, and the same is true for Cloud. When you look at consumer technology, choice is paramount for success. Take, for instance, a Roku streaming media device. Roku has created a “platform” to deliver multiple types of content. You aren’t so interested in the device itself as in the choices you have with that device. One of the biggest benefits of Roku is that there is a ton of content and no “lock-in” to the media apps you want to use. You want to use Hulu or Netflix as your TV service? Go ahead. You want to use Vudu or Amazon for your movies? Sure thing. This is what we are used to in the consumer world, and we will EXPECT this type of choice in the Enterprise. This will require a new way of thinking and a platform to get us there.
What does this world look like? Let’s go back to the bridge analogy. Think of this new world as a multilevel bridge with bidirectional paths to different platforms and Cloud Services.
The bridge is just part of the story, and you’ll hear this analogy used time and time again. It’s the other part that’s the missing piece: how traffic (applications) flows across the bridge to the destination and back. This process must be simple, without change to the application, and with minimal risk and skill set, much like today’s consumer devices provide all of us daily. It must provide consistency in application availability and performance between the different platforms.
At Nutanix, we are delivering on this promise. Our vision and direction are true, and we continue to provide mind-blowing innovation in the application mobility space. It’s the companies that provide seamless bidirectional travel across the bridge that will be successful today and for years to come. I want to leave you with some demos that drive this point home.
Nutanix is more than just a Hyper-Converged Infrastructure company; we have evolved into the Enterprise Cloud Company!
In the next video in my series I go through the Nutanix Community Edition installation on the Single Node build.
Listed below is the link you can use to obtain the Nutanix Community Edition software and join the Community:
Part V of my series will start putting it all together: working with the Nutanix solution to go over a good part of the features and functionality, and how I’m using Nutanix CE in my Home Lab!
I’ve decided to do something a little different. Nutanix provided me a new GoPro as a Holiday gift and I went out and got some accessories to use it to record some How To videos for the blog. This is my first foray into Video Editing so please, keep the laughter to a minimum.
This is Part III of the series, in which I will cover the actual build log of the “storage only” node that will be going offsite, for backups, to my brother’s office rack. In Part II of the Home Lab build, I mentioned that there is an IPsec VPN tunnel configured between our Ubiquiti EdgeMAX routers to provide a secure connection across the Internet to the secondary site.
This server will be a single node Nutanix Community Edition Cluster built with the following:
Case/PS: Supermicro CSE-216A-R900LPB (Had this case from previous ZFS Build)
MB: X10SLL-F, BIOS 3.0 (current), BMC 1.35 (current)
CPU: E3-1230v3 3.3Ghz, quad core with HT
Memory: 32GB DDR3-1600 R-ECC (4x8GB, Samsung M391B1G73QH0-YK0)
Controller: LSI 9211-8i
Network: Intel X520
Drives: Samsung SM853T 480GB SSD/7 x Seagate Hybrid 1TB 2.5″ Drives
USB: Sandisk Ultrafit 16GB USB 3.0 Flash drive
Since this node only runs a single CVM (Controller Virtual Machine), it does not require the compute/memory resources to run additional VMs. The 32GB of RAM is for the data reduction technologies, like the dedupe and compression capabilities.
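To illustrate why dedupe in general is RAM-hungry (this is a toy sketch of the idea, not the Nutanix implementation): deduplication fingerprints chunks of data and stores only one physical copy per unique fingerprint, and it’s that fingerprint index which wants to live in fast memory.

```shell
#!/bin/sh
# Toy dedupe: split a byte string into 4-byte "chunks" and count unique ones.
# Real systems fingerprint far larger blocks with hashes like SHA-1 and keep
# the fingerprint index in RAM, which is where the memory appetite comes from.
DATA="AAAABBBBAAAACCCCBBBB"
TOTAL=$(printf '%s\n' "$DATA" | fold -w4 | wc -l)            # logical chunks
UNIQUE=$(printf '%s\n' "$DATA" | fold -w4 | sort -u | wc -l) # physical copies
echo "chunks=$TOTAL unique=$UNIQUE"
```

Here only 3 of the 5 chunks are unique, so a deduplicating store would keep 3 physical copies plus a small index entry per fingerprint; scale that index up to terabytes of data and the RAM requirement follows.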
As a disclaimer, there is also a suggested drive limit for Community Edition: four disks. The reason is that Community Edition does not support PCI passthrough to the controller; we do LUN passthrough, which provides a small queue depth. CE was built this way to provide the best possible compatibility, so that users can use anything from a Mac Mini to older server technology, and even Ravello or nested installations. I have run more than 4 disks under smaller workloads, which is essentially what my Home Lab will see.
Enjoy the video.