Solving the Compute and Storage scalability dilemma – #Nutanix, via @josh_odgers
Compute, Network and Storage is a hot topic, as I’ve written in blog posts before this one (How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras) … and still a lot of colleagues and customers are struggling to find a better solution and architecture.
How can we ensure that we get the same or better performance from our new architecture? How can we scale in a simpler and more linear manner? How can we ensure that we don’t have a single point of failure for all of our VMs? How are others scaling and doing this in a better way?
I’m not a storage expert, but I know from what I read that many companies out there are working on finding the optimal solution for Compute and Storage, on getting the cost down, and on ending up with a simpler architecture to manage…
This is a topic most organisations need to address now that more and more of them are starting to build private clouds: how are you going to scale it, and how can you get closer to the kind of delivery the big players provide? Gartner even listed Software-Defined Storage (SDS) as the number 2 trend going forward: #Gartner Outlines 10 IT Trends To Watch – via @MichealRoth, #Nutanix, #VMWare
Right now I see Nutanix as the leader here! They rock! Just have a look at this linear scalability:
If you want to learn more about how Nutanix can bring great value, please contact us at EnvokeIT!
For an intro of Nutanix in 2 minutes have a look at these videos:
Overview:
Simple Explanation of How Nutanix Works:
And of course others are trying to catch up and are inventing interesting solutions as well… BUT, as I see it, it’s a real plus to get both Compute and Storage from one (1) vendor that delivers the support and performs in amazing ways! For all of those arguing vSAN vs. Nutanix, I’d like to reference this great quote by Dheeraj:
Here’s to all those who’d love a dogfight b/w VSAN & Nutanix: Android & iOS competed, but together they hurt the PC biz most. Beware SANs!
Have a look at this great blog post as well, where Josh visualises the problem!
Scaling problems with traditional shared storage
At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.
Firstly, I wanted to thank everyone who attended the session; it was a great turnout, and during the Q&A there were a ton of great questions.
One part of the presentation I got a lot of feedback on was when I spoke about Performance and Scaling and how this is a major issue with traditional shared storage.
So for those who couldn’t attend the session, I decided to create this post.
So let’s start with a traditional environment with two VMware ESXi hosts, connected via FC or IP to a storage array. In this example the storage controllers have a combined capability of 100K IOPS.
As we have two (2) ESXi hosts, if we divide the performance capabilities of the storage controllers between the two hosts we get 50K IOPS per node.
This is an example of what I have typically seen at customer sites, and on day 1 the performance normally meets the customer’s requirements.
As environments tend to grow over time, the most common thing to expand is the compute layer, so the below shows what happens when a third ESXi host is added to the cluster, and connected to the SAN.
The 100K IOPS is now divided by 3, and each ESXi host now has 33K IOPS.
This isn’t really what customers expect when they add additional servers to an environment, but in reality the storage performance is further divided between ESXi hosts, resulting in fewer IOPS per host even in the best-case scenario. In the worst-case scenario, the additional workloads on the third host create contention, and each host may have even fewer IOPS available to it.
But wait, there’s more!
What happens when we add a fourth host? We further reduce the storage performance per ESXi host to 25K IOPS, as shown below, which is HALF the original performance.
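To make the arithmetic above explicit, here is a minimal Python sketch of the two scaling models. It is my own illustration, not something from Josh’s post: the 100K combined controller figure comes from his example, but the 25K-per-node number for the scale-out case is just an assumption to show why per-host performance stays flat when every node ships with its own controller.

# Minimal sketch (illustrative assumptions, not vendor benchmarks):
# a shared array has a fixed controller budget that gets split across hosts,
# while a scale-out design adds controller capacity with every node.

SHARED_ARRAY_IOPS = 100_000       # combined capability of the two controllers
SCALE_OUT_IOPS_PER_NODE = 25_000  # hypothetical per-node controller capability

def shared_iops_per_host(hosts: int) -> float:
    # Best case: the fixed controller budget divided evenly across all hosts.
    return SHARED_ARRAY_IOPS / hosts

def scale_out_iops_per_host(hosts: int) -> float:
    # Each node brings its own controller, so per-host IOPS stays flat
    # and the aggregate grows linearly with the node count.
    return (SCALE_OUT_IOPS_PER_NODE * hosts) / hosts

for hosts in (2, 3, 4, 8, 16):
    print(f"{hosts:>2} hosts: shared array {shared_iops_per_host(hosts):>8,.0f} IOPS/host | "
          f"scale-out {scale_out_iops_per_host(hosts):>7,.0f} IOPS/host")

Running it shows the shared array dropping from 50K to 25K IOPS per host as the cluster grows from two to four hosts, while the scale-out figure stays constant per host and its aggregate grows linearly.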
At this stage, the customer’s performance is generally significantly impacted, and there is no easy or cost-effective…
Continue reading here!