Archive
#Nutanix – the ultimate Virtual Computing Platform for VDI – CBRC-like Functionality For Any #VDI Solution with #Nutanix – #IaaS – via @andreleibovici
It’s really great to see the capabilities of the Nutanix platform! Just read this great blog post by @andreleibovici about Content Based Read Cache (CBRC) and why it isn’t necessary at all on a platform like Nutanix!
Conclusion
Over time I will discuss more about the technology behind the Content and Extent Caches. For now, what is important to know is that Nutanix delivers the in-memory, microsecond-latency benefits provided by CBRC for any VDI solution on any of the aforementioned hypervisors, for both Linked and Full Clones. In fact, Nutanix engineers even recommend that Horizon View administrators disable CBRC, because the Nutanix approach is less costly to the overall infrastructure.
It is amazing when your world turns upside down and a technology that used to be awesome becomes mostly irrelevant. It amazes me how fast technology evolves and helps organizations achieve better performance and lower OPEX.
For a long time I have discussed the benefits of CBRC (Content Based Read Cache), available with Horizon View 5.1 onwards, allowing administrators to drastically cut down on read IO operations, offloading the storage infrastructure and providing a better end-user experience.
Here are few of the blog posts I wrote on CBRC technology: Understanding CBRC (Content Based Read Cache), Understanding CBRC – RecomputeDigest Method, Sizing for VMware View Storage Accelerator (CBRC), View Storage Accelerator Performance Benchmark, CBRC and Local Mode in VMware View 5.1, View Storage Accelerator (CBRC) Hashing Function.
CBRC helps to address some of the performance bottlenecks and the increased storage cost of VDI. CBRC is a 100% host-based, RAM-based caching solution that reduces the read IOs issued to the storage subsystem, improving storage scalability while remaining completely transparent to the guest OS. However, CBRC comes at a cost.
When the View Storage Accelerator feature (CBRC) is enabled, a per-VMDK digest file is created to store hash information about the VMDK blocks. The estimated size of each digest file is roughly (see the sizing sketch after the list below):
- 5 MB per GB of the VMDK size [hash-collision detection turned off (default)]
- 12 MB per GB of the VMDK size [hash-collision detection turned on]
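As a rough illustration of what those per-GB figures mean in practice, here is a minimal sizing sketch. The 40 GB replica size and the 100-desktop pool are hypothetical example values I picked for the illustration, not numbers from the original post.

```python
# Rough CBRC digest sizing sketch based on the per-GB estimates above.
# The replica size and desktop count are hypothetical example values.

DIGEST_MB_PER_GB_DEFAULT = 5     # hash-collision detection turned off (default)
DIGEST_MB_PER_GB_COLLISION = 12  # hash-collision detection turned on

def digest_size_mb(vmdk_size_gb: float, collision_detection: bool = False) -> float:
    """Estimate the size of the per-VMDK digest file in MB."""
    per_gb = DIGEST_MB_PER_GB_COLLISION if collision_detection else DIGEST_MB_PER_GB_DEFAULT
    return vmdk_size_gb * per_gb

replica_gb = 40  # hypothetical replica/base image size
print(f"Digest, default:              {digest_size_mb(replica_gb):.0f} MB")
print(f"Digest, collision detection:  {digest_size_mb(replica_gb, True):.0f} MB")

# With full clones every VMDK gets its own digest, so the overhead scales
# with the number of desktops rather than with a single replica disk.
desktops = 100  # hypothetical pool size
print(f"100 full clones, default:     {digest_size_mb(replica_gb) * desktops / 1024:.1f} GB")
```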
Creating the digest file for a large replica disk can take a long time and consume a large number of IOPS, so it is recommended not to run that operation, create new desktop pools, or recompose existing pools during production hours.
CBRC also uses RAM to manage the cached disk blocks, and the per-VMDK digest file is loaded into memory as well. That is why CBRC should not be enabled in memory-overcommitted environments: if a host is memory over-committed and CBRC is enabled, memory pressure increases further because CBRC also uses memory for its cache. In such cases, the host could experience increased swapping and overall host performance could suffer.
Whilst I wrote about CBRC’s benefits, I also received numerous negative comments about the technology, including its lack of support for full-clone desktops, its lack of support for layering tools like Unidesk, and the time it takes to generate new hashes for every replica.
CBRC is a platform (vSphere) feature; however, it is only enabled and available via Horizon View. Other VDI products such as XenDesktop or vWorkspace cannot utilize the feature.
Nutanix removes the need for CBRC altogether, providing similar functionality to any VDI solution running on top of vSphere, Hyper-V or KVM. Nutanix has a de-duplication engine built into the solution that works in real time for data stored in DRAM and Flash.
Content Cache (Dynamic Read Cache)
Read more…
#VDI Calculator v5 is Now Available with Major New Features – #IaaS, #Storage, #BYOD via @andreleibovici
This is awesome! Great work by @andreleibovici!
I am happy to announce the General Availability of the new VDI Calculator v5. This new version is the single biggest release since I started delivering the calculator. I have completely re-architected the way the calculator works, allowing multiple types of desktops to be configured in a single calculation for a single solution.
All existing features have been retained and will work in the exact same way you are used to, but you now have the ability to select different options for different types of desktops or desktop pools.
As an example, you may choose Desktop Type 1 to be a ‘student’ desktop using Linked Clones with 10 different pools, while Desktop Type 2 is a ‘professor’ desktop using Full Clones with 5 individual pools. This new calculator gives you much more granular control over your calculations, eliminating repetitive tasks when sizing larger environments.
To enable multi-desktop pool calculations just select ‘-’ and ‘+’ in the top bar menu.
Another new feature is what I call ‘Ask for Help‘. During the application session, when you select the Update option a new screen will appear asking if you would like to be contacted by VDI solution vendors that can help reduce costs, improve performance or improve manageability of your VDI solution. If you are interested…
Continue reading here!
//Richard
There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix
Great blog post by @binnygill! 😉
This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.
Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.
Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?
Today Flash is forcing datacenter architects to revisit this same decision. Fast, near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in their legacy architectures, but have failed. The last-ditch effort is to create special-purpose architectures that bundle flash into arrays and connect them to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share it with all servers? This last stand will be a very short-lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.
For example, consider a single Fusion-IO flash card that writes at 2.5GB/s throughput and supports 1,100,000 IOPS with just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25GB/s of streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.
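To make that arithmetic explicit, here is a minimal back-of-the-envelope sketch. The per-card figures are the ones quoted above; the front-end network link speeds used for comparison are my own assumptions, added purely for illustration.

```python
# Back-of-the-envelope: raw aggregate capability of 10 locally attached flash
# cards versus what a networked controller front end would have to deliver.
# Per-card figures are quoted above; network link speeds are assumptions.

CARD_WRITE_GBPS = 2.5   # GB/s streaming writes per card (quoted)
CARD_IOPS = 1_100_000   # IOPS per card (quoted)
CARDS = 10

aggregate_write_gbps = CARD_WRITE_GBPS * CARDS  # 25 GB/s
aggregate_iops = CARD_IOPS * CARDS              # 11,000,000 IOPS
print(f"Raw aggregate: {aggregate_write_gbps:.0f} GB/s, {aggregate_iops:,} IOPS")

# Assumed front-end links, in gigabits per second (protocol overhead ignored,
# which is optimistic for NFS/iSCSI/FC).
for name, gbit in [("10 GbE", 10), ("40 GbE", 40), ("16G FC", 16)]:
    link_gbps = gbit / 8.0  # GB/s per link
    links_needed = aggregate_write_gbps / link_gbps
    print(f"{name}: {link_gbps:.2f} GB/s per link -> ~{links_needed:.0f} links "
          f"needed just to match the raw flash write bandwidth")
```

Even before protocol overhead is counted, the controller’s network front end, not the flash itself, becomes the limiting factor.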
Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X that of each server, or else we end up artificially limiting the bandwidth that each server can get based on how the storage array is shared.
No networking technology, whether it be InfiniBand, Ethernet, or Fibre Channel, can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons and for sustainable economics.
The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts below illustrate the dramatic speed up from server-attached flash.
Continue reading here!
//Richard
Solving the Compute and Storage scalability dilemma – #Nutanix, via @josh_odgers
Compute, Network and Storage is a hot topic, as I’ve written in blog posts before this one (How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras), and still a lot of colleagues and customers are struggling to find better solutions and architectures.
How can we ensure that we get the same or better performance out of our new architecture? How can we scale in a more simple and linear manner? How can we ensure that we don’t have a single point of failure for all of our VMs, etc.? How are others scaling and doing this in a better way?
I’m not a storage expert, but I do know and read that many companies out there are working on finding the optimal solution for Compute and Storage, and on how they can get the cost down and be left with a simpler architecture to manage…
This is a topic that most organisations need to address now that more and more of them are starting to build their private clouds: how are you going to scale it, and how can you get closer to the delivery that the big players provide? Gartner even had Software-Defined Storage (SDS) as the number 2 trend going forward: #Gartner Outlines 10 IT Trends To Watch – via @MichealRoth, #Nutanix, #VMWare
Right now I see Nutanix as the leader here! They rock! Just have a look at this linear scalability:
If you want to learn more how Nutanix can bring great value please contact us at EnvokeIT!
For a two-minute intro to Nutanix, have a look at these videos:
Overview:
True or False: Always use Provisioning Services – #Citrix, #PVS, #MCS
Another good blog post from Daniel Feller:
Test your Citrix muscle…
True or False: Always use Provisioning Services
Answer: False
There has always been this aura around Machine Creation Services in that it could not hold a candle to Provisioning Services; that you would be completely insane to implement this feature in any but the simplest/smallest deployments.
How did we get to this myth? Back in March of 2011 I blogged about deciding between MCS and PVS. I wanted to help people decide between using Provisioning Services and the newly released Machine Creation Services. Back in 2011, MCS was an alternative to PVS in that MCS was easy to set up, but had some limitations when compared to PVS. My blog and decision tree were used to help steer people toward the PVS route except for the use cases where MCS made sense.
Two and a half years passed and over that time, MCS has grown up. Unfortunately, I got very busy and didn’t keep this decision matrix updated. I blame the XenDesktop product group. How dare they improve our products. Don’t they know this causes me more work?
It’s time to make some updates based on improvements of XenDesktop 7 (and these improvements aren’t just on the MCS side but also on the PVS side as well).
So let’s break it down:
- Hosted VDI desktops only: MCS in XenDesktop 7 now supports XenApp hosts. This is really cool, and I am very happy about this improvement, as so many organizations understand that XA plays a huge part in any successful VDI project.
- Dedicated Desktops: Before PVD, I was no fan of doing dedicated VDI desktops with PVS. With PVD, dedicated desktops with PVS are now much more feasible, as they always were with MCS.
- Boot/Logon Storms: PVS, if configured correctly, would cache many of the reads in system memory, helping to reduce read IOPS. Hypervisors have improved over the past two years to help us with the large number of read disk operations. This helps lessen the impact of boot/logon storms when using MCS.
#Citrix #PVS vs. #MCS Revisited – #Nutanix, #Sanbolic
Another good blog post from Citrix and Nick Rintalan on the famous topic of whether to go for PVS or MCS! If you’re thinking about this topic, don’t miss this article. Also make sure you talk to someone who has implemented an image management/provisioning service like this to get some details on lessons learnt, etc. With the changes in the hypervisor layer and the cache features, this is getting really interesting…
AND don’t forget the really nice storage solutions that exist out there, like Nutanix and Melio, that really solve some of these challenges!!
http://go.nutanix.com/rs/nutanix/images/TG_XenDesktop_vSphere_on_Nutanix_RA.pdf
Melio Solutions – Virtual Desktop Infrastructure
Back to the Citrix blog post:
It’s been a few months since my last article, but rest assured, I’ve been keeping busy and I have a ton of stuff in my head that I’m committed to getting down on paper in the near future. Why so busy? Well, our Mobility products are keeping me busy for sure. But I also spent the last month or so preparing for 2 different sessions at BriForum Chicago. My colleague, Dan Allen, and I co-presented on the topics of IOPS and Folder Redirection. Once Brian makes the videos and decks available online, I’ll be sure to point people to them.
So what stuff do I want to get down on paper and turn into a future article? To name a few… MCS vs. PVS (revisited), NUMA and XA VM Sizing, XenMobile Lessons Learned “2.0”, and Virtualizing PVS Part 3. But let’s talk about that first topic of PVS vs MCS now.
Although BriForum (and Synergy) are always busy times, I always try to catch a few sessions by some of my favorite presenters. One of them is Jim Moyle and he actually inspired this article. If you don’t know Jim, he is one of our CTPs and works for Atlantis Computing – he also wrote one of the most informative papers on IOPS I’ve ever read. I swear there is not a month that goes by that I don’t get asked about PVS vs. MCS (pros and cons, what should I use, etc.). I’m not going to get into the pros and cons or tell you what to use since many folks like Dan Feller have done a good job of that already, even with beautiful decision trees. I might note that Barry Schiffer has an updated decision tree you might want to check out, too. But I do want to talk about one of the main reasons people often cite for not using MCS – it generates about “1.6x or 60% more IOPS compared to PVS“. And ever since Ken Bell sort of “documented” this in passing about 2-3 years ago, that’s sort of been Gospel and no one had challenged it. But our CCS team was seeing slightly different results in the field and Jim Moyle also decided to challenge that statement. And Jim shared the results of his MCS vs. PVS testing at BriForum this year – I think many folks were shocked by the results.
What were those results? Here is a summary of the things I thought were most interesting (a quick sketch after the list shows how figures like these are derived from raw measurements):
- MCS generates 21.5% more average IOPS compared to PVS in the steady-state (not anywhere near 60%)
- This breaks down to about 8% more write IO and 13% more read IO
- MCS generates 45.2% more peak IOPS compared to PVS (this is closer to the 50-60% range that we originally documented)
- The read-to-write (R/W) IO ratio for PVS was 90%+ writes in both the steady-state and at peak (nothing new here)
- The R/W ratio for MCS at peak was 47/53 (we’ve long said it’s about 50/50 for MCS, so nothing new here)
- The R/W ratio for MCS in the steady-state was 17/83 (this was a bit of a surprise, much like the first bullet)
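If you want to reproduce figures like these from your own monitoring data, the derivation is straightforward arithmetic. The sketch below uses made-up per-desktop steady-state IOPS values, chosen only to illustrate how the deltas and R/W ratios are calculated; they are not Jim’s actual measurements.

```python
# How summary figures like the bullets above can be derived from raw
# steady-state measurements. The per-desktop IOPS numbers below are
# made-up illustrative values, NOT the actual test results.

PVS_READS, PVS_WRITES = 0.8, 9.2   # hypothetical steady-state IOPS per PVS desktop
MCS_READS, MCS_WRITES = 2.1, 10.0  # hypothetical steady-state IOPS per MCS desktop

pvs_total = PVS_READS + PVS_WRITES
mcs_total = MCS_READS + MCS_WRITES

extra_pct = 100 * (mcs_total - pvs_total) / pvs_total
print(f"MCS generates {extra_pct:.1f}% more steady-state IOPS than PVS")

print(f"PVS R/W ratio: {100 * PVS_READS / pvs_total:.0f}/{100 * PVS_WRITES / pvs_total:.0f}")
print(f"MCS R/W ratio: {100 * MCS_READS / mcs_total:.0f}/{100 * MCS_WRITES / mcs_total:.0f}")

# Split the extra IO into its read and write components, expressed as
# percentage points of the PVS total:
extra_read_pts = 100 * (MCS_READS - PVS_READS) / pvs_total
extra_write_pts = 100 * (MCS_WRITES - PVS_WRITES) / pvs_total
print(f"... roughly {extra_read_pts:.0f} points more read IO and "
      f"{extra_write_pts:.0f} points more write IO")
```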
So how can this be?!?
I think it’s critical to understand where our initial “1.5-1.6x” or “50-60%” statement comes from – that takes into account not just the steady-state, but also the boot and logon phases, which are mostly read IOPS and absolutely drive up the numbers for MCS. If you’re unfamiliar with the typical R/W ratios for a Windows VM during the various stages of its “life” (boot, logon, steady-state, idle, logoff, etc.), then this picture, courtesy of Project VRC, always does a good job explaining it succinctly:
We were also looking at peak IOPS and average IOPS in a single number – we didn’t provide two different numbers or break it down like Jim and I did above in the results, and a single IOPS number can be very misleading in itself. You don’t believe me? Just check out my BriForum presentation on IOPS and I’ll show you several examples of how…
Continue reading here!
//Richard