Archive

Posts Tagged ‘computing’

Gartner Identifies the Top 10 Strategic Technology Trends for 2015 – #Nutanix, #WebScale, #Dell, #EnvokeIT, #Gartner

October 10, 2014

As usual, it’s very interesting when Gartner takes a look at the trends for the coming year. I agree with many of them, and one in particular is very close to my heart, something I think should have been on the agenda of most CIOs well before 2015: Web-Scale IT.

Why haven’t more enterprise and solution architects looked earlier at how to simplify the delivery of the “commodity” service that IaaS should be in today’s IT world? Yes, I know that most enterprises have a “legacy” environment that is hard to simply transform, a service delivery organisation with certain competences, and salesmen from the older legacy providers telling them that this new way is scary (up until those vendors come up with a web-scale story of their own, of course). But it’s time to wake up and look at how you can change your compute, network and storage components to reduce complexity, increase flexibility and agility, focus on your core business (the apps and services on top), and reduce your TCO.

One way, of course, is to move to the cloud and let someone else worry about all this, but I don’t yet see the larger enterprises doing so. There is hesitation, mostly because they haven’t reached the point of understanding the TCO model and how to compare their as-is costs to the figures produced by the costing tools of Azure, Amazon and others. Why is this? My view is that most don’t have a clear understanding of their own as-is TCO. They know how much a server costs and how much storage costs, but not the full TCO: facility/datacenter costs, power and cooling, hardware, support and operations, and licensing, brought together in a model they can understand and compare with “the cloud”.
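As a hedged illustration of what such an as-is TCO model might look like, here is a minimal sketch. All cost categories and figures below are invented assumptions for illustration, not real data or a recommended model:

```python
# Rough on-premises TCO model for comparing against cloud provider pricing.
# Every rate below is an illustrative assumption.

def annual_onprem_tco(servers, cost_per_server=8000, amort_years=4,
                      power_cooling_per_server=600,
                      datacenter_per_server=900,
                      support_ratio=0.18,
                      ops_staff_cost=100000, servers_per_admin=50,
                      license_per_server=1200):
    """Estimate yearly total cost of ownership for an on-prem fleet."""
    hw = servers * cost_per_server / amort_years          # amortized hardware
    power = servers * power_cooling_per_server            # power & cooling
    facility = servers * datacenter_per_server            # datacenter space
    support = servers * cost_per_server * support_ratio   # vendor support
    ops = (servers / servers_per_admin) * ops_staff_cost  # operations staff
    licenses = servers * license_per_server               # software licenses
    return hw + power + facility + support + ops + licenses

total = annual_onprem_tco(200)
print(f"Estimated annual TCO for 200 servers: ${total:,.0f}")
```

Even a toy model like this makes the point: hardware is only one line item among several, and without the full picture the cloud calculators have nothing meaningful to be compared against.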

Ok, as usual I’m getting a bit sidetracked but I love this topic and I must encourage you to contact EnvokeIT if you need help to understand the Web-Scale IT concept and how it can add value to you and your business. We work with Nutanix and Dell and can assist in assessing your existing As-Is solution and forming the To-Be target architecture and the strategy to get there based on your requirements and needs. Of course we’re not locked into Dell or Nutanix and have experience within Azure and other public cloud providers as well as other hardware vendor solutions like HP, NetApp etc.

If you’d like to see a really cool solution that is coming, have a look at my previous post, which includes a short and cool video: Dell + Nutanix = awesome!

Here are the top 10 trends for 2015 that Gartner has identified:

Analysts Examine Top Industry Trends at Gartner Symposium/ITxpo 2014, October 5-9 in Orlando

Gartner, Inc. today highlighted the top 10 technology trends that will be strategic for most organizations in 2015. Analysts presented their findings during the sold out Gartner Symposium/ITxpo, which is taking place here through Thursday.

Gartner defines a strategic technology trend as one with the potential for significant impact on the organization in the next three years. Factors that denote significant impact include a high potential for disruption to the business, end users or IT, the need for a major investment, or the risk of being late to adopt. These technologies impact the organization’s long-term plans, programs and initiatives.
Read more…

#Nutanix Announces Global Agreement with #Dell

Wow! This is interesting! 😀

Strategic Relationship Significantly Expands Access and Distribution of Nutanix Solutions with Dell’s World-Class Hardware, Services and Marketing to Accelerate Adoption of Web-scale Converged Infrastructure in the Enterprise

SAN JOSE, CALIF. – June 24, 2014 – Nutanix, the leading provider of next-generation datacenter infrastructure solutions, today announced it has signed an original equipment manufacturing (OEM) agreement with Dell to offer a new family of converged infrastructure appliances based on Nutanix web-scale technology. The combination of Nutanix’s groundbreaking software running on Dell’s industry-leading servers delivers a flexible, scale-out platform that brings IT simplicity to modern datacenters. The Nutanix and Dell collaboration is designed from the ground up to deliver innovative web-scale technology to enterprises of any size. The agreement also includes joint sales, marketing, support and service investments, as well as alignment of product roadmaps.

The new Dell XC Series of Web-scale Converged Appliances will be built with Nutanix software running on Dell PowerEdge servers, and will be available in multiple variants to meet a wide range of price and performance options. The appliances will deliver high-performance converged infrastructure ideal for powering a broad spectrum of popular enterprise use cases, including virtual desktop infrastructure (VDI), virtualized business applications, multi-hypervisor environments and more. Nutanix’s web-scale software runs on all popular virtualization hypervisors, including VMware vSphere™, Microsoft Hyper-V™ and open source KVM, and is uniquely able to span multiple hypervisors in the same environment. The Dell XC Series appliances are scheduled for availability in the fourth quarter of this year and will be sold by Dell sales teams and channel partners worldwide.

“Nutanix is a recognized leader in the converged infrastructure market with a software-driven offering that fits with Dell’s efforts to redefine datacenter economics and simplify IT for our customers,” said Alan Atkinson, vice president and general manager, Dell Storage. “By combining market-leading infrastructure and software technologies from both companies with Dell’s world-class go-to-market capabilities, we believe our new solutions will be positioned to be a significant player in the growing, multi-billion dollar converged infrastructure market.”

“Dell is a world-class leader in servers, storage and networking, and has established itself as a valuable IT partner for many of the world’s largest organizations,” said Dheeraj Pandey, co-founder and CEO, Nutanix. “Nutanix is teaming with Dell to accelerate our global sales growth through Dell’s vast direct and channel sales networks. In Dell, we chose a company that shares our vision of disrupting traditional datacenter infrastructures with intelligent software running on x86 hardware to power all datacenter services.” Read more…

OpenStack and Nutanix – perfect match! (perfect with VMware and Microsoft as well of course) – #Nutanix, #OpenStack, #IaaS

This is a good post by Dwayne Lessner on what a perfect match OpenStack and Nutanix are (and not just OpenStack, of course; Nutanix rocks with VMware and Microsoft as well)!

Nutanix NDFS also provides an advanced and unique feature set for OpenStack-based private clouds. Key features include:

  • Simplicity – The same great platform that simplified your virtualisation deployment can simplify the compute and storage deployment for key OpenStack services (Glance, Nova, Horizon, Keystone, Neutron, Cinder, and Swift)
  • Single Scalable Fabric – NDFS provides a single fabric for data storage that integrates seamlessly with OpenStack services. NDFS-based storage is easy to provision, manage, and operate at scale.
  • Hypervisor Agnostic – Just like OpenStack, Nutanix NDFS was designed from the ground up to be hypervisor agnostic. Nutanix enables customers to choose between KVM, Hyper-V, and the VMware ESXi hypervisor for deployments of OpenStack.
  • Enterprise Ready – Nutanix enables a full set of enterprise storage features including Elastic Deduplication, Compression, In-Memory and Flash-based Caching, VM-Data Locality, intelligent Information Lifecycle Management (ILM), Snapshots, Fast Clones, and Live Migration.

OpenStack on Nutanix

Read more here

Here you also have the link to the webinar with topic:

Building OpenStack on a Single 2U Appliance

OpenStack promises to be the open source cloud operating system. Automated provisioning and management of network, server and storage resources via a single dashboard is great, but how can you get the same one-stop-shop simplicity for the underlying infrastructure? Attend this advanced private cloud webinar and learn:

  • Why OpenStack is much more than just hype
  • A summary of key OpenStack technologies
  • Why to consider converged infrastructure for building private clouds
  • The right way to scale-out OpenStack deployments 

Watch the webinar here!

//Richard

#Nutanix – the ultimate Virtual Computing Platform for VDI – CBRC-like Functionality For Any #VDI Solution with #Nutanix – #IaaS – via @andreleibovici

February 3, 2014

It’s really great to see the capabilities of the Nutanix platform! Just read this great blog post by @andreleibovici around Content Based Read Cache (CBRC) and how this isn’t necessary at all on a platform like Nutanix!

Conclusion

Over time I will discuss more about the technology behind the Content and Extent Caches. For now, what is important to know is that Nutanix provides the same in-memory, microsecond-latency benefits offered by CBRC for any VDI solution on any of the aforementioned hypervisors, for both Linked and Full Clones. In fact, Nutanix engineers even recommend that Horizon View administrators disable CBRC, because the Nutanix approach is less costly to the overall infrastructure.

It is amazing when your world turns upside down and a technology that used to be awesome becomes mostly irrelevant. It amazes me how fast technology evolves and helps organizations achieve better performance and lower OPEX.
 
For a long time I have discussed the benefits of CBRC (Content Based Read Cache) available with Horizon View 5.1 onwards, allowing Administrators to drastically cut-down on read IO operations, offloading the storage infrastructure and providing greater end-user experience.
 
Here are a few of the blog posts I wrote on CBRC technology: Understanding CBRC (Content Based Read Cache), Understanding CBRC – RecomputeDigest Method, Sizing for VMware View Storage Accelerator (CBRC), View Storage Accelerator Performance Benchmark, CBRC and Local Mode in VMware View 5.1, and View Storage Accelerator (CBRC) Hashing Function.
 
CBRC helps to address some of the performance bottlenecks and the increase of storage cost for VDI. CBRC is a 100% host-based RAM-Based caching solution that helps to reduce read IOs issued to the storage subsystem and thus improves scalability of the storage subsystem while being completely transparent to the guest OS. However, CBRC comes at a cost.
 
When the View Storage Accelerator feature (CBRC) is enabled, a per-VMDK digest file is created to store hash information about the VMDK blocks. The estimated size of each digest file is roughly:

  • 5 MB per GB of the VMDK size [hash-collision detection turned-off (Default)]
  • 12 MB per GB of the VMDK size [hash-collision detection turned-on]
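The sizing rule above is simple to apply. As a hedged sketch (the 5 MB and 12 MB per GB ratios are taken from the text above; the helper function and the 40 GB example are illustrative only):

```python
# Estimate CBRC digest file size from VMDK size, using the per-GB
# ratios quoted above: 5 MB/GB with hash-collision detection off
# (the default), 12 MB/GB with it on.

def cbrc_digest_size_mb(vmdk_size_gb, collision_detection=False):
    """Estimated digest file size in MB for a given VMDK size in GB."""
    per_gb = 12 if collision_detection else 5
    return vmdk_size_gb * per_gb

# A 40 GB replica disk:
print(cbrc_digest_size_mb(40))        # 200 MB with defaults
print(cbrc_digest_size_mb(40, True))  # 480 MB with collision detection
```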

Digest file creation for a large replica disk can take a long time and consume a large number of IOPS, so it is recommended not to run the operation, create new desktop pools, or recompose existing pools during production hours.

CBRC also uses RAM to manage the cached disk blocks, and the per-VMDK digest file is itself loaded into memory. That is why CBRC should not be enabled in memory-overcommitted environments: if a host is over-committed on memory and CBRC is enabled, memory pressure increases further, since CBRC also consumes memory for its cache. In such cases the host could experience increased swapping, and overall host performance could suffer.

Whilst I wrote about CBRC benefits, I also received numerous negative comments about the technology, including lack of support for full-clone desktops, being unsupported for layering tools like Unidesk, and taking too long to generate new hashes for every replica.

CBRC is a platform feature (vSphere), however it is only enabled and available via Horizon View. Other VDI products such as XenDesktop or vWorkspace cannot utilize the feature.

Nutanix removes the need for CBRC altogether, providing similar functionality to any VDI solution running on top of vSphere, Hyper-V or KVM. Nutanix has a deduplication engine built into the solution that works in real time on data stored in DRAM and flash.

 


Content Cache (Dynamic Read cache) Read more…

Nutanix NX-3000 review: Virtualization cloud-style – #Nutanix, #IaaS

January 29, 2014

A great review of the Nutanix Virtual Computing Platform! 🙂

Nutanix NX-3000 Series
Nutanix NX-3000 review: Virtualization cloud-style

What do you get when you combine four independent servers, lots of memory, standard SATA disks and SSD, 10Gb networking, and custom software in a single box? In this instance, the answer would be a Nutanix NX-3000. Pigeonholing the Nutanix product into a traditional category is another riddle altogether. While the company refers to each unit it sells as an “appliance,” it really is a clustered combination of four individual servers and direct-attached storage that brings shared storage right into the box, eliminating the need for a back-end SAN or NAS.

I was recently given the opportunity to go hands on with a Nutanix NX-3000, the four nodes of which were running version 3.5.1 of the Nutanix operating system. It’s important to point out that the Nutanix platform handles clustering and file replication independent of any hosted virtualization system. Thus, a Nutanix cluster will automatically handle node, disk, and network failures while providing I/O at the speed of local disk — and using local SSD to accelerate access to the most frequently used data. Nutanix systems support the VMware vSphere and Microsoft Hyper-V hypervisors, as well as KVM for Linux-based workloads.

[ The Nutanix NX-3000 is an InfoWorld 2014 Technology of the Year Award winner. ]

Nutanix was founded by experienced data center architects and engineers from the likes of Google, Facebook, and Yahoo. That background brings with it a keen sense of what makes a good distributed system and what software pieces are necessary to build a scalable, high-performance product. A heavy dose of innovation and ingenuity shows up in a sophisticated set of distributed cluster management services, which eliminate any single point of failure, and in features like disk block fingerprinting, which leverages a special Intel instruction set (for computing an SHA-1 hash) to perform data deduplication and to ensure data integrity and redundancy.
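The block-fingerprinting idea is worth making concrete. Below is a hedged, minimal sketch of content-hash deduplication: hash each fixed-size block, store only one copy per unique fingerprint, and keep an ordered list of fingerprints to reconstruct the data. Real platforms use hardware-accelerated SHA-1 and far more sophisticated metadata; `hashlib` and the 4 KB block size here are illustrative assumptions only:

```python
# Minimal content-fingerprint deduplication sketch.
import hashlib

BLOCK_SIZE = 4096  # illustrative block size

def dedup(data: bytes):
    """Split data into blocks; store each unique block exactly once."""
    store = {}    # fingerprint -> block contents
    layout = []   # ordered fingerprints, enough to reconstruct the data
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        fp = hashlib.sha1(block).hexdigest()
        store.setdefault(fp, block)   # write the block only if unseen
        layout.append(fp)
    return store, layout

data = b"A" * BLOCK_SIZE * 3 + b"B" * BLOCK_SIZE  # three identical blocks + one
store, layout = dedup(data)
print(len(layout), "blocks referenced,", len(store), "blocks stored")
```

The same fingerprints double as integrity checks: re-hashing a stored block and comparing against its key detects corruption.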

A Nutanix cluster starts at one appliance (technically three nodes, allowing for the failure of one node) and scales out to any number of nodes. The NDFS (Nutanix Distributed File System) provides a single store for all of your VMs, handling all disk and I/O load balancing and eliminating the need to use virtualization platform features like VMware’s Storage DRS. Otherwise, you manage your VMs no differently than you would on any other infrastructure, using VMware’s or Microsoft’s native management tools.

Nutanix architecture
The hardware behind the NX-3000 comes from SuperMicro. Apart from the fact that it squeezes four dual-processor server blades inside one 2U box, it isn’t anything special. All of the magic is in the software. Nutanix uses a combination of open source software, such as Apache Cassandra and ZooKeeper, plus a bevy of in-house developed tools. Nutanix built cluster configuration management services on ZooKeeper and heavily modified Cassandra for use as the primary object store for the cluster.

Test Center Scorecard (criteria weights: 20%, 20%, 20%, 20%, 10%, 10%)

Nutanix NX-3000 Series scores: 10, 9, 10, 9, 9, 8

Overall: 9.3 (Excellent)

 

Continue reading here!

//Richard

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last ditch effort is to create special-purpose architectures that bundle flash into arrays, and connect it to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share with all servers? This last stand will be a very short lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-IO flash card that writes at 2.5GB/s throughput and supports 1,100,000 IOPS with just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25GB/s of streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge, no storage controller can do that today, and for good reasons.
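The aggregate figures in that thought experiment are straight multiplication of the per-card specs quoted above (specs taken from the text, not measured here):

```python
# Naive aggregation of per-card specs across 10 cards, as in the
# thought experiment above: what a controller would need to deliver
# to avoid throttling the cards it houses.
card_write_gbs = 2.5     # GB/s streaming writes per card
card_iops = 1_100_000    # IOPS per card
cards = 10

print(cards * card_write_gbs)  # 25.0 GB/s aggregate
print(cards * card_iops)       # 11,000,000 IOPS aggregate
```

The point of the post is precisely that no networked controller delivers this naive aggregate, which is the argument for keeping flash server-local.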

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X that of each server, or else we artificially limit the bandwidth each server can get based on how the storage array is shared.

No networking technology, whether it be Infiniband, Ethernet, or fibre channel can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons, as well as for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts below illustrate the dramatic speed up from server-attached flash.

Continue reading here!

//Richard

#Apache #CloudStack grows up – #Citrix, #IaaS – via @sjvn

On June 4th, the 4.1.0 release of the Apache CloudStack Infrastructure-as-a-Service (IaaS) cloud orchestration platform arrived. This is the first major CloudStack release since its March 20th graduation from the Apache Incubator.


It’s also the first major release of CloudStack since Citrix submitted the project to the Apache Foundation in 2012. Apache CloudStack is an integrated software platform that enables users to build a feature-rich IaaS. Apache claims that the new version includes an “intuitive user interface and rich API [application programming interface] for managing the compute, networking, accounting, and storage resources for private, hybrid, or public clouds.”

This release includes numerous new features and bug fixes from the 4.0.x cycle. It also includes major changes in the codebase to make CloudStack easier for developers; a new structure for creating RPM/Debian packages; and completes the changeover to using Maven, the Apache software project management tool.

Apache CloudStack 4.1.0’s most important new features are:

  • An API discovery service that allows an end point to list its supported APIs and their details.
  • An events framework that provides an “event bus” with publish, subscribe, and unsubscribe semantics, including a RabbitMQ plug-in that can interact with AMQP (Advanced Message Queuing Protocol) servers.
  • L3 router functionality for the VMware Nicira network virtualization platform (NVP) plug-in.
  • Support for Linux’s built-in Kernel-based Virtual Machine (KVM) virtualization with NVP L3 router functionality.
  • Support for AWS (Amazon Web Services)-style regions.
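To make the “event bus” idea concrete, here is a hedged, minimal in-process sketch of publish/subscribe/unsubscribe semantics. This is an illustration of the pattern only, not CloudStack’s actual events framework or its RabbitMQ plug-in API; the topic name and event payload are invented:

```python
# Minimal in-process event bus illustrating publish/subscribe semantics.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def unsubscribe(self, topic, callback):
        self._subscribers[topic].remove(callback)

    def publish(self, topic, event):
        # Deliver the event to every subscriber of this topic.
        for callback in list(self._subscribers[topic]):
            callback(event)

bus = EventBus()
seen = []
bus.subscribe("vm.start", seen.append)          # hypothetical topic name
bus.publish("vm.start", {"vm": "i-123", "state": "Running"})
print(seen)
```

A real deployment would put a broker such as RabbitMQ between publishers and subscribers so that events cross process and host boundaries; the semantics remain the same.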

What all this adds up to, according to CloudStack Project Management Committee (PMC) member Joe Brockmeier, is that today’s CloudStack is “a mature, stable project, [that] is also free as in beer and speech. We believe that if you’re going to be building an IaaS cloud for private or public consumption, you’ll be better served choosing an open platform that any organization can participate in and contribute to.”

Brockmeier concluded, “CloudStack is a very mature offering that’s relatively easy to deploy and manage, and it’s known to power some very large clouds–e.g., Zynga with tens of thousands of nodes–and very distributed clouds–such as DataPipe, which…

Continue reading here!

//Richard
