Posts Tagged ‘HDD’

FINALLY!! Nutanix Community Edition (CE) is here and it’s FREE!! – #Nutanix, #EnvokeIT, #Virtualization via @andreleibovici

This is so cool! I know that a lot of people out there have been waiting for this, including myself! 😉

Nutanix CE is a great way to get started with Nutanix in your own lab environment, and it is now available to everyone. CE is a fully working Acropolis + Prism stack that enables you not only to host your virtual machines, but to enjoy all the benefits of Nutanix. The features available in CE are exactly the same as those enjoyed by paying customers; the only differences are that it is a community-supported edition and that there is a maximum limit of 4 nodes.

Some of the features available with CE are:

  • De-duplication
  • Compression
  • Erasure Coding
  • Asynchronous DR
  • Shadow Cloning
  • Single server (RF=1), three servers (RF=2) or four servers (RF=2)
  • Acropolis Hypervisor (all VM operations, high availability etc.)
  • Analytics
  • Full API framework for development, orchestration and automation (see the sketch after this list)
  • Self-Healing
  • ToR integration
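
As an illustration of the API bullet above, here is a minimal PowerShell sketch of querying the Prism REST API for the VMs on a cluster. The host name and credentials are placeholders, and the v1 endpoint path, port and response fields are my assumptions about the Prism API rather than anything stated in the CE announcement, so treat it purely as a starting point:

    # Assumption: the Prism gateway on a CVM exposes the v1 REST API on port 9440
    $cvm  = "ce-cvm.lab.local"          # hypothetical CVM / cluster address
    $cred = Get-Credential              # Prism user, e.g. admin
    $uri  = "https://$($cvm):9440/PrismGateway/services/rest/v1/vms"

    # CE labs typically use self-signed certificates; skip validation for this sketch only
    [System.Net.ServicePointManager]::ServerCertificateValidationCallback = { $true }

    # List the VMs the cluster knows about (assumed response shape: an 'entities' array)
    $vms = Invoke-RestMethod -Uri $uri -Method Get -Credential $cred
    $vms.entities | Select-Object vmName, powerState, numVCpus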

Metro Availability, Synchronous Replication, Cloud Connect and Prism Central are not part of Nutanix CE.

Since you will be providing the hardware, there are some minimum requirements:

[Screenshot: minimum hardware requirements table]

Nutanix CE extends the Nutanix commitment to fostering an open, transparent and community-centric approach to innovative solutions for mainstream enterprises. Nutanix CE enables a complete hyperconverged infrastructure deployment in just 60 minutes or less on your own hardware, without virtualization or software licensing costs.

To get started, access “Getting Started with Nutanix Community Edition”, create an account, and you will be able to register for the download. The first…

As usual you’re more than welcome to contact me at richard at envokeit.com or contact us at EnvokeIT if you want to know more about Nutanix!

Continue reading here!

//Richard

#Nutanix Triumphs at V3 Technology Awards 2013 for Best Virtualisation Product – #IaaS

December 4, 2013

This is great! A great product takes home another award!!! 😉

V3 Readers Award Nutanix with Prestigious Industry Recognition in Highly Competitive Category

Nutanix also won the Best of VMworld 2013 Gold Award for Private Cloud Computing!

LONDON, December 3, 2013 – Nutanix, the leading provider of hyper-efficient, massively scalable and elegantly simple datacentre infrastructure solutions, has been recognised for its continuing innovation in optimising datacentre infrastructure at the V3 Technology Awards 2013. During a ceremony at the Waldorf Hilton Hotel, the company was awarded Best Virtualisation Product, beating a host of well-respected and larger, more established organisations in the virtualisation market.

V3.co.uk is a leading source of news and analysis for technology professionals, written by a team of expert IT journalists in the UK and Silicon Valley. The awards were hotly contested this year, with more than 450 entries from 150 companies.

“It’s great to see a new company like Nutanix being recognised at the V3 Technology Awards, among the industry giants. It wasn’t an easy task whittling down the hundreds of entries to create the shortlist, and then V3 readers voted in their thousands for their favourites, making this a significant achievement and a well-deserved win. Well done Nutanix!” said Madeline Bennett, Editor, V3 and The INQUIRER.

Alan Campbell, Regional Director of Western Europe at Nutanix, commented on the success: “Nutanix is a company that is constantly innovating and striving to provide the best platform for its customers, so this recognition by a highly respected publication is a testament to the hard work of our team. Virtualisation is a rapidly evolving technology which we are proud to be at the forefront of, and to receive an award in the UK, a key market for us, is an honour.”

As the fastest growing enterprise…

Continue reading here!

//Richard

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last-ditch effort is to create special-purpose architectures that bundle flash into arrays, and connect it to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share it with all servers? This last stand will be a very short-lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-IO flash card that writes at 2.5GB/s throughput and supports 1,100,000 IOPS with just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25GB/s streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.
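
To make that back-of-envelope scaling explicit, here is a trivial PowerShell sketch using the per-card figures quoted above; the aggregation is purely linear and ignores any controller or protocol overhead, which is exactly the point the author goes on to make:

    # Per-card figures quoted above (Fusion-IO ioDrive2 Duo)
    $writeGBps   = 2.5          # GB/s streaming writes per card
    $iopsPerCard = 1100000      # IOPS per card
    $cards       = 10           # cards packed into a 2U-3U controller

    # Naive linear aggregation - what the controller would have to sustain
    $aggregateGBps = $writeGBps * $cards        # 25 GB/s
    $aggregateIops = $iopsPerCard * $cards      # 11,000,000 IOPS

    "{0} cards -> {1} GB/s streaming writes, {2:N0} IOPS" -f $cards, $aggregateGBps, $aggregateIops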

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X that of each server, or else we end up artificially limiting the bandwidth that each server can get based on how the storage array is shared.

No networking technology, whether it be Infiniband, Ethernet, or fibre channel can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons, as well as for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts below illustrate the dramatic speed up from server-attached flash.

Continue reading here!

//Richard

#Windows Server 2012 Storage Spaces – using PowerShell – via LazyWinAdmin

November 12, 2013

Very good work on this blog post about Windows Storage Spaces!

WS2012 Storage – Creating a Storage Pool and a Storage Space (aka Virtual Disk) using PowerShell

In my previous posts I talked about how to use NFS and iSCSI technologies hosted on Windows Server 2012 and how to deploy those to my Home Lab ESXi servers.

One point I did not cover was: how to do the initial setup with the physical disks, storage pooling, and creating the Virtual Disk(s)?

The cost to acquire and manage highly available and reliable storage can represent a significant part of the IT budget. Windows Server 2012 addresses this issue by delivering a sophisticated virtualized storage feature called Storage Spaces as part of the WS2012 Storage platform. This provides an alternative option for companies that require advanced storage capabilities at a lower price point.

Overview

  • Terminology
  • Storage Virtualization Concept
  • Deployment Model of a Storage Space
  • Quick look at Storage Management under Windows Server 2012
  • Identifying the physical disk(s)
    • Server Manager – Volumes
    • PowerShell – Module Storage
  • Creating the Storage Pool
  • Creating the Virtual Disk
  • Initializing the Virtual Disk
  • Partitioning and Formatting
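
For reference, the steps listed above map onto the Storage module cmdlets roughly as follows. This is a minimal sketch: the pool name, virtual disk name, size and resiliency setting are illustrative, and it assumes a Windows Server 2012 (or later) host with enough poolable disks for a mirror:

    # Identify the physical disks that can be pooled (i.e. that still sit in the primordial pool)
    $poolableDisks = Get-PhysicalDisk | Where-Object { $_.CanPool }

    # Create the storage pool on the local Storage Spaces subsystem
    New-StoragePool -FriendlyName "Pool01" `
        -StorageSubSystemFriendlyName (Get-StorageSubSystem -FriendlyName "*Storage Spaces*").FriendlyName `
        -PhysicalDisks $poolableDisks

    # Create a virtual disk (Storage Space) in the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
        -ResiliencySettingName Mirror -Size 50GB -ProvisioningType Thin

    # Initialize, partition and format the new disk
    Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
        Initialize-Disk -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data01" -Confirm:$false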

Terminology

Storage Pool: Abstraction of multiple physical disks into a logical construct with specified capacity
A group of physical disks is collected into a container, the so-called storage pool, so that the total capacity collectively presented by those physical disks appears and can be managed as a single, seemingly continuous space.

There are two primary types of pools which are used in conjunction with Storage Spaces, as well as the management API in Windows Server 2012: Primordial Pool and Concrete Pool.

Primordial Pool: The Primordial pool represents all of the disks that Storage Spaces is able to enumerate, regardless of whether they are currently being used for a concrete pool. Physical Disks in the Primordial pool have a property named CanPool equal to “True” when they meet the requirements to create a concrete pool.

Concrete Pool: A Concrete pool is a specific collection of Physical Disks that was formed by the user to allow creating Storage Spaces (aka Virtual Disks).
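
A small sketch of how that distinction shows up in the Storage module (property names as exposed by Get-StoragePool and Get-PhysicalDisk on Windows Server 2012):

    # The primordial pool is simply the container of raw, not-yet-pooled disks
    Get-StoragePool | Select-Object FriendlyName, IsPrimordial, HealthStatus

    # Disks still in the primordial pool report CanPool = True once they meet the requirements;
    # after New-StoragePool moves them into a concrete pool, CanPool drops back to False
    Get-PhysicalDisk | Select-Object FriendlyName, CanPool, Size, MediaType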

#Sanbolic Brings Public Cloud Economics to the Enterprise – #Melio

March 18, 2013

Ok, I must say that this product is great!!! If you haven’t looked at it before then please do! And contact us at EnvokeIT if you want more details!

Sanbolic Enables Distributed Flash, SSD and HDD to Achieve Enterprise Systems Capability and Scale-Out In Server-Side and Commodity Storage Deployments

Waltham, MA – (March 18, 2013) – Sanbolic® today announced the general availability of its Melio version 5 (Melio5™) software – delivering distributed scale-out, high-availability and enterprise data services through software. Server-side flash has seen rapid adoption for applications such as hyperscale web serving, but limited adoption in general-purpose enterprise applications. With the launch of Melio5, Sanbolic enables enterprise customers to dramatically improve their storage infrastructure economics by enabling server-side flash, SSD and HDD as primary persistent storage. Melio5 aggregates across nodes for scale-out and availability while providing RAID, remote replication, Quality of Service (QoS), snapshots and systems functionality through a software layer on commodity hardware. This provides customers with the ability to deploy commodity and server-based storage architecture with economics and flexibility similar to those of public cloud data centers such as Google and Facebook.

With validation by hundreds of enterprise and government organizations running in production, Melio volume management and file system technology addresses the needs of high-performing, cost-effective storage infrastructure on-premises. Melio5’s architecture is designed to scale up to 2,048 nodes and up to 65,000 storage devices, enabling linear performance scalability in a cluster.

Melio5 also eliminates the need to deploy a redundant flash caching layer in front of legacy storage area network (SAN) hardware by directly incorporating flash into hybrid volumes and intelligently placing data based on file system access profiles. A hybrid volume will place random-access data such as file system metadata on flash sectors while placing sequential data on low-cost hard disk drives to greatly reduce the cost of capacity. The result is a highly scalable, high-performance storage system with a much lower cost than legacy storage arrays.

“Typically, server and disk drive vendors operate on gross margins in the 20-30% range. Storage array vendors, on the other hand, are often twice that or more,” said Eric Slack, Senior Analyst, Storage Switzerland. “Sanbolic’s approach leverages the architecture that the big social media and public cloud companies use, to fix this problem. By replacing storage arrays (and storage array margins) with commodity server and disk drive hardware and enabling it with intelligence through software, companies can significantly reduce storage infrastructure costs.”

Terri McClure, Senior Analyst, Enterprise Strategy Group (ESG), stated, “Sanbolic’s Melio5 software enables corporate users to take advantage of flash and SSD in conjunction with commodity hardware to create an intelligent, cost effective, and high performance storage architecture like the huge public cloud companies run, while still ensuring enterprise workload scalability and high availability.”

“Melio5 lets us solve one of the biggest challenges for our customers today – the upfront and management cost for storage – without sacrificing systems capability or performance. The Lego-like modular capability of Melio allows our customers to scale out their storage and servers based on off-the-shelf commodity components, without downtime,” said Mattias Tornblom, CEO, EnvokeIT.

“LSI and Sanbolic’s shared vision and complementary products help customers to dramatically improve the performance, flexibility and economics of their on-premise storage infrastructure,” said Brent Blanchard, Senior Director of Worldwide Channel Sales and Marketing, LSI Corporation. “LSI’s Nytro™ family of server-side flash acceleration cards and leading SAS-based server storage connectivity solutions…

Continue reading here or here!

//Richard
