Archive

Posts Tagged ‘node’

FINALLY!! Nutanix Community Edition (CE) is here and it’s FREE!! – #Nutanix, #EnvokeIT, #Virtualization via @andreleibovici

This is so cool! I know that a lot of people out there have been waiting for this, including myself! 😉

Nutanix CE is a great way to get started with Nutanix in your own lab environment, and it is now available to everyone. CE is a fully working Acropolis + Prism stack that lets you not only host your virtual machines but enjoy all the benefits of Nutanix. The features in CE are the exact same ones paying customers enjoy; the differences are that it is a community-supported edition and that clusters are limited to a maximum of 4 nodes.

Some of the features available with CE are:

  • De-duplication
  • Compression
  • Erasure Coding
  • Asynchronous DR
  • Shadow Cloning
  • Single server (RF=1), three servers (RF=2) or four servers (RF=2)
  • Acropolis Hypervisor (all VM operations, high availability etc.)
  • Analytics
  • Full API framework for development, orchestration and automation (see the sketch just after this list)
  • Self-Healing
  • ToR integration
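
The API bullet deserves a concrete illustration. Here is a minimal sketch of driving the cluster programmatically; the endpoint path and response fields follow the Prism REST v2 conventions, but treat them as assumptions and verify against the API Explorer on your own cluster:

```python
# Minimal sketch: list VM names through the Prism REST API.
# The endpoint path and response fields are assumptions based on the
# v2 API conventions -- verify against your cluster's API Explorer.
import requests

PRISM = "https://prism.example.local:9440"  # hypothetical cluster address

def list_vms(user: str, password: str) -> list[str]:
    resp = requests.get(
        f"{PRISM}/PrismGateway/services/rest/v2.0/vms",
        auth=(user, password),
        verify=False,  # lab only: CE clusters typically use a self-signed cert
    )
    resp.raise_for_status()
    return [vm["name"] for vm in resp.json().get("entities", [])]

if __name__ == "__main__":
    for name in list_vms("admin", "secret"):
        print(name)
```

The same style of call covers create, update and delete operations, which is what makes orchestration and automation against the cluster so straightforward.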

Metro Availability, Synchronous Replication, Cloud Connect and Prism Central are not part of Nutanix CE.

Since you will be providing the hardware there are some minimum requirements:

[Screenshot: Nutanix CE minimum hardware requirements]

Nutanix CE extends the Nutanix commitment to fostering an open, transparent and community-centric approach to innovative solutions for mainstream enterprises. Nutanix CE enables a complete hyperconverged infrastructure deployment in 60 minutes or less on your own hardware, with no virtualization or software licensing costs.

To get started, access “Getting Started with Nutanix Community Edition”, create an account, and you will be able to register for the download. The first…

As usual you’re more than welcome to contact me at richard at envokeit.com or contact us at EnvokeIT if you want to know more about Nutanix!

Continue reading here!

//Richard

#Nutanix is the Visionary leader in #Gartner magic quadrant! – #IaaS, #PaaS, #DaaS, #Storage, #Converged

I’m not surprised at all and think that this is a good report by Gartner!

Nutanix is absolutely the visionary leader, and as more and more units ship they will climb higher into the Leaders quadrant and totally rule! I must say that this is a really impressive product that truly is web-scale ready, for SMB through large enterprise workloads!! Contact us at EnvokeIT if you need more details! We know the product and how it can deliver value to you!

[Image: Gartner Magic Quadrant for Integrated Systems]

The integrated system market is growing at 50% or more per year, creating an unusual mix of major vendors and startups to consider. This new Magic Quadrant will aid vendor selection in this dynamic sector.

Nutanix has close working relationships with multiple top software vendors, and workloads like VDI, Hadoop and DBMS servers are well-represented among the installed base. Maximum neutrality is a major focus for Nutanix, as it works to build trust across a wide variety of vendors. The vendor frequently targets specific workload needs to penetrate new accounts, and then expands the workload reach to compete with incumbent vendors as client confidence is built. Nutanix claims that 50% of first-time clients expand their configurations within six months (and 70% do so within 12 months).

Market Definition/Description

Integrated systems are combinations of server, storage and network infrastructure, sold with management software that facilitates the provisioning and management of the combined unit. The market for integrated systems can be subdivided into broad categories, some of which overlap. Gartner categorizes these classes of integrated systems (among others):

  • Integrated stack systems (ISS) — Server, storage and network hardware integrated with application software to provide appliance or appliance-like functionality. Examples include Oracle Exadata Database Machine, IBM PureApplication System and Teradata.
  • Integrated infrastructure systems (IIS) — Server, storage and network hardware integrated to provide shared compute infrastructure. Examples include VCE Vblock, HP ConvergedSystem and IBM PureFlex System.
  • Integrated reference architectures — Products in which a predefined, presized set of components are designated as options for an integrated system whereby the user and/or channel can make configuration choices between the predefined options. These may be based on an IIS or ISS (with additional software, or services to facilitate easier deployment). Other forms of reference architecture, such as EMC VSPEX, allow vendors to group separate server, storage and network elements from a menu of eligible options to create an integrated system experience. Most reference architectures are, therefore, based on a partnership between hardware and software vendors, or between multiple hardware vendors. However, reference architectures that support a variety of hardware ingredients are more difficult to assess versus packaged integrated systems, which is why they are not evaluated by this research.
  • Fabric-based computing (FBC) — A form of integrated system in which the overall platform is aggregated from separate (or disaggregated) building-block modules connected over a fabric or switched backplane. Unlike the majority of IIS and ISS solutions, which group and package existing technology elements in a fabric-enabled environment, the technology ingredients of an FBC solution will be designed solely around the fabric implementation model. So all FBCs are an example of either an IIS or an ISS; but most IIS and ISS solutions available today would not yet be eligible to be counted as an FBC. Examples include SimpliVity, Nutanix and HP Moonshot System.

Read the whole Gartner Magic Quadrant for Integrated Systems here!

//Richard

 

Making #OpenStack Grizzly Deployments Less Hairy – #Puppet, #PuppetLabs

October 29, 2013
Interesting! OpenStack needs a bit more “simplicity”! 😉
October 24, 2013 by Chris Hoge in OpenStack

Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.

The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.
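
Puppet expresses roles and profiles in its own DSL; purely to make the composition idea concrete in a language-neutral way, here is a hypothetical Python sketch (every name in it is invented):

```python
# Language-neutral sketch of the "roles and profiles" pattern:
# a profile configures one component, and a role is just a named
# composition of profiles. All names here are hypothetical.

def profile_messaging(node):   # e.g. message-queue setup
    node.setdefault("components", []).append("messaging")

def profile_database(node):    # e.g. database setup
    node.setdefault("components", []).append("database")

def profile_api(node):         # e.g. OpenStack API endpoints
    node.setdefault("components", []).append("api")

# A role identifies what a node *is*; its profiles describe what it runs.
ROLES = {
    "controller": [profile_messaging, profile_database, profile_api],
    "compute":    [profile_api],
}

def apply_role(node, role_name):
    for profile in ROLES[role_name]:
        profile(node)
    return node

print(apply_role({"name": "ctrl01"}, "controller"))
# {'name': 'ctrl01', 'components': ['messaging', 'database', 'api']}
```

Mixing profiles into new roles is then just another entry in the mapping, which is exactly the flexibility the model is after.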

The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
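
Hiera’s data files are YAML, and the lookup itself is easy to picture: walk a hierarchy ordered from most to least node-specific (interpolating Facter facts into the file names) and return the first file that defines the key. A rough Python sketch of that resolution order, with invented file names and facts:

```python
# Rough sketch of Hiera-style resolution: consult data files from
# most specific to least specific and return the first hit.
# File names, keys and facts are invented for illustration.
from pathlib import Path

import yaml  # PyYAML

def hiera_lookup(key, facts, datadir="hieradata"):
    # Hierarchy interpolated from node facts, most specific first.
    hierarchy = [
        f"node/{facts['fqdn']}.yaml",
        f"osfamily/{facts['osfamily']}.yaml",
        "common.yaml",
    ]
    for name in hierarchy:
        path = Path(datadir) / name
        if path.exists():
            data = yaml.safe_load(path.read_text()) or {}
            if key in data:
                return data[key]
    raise KeyError(key)

# A value changed in one node-level text file overrides common.yaml
# for that node only -- no module edit required, e.g.:
facts = {"fqdn": "ctrl01.example.local", "osfamily": "Debian"}
# hiera_lookup("nova::network::public_interface", facts)
```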

Check out parts 1 and 2 of the demo, which walks you through how to deploy OpenStack with the puppetlabs-grizzly module.

Multi-node OpenStack Grizzly with Puppet Enterprise: Deployment (Part 1 of 2)

True Scale Out Shared Nothing Architecture – #Compute, #Storage, #Nutanix via @josh_odgers

October 26, 2013

This is yet another great blog post by Josh! Great work and keep it up! 😉

I love this statement:

I think this really highlights what VMware and players like Google, Facebook & Twitter have been saying for a long time: scaling out, not up, and shared nothing architecture is the way of the future.

At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.

Firstly, I wanted to thank everyone who attended the session, it was a great turnout and during the Q&A there were a ton of great questions.

I got a lot of feedback at the session and when meeting people at vForum about how the Nutanix scale out shared nothing architecture tolerates failures.

I thought I would summarize this capability, as I believe it’s quite impressive and should put everyone’s mind at ease when moving to this kind of architecture.

So let’s take a look at a 5 node Nutanix cluster, and for this example, we have one running VM. The VM has all its data locally, represented by “A”, “B” and “C”, and this data is also distributed across the Nutanix cluster to provide data protection / resiliency etc.

[Diagram: 5-node Nutanix cluster with the VM’s data distributed across the nodes]

So, what happens when an ESXi host fails, taking the Nutanix Controller VM (CVM) offline and making the storage locally connected to that CVM unavailable?

Firstly, VMware HA restarts the VM onto another ESXi host in the vSphere Cluster and it runs as normal, accessing data both locally where it is available (in this case, the “A” data is local) and remotely (if required) to get data “B” and “C”.

[Diagram: the same cluster with one failed node and the VM restarted on another host]

Secondly, when data which is not local (in this example “B” and “C”) is accessed via other Nutanix CVMs in the cluster, it will be “localized” onto the host where the VM resides for faster future access.

It is important to note that if data which is not local is never accessed by the VM, it will remain remote, as there is no benefit in relocating it; this reduces the workload on the network and cluster.
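
Put another way, the read path always prefers a local copy, and only an actual remote read pulls a block onto the VM’s new host. A simplified sketch of that behaviour (the data structures are invented for illustration; this is not Nutanix code):

```python
# Simplified sketch of read-path localization after a VM moves hosts.
# Data structures are invented for illustration; this is not Nutanix code.

cluster = {
    "host1": {"A"},            # the VM now runs here; "A" is already local
    "host2": {"B"},
    "host3": {"C"},
}

def read_block(vm_host, block):
    local = cluster[vm_host]
    if block in local:
        return f"{block}: local read"
    # Remote read: serve it, then keep a local copy for next time.
    local.add(block)           # "localized" onto the VM's host
    return f"{block}: remote read, now localized"

for blk in ("A", "B"):
    print(read_block("host1", blk))
# "C" is never read, so it stays remote -- no needless network traffic.
```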

The end result is that the VM restarts just as it would on traditional storage; the Nutanix cluster “curator” then detects any data that has only one copy, and replicates the required data throughout the cluster to ensure full resiliency.
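
A rough sketch of what that background repair amounts to, assuming RF=2 and invented data structures:

```python
# Sketch of curator-style repair under RF=2: any extent left with a
# single surviving replica gets a new copy on another node.
# Structures and names are invented for illustration.

RF = 2
replicas = {                       # extent -> nodes holding a copy
    "A": ["host1"],                # lost its second copy with the failed host
    "B": ["host2", "host4"],       # still fully replicated
    "C": ["host3"],
}
alive = ["host1", "host2", "host3", "host4"]

for extent, holders in replicas.items():
    while len(holders) < RF:
        target = next(h for h in alive if h not in holders)
        holders.append(target)     # copy the extent to the new node
        print(f"re-replicated {extent} -> {target}")
```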

The cluster will then look like a fully functioning 4 node cluster, as shown below.

[Diagram: the rebuilt, fully functioning 4-node cluster]

The process of repairing the cluster after a failure is commonly, and incorrectly, compared to a RAID pack rebuild. With a RAID rebuild, a small number of disks, say 8, are under heavy load re-striping data onto a hot spare or a replacement drive. During this time the performance of everything on the RAID pack is significantly impacted.

With Nutanix, the data is distributed across the entire cluster, which even in a 5 node cluster means at least 20 SATA drives, and all data is written to SSD first before being sequentially offloaded to SATA.
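
To make the contrast concrete with purely illustrative numbers: spreading the repair of 1 TB of lost data across 20 drives puts far less load on each drive than concentrating it on 8.

```python
# Purely illustrative arithmetic: per-disk rebuild load when the repair
# work is spread across N disks. Numbers are invented, not benchmarks.
lost_tb = 1.0
for disks in (8, 20):
    print(f"{disks} disks -> ~{lost_tb / disks * 1024:.0f} GB of rebuild traffic per disk")
# 8 disks  -> ~128 GB per disk
# 20 disks -> ~51 GB per disk
```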

The impact of this process is much less than a RAID…

Continue reading here!

//Richard
