Archive

Posts Tagged ‘cluster’

Microsoft and AzureCon delivers! Love it! – #Azure, #AzureCon, #EnvokeIT, #IoT, #SaaS, #PaaS

I really love the way that Microsoft and Azure delivers! It’s so amazing with all the PaaS and SaaS offerings they now have on top of the traditional IaaS delivery. There is no other cloud provider out there that delivers anything near it! I’m amazed and so happy to be a part of this journey!

If you didn’t have the time to watch AzureCon, there are a lot of great videos and topics to go through!!

Here is a short overview of the many great things released and presented:

  • General Availability of 3 new Azure regions in India
  • Announcing new N-series of Virtual Machines with GPU capabilities
  • Announcing Azure IoT Suite available to purchase
  • Announcing Azure Container Service
  • Announcing Azure Security Center

Watching the Videos

All of the talks presented at AzureCon (including the 60 breakout talks) are now available to watch online.  You can browse and watch all of the sessions here.

Announcing General Availability of 3 new Azure regions in India

Yesterday we announced the general availability of our new India regions: Mumbai (West), Chennai (South) and Pune (Central).  They are now available for you to deploy solutions into.

This brings our worldwide presence up to 24 Azure regions, more than AWS and Google combined. Over 125 customers and partners have been participating in the private preview of our new India regions. We are seeing tremendous interest from sectors like the public sector, banking, financial services, insurance and healthcare, whose cloud adoption has been restricted by data residency requirements. You can all now deploy your solutions there too.
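As a quick sketch of what this looks like in practice, the new regions show up like any other location in the classic Azure PowerShell module (the service name below is illustrative, and the region names assume the 2015-era naming of West India, South India and Central India):

```powershell
# List available regions and confirm the new India locations are visible
Get-AzureLocation | Select-Object -ExpandProperty Name

# Deploy a classic cloud service into one of the new regions
New-AzureService -ServiceName "myapp-in-south" -Location "South India"
```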

Announcing N-series of Virtual Machines with GPU Support

This week we announced our new N-series family of Azure Virtual Machines that enable GPU capabilities. Featuring NVidia’s best-of-breed Tesla GPUs, these Virtual Machines will help you run a variety of workloads, ranging from remote visualization to machine learning to analytics.

The N-series VMs feature NVidia’s flagship GPU, the K80 which is well supported by NVidia’s CUDA development community. N-series will also have VM configurations featuring the latest M60 which was recently announced by NVidia. With support for M60, Azure becomes the first hyperscale cloud provider to bring the capabilities of NVidia’s Quadro High End Graphics Support to the cloud. In addition, N-series combines GPU capabilities with the superfast RDMA interconnect so you can run multi-machine, multi-GPU workloads such as Deep Learning and Skype Translator Training.

Announcing Azure Security Center

This week we announced the new Azure Security Center—a new Azure service that gives you visibility and control of the security of your Azure resources, and helps you stay ahead of threats and attacks.  Azure is the first cloud platform to provide unified security management with capabilities that help you prevent, detect, and respond to threats.

Azure_Security_Center

The Azure Security Center provides a unified view of your security state, so your team and/or your organization’s security specialists can get the information they need to evaluate risk across the workloads they run in the cloud.  Based on customizable policy, the service can provide recommendations. For example, the policy might be that all web applications should be protected by a web application firewall. If so, the Azure Security Center will automatically detect when web apps you host in Azure don’t have a web application firewall configured, and provide a quick and direct workflow to get a firewall from one of our partners deployed and configured: Read more…

FINALLY!! Nutanix Community Edition (CE) is here and it’s FREE!! – #Nutanix, #EnvokeIT, #Virtualization via @andreleibovici

This is so cool! I know that a lot of people out there have been waiting for this, including myself! 😉

Nutanix CE is a great way to get started with Nutanix in your own lab environment, and it is now available to everyone. CE is a fully working Acropolis + Prism stack that enables you to not only host your virtual machines but enjoy all the benefits of Nutanix. The features available in CE are exactly the same as those enjoyed by paying customers; the only differences are that it is a community-supported edition and there is a maximum limit of 4 nodes.

Some of the features available with CE are:

  • De-duplication
  • Compression
  • Erasure Coding
  • Asynchronous DR
  • Shadow Cloning
  • Single server (RF=1), three servers (RF=2) or four servers (RF=2)
  • Acropolis Hypervisor (all VM operations, high availability etc.)
  • Analytics
  • Full API framework for development, orchestration and automation
  • Self-Healing
  • ToR integration

Metro Availability, Synchronous Replication, Cloud Connect and Prism Central are not part of Nutanix CE.

Since you will be providing the hardware there are some minimum requirements:

[Screenshot: Nutanix CE minimum hardware requirements]

Nutanix CE extends the Nutanix commitment to fostering an open, transparent and community-centric approach to innovative solutions for mainstream enterprises. Nutanix CE enables a complete hyperconverged infrastructure deployment in just 60 minutes or less on your own hardware and without virtualization or software licensing.

To get started access “Getting Started with Nutanix Community Edition”, create an account and you will be able to register for download. The first…

As usual you’re more than welcome to contact me at richard at envokeit.com or contact us at EnvokeIT if you want to know more about Nutanix!

Continue reading here!

//Richard

Metro Availability – Nutanix site-to-site cluster! Sooo cool! – #Nutanix, #EnvokeIT

October 10, 2014

This is a really cool feature. I know many companies right now that are thinking about refreshing their platform (compute, network and storage) solutions and datacenter strategy. Most have dual datacenters today and would like to simplify the setup, so that they don’t have to manage two private clouds and manually build disaster recovery processes and technical solutions to guarantee high availability of the applications running on top of the IaaS solution.

This is where this new feature from Nutanix comes into play, now you can get data protection and mirroring of your data across two or more sites built into the product. Think about it, you can ensure your application availability in the event of downtime (planned or unplanned). Really cool!! 🙂

Introducing Metro Availability

Business-critical applications demand continuous data availability. This means that access to applications and data must be preserved even during a datacenter outage or planned maintenance event. Many IT teams use metro area networks to maintain connectivity between datacenters so that if one site goes down the other location can run all applications and services with minimal disruption. To keep the applications running, however, requires immediate access to all data. 

Nutanix is the first hyper-converged infrastructure vendor to deliver continuous data protection across multiple datacenters. Using synchronous mirroring, Metro Availability stretches datastores for virtual machine clusters across two or more sites located up to 400km apart. All functionality is natively integrated into Nutanix software, and supported across all Nutanix platforms with no hardware changes. Enterprise IT teams benefit from improved business operations by maintaining application availability during planned and unplanned site downtime. 

Virtualization teams can now non-disruptively migrate virtual machines between sites during planned maintenance events, providing continuous data protection with zero recovery point objective (RPO) and a near zero recovery time objective (RTO). Metro Availability is deployed within minutes and managed directly from Nutanix Prism UI, eliminating any need for additional management consoles. 

  • More Flexibility – Only Nutanix enables customers to deploy different configurations for primary and secondary sites, and support one-to-many and many-to-one topologies. Customers are no longer forced to have identical platforms and hardware configurations at each site
  • VM Awareness  – Individual VMs can be mirrored across sites using Metro Availability, giving administrators unparalleled flexibility in configuring multi-site deployments and improving overall system efficiency
  • 2X Greater Distances Between Sites – Nutanix Metro Availability supports single datastores stretched up to 400km – twice what current systems support today

Metro Availability enhances and extends the already rich set of integrated data protection and high availability capabilities in the Nutanix solution, catering to the diverse needs of enterprise customers.

The official release note can be found here!

And contact EnvokeIT if you want more information on how this can provide value to you!

//Richard

 

#Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction

This is a great blog post series! Good job Cristian Edwards!

Hi Virtualization gurus,

For the past six months I have been working on internal readiness around Hyper-V Networking in 2012 R2: all the options and functionalities that exist and how to make them work together. A common question in our team, and from our customers, is what the best practices or best approaches are when defining the Hyper-V network architecture of your private cloud or virtualization farm. Hence I decided to write this series of posts, which I think might be helpful, at least as a brainstorm to find the best approach for each particular scenario. The reality is that every environment is different and uses different hardware, but at least I can help you identify five common scenarios for how to squeeze the performance out of your hardware.

I want to make clear that there is no single right answer or configuration, and your hardware can help you determine the best configuration for a robust, reliable and performant Hyper-V network architecture. Please note that I will make some personal recommendations based on my experience. These recommendations might or might not be the official, generic recommendations from Microsoft, so please call your support contact in case of any doubt.

The series will contain these post:

1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7 ) – Introduction (This Post)

2. Hyper-V 2012 R2 Network Architectures Series (Part 2 of  7) – Non-Converged Networks, the classical but robust approach

3. Hyper-V 2012 R2 Network Architectures Series (Part 3 of  7) – Converged Networks Managed by SCVMM and Powershell

4. Hyper-V 2012 R2 Network Architectures Series (Part 4 of 7 ) – Converged Networks using Static Backend QoS

5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of 7) – Converged Networks using Dynamic QoS

6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of 7 ) – Converged Network using CNAs

7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of 7 ) – Conclusions and Summary

8. Hyper-V 2012 R2 Network Architectures (Part 8 of 7) – Bonus
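As a taste of the converged-network approach covered in Parts 3 to 5, here is a minimal sketch of a weight-based QoS setup. The switch name, team name, vNIC names and weight values are all illustrative, not recommendations from the series:

```powershell
# Create a converged virtual switch on a NIC team, using weight-based minimum bandwidth
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "NICTeam1" `
    -MinimumBandwidthMode Weight -AllowManagementOS $false

# Add management-OS vNICs for each traffic class
Add-VMNetworkAdapter -ManagementOS -Name "Management"    -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -SwitchName "ConvergedSwitch"
Add-VMNetworkAdapter -ManagementOS -Name "Cluster"       -SwitchName "ConvergedSwitch"

# Assign relative bandwidth weights per traffic class
Set-VMNetworkAdapter -ManagementOS -Name "Management"    -MinimumBandwidthWeight 10
Set-VMNetworkAdapter -ManagementOS -Name "LiveMigration" -MinimumBandwidthWeight 30
Set-VMNetworkAdapter -ManagementOS -Name "Cluster"       -MinimumBandwidthWeight 10
```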

Continue reading here!

//Richard

 

Configuring #XenMobile Device Manager HA Clustering – #MDM, #Citrix

March 7, 2014

A couple of nice videos from Albert Alvarez here about how to cluster XenMobile Device Manager!

In my previous post we configured cluster Node 1. In this second part we will complete the cluster configuration on Node 2 and will validate and test the configuration.

//Richard

#Windows server 2012 Storage Spaces – using PowerShell – via LazyWinAdmin

November 12, 2013

Very good work on this blog post about Windows Storage Spaces!

WS2012 Storage – Creating a Storage Pool and a Storage Space (aka Virtual Disk) using PowerShell

 

In my previous posts I talked about how to use NFS and iSCSI technologies hosted on Windows Server 2012 and how to deploy those to my Home Lab ESXi servers.

One point I did not cover was: how to do the initial setup with the physical disks, storage pooling, and creating the virtual disk(s)?

The cost to acquire and manage highly available and reliable storage can represent a significant part of the IT budget. Windows Server 2012 addresses this issue by delivering a sophisticated virtualized storage feature called Storage Spaces as part of the WS2012 storage platform. This provides an alternative option for companies that require advanced storage capabilities at a lower price point.

Overview

  • Terminology
  • Storage Virtualization Concept
  • Deployment Model of a Storage Space
  • Quick look at Storage Management under Windows Server 2012
  • Identifying the physical disk(s)
    • Server Manager – Volumes
    • PowerShell – Module Storage
  • Creating the Storage Pool
  • Creating the Virtual Disk
  • Initializing the Virtual Disk
  • Partitioning and Formatting

Terminology

Storage Pool: an abstraction of multiple physical disks into a logical construct with a specified capacity. Physical disks are grouped into a container, the so-called storage pool, so that the total capacity collectively presented by the associated disks can appear, and be managed, as a single and seemingly continuous space.

There are two primary types of pools which are used in conjunction with Storage Spaces, as well as the management API in Windows Server 2012: Primordial Pool and Concrete Pool.

Primordial Pool: The Primordial pool represents all of the disks that Storage Spaces is able to enumerate, regardless of whether they are currently being used for a concrete pool. Physical Disks in the Primordial pool have a property named CanPool equal to “True” when they meet the requirements to create a concrete pool.

 

Concrete Pool: A Concrete pool is a specific collection of Physical Disks that was formed by the user to allow creating Storage Spaces (aka Virtual Disks).
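Putting the terminology into practice, the end-to-end flow with the Storage module cmdlets looks roughly like this (pool, disk and volume names are illustrative; this assumes a WS2012 host with spare physical disks):

```powershell
# Identify disks eligible for pooling (Primordial pool members with CanPool = $true)
$disks = Get-PhysicalDisk -CanPool $true

# Create a concrete pool from those disks
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Create a mirrored Storage Space (aka virtual disk) in the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
    -ResiliencySettingName Mirror -Size 100GB

# Initialize, partition and format the resulting disk
Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data01"
```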

True Scale Out Shared Nothing Architecture – #Compute, #Storage, #Nutanix via @josh_odgers

October 26, 2013

This is yet another great blog post by Josh! Great work and keep it up! 😉

I love this statement:

I think this really highlights what VMware and players like Google, Facebook & Twitter have been saying for a long time, scaling out not up, and shared nothing architecture is the way of the future.

At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.

Firstly, I wanted to thank everyone who attended the session, it was a great turnout and during the Q&A there were a ton of great questions.

I got a lot of feedback at the session and when meeting people at vForum about how the Nutanix scale out shared nothing architecture tolerates failures.

I thought I would summarize this capability as I believe it’s quite impressive and should put everyone’s mind at ease when moving to this kind of architecture.

So let’s take a look at a 5 node Nutanix cluster; for this example, we have one running VM. The VM has all its data locally, represented by “A”, “B” and “C”, and this data is also distributed across the Nutanix cluster to provide data protection / resiliency etc.

Nutanix5NodeCluster

So, what happens during an ESXi host failure, which results in the Nutanix Controller VM (CVM) going offline and the storage locally connected to that CVM becoming unavailable?

Firstly, VMware HA restarts the VM onto another ESXi host in the vSphere Cluster and it runs as normal, accessing data both locally where it is available (in this case, the “A” data is local) and remotely (if required) to get data “B” and “C”.

Nutanix5nodecluster1failed

Secondly, when data which is not local (in this example “B” and “C”) is accessed via other Nutanix CVMs in the cluster, it will be “localized” onto the host where the VM resides for faster future access.

It is important to note that if data which is not local is never accessed by the VM, it will remain remote, as there is no benefit in relocating it; this reduces the workload on the network and cluster.

The end result is the VM restarts the same as it would using traditional storage, then the Nutanix cluster “curator” detects if any data only has one copy, and replicates the required data throughout the cluster to ensure full resiliency.

The cluster will then look like a fully functioning 4 node cluster, as shown below.

5NodeCluster1FailedRebuild

The process of repairing the cluster after a failure is commonly, and incorrectly, compared to a RAID pack rebuild. With a RAID rebuild, a small number of disks, say 8, are under heavy load re-striping data onto a hot spare or a replacement drive. During this time the performance of everything on the RAID pack is significantly impacted.

With Nutanix, the data is distributed across the entire cluster, which even with a 5 node cluster means at least 20 SATA drives, with all data being written to SSD and then sequentially offloaded to SATA.

The impact of this process is much less than a RAID…
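A back-of-envelope comparison makes the difference concrete. All numbers below are illustrative (1 TB to re-protect, an 8-disk RAID pack versus a 20-drive cluster that has lost one 4-drive node), not figures from Josh's post:

```powershell
$rebuildGB = 1000

# RAID pack: ~7 surviving disks carry the entire rebuild
$raidLoad = [math]::Round($rebuildGB / 7)

# Distributed re-protection: ~16 surviving drives share the work
$distributedLoad = [math]::Round($rebuildGB / 16)

"RAID rebuild: ~$raidLoad GB per surviving disk"
"Distributed re-protection: ~$distributedLoad GB per surviving drive"
```

The per-drive load in the distributed case is less than half that of the RAID case, and it shrinks further as the cluster grows.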

Continue reading here!

//Richard
