Archive

Archive for the ‘Cloud’ Category

#Windows #Azure Desktop Hosting Deployment Guide – #RDS, #BYOD – via @michael_keen

November 12, 2013

This is great! Have a look at this guide!

Hello everyone, this is Clark Nicholson from the Remote Desktop Virtualization Team. I’m writing today to let you know that we have just published the Windows Azure Desktop Hosting Deployment Guide. This document provides guidance for deploying a basic desktop hosting solution based on the Windows Azure Desktop Hosting Reference Architecture Guide. This document is intended to provide a starting point for implementing a Desktop Hosting service on Windows Azure virtual machines. A production environment will need additional deployment steps to provide advanced features such as high availability, customized desktop experience, RemoteApp collections, etc.

For more information, please see Remote Desktop Services and Windows Azure Infrastructure Services.

Continue reading here!

//Richard

#Rackspace launches high performance cloud servers – #IaaS via @ldignan

November 5, 2013

Rackspace on Tuesday rolled out new high-performance cloud servers with all solid-state storage, more memory and the latest Intel processors.

The company aims to take its high-performance cloud servers and pitch them to companies focused on big data workloads. Rackspace’s performance cloud servers are available immediately in the company’s Northern Virginia region and will come online in Dallas, Chicago and London this month. Sydney and Hong Kong regions will launch in the first half of 2014.

Among the key features:

  • RAID 10-protected solid-state drives;
  • Intel Xeon E5 processors;
  • Up to 120 GB of RAM;
  • 40 Gb/s of network throughput.

Overall, the public cloud servers, which run on OpenStack, provide a healthy performance boost over Rackspace’s previous offering. The performance cloud servers are optimized for Rackspace’s cloud block storage.

Rackspace said it will offer the performance cloud servers as part of a hybrid data center package.

Continue reading here!

//Richard

Making #OpenStack Grizzly Deployments Less Hairy – #Puppet, #PuppetLabs

October 29, 2013

Interesting! OpenStack needs a bit more “simplicity”! 😉
October 24, 2013 by Chris Hoge in OpenStack

Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.

The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.
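
To make the pattern concrete, here is a tiny conceptual sketch in Python; it only illustrates the composition idea and is not the actual Puppet code from the module (the profile and service names are made up for the example):

```python
# "Roles and profiles" as plain composition: each profile configures one
# component, and a role is nothing more than a named list of profiles.

def messaging(node):   # profile: message queue
    node["services"].append("rabbitmq-server")

def database(node):    # profile: database backend
    node["services"].append("mysql-server")

def api(node):         # profile: OpenStack API services
    node["services"].append("openstack-api")

def hypervisor(node):  # profile: compute hypervisor
    node["services"].append("nova-compute")

ROLES = {
    "controller": [messaging, database, api],  # a typical controller node
    "compute":    [hypervisor],                # mix profiles to compose new roles
}

def apply_role(hostname, role_name):
    node = {"name": hostname, "services": []}
    for profile in ROLES[role_name]:
        profile(node)
    return node

print(apply_role("ctrl01", "controller"))
# {'name': 'ctrl01', 'services': ['rabbitmq-server', 'mysql-server', 'openstack-api']}
```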

The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
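
Here is a minimal sketch of that hierarchical-lookup idea in Python; real Hiera stores its data in YAML/JSON files and is configured separately, so treat the data layout and fact names below as illustrative assumptions:

```python
# Hiera-style lookup: search data levels from most to least specific,
# with Facter-style facts selecting which levels apply to a node.
DATA = {
    "node/ctrl01.example.com": {"mysql::password": "s3cret"},
    "osfamily/Debian":         {"ntp::servers": ["0.debian.pool.ntp.org"]},
    "common":                  {"mysql::password": "changeme",
                                "ntp::servers": ["pool.ntp.org"]},
}

def lookup(key, facts):
    hierarchy = [
        f"node/{facts['fqdn']}",         # most specific: per-node overrides
        f"osfamily/{facts['osfamily']}", # per-OS-family values
        "common",                        # least specific: defaults
    ]
    for level in hierarchy:
        if key in DATA.get(level, {}):
            return DATA[level][key]
    raise KeyError(key)

facts = {"fqdn": "ctrl01.example.com", "osfamily": "Debian"}
print(lookup("mysql::password", facts))  # node-level value wins: "s3cret"
print(lookup("ntp::servers", facts))     # falls through to the Debian level
```

Changing a password then means editing one line of data at the right level of the hierarchy, not editing the module itself.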

Check out parts 1 and 2 of the demo, which walks you through how to deploy OpenStack with the puppetlabs-grizzly module.

Multi-node OpenStack Grizzly with Puppet Enterprise: Deployment (Part 1 of 2)

Hyperscale Invades the Enterprise and the Impact on Converged Infrastructure – via @mathiastornblom

October 29, 2013

This is really interesting! Look at this video!

In this whiteboard presentation, Wikibon Senior Analyst Stu Miniman shares how enterprise IT can learn from the architectural models of hyperscale companies. He walks through Wikibon’s definition of software-led infrastructure and how converged infrastructure solutions meet the market’s requirements.

Continue reading or watch the whole channel here!

//Richard

#Microsoft launches its #Azure #Hadoop service! – via @maryjofoley

October 28, 2013

This is really cool!

Microsoft’s cloud-based distribution of Hadoop — which it has been developing for the past year-plus with Hortonworks — is generally available as of October 28.

Microsoft officials also are acknowledging publicly that Microsoft has dropped plans to deliver a Microsoft-Hortonworks-developed implementation of Hadoop on Windows Server, which was known as HDInsight Server for Windows. Instead, Microsoft will be advising customers who want Hadoop on Windows Server to go with the Hortonworks Data Platform (HDP) for Windows.

Windows Azure HDInsight is “100 percent Apache Hadoop” and builds on top of HDP. HDInsight includes full compatibility with Apache Hadoop, as well as integration with Microsoft’s own business-intelligence tools, such as Excel, SQL Server and Power BI.
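
Since it is 100 percent Apache Hadoop, standard Hadoop tooling should run unchanged. As a quick, generic illustration (the classic Hadoop Streaming word count; this is not HDInsight-specific code and not from the article):

```python
#!/usr/bin/env python
# wordcount.py -- classic Hadoop Streaming word count. Generic Apache
# Hadoop example; should run on any compatible cluster.
#
# Example submission (paths and jar name depend on your installation):
#   hadoop jar hadoop-streaming.jar -files wordcount.py \
#     -mapper "python wordcount.py map" -reducer "python wordcount.py reduce" \
#     -input /input -output /output
import sys

def mapper():
    # Emit "<word>\t1" for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so equal words are adjacent.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()
```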

“Our vision is how do we bring big data to a billion people,” said Eron Kelly, Microsoft’s SQL Server General Manager. “We want to make the data and insights accessible to everyone.” 

Making the Hadoop big-data framework available in the cloud, so that users can spin up and spin down Hadoop clusters when needed is one way Microsoft intends to meet this goal, Kelly said.

Microsoft and Hortonworks originally announced plans to bring the Hadoop big-data framework to Windows Server and Windows Azure in the fall of 2011. Microsoft made a first public preview of its Hadoop on Windows Server product (known officially as HDInsight Server for Windows) available in October 2012.

Microsoft made available its first public preview of its Hadoop on Windows Azure service, known as HDInsight Service, on March 18. Before that…

Continue reading here!

//Richard

True Scale Out Shared Nothing Architecture – #Compute, #Storage, #Nutanix via @josh_odgers

October 26, 2013

This is yet another great blog post by Josh! Great work and keep it up! 😉

I love this statement:

I think this really highlights what VMware and players like Google, Facebook & Twitter have been saying for a long time, scaling out not up, and shared nothing architecture is the way of the future.

At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.

Firstly, I wanted to thank everyone who attended the session; it was a great turnout, and during the Q&A there were a ton of great questions.

I got a lot of feedback at the session, and when meeting people at vForum, about how the Nutanix scale-out shared-nothing architecture tolerates failures.

I thought I would summarize this capability, as I believe it’s quite impressive and should put everyone’s mind at ease when moving to this kind of architecture.

So let’s take a look at a 5-node Nutanix cluster; for this example, we have one running VM. The VM has all its data locally, represented by “A”, “B” and “C”, and this data is also distributed across the Nutanix cluster to provide data protection/resiliency.

[Image: 5-node Nutanix cluster]

So, what happens during an ESXi host failure, which results in the Nutanix Controller VM (CVM) going offline and the storage locally connected to that CVM becoming unavailable?

Firstly, VMware HA restarts the VM onto another ESXi host in the vSphere Cluster and it runs as normal, accessing data both locally where it is available (in this case, the “A” data is local) and remotely (if required) to get data “B” and “C”.

[Image: 5-node Nutanix cluster with one failed node]

Secondly, when data which is not local (in this example “B” and “C”) is accessed via other Nutanix CVMs in the cluster, it will be “localized” onto the host where the VM resides for faster future access.

It is important to note that if non-local data is not accessed by the VM, it remains remote, as there is no benefit in relocating it; this reduces the load on the network and the cluster.

The end result is that the VM restarts just as it would on traditional storage; the Nutanix cluster “curator” then detects any data that has only one copy and replicates it throughout the cluster to restore full resiliency.

The cluster will then look like a fully functioning 4-node cluster, as shown below.

[Image: rebuilt 4-node cluster after the failure]

The process of repairing the cluster after a failure is commonly, and incorrectly, compared to a RAID pack rebuild. In a RAID rebuild, a small number of disks, say 8, are under heavy load re-striping data onto a hot spare or a replacement drive. During this time, the performance of everything on the RAID pack is significantly impacted.

With Nutanix, the data is distributed across the entire cluster (even a 5-node cluster has at least 20 SATA drives), and all data is written to SSD first, then sequentially offloaded to SATA.

The impact of this process is much less than a RAID…
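
To see why that rebuild load spreads out, here is a toy simulation (made-up numbers, random placement and a replication factor of 2; illustrative only, not Nutanix’s actual curator logic):

```python
import random

# Toy model: 1000 extents, each replicated on 2 of 5 nodes. After a node
# fails, only extents that lost a copy are re-replicated, and the new
# copies land all over the surviving cluster rather than on one spare.
RF = 2
nodes = ["node1", "node2", "node3", "node4", "node5"]
placement = {e: random.sample(nodes, RF) for e in range(1000)}

failed = "node3"
survivors = [n for n in nodes if n != failed]

rebuilt = 0
for extent, replicas in placement.items():
    if failed in replicas:
        replicas.remove(failed)
        # The new replica goes to any survivor not already holding this extent.
        replicas.append(random.choice([n for n in survivors if n not in replicas]))
        rebuilt += 1

print(f"Extents re-replicated: {rebuilt} of {len(placement)}")
```

Roughly 2/5 of the extents need a new copy, and both the reads and the writes are spread across every surviving node and disk, which is the key difference from a RAID rebuild.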

Continue reading here!

//Richard

Solving the Compute and Storage scalability dilemma – #Nutanix, via @josh_odgers

October 24, 2013

Compute, network and STORAGE is a hot topic, as I’ve written in blog posts before this one (How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras)… and still a lot of colleagues and customers are struggling to find better solutions and architectures.

How can we ensure that we get the same or better performance from our new architecture? How can we scale in a simpler and more linear manner? How can we ensure that we don’t have a single point of failure for all of our VMs? How are others scaling and doing this in a better way?

I’m not a storage expert, but I know from what I read that many companies out there are working on finding the optimal solution for Compute and Storage, on how they can get the cost down, and on being left with a simpler architecture to manage…

This is a topic that most organisations need to address now that more and more of them are starting to build private clouds: how are you going to scale, and how can you get closer to the delivery that the big players provide? Gartner even listed Software-Defined Storage (SDS) as the number 2 trend going forward: #Gartner Outlines 10 IT Trends To Watch – via @MichealRoth, #Nutanix, #VMWare

Right now I see Nutanix as the leader here! They rock! Just have a look at this linear scalability:

If you want to learn more about how Nutanix can bring great value, please contact us at EnvokeIT!

For an intro to Nutanix in 2 minutes, have a look at these videos:

Overview:

Read more…

How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras

September 13, 2013

Here we are again… a lot of companies and Solution Architects are scratching their heads thinking about how we’re going to do it “this time”.

Most of you out there have something today, probably running XenApp on your VMware or XenServer hypervisor with an FC SAN or something, perhaps provisioned using PVS or just managed individually. There is also most likely a “problem” with talking to the Storage team that manages the storage service for the IaaS service, one that isn’t built for the type of workloads that XenApp and XenDesktop (VDI) require.

So how are you going to do it this time? Are you going to challenge the Storage and Server/IaaS services, be innovative, and review the new, cooler products and capabilities that now exist out there? They are totally changing the way we build Virtual Cloud Computing solutions, where business agility, simplicity, cost savings, performance and simple scale-out are important!

There is no one solution for everything… but I’m getting more and more impressed by some of the “new” players on the market when it comes to providing simple yet powerful and high-performing Virtual Cloud Computing products. One in particular is Nutanix, which EnvokeIT has partnered with, and they have a truly stunning product.

But as many have written in great blog posts about choosing a storage solution for your VDI project, you truly need to understand what your service will require from the underlying dependency services. And is it really worth doing it the old way? You have your team that manages the IaaS service, and most of the time it just provides a way of ordering/provisioning VMs; the “VDI” team then leverages that using PVS or MCS. Some companies are not even at the point where they can order that VM as a service or provision it from the image provisioning (PVS/MCS) service; everything is manual, and they still call it an IaaS service… is it then a real IaaS service? My answer would be no… but let’s get back to the point I was trying to make!

These HW, hypervisor, network and storage (and sometimes orchestrator) components are often managed by different teams. Each team is also often not really up to date on what a virtualization/VDI service will require from them and their components. They are very competent at understanding the traditional workload of running a web server VM or similar, but not at dealing with boot storms from hundreds to thousands of VDIs booting up, people logging in at the same time, and the whole pattern of IOPS generated across these VMs’ life cycle.
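
Just to put some rough numbers on that boot-storm point (these figures are illustrative assumptions for the sake of the example, not measurements):

```python
# Back-of-the-envelope boot-storm sizing. All numbers are illustrative
# assumptions, not vendor measurements.
desktops           = 1000  # VDI sessions booting in the same window
boot_iops_per_vm   = 50    # assumed (mostly read) IOPS per VM during boot
steady_iops_per_vm = 10    # assumed steady-state IOPS per VM after login

print(f"Boot storm peak: {desktops * boot_iops_per_vm:,} IOPS")
print(f"Steady state:    {desktops * steady_iops_per_vm:,} IOPS")
```

A storage service sized for the steady state (10,000 IOPS in this example) would be crushed by a 50,000 IOPS boot storm, which is why boot and login patterns, not averages, should drive the design.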

This is where I’d suggest everyone challenge their traditional view on building virtualization and storage services for running Hosted Shared Desktop (XenApp/RDS) and Hosted Virtual Desktop (VDI/XenDesktop)!

You can reduce the complexity, reduce your operational costs and integrate Nutanix as a real compute powerhouse in your internal/private cloud service!

One thing that is also kind of cool is the integration possibilities of the Nutanix product with OpenStack and other cloud management products through its REST APIs. And it supports running Hyper-V, VMware ESXi and KVM as hypervisors in this lovely bundled product.
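
As a rough sketch of what driving such an API from an orchestration layer could look like, here is a minimal Python example; the endpoint path, port, credentials and field names are placeholders based on Nutanix’s Prism REST gateway, so verify them against the official API documentation before relying on them:

```python
import requests  # third-party HTTP library (pip install requests)

# Hypothetical sketch: list VMs via a Nutanix Prism-style REST API.
# The URL, port, credentials and response fields are assumptions -- check
# the official Nutanix REST API docs for the real contract.
BASE = "https://cluster.example.com:9440/PrismGateway/services/rest/v1"

resp = requests.get(
    f"{BASE}/vms",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab-only: skips TLS certificate checks
    timeout=30,
)
resp.raise_for_status()
for vm in resp.json().get("entities", []):
    print(vm.get("vmName"))
```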

If you want the nitty gritty details about this product I highly recommend that you read the Nutanix Bible post by Steven Poitras here.

[Image: Nutanix CVM distributed architecture, from the Nutanix Bible]

Read more…

#Ericsson to build three Global #ICT Centers

September 3, 2013

This is really cool!

Ericsson press release:

  • High-tech, sustainable global ICT Centers to support R&D and Services organizations to bring innovation faster to the market
  • Two centers located in Europe; one in North America
  • Another step in providing industry leading cloud-enabled technology
  • Also establishing a new R&D hardware design building in Stockholm

Ericsson (NASDAQ:ERIC) is planning to invest approximately SEK 7 billion in the coming five years to build three global ICT Centers. Two will be located in Sweden, in Stockholm and Linköping, while the third one, in North America, will be located in Canada, in Montreal, Quebec.

The centers will be located close to Ericsson’s main R&D hubs and will be the new platform for more than 24,000 Ericsson R&D engineers around the world, supporting their lean and agile ways of working. Teams of experts will be able to collaborate beyond borders more easily and efficiently.

Ericsson’s customers will also be able to connect remotely for interoperability testing and trials, and will have early access to innovation on new business services in real time, from the comfort of their own locations.

The three ICT Centers combined will be up to 120,000 square meters, approximately the size of 14 football fields. The new centers will house the company’s complete portfolio, enabling the R&D organization to develop and verify solutions, creating the foundation for the next generation technology and cloud-based services.

Hans Vestberg, President and CEO, Ericsson, says: “The new ICT Centers are examples of Ericsson’s passion for driving the development of the industry. Great ideas come from collaboration, and at these centers we will push the boundaries of possibility on next generation technology and services. Flexibility enabled by new ways of working will realize innovation faster to the market and to our customers.”

The centers will have a leading-edge design, built in a modular and scalable way, securing an efficient use of resources and space adaptable to the business needs. Ericsson estimates that the combination of architecture, design and locations will reduce energy consumption by up to 40 percent. This significant reduction in carbon footprint is instrumental in Ericsson’s vision of a more sustainable future.

The two ICT Centers in Sweden will begin initial operations at the end of 2013 and the end of 2014, respectively, and the North American ICT Center in early 2015.

The new hardware design building in Stockholm, Sweden, will provide similar benefits to the global ICT Centers in terms of equipment use and energy savings. It will enable R&D hardware design activities in Stockholm to consolidate into one modern, creative environment…

Continue reading here!

#Gartner Magic Quadrant for Cloud Infrastructure as a Service – #IaaS

August 29, 2013

Market Definition/Description

Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, software as a service (SaaS) providers, cloud services brokerages or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.

In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and, optionally, an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center.

We draw a distinction between cloud infrastructure as a service and cloud infrastructure as a technology platform; we call the latter cloud-enabled system infrastructure (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services. (See “Technology Overview for Cloud-Enabled System Infrastructure” and “Don’t Be Fooled by Offerings Falsely Masquerading as Cloud Infrastructure as a Service” for more on this distinction.)

This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and “virtual data centers” (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both “cloud-native”….

Figure 1. Magic Quadrant for Cloud Infrastructure as a Service

Source: Gartner (August 2013)

Continue reading here!

//Richard