Posts Tagged ‘IaaS’

#Gartner report – How to Choose Between #Hyper-V and #vSphere – #IaaS

November 19, 2013

The constant battle between the hypervisor and the orchestration of IaaS etc. is of course continuing! But I must say it’s really fun to see Microsoft getting more and more mature with its offerings in this space, great job!

One of the things I tend to think most about is the cost, scalability and flexibility of the infrastructure we build, and how we build it. I often see that we keep doing what we’ve done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that’s the latest and coolest), and then we try to squeeze that into some sort of POD/FlexPod/UCS or whatever we like to call it, to find the optimal “volume of compute, network and storage” that we can scale. But does this scale the way the bigger cloud players like Google and Amazon scale? Is this 2013 state of the art? I think we’re just fooling ourselves a bit, building what we’ve built for all these years without really providing the business with anything new… but that’s my view…

I know what I’d look at, and most of you who have read my earlier blog posts know it: I love scaling out and doing it more like the big players, using something like Nutanix, and making sure you choose the right IaaS components as part of that stack, as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you’ve done your homework).

Back to the topic: I’d say that the hypervisor is of little importance anymore; that’s why everyone is giving it away for free or to the open source community! Vendors are going after the IaaS/PaaS orchestration layer instead, because if they win that business they have nested their way into your business processes. That’s where the value is ultimately delivered, as IT services in an automated way, once you’ve got your business services and processes in place. And then it’s harder to make a change, and they will live fat and happy on you for some years to come! 😉

Read more…

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last-ditch effort is to create special-purpose architectures that bundle flash into arrays and connect them to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share it with all servers? This last stand will be a very short-lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-io flash card that writes at 2.5GB/s throughput and supports 1,100,000 IOPS with just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you then expect 25GB/s of streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X that of each server, or else we end up artificially limiting the bandwidth that each server can get based on how the storage array is shared.
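To see the scale of the problem, here is a quick back-of-the-envelope calculation using the Fusion-io numbers quoted above (the 10 GbE link speed is an assumption for illustration):

```python
# Back-of-the-envelope: aggregate flash speed vs. a shared array's network.
# Card numbers are from the Fusion-io specs quoted above; the 10 GbE
# link speed is an assumption for illustration.
CARD_WRITE_GBPS = 2.5        # GB/s streaming writes per flash card
CARD_IOPS = 1_100_000        # IOPS per flash card
CARDS_IN_ARRAY = 10
SERVERS_SHARING = 10
LINK_GBPS = 1.25             # roughly what a 10 GbE link moves, before overhead

aggregate_gbps = CARD_WRITE_GBPS * CARDS_IN_ARRAY   # 25 GB/s
aggregate_iops = CARD_IOPS * CARDS_IN_ARRAY         # 11,000,000 IOPS
per_server = aggregate_gbps / SERVERS_SHARING       # fair share per server

print(f"Array must sustain {aggregate_gbps:.0f} GB/s and {aggregate_iops:,} IOPS")
print(f"That is {aggregate_gbps / LINK_GBPS:.0f} x 10GbE links on the array side")
print(f"Each server's fair share ({per_server} GB/s) already needs "
      f"{per_server / LINK_GBPS:.0f} x 10GbE")
```

A locally attached PCIe card delivers its full speed with none of that fabric in the way.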

No networking technology, whether it be InfiniBand, Ethernet or Fibre Channel, can beat the price and performance of locally attached PCIe, or even that of a locally attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally attached flash – both for technological reasons and for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM: place it on the server where it belongs. The charts below illustrate the dramatic speed-up from server-attached flash.

Continue reading here!

//Richard

#Rackspace launches high performance cloud servers – #IaaS via @ldignan

November 5, 2013

Rackspace on Tuesday rolled out new high performance cloud servers with all solid-state storage, more memory and the latest Intel processors.

The company aims to take its high performance cloud servers and pitch them to companies focused on big data workloads. Rackspace’s performance cloud servers are available immediately in the company’s Northern Virginia region and will come online in Dallas, Chicago and London this month. Sydney and Hong Kong regions will launch in the first half of 2014.

Among the key features:

  • RAID 10-protected solid-state drives;
  • Intel Xeon E5 processors;
  • Up to 120 GB of RAM;
  • 40 Gbps of network throughput.

Overall, the public cloud servers, which run on OpenStack, provide a healthy performance boost over Rackspace’s previous offering. The performance cloud servers are optimized for Rackspace’s cloud block storage.

Rackspace said it will offer the performance cloud servers as part of a hybrid data center package.

Continue reading here!

//Richard

Making #OpenStack Grizzly Deployments Less Hairy – #Puppet, #PuppetLabs

October 29, 2013

Interesting! OpenStack needs a bit more “simplicity”! 😉

October 24, 2013 by Chris Hoge in OpenStack

Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.

The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.
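To illustrate the pattern, here is a toy Python sketch of the composition idea only – not the module’s actual Puppet code, and the profile and package names are invented:

```python
# Toy sketch of the "roles and profiles" pattern: a profile is a reusable
# unit of configuration, a role is a named composition of profiles, and
# each node gets exactly one role. All names are invented for illustration.
PROFILES = {
    "messaging": ["rabbitmq-server"],
    "database":  ["mysql-server"],
    "api":       ["openstack-nova-api", "openstack-glance-api"],
    "compute":   ["openstack-nova-compute"],
}

ROLES = {
    "controller": ["messaging", "database", "api"],
    "compute":    ["compute"],
}

def packages_for(role):
    """Resolve a role into the packages its profiles pull in."""
    return [pkg for profile in ROLES[role] for pkg in PROFILES[profile]]

print(packages_for("controller"))
# ['rabbitmq-server', 'mysql-server', 'openstack-nova-api', 'openstack-glance-api']
```

Mixing an existing profile into a new role is then a one-line change, which is exactly the flexibility described above.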

The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
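The lookup idea is easy to see in miniature. Here is a minimal Python sketch of a Hiera-style first-match hierarchical lookup; the hierarchy levels, keys and values are invented for illustration and are not the module’s actual data:

```python
# Sketch of a Hiera-style first-match lookup: facts about a node pick
# which layers apply, and more specific layers override general ones.
# All names and values here are invented for illustration.
HIERARCHY = ["node/%(fqdn)s", "role/%(role)s", "common"]

DATA = {
    "node/ctrl01.example.com": {"mysql::root_password": "s3cret"},
    "role/controller":         {"rabbit::port": 5672},
    "common":                  {"ntp::server": "pool.ntp.org",
                                "rabbit::port": 5671},
}

def lookup(key, facts):
    """Walk the hierarchy top-down and return the first value found."""
    for level in HIERARCHY:
        layer = DATA.get(level % facts, {})
        if key in layer:
            return layer[key]
    raise KeyError(key)

facts = {"fqdn": "ctrl01.example.com", "role": "controller"}
print(lookup("rabbit::port", facts))  # 5672: the role layer wins over common
```

Changing a password or network setting is then a one-line edit in the data hierarchy, with no module code touched, which is the point made above.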

Check out parts 1 and 2 of the demo, which walk you through how to deploy OpenStack with the puppetlabs-grizzly module.

Multi-node OpenStack Grizzly with Puppet Enterprise: Deployment (Part 1 of 2)

True Scale Out Shared Nothing Architecture – #Compute, #Storage, #Nutanix via @josh_odgers

October 26, 2013

This is yet another great blog post by Josh! Great work and keep it up! 😉

I love this statement:

I think this really highlights what VMware and players like Google, Facebook & Twitter have been saying for a long time: scaling out, not up, with a shared-nothing architecture is the way of the future.

At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.

Firstly, I wanted to thank everyone who attended the session; it was a great turnout, and during the Q&A there were a ton of great questions.

I got a lot of feedback at the session, and when meeting people at vForum, about how the Nutanix scale-out shared-nothing architecture tolerates failures.

I thought I would summarize this capability, as I believe it’s quite impressive and should put everyone’s mind at ease when moving to this kind of architecture.

So let’s take a look at a 5-node Nutanix cluster where, for this example, we have one running VM. The VM has all its data locally, represented by “A”, “B” and “C”, and this data is also distributed across the Nutanix cluster to provide data protection/resiliency etc.

[Image: Nutanix 5-node cluster with one running VM]

So, what happens when an ESXi host fails, taking the Nutanix Controller VM (CVM) offline and making the storage locally connected to that CVM unavailable?

Firstly, VMware HA restarts the VM onto another ESXi host in the vSphere Cluster and it runs as normal, accessing data both locally where it is available (in this case, the “A” data is local) and remotely (if required) to get data “B” and “C”.

[Image: Nutanix 5-node cluster with one failed node]

Secondly, when data which is not local (in this example “B” and “C”) is accessed via other Nutanix CVMs in the cluster, it will be “localized” onto the host where the VM resides for faster future access.

It is important to note that if data which is not local is never accessed by the VM, it will remain remote, as there is no benefit in relocating it; this reduces the workload on the network and cluster.

The end result is that the VM restarts just as it would on traditional storage; the Nutanix cluster “curator” then detects any data that has only one remaining copy and replicates it throughout the cluster to ensure full resiliency. A toy model of that sequence is sketched below.
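This is purely illustrative Python, not Nutanix code, and a replication factor of 2 is assumed for the example:

```python
import random

# Toy model: each extent is kept on RF distinct nodes; after a node
# fails, the "curator" re-replicates any extent left with fewer than
# RF copies, spreading the work across the surviving nodes.
# Purely illustrative; not Nutanix code. RF = 2 is an assumption.
RF = 2
nodes = {f"node{i}": set() for i in range(1, 6)}     # 5-node cluster
extents = ["A", "B", "C"]                            # the VM's data

for ext in extents:                                  # initial placement
    for node in random.sample(sorted(nodes), RF):
        nodes[node].add(ext)

def curator_scan():
    """Restore full resiliency after a failure."""
    for ext in extents:
        holders = [n for n in nodes if ext in nodes[n]]
        while len(holders) < RF:
            # Pick the least-loaded surviving node without a copy.
            target = min((n for n in nodes if n not in holders),
                         key=lambda n: len(nodes[n]))
            nodes[target].add(ext)
            holders.append(target)

nodes.pop("node1")      # host failure takes its CVM and local data offline
curator_scan()          # cluster is again fully resilient, now on 4 nodes
```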

The cluster will then look like a fully functioning 4-node cluster, as shown below.

[Image: rebuilt 4-node cluster after the node failure]

The process of repairing the cluster after a failure is commonly – and incorrectly – compared to a RAID pack rebuild. In a RAID rebuild, a small number of disks, say 8, are under heavy load re-striping data onto a hot spare or a replacement drive. During this time the performance of everything on the RAID pack is significantly impacted.

With Nutanix, the data is distributed across the entire cluster, which even in a 5-node cluster means at least 20 SATA drives, and all data is written to SSD first before being sequentially offloaded to SATA.

The impact of this process is much less than a RAID…
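Even a rough calculation shows why. The drive counts come from the example above; the amount of data to re-protect is an assumption:

```python
# Rough comparison of rebuild load per disk: a RAID pack vs. a
# cluster-wide rebuild. The 1 TB figure is an assumption; the drive
# counts come from the example above.
DATA_TO_RECOVER_TB = 1.0
RAID_DISKS = 8        # disks in the RAID pack during a rebuild
CLUSTER_DISKS = 20    # SATA drives in even a 5-node Nutanix cluster

raid_per_disk = DATA_TO_RECOVER_TB / RAID_DISKS        # 0.125 TB per disk
cluster_per_disk = DATA_TO_RECOVER_TB / CLUSTER_DISKS  # 0.050 TB per disk

print(f"RAID pack: {raid_per_disk:.3f} TB per disk")
print(f"Cluster:   {cluster_per_disk:.3f} TB per disk "
      f"({raid_per_disk / cluster_per_disk:.1f}x less work per disk)")
```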

Continue reading here!

//Richard

#Gartner Magic Quadrant for Cloud Infrastructure as a Service – #IaaS

August 29, 2013

Market Definition/Description

Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, software as a service (SaaS) providers, cloud services brokerages or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.

In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and, optionally, an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center.

We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as a technology platform; we call the latter cloud-enabled system infrastructure (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services. (See “Technology Overview for Cloud-Enabled System Infrastructure” and “Don’t Be Fooled by Offerings Falsely Masquerading as Cloud Infrastructure as a Service” for more on this distinction.)

This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and “virtual data centers” (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both “cloud-native”….

Figure 1. Magic Quadrant for Cloud Infrastructure as a Service


Source: Gartner (August 2013)

Continue reading here!

//Richard

Hosting #Citrix Desktops from the #Amazon Cloud – #AWS, #BYOD, #DaaS, #NetScaler

A good blog post by Ken Oestreich.

That’s right: run XenApp on AWS and NetScaler on AWS.

Those capabilities have been around for a while, and over time Citrix has been working to make set-up and configuration even easier.

Whether you are a large enterprise, a smaller business, or even a service provider, deploying on the AWS cloud could yield many more benefits and operational advantages than deploying XenApp on your own equipment.

Is it for me?

It could be. If you answer “yes” to any of the following, you may want to look more closely:

  • You’re moving infrastructure to the cloud – you wish to leverage the cloud to host infrastructure, whether for convenience, cost, capital-expense avoidance, availability or other attributes.
  • You’re cost-conscious – Amazon’s EC2 cloud often provides customers with a significant reduction in hardware, networking and/or storage costs, particularly due to the pay-as-you-go nature of EC2 capacity. This helps avoid over-provisioning and allows for real-time matching of capacity to demand.
  • You don’t have a data center – many customers choose to avoid building on-premises data centers altogether while remaining staunch believers in Citrix software. These are small/medium businesses that require agile – and often outsourced – infrastructure.
  • You have modest administration/deployment knowledge – many customers prefer not to invest in the skills needed to maintain data center hardware, but insist on retaining application administration skills. Leveraging IaaS infrastructure in the cloud is the ideal approach, as hardware configuration and maintenance are avoided.
  • You have a dynamic business that needs to react quickly to change – businesses with significant growth curves or seasonality often over-provision infrastructure for peak use, locking up precious fixed capital that is frequently idle (see the sketch after this list).
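A simple, entirely hypothetical calculation illustrates that last point; the hourly rate and the seasonal workload shape are invented numbers:

```python
# Hypothetical: provisioning for peak year-round vs. paying as you go.
# The hourly rate and the seasonal workload shape are invented numbers.
HOURLY_RATE = 0.50           # $ per instance-hour (assumed)
HOURS_PER_YEAR = 8760
PEAK_INSTANCES = 100         # needed 2 months per year
BASELINE_INSTANCES = 20      # needed the other 10 months

# On-premises style: buy capacity for peak and keep it all year.
fixed_cost = PEAK_INSTANCES * HOURS_PER_YEAR * HOURLY_RATE

# Elastic style: pay only for the capacity each period actually uses.
peak_hours = HOURS_PER_YEAR * 2 / 12
base_hours = HOURS_PER_YEAR * 10 / 12
elastic_cost = (PEAK_INSTANCES * peak_hours
                + BASELINE_INSTANCES * base_hours) * HOURLY_RATE

print(f"Provision-for-peak: ${fixed_cost:,.0f} per year")
print(f"Pay-as-you-go:      ${elastic_cost:,.0f} per year")
```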

Tools, resources, economics

The Citrix community has made available Amazon CloudFormation scripts that greatly simplify configuration, set-up and operation of large-scale XenApp instances. We have also spent hours looking at the economics of running your Citrix infrastructure on AWS. These include…

We also make it easy to use products/licenses on AWS…

Continue reading here

//Richard

#Apache #CloudStack grows up – #Citrix, #IaaS – via @sjvn

On June 4th, the 4.1.0 release of the Apache CloudStack Infrastructure-as-a-Service (IaaS) cloud orchestration platform arrived. This is the first major CloudStack release since its March 20th graduation from the Apache Incubator.


It’s also the first major release of CloudStack since Citrix submitted the project to the Apache Foundation in 2012. Apache CloudStack is an integrated software platform that enables users to build a feature-rich IaaS. Apache claims that the new version includes an “intuitive user interface and rich API [application programming interface] for managing the compute, networking, accounting, and storage resources for private, hybrid, or public clouds.”

This release includes numerous new features and bug fixes from the 4.0.x cycle. It also includes major changes in the codebase to make life easier for developers, a new structure for creating RPM/Debian packages, and the completion of the changeover to Maven, the Apache software project management tool.

Apache CloudStack 4.1.0’s most important new features are:

  • An API discovery service that allows an end point to list its supported APIs and their details.
  • An events framework that provides an “event bus” with publish, subscribe and unsubscribe semantics, including a RabbitMQ plug-in that can interact with AMQP (Advanced Message Queuing Protocol) servers (see the sketch after this list).
  • L3 router functionality for the VMware Nicira network virtualization platform (NVP) plug-in.
  • Support for Linux’s built-in Kernel-based Virtual Machine (KVM) virtualization with the NVP L3 router functionality.
  • Support for AWS (Amazon Web Services)-style regions.
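As a sketch of what the events framework enables, a monitoring script could subscribe to the event bus through RabbitMQ. This is illustrative only: the exchange name and routing-key pattern are assumptions that depend on how the RabbitMQ plug-in is configured in your deployment, and the pika client library is assumed.

```python
import pika

# Subscribe to CloudStack events published on a RabbitMQ topic exchange.
# The exchange name and routing key below are assumptions; they must
# match the event-bus configuration of your CloudStack installation.
EXCHANGE = "cloudstack-events"

conn = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit.example.com"))
channel = conn.channel()
channel.exchange_declare(exchange=EXCHANGE, exchange_type="topic", durable=True)

# Private auto-named queue, bound to all events ("#" matches everything).
queue = channel.queue_declare(queue="", exclusive=True).method.queue
channel.queue_bind(exchange=EXCHANGE, queue=queue, routing_key="#")

def on_event(ch, method, properties, body):
    # Routing keys encode the event source and type; bodies are JSON.
    print(method.routing_key, body.decode())

channel.basic_consume(queue=queue, on_message_callback=on_event, auto_ack=True)
channel.start_consuming()
```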

What all this adds up to, according to CloudStack Project Management Committee (PMC) member Joe Brockmeier, is that today’s CloudStack is “a mature, stable project, [that] is also free as in beer and speech. We believe that if you’re going to be building an IaaS cloud for private or public consumption, you’ll be better served choosing an open platform that any organization can participate in and contribute to.”

Brockmeier concluded, “CloudStack is a very mature offering that’s relatively easy to deploy and manage, and it’s known to power some very large clouds – e.g., Zynga, with tens of thousands of nodes – and very distributed clouds – such as DataPipe, which…

Continue reading here!

//Richard

#Citrix #NetScaler 10 on Amazon Web Services – #AWS

Yes, it’s here! 🙂

Mainstream IT is fast embracing the enterprise cloud transformation, and selecting the right cloud networking technologies has quickly emerged as an imperative. As mainstream IT adopts IaaS (Infrastructure as a Service) cloud services, it will require a combination of the elasticity and flexibility expected of cloud offerings and the powerful advanced networking services used within emerging enterprise cloud datacenters.

Citrix® NetScaler® 10 delivers elasticity, simplicity and expandability of the cloud to enterprise cloud datacenters and already powers the largest and most successful public clouds in the world. With NetScaler 10, Citrix delivers a comprehensive cloud network platform that mainstream enterprises can leverage to fully embrace a cloud-first network design. 

Citrix and Amazon Web Services (AWS) have come together to deliver industry-leading application delivery controller technology. NetScaler on AWS delivers the same services used to ensure the availability, scalability and security of the largest public and private clouds for AWS environments. Whether the need is to optimize, secure or control delivery of enterprise and cloud services, NetScaler for AWS can help accomplish these initiatives economically, and according to business demands. 

The full suite of NetScaler capabilities such as availability, acceleration, offload and security functionality is available in AWS, enabling users to leverage tried-and-true NetScaler functionality such as rewrites and redirects, content caching, Citrix Access Gateway™ Enterprise SSL VPN, and application firewall within their AWS deployments. Additional benefits include usage of Citrix CloudBridge™ and Citrix Branch Repeater™ as a joint solution. 

Citrix NetScaler transforms the cloud into an extension of the datacenter by eliminating the barriers to enterprise-class cloud deployments. Together, NetScaler and AWS deliver a broad set of capabilities for enterprise IT:

Hybrid Cloud Environment 

Hybrid clouds that span enterprise datacenters and extend into AWS can benefit from the same cloud networking platform, significantly easing…

Continue reading here!

//Richard

#Citrix Introducing #CloudBridge 2000 and 3000

Ok, this is interesting!

Citrix is pleased to announce the new WAN-optimization appliances: CloudBridge 2000 and CloudBridge 3000. These appliances come loaded with our WAN-optimization and XenDesktop acceleration technologies, including rich protocol optimization, advanced TCP flow control, adaptive compression and smart acceleration.

This blog highlights some of the key features of these appliances.

Unmatched scalability: a pay-to-grow offering that is unique in the WAN-optimization industry

Using the pay-to-grow offering, the CloudBridge 2000 can be scaled from a throughput of 10 Mbps to 20 Mbps, and further to 50 Mbps, with just a license upgrade. Similarly, the CloudBridge 3000 can be scaled from 50 Mbps to 100 Mbps and further to 155 Mbps. This avoids the cost, time and logistics overhead associated with a forklift replacement. So if you have a small office and expect to grow in the future, these appliances are ideal for you.

Series                     | 2000                                    | 3000
Application                | Large Branch/Small Enterprise           | Medium Enterprise
Licensed Bandwidth (Mbps)  | 10/20/50                                | 50/100/155
Concurrent HDX Sessions    | 100/200/300*                            | 300/400/500*
Pay-to-Grow                | Yes                                     | Yes
Disk Storage               | 600 GB SSD                              | 4 x 600 GB SSD
Traffic Interfaces         | 4 x 1 GigE Cu, fail-to-wire (FTW)       | 6 x GigE Cu or 4 x Fiber, fail-to-wire (FTW)
Management Interfaces      | 2 x 1 GigE Cu (HA/Mgmt)                 | 2 x 1 GigE Cu (LOM/Mgmt)
Power Supplies             | 1 x 300 watt                            | 2 x 300 watt, hot swap

* Session count is limited by link bandwidth; no session count is enforced. Published numbers are for guidance only.

Built-in reliability

The CB 2000 and CB 3000 models come prepackaged with network bypass cards for the traffic interfaces. This ensures that traffic to your network is never interrupted, even in the case of power failure to the appliance.

Also, these models do not contain any rotating disks. Instead they use SSDs for storage, resulting in enhanced disk-access speed and…

Continue reading the full blog post here, and also take a look at this Service Delivery Network video for Citrix’s story on how enterprise and cloud networks are unified into a service delivery fabric that optimizes and secures applications and data.

//Richard