Archive

Archive for the ‘Infrastructure’ Category

Why huge IaaS/PaaS/DaaS providers don’t use Dell and HP, and why they can do VDI cheaper than you! – via @brianmadden

February 3, 2014

Yes, why do people and organisations still think that they can build IaaS/PaaS/DaaS services within their enterprises, and believe that they will be able to do so with the “same old architecture” and components used before? It’s not going to be comparable to the bigger players, who are using newer and more scalable architectures with cheaper components.

Enterprises just don’t have the innovation power that companies like Google, Facebook and Amazon have! And if they do, then most of the time they are stuck in their old ways of doing things from a service delivery point of view, which stops them from thinking outside the box because the service delivery organisation isn’t ready for it.

This is a great blog post from Brian on exactly this topic, great work!!

Last month I wrote that it’s not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. (“Equivalent” is the key word there. Some have claimed they can do it cheaper, but they’re achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)

One of the reasons huge providers can build VDI cheaper than you is because they’re doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there’s more to it than that when it comes to huge cloud providers. Their datacenters are not crammed full of HP or Dell’s latest rack mount, blade, or Moonshot servers; rather, they’re stacked floor-to-ceiling with heaps of circuit boards you’d hardly recognize as “servers” at all.

Building Amazon’s, Google’s, and Facebook’s “servers”

For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They’re efficient in that they’re modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!

Read more…

Nutanix NX-3000 review: Virtualization cloud-style – #Nutanix, #IaaS

January 29, 2014

A great review of the Nutanix Virtual Computing Platform! 🙂

Nutanix NX-3000 Series
Nutanix NX-3000 review: Virtualization cloud-style

What do you get when you combine four independent servers, lots of memory, standard SATA disks and SSD, 10Gb networking, and custom software in a single box? In this instance, the answer would be a Nutanix NX-3000. Pigeonholing the Nutanix product into a traditional category is another riddle altogether. While the company refers to each unit it sells as an “appliance,” it really is a clustered combination of four individual servers and direct-attached storage that brings shared storage right into the box, eliminating the need for a back-end SAN or NAS.

I was recently given the opportunity to go hands-on with a Nutanix NX-3000, the four nodes of which were running version 3.5.1 of the Nutanix operating system. It’s important to point out that the Nutanix platform handles clustering and file replication independent of any hosted virtualization system. Thus, a Nutanix cluster will automatically handle node, disk, and network failures while providing I/O at the speed of local disk — and using local SSD to accelerate access to the most frequently used data. Nutanix systems support the VMware vSphere and Microsoft Hyper-V hypervisors, as well as KVM for Linux-based workloads.
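The review doesn’t spell out how data gets promoted to SSD, but the general hot-tier idea is close to classic LRU caching. A toy Python sketch of that idea (illustrative only, not Nutanix’s actual algorithm; names are invented):

```python
from collections import OrderedDict

class HotTier:
    """Toy model of an SSD hot tier: keep the most recently used
    blocks in a fixed-capacity fast tier (illustrative only)."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data, in LRU order

    def read(self, block_id, fetch_from_cold_tier):
        if block_id in self.blocks:            # hot hit: serve from "SSD"
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = fetch_from_cold_tier(block_id)  # miss: go to spinning disk
        self.blocks[block_id] = data           # promote to the hot tier
        if len(self.blocks) > self.capacity:   # evict least recently used
            self.blocks.popitem(last=False)
        return data

tier = HotTier(capacity=2)
tier.read("blk1", lambda b: f"<data for {b}>")
tier.read("blk2", lambda b: f"<data for {b}>")
tier.read("blk1", lambda b: f"<data for {b}>")  # now served from the hot tier
```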


Nutanix was founded by experienced data center architects and engineers from the likes of Google, Facebook, and Yahoo. That background brings with it a keen sense of what makes a good distributed system and what software pieces are necessary to build a scalable, high-performance product. A heavy dose of innovation and ingenuity shows up in a sophisticated set of distributed cluster management services, which eliminate any single point of failure, and in features like disk block fingerprinting, which leverages a special Intel instruction set (for computing an SHA-1 hash) to perform data deduplication and to ensure data integrity and redundancy.
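To make the fingerprinting idea concrete, here is a minimal Python sketch of hash-based deduplication. Python’s hashlib stands in for the hardware-accelerated SHA-1 computation the article refers to, and the 4KB block size is just an assumption for the example:

```python
import hashlib

def fingerprint(block: bytes) -> str:
    # Identical blocks produce identical SHA-1 digests,
    # so one stored copy can serve many references.
    return hashlib.sha1(block).hexdigest()

def dedup(blocks):
    store, refs = {}, []
    for block in blocks:
        h = fingerprint(block)
        store.setdefault(h, block)  # keep the payload once per digest
        refs.append(h)              # each logical block is just a reference
    return store, refs

# Three 4KB blocks, two of them identical, need only two stored copies:
blocks = [b"A" * 4096, b"B" * 4096, b"A" * 4096]
store, refs = dedup(blocks)
assert len(store) == 2 and len(refs) == 3
```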

A Nutanix cluster starts at one appliance (technically three nodes, allowing for the failure of one node) and scales out to any number of nodes. The NDFS (Nutanix Distributed File System) provides a single store for all of your VMs, handling all disk and I/O load balancing and eliminating the need to use virtualization platform features like VMware’s Storage DRS. Otherwise, you manage your VMs no differently than you would on any other infrastructure, using VMware’s or Microsoft’s native management tools.

Nutanix architecture
The hardware behind the NX-3000 comes from SuperMicro. Apart from the fact that it squeezes four dual-processor server blades inside one 2U box, it isn’t anything special. All of the magic is in the software. Nutanix uses a combination of open source software, such as Apache Cassandra and ZooKeeper, plus a bevy of in-house developed tools. Nutanix built cluster configuration management services on ZooKeeper and heavily modified Cassandra for use as the primary object store for the cluster.
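As a rough illustration of what building cluster configuration management on ZooKeeper can look like, here is a small Python sketch using the kazoo client library. The paths, values and hostnames are invented, since Nutanix’s actual schema isn’t described in the article:

```python
from kazoo.client import KazooClient  # pip install kazoo

# Connect to a ZooKeeper ensemble; no single node is special,
# which is part of what removes single points of failure.
zk = KazooClient(hosts="node1:2181,node2:2181,node3:2181")
zk.start()

# Store a cluster-wide setting once; every node reads the same value.
zk.ensure_path("/cluster/config")
if not zk.exists("/cluster/config/replication_factor"):
    zk.create("/cluster/config/replication_factor", b"2")

value, stat = zk.get("/cluster/config/replication_factor")
print("replication_factor =", value.decode(), "version", stat.version)

# A watch lets every node react immediately when the setting changes.
@zk.DataWatch("/cluster/config/replication_factor")
def on_change(data, stat):
    if data is not None:
        print("config changed to", data.decode())
```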

Test Center Scorecard

InfoWorld’s scorecard weights six criteria at 20%, 20%, 20%, 20%, 10% and 10%. The Nutanix NX-3000 Series scored 10, 9, 10, 9, 9 and 8 against them, for an overall score of 9.3 – EXCELLENT.

Continue reading here!

//Richard

#Gartner analyst slams #OpenStack, again – #IaaS

November 22, 2013

Good article and I must agree that OpenStack has quite a long way to go before the “average” enterprise embraces it…

OpenStack still has maturing to do before it’s really ready for the enterprise, analyst says

Network World – Gartner analyst Alessandro Perilli recently attended his first summit for the open source cloud platform OpenStack and he says the project has a long way to go before it’s truly an enterprise-grade platform.

In a blog post reviewing his experience, the analyst – who focuses on studying cloud management tools – says that OpenStack is struggling to increase its enterprise adoption. Despite marketing efforts by vendors and favorable press, enterprise adoption remains in the very earliest stages, he says.

Don’t believe the hype generated by press and vendor marketing: OpenStack penetration in the large enterprise market is minimal.
— Gartner analyst Alessandro Perilli 

Sure, there are examples like PayPal, eBay and Yahoo using OpenStack. But these are not the meat-and-potatoes enterprise customers that vendors are looking to serve. Why? He outlines four reasons, most of which relate to the process and community nature of the project, and less to its technical maturity. By the way, this is not the first time a Gartner analyst has thrown cold water on the project.

[EARLIER CRITICISMS FROM GARTNER: Gartner report throws cold water on OpenStack hype]

Lack of clarity about what OpenStack does

There is market confusion about exactly what OpenStack is, he says. It is an open source platform whose components can be assembled to build a cloud; downloading and installing it does not, by itself, give you a cloud. OpenStack requires some heavy lifting to turn the code into an executable cloud platform, which is why dozens of companies have come out with distributions or productized versions of the OpenStack code. But the code itself is not a competitor to cloud platforms offered by vendors like VMware, BMC, CA or others. Read more…

Under the Covers of a Distributed Virtual Computing Platform – Built For Scale and Agility – via @dlink7, #Nutanix

November 21, 2013

I must say that Dwayne did a great job with this blog post series!! It goes deep into explaining the Nutanix Distributed File System (NDFS), which is the most amazing enterprise product out there if you need a truly scalable and agile compute and storage platform! I advise you to read this series!!

Under the Covers of a Distributed Virtual Computing Platform – Part 1: Built For Scale and Agility

Lots of talk in the industry about who had software-defined storage first and who was using what components. I don’t want to go down that rat hole, since it’s all marketing and it won’t help you enable your business at the end of the day. I want to really get into the nitty gritty of the Nutanix Distributed File System (NDFS). NDFS has been in production for over a year and a half with good success; take a read of the article in the Wall Street Journal.

Below are the core services and components that make NDFS tick. There are actually over 13 services; for example, our replication is distributed across all the nodes to provide speed and low impact on the system. The replication service is called Cerebro, which we will get to later in this series.
Nutanix Distributed File System

 

This isn’t some home-grown science experiment; the engineers who wrote the code come from Google, Facebook and Yahoo, where these components were invented. It’s important to realize that all components are replaceable, or future-proofed if you will. The services and libraries provide the APIs, so as new innovations happen in the community, Nutanix is positioned to take advantage of them.

All the services mentioned above run on multiple nodes in the cluster in a master-less fashion to provide availability. The nodes talk over 10GbE and are able to scale in a linear fashion; there is no performance degradation as you add nodes. Other vendors have to use InfiniBand because they don’t share the metadata across all of the nodes. Those vendors end up putting a full copy of the metadata on each node, which eventually causes them to hit a performance cliff where the scaling stops. Each Nutanix node acts as a storage controller, allowing you to do things like have a datastore of 10,000 VMs without any performance impact… continue reading part 1 here
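The metadata-sharing point is easiest to see with a consistent-hash ring, the partitioning scheme Cassandra (which Nutanix modified into its metadata store) is built on. A toy Python sketch, with node names and key formats invented for illustration:

```python
import hashlib
from bisect import bisect

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class MetadataRing:
    """Each node owns slices of the key space, so every node holds
    only ~1/N of the metadata instead of a full copy."""

    def __init__(self, nodes, vnodes=64):
        points = [(_hash(f"{n}#{i}"), n) for n in nodes for i in range(vnodes)]
        points.sort()
        self._hashes = [h for h, _ in points]
        self._nodes = [n for _, n in points]

    def owner(self, key: str) -> str:
        i = bisect(self._hashes, _hash(key)) % len(self._nodes)
        return self._nodes[i]

ring = MetadataRing(["node-a", "node-b", "node-c"])
print(ring.owner("vmdisk-42/block-7"))  # node responsible for this metadata

# Adding a fourth node remaps only ~1/4 of the keys, which is why
# scaling stays linear instead of hitting a metadata-copy cliff.
ring = MetadataRing(["node-a", "node-b", "node-c", "node-d"])
```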

Under the Covers of a Distributed Virtual Computing Platform – Part 2: ZZ Top

In case you missed it – Part 1: Built For Scale and Agility
No, it’s not Billy Gibbons, Dusty Hill, or drummer Frank Beard. It’s Zeus and ZooKeeper providing the strong blues that allow the Nutanix Distributed File System to maintain its configuration across the entire cluster. Read more…

#Gartner report – How to Choose Between #Hyper-V and #vSphere – #IaaS

November 19, 2013

The constant battle between the hypervisor and the orchestration layer of IaaS etc. is of course continuing! But I must say it is really fun to see that Microsoft is getting more and more mature with its offerings in this space, great job!

One of the things I tend to think about most is the cost, scalability and flexibility of the infrastructure we build and how we build it. I often see that we tend to do what we’ve done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that’s the latest and coolest), and then we try to squeeze that into some sort of POD/FlexPod/UCS or whatever we like to call it, to find the optimal “volume of Compute, Network and Storage” that we can scale. But is this scalable like the bigger cloud players, Google, Amazon etc.? Is this 2013 state of the art? I think we’re just fooling ourselves a bit, building whatever we’ve built for all these years without really providing the business with anything new… but that’s my view. I know what I’d look at, and most of you who have read my earlier blog posts know that I love the way of scaling out and doing more like the big players, using something like Nutanix, ensuring that you choose the right IaaS components as part of that stack, as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you’ve done your homework).

Back to the topic: I’d say that the hypervisor is of no importance anymore; that’s why everyone is giving it away for free or to the open source community! Vendors are after the IaaS/PaaS orchestration layer, because if they win that business they have nested their way into your business processes. That’s where the value is ultimately delivered, as IT services in an automated way, once you’ve got your business services and processes in place. And then it’s harder to make a change, and they will live fat and happy on you for some years to come! 😉

Read more…

#XenDesktop 7.1 Service Template Tech Preview for System Center 2012 Virtual Machine Manager – #SCVMM

November 5, 2013

This is interesting! Really good, and I can’t wait to try it out!

Introduction

Let’s face it, installing distributed, enterprise-class virtual desktop and server-based computing infrastructure is time consuming and complex.  The infrastructure consists of many components that are installed on individual servers and then configured to work together.  Traditionally this has largely been a manual, error-prone process.

The Citrix XenDesktop 7.1 Service Template for System Center 2012 Virtual Machine Manager (SCVMM) leverages the rich automation capabilities available in Microsoft’s private cloud offering to significantly streamline and simplify the installation experience.  The XenDesktop 7.1 Service Template enables rapid deployment of virtual app and desktop infrastructure on Microsoft System Center 2012 private clouds.  This Tech Preview is available now and includes the latest 7.1 version of XenDesktop that supports Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager.

Key Benefits:

  • Rapid Deployment – A fully configured XenDesktop 7.1 deployment that adheres to Citrix best practices is automatically installed in about an hour; a manual installation can take a day or more.
  • Reduction of human errors and their unwanted consequences – IT administrators answer 9 questions about the XenDesktop deployment, including the VM Network to use, the domain to join, the SQL server used to host the database, the SCVMM server to host the desktops, and the administrative service accounts to connect to each of these resources.  Once this information is entered, the Service Template automation installs the XenDesktop infrastructure the same way, every time, ensuring consistency and correctness.
  • Reduction in cost of IT Operations – XenDesktop infrastructure consistently configured with automation is less costly to support because the configuration adheres to best practice standards.
  • Free highly skilled and knowledgeable staff from repetitive and mundane tasks – A Citrix administrator’s time is better spent focused on ensuring that users get access to the applications they need, rather than lengthy production installation tasks.
  • Simplified Eval to Retail Conversion – Windows Server 2012 and later, as well as XenDesktop 7.1, support conversion of evaluation product keys to retail keys.  This means that a successful POC deployment of the XenDesktop 7.1 Service Template is easily converted to a fully supported and properly configured production deployment.
  • Easy Scale-Out for greater capacity – SCVMM Service Templates support a scale-out model to increase user capacity.  For example, as user demand increases additional XenDesktop Controllers and StoreFront servers are easily added with a few clicks and are automatically joined to the XenDesktop site.

The XenDesktop Service Templates were developed and tested with the support of our friends and partners at Dell, who, in support of the release of XenDesktop 7.1 and the Service Template technical preview, are expected to launch new and innovative solutions that include these and other automation capabilities this quarter.  These solutions are based on the Dell DVS Enterprise for Citrix XenDesktop solutions.

Simplification of Distributed Deployments

The XenDesktop 7.1 in-box installation wizard is a fantastic user experience that automatically installs all the required prerequisites and XenDesktop components in under 30 minutes.  The result is a fully installed XenDesktop deployment, all on a single server, that is excellent for POCs and product evaluations.  The installation and configuration challenges occur when you want to install XenDesktop in production, with enterprise-class scalability, distributed across multiple servers.

Manual Installation Steps

XenDesktop 7 manual installation steps

Read more…

#Rackspace launches high performance cloud servers – #IaaS via @ldignan

November 5, 2013

Rackspace on Tuesday rolled out new high performance cloud servers with all solid-state storage, more memory and the latest Intel processors.

The company aims to take its high performance cloud servers and pitch them to companies focused on big data workloads. Rackspace’s performance cloud servers are available immediately in the company’s Northern Virginia region and will come online in Dallas, Chicago and London this month. Sydney and Hong Kong regions will launch in the first half of 2014.

Among the key features:

  • The public cloud servers have RAID 10-protected solid-state drives;
  • Intel Xeon E5 processors;
  • Up to 120GB of RAM;
  • 40Gbps of network throughput.

Overall, the public cloud servers, which run on OpenStack, provide a healthy performance boost over Rackspace’s previous offering. The performance cloud servers are optimized for Rackspace’s cloud block storage.

Rackspace said it will offer the performance cloud servers as part of a hybrid data center package.

Continue reading here!

//Richard

Making #OpenStack Grizzly Deployments Less Hairy – #Puppet, #PuppetLabs

October 29, 2013

Interesting! OpenStack needs a bit more “simplicity”! 😉
October 24, 2013 by Chris Hoge in OpenStack

Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.

The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.

The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
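To get a feel for what the hierarchy buys you, here is a minimal Python sketch of Hiera-style lookup: walk the levels from most to least specific, substituting facts about the node, and return the first match. The keys, levels and values here are invented for illustration:

```python
# Hierarchy levels, most specific first; "{fqdn}" and "{role}" are
# filled in from per-node facts, just as Hiera uses Facter facts.
hierarchy = ["node/{fqdn}", "role/{role}", "common"]

data = {
    "node/ctrl01.example.com": {"mysql::root_password": "s3cret"},
    "role/controller":         {"rabbitmq::port": 5672},
    "common":                  {"ntp::servers": ["0.pool.ntp.org"]},
}

def lookup(key, facts):
    for level in hierarchy:
        values = data.get(level.format(**facts), {})
        if key in values:
            return values[key]   # first (most specific) match wins
    raise KeyError(key)

facts = {"fqdn": "ctrl01.example.com", "role": "controller"}
print(lookup("mysql::root_password", facts))  # node-specific value
print(lookup("ntp::servers", facts))          # falls through to common
```

Changing a network setting or password then means editing one line of data, not the module itself.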

Check out parts 1 and 2 of the demo, which walk you through how to deploy OpenStack with the puppetlabs-grizzly module.

Multi-node OpenStack Grizzly with Puppet Enterprise: Deployment (Part 1 of 2)

Hyperscale Invades the Enterprise and the Impact on Converged Infrastructure – via @mathiastornblom

October 29, 2013

This is really interesting! Look at this video!

In this whiteboard presentation, Wikibon Senior Analyst Stu Miniman shares how enterprise IT can learn from the architectural models of hyperscale companies. He walks through Wikibon’s definition of software-led infrastructure and how converged infrastructure solutions meet the market’s requirements.

Continue reading or watch the whole channel here!

//Richard

#Microsoft launches its #Azure #Hadoop service! – via @maryjofoley

October 28, 2013

This is really cool!

Microsoft’s cloud-based distribution of Hadoop — which it has been developing for the past year-plus with Hortonworks — is generally available as of October 28.

Microsoft officials are also acknowledging publicly that Microsoft has dropped plans to deliver a Microsoft-Hortonworks-developed implementation of Hadoop for Windows Server, which was known as HDInsight Server for Windows. Instead, Microsoft will be advising customers who want Hadoop on Windows Server to go with the Hortonworks Data Platform (HDP) for Windows.

Windows Azure HDInsight is “100 percent Apache Hadoop” and builds on top of HDP. HDInsight includes full compatibility with Apache Hadoop, as well as integration with Microsoft’s own business-intelligence tools, such as Excel, SQL Server and Power BI.

“Our vision is how do we bring big data to a billion people,” said Eron Kelly, Microsoft’s SQL Server General Manager. “We want to make the data and insights accessible to everyone.” 

Making the Hadoop big-data framework available in the cloud, so that users can spin up and spin down Hadoop clusters as needed, is one way Microsoft intends to meet this goal, Kelly said.
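Since HDInsight is positioned as 100 percent Apache Hadoop, jobs written for stock Hadoop should carry over. As a generic (not HDInsight-specific) example, here is the classic word count as a Hadoop Streaming mapper and reducer in Python; file names and paths are illustrative:

```python
# mapper.py -- emit one "word<TAB>1" line per word read from stdin.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop sorts mapper output by key, so all counts for
# a given word arrive contiguously and can be summed in one pass.
import sys

current, count = None, 0
for line in sys.stdin:
    word, _, n = line.rstrip("\n").partition("\t")
    if word != current:
        if current is not None:
            print(f"{current}\t{count}")
        current, count = word, 0
    count += int(n)
if current is not None:
    print(f"{current}\t{count}")
```

A job like this is typically submitted with the streaming jar, along the lines of hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper 'python mapper.py' -reducer 'python reducer.py' -input in -output out (paths illustrative).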

Microsoft and Hortonworks originally announced plans to bring the Hadoop big-data framework to Windows Server and Windows Azure in the fall of 2011. Microsoft made a first public preview of its Hadoop on Windows Server product (known officially as HDInsight Server for Windows) available in October 2012.

Microsoft made available its first public preview of its Hadoop on Windows Azure service, known as HDInsight Service, on March 18. Before that…

Continue reading here!

//Richard