- Why OpenStack is much more than just hype
- A summary of key OpenStack technologies
- Why you should consider converged infrastructure for building private clouds
- The right way to scale-out OpenStack deployments
Watch the webinar here!
//Richard
This is a good post by Dwayne Lessner about what a perfect match OpenStack and Nutanix are (and not just OpenStack of course, Nutanix rocks with VMware and Microsoft as well)!
Nutanix NDFS also provides an advanced and unique feature set for OpenStack-based private clouds. Key features include:
Read more here.
Here is also the link to the webinar, on the topic:
Building OpenStack on a Single 2U Appliance
Watch the webinar here!
//Richard
Good article and I must agree that OpenStack has quite a long way to go before the “average” enterprise embraces it…
OpenStack still has maturing to do before it’s really ready for the enterprise, analyst says
Network World – Gartner analyst Alessandro Perilli recently attended his first summit for the open source cloud platform OpenStack, and he says the project has a long way to go before it’s truly an enterprise-grade platform.
In a blog post reviewing his experience, the analyst – who focuses on studying cloud management tools – says that OpenStack is struggling to increase its enterprise adoption. Despite marketing efforts by vendors and favorable press, enterprise adoption remains in the very earliest stages, he says.
Don’t believe the hype generated by press and vendor marketing: OpenStack penetration in the large enterprise market is minimal.— Gartner analyst Alessandro Perilli
Sure, there are examples like PayPal, eBay and Yahoo using OpenStack. But these are not the meat-and-potatoes type of enterprise customers that vendors are looking to serve. Why? He outlines four reasons, most of which relate to the process and community nature of the project rather than its technical maturity. By the way, this is not the first time a Gartner analyst has thrown cold water on the project.
[EARLIER CRITICISMS FROM GARTNER: Gartner report throws cold water on OpenStack hype]
Lack of clarity about what OpenStack does
There is market confusion about exactly what OpenStack is, he says. It is an open source platform whose components can be assembled to build a cloud; downloading and installing it does not, by itself, give you a cloud. OpenStack requires some heavy lifting to turn the code into an executable cloud platform, which is why dozens of companies have come out with distributions or productized versions of the OpenStack code. But the code itself is not a competitor to cloud platforms offered by vendors like VMware, BMC, CA or others. Read more…
The constant battle between the hypervisor and the orchestration of IaaS etc. is of course continuing! But I must say it’s really fun that Microsoft is getting more and more mature with its offerings in this space, great job!
One of the things I tend to think about most is the cost, scalability and flexibility of the infrastructure we build, and how we build it. I often see that we tend to do what we’ve done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that’s the latest and coolest), and then we try to squeeze that into some sort of POD/FlexPod/UCS or whatever we like to call it, to find the optimal “volume of compute, network and storage” that we can scale. But is this scalable in the way the bigger cloud players like Google, Amazon etc. scale? Is this 2013 state of the art? I think we’re just fooling ourselves a bit, building what we’ve built for all these years and not really providing the business with anything new… but that’s my view. I know what I’d look at, and most of you who have read my earlier blog posts know that I love scaling out and doing it more like the big players, using something like Nutanix, and making sure you choose the right IaaS components as part of that stack, as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you’ve done your homework).
Back to the topic a bit: I’d say that the hypervisor is of no importance anymore, which is why everyone is giving it away for free or to the open source community! Vendors are after the IaaS/PaaS orchestration layer instead, because if they get that business they have nested their way into your business processes. That’s where the value will ultimately be delivered, as IT services in an automated way, once you’ve got your business services and processes in place, and at that point it’s harder to make a change and they will live fat and happy on you for some years to come! 😉
Rackspace on Tuesday rolled out new high performance cloud servers with all solid-state storage, more memory and the latest Intel processors.
The company aims to take its high performance cloud servers and pitch them to companies focused on big data workloads. Rackspace’s performance cloud servers are available immediately in the company’s Northern Virginia region and will come online in Dallas, Chicago and London this month. Sydney and Hong Kong regions will launch in the first half of 2014.
Among the key features:
Overall, the public cloud servers, which run on OpenStack, provide a healthy performance boost over Rackspace’s previous offering. The performance cloud servers are optimized for Rackspace’s cloud block storage.
Rackspace said it will offer the performance cloud servers as part of a hybrid data center package.
Continue reading here!
//Richard
Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.
The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.
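To make the roles-and-profiles idea a bit more concrete, here is a minimal conceptual sketch. This is plain Python rather than Puppet DSL, and the profile and service names are made up for illustration, not taken from the puppetlabs-grizzly module itself:

```python
# Conceptual sketch of the roles-and-profiles pattern (not actual Puppet code).

def profile_messaging(node):
    node["services"].append("rabbitmq")               # messaging profile

def profile_database(node):
    node["services"].append("mysql")                  # database profile

def profile_api(node):
    node["services"] += ["keystone-api", "nova-api"]  # API profile

# A role is simply a named composition of profiles.
ROLES = {
    "controller": [profile_messaging, profile_database, profile_api],
}

def apply_role(hostname, role_name):
    node = {"hostname": hostname, "services": []}
    for profile in ROLES[role_name]:
        profile(node)
    return node

print(apply_role("ctrl01", "controller"))
# {'hostname': 'ctrl01', 'services': ['rabbitmq', 'mysql', 'keystone-api', 'nova-api']}
```

The point is that a new role (say, a combined controller/compute node for a small lab) is just another entry in that mapping, composed from the same profiles.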
The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
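Again, purely as an illustration of the idea rather than the real Hiera implementation or the module’s actual data files: the lookup walks an ordered hierarchy, so a value tied to a specific node (or fact) wins over a common default. The keys and values below are assumptions for the example:

```python
# Rough sketch of a Hiera-style hierarchical lookup (not the real Hiera).

# Ordered from most specific to least specific.
hierarchy = [
    ("node/ctrl01",     {"mysql_password": "s3cret", "api_bind_ip": "10.0.0.11"}),
    ("role/controller", {"rabbit_user": "openstack"}),
    ("common",          {"mysql_password": "changeme", "region": "RegionOne"}),
]

def lookup(key):
    """Return the first value found while walking the hierarchy top-down."""
    for _level, data in hierarchy:
        if key in data:
            return data[key]
    raise KeyError(key)

print(lookup("mysql_password"))  # 's3cret'    -- node-specific value wins
print(lookup("region"))          # 'RegionOne' -- falls through to common
```

Changing a password or network setting then means editing one entry in the data hierarchy, not touching the module code, which is exactly the benefit described above.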
Check out parts 1 and 2 of the demo, which walks you through how to deploy OpenStack with the puppetlabs-grizzly module.
Here we are again… a lot of companies and Solution Architects are scratching their heads thinking about how we’re going to do it “this time”.
Most of you out there have something today, probably running XenApp on your VMware or XenServer hypervisor with an FC SAN or something, perhaps provisioned using PVS or just managed individually. There is also most likely a “problem” with talking to the Storage team that manages the storage service for the IaaS service, a service that isn’t built for the type of workloads that XenApp and XenDesktop (VDI) require.
So how are you going to do it this time? Are you going to challenge the Storage and Server/IaaS services, be innovative, and review the new, cooler products and capabilities that now exist out there? They are totally changing the way we build Virtual Cloud Computing solutions, where business agility, simplicity, cost savings, performance and simple scale-out are important!
There is no one solution for everything… but I’m getting more and more impressed by some of the “new” players on the market when it comes to providing simple yet powerful and high-performing Virtual Cloud Computing products. One in particular is Nutanix, which EnvokeIT has partnered with, and they have a truly stunning product.
But as many have written in great blog posts about choosing the storage solution for your VDI environment, you truly need to understand what your service will require from the underlying dependency services. And is it really worth doing it the old way? You have your team that manages the IaaS service, and most of the time it just provides a way of ordering/provisioning VMs; the “VDI” team then leverages that using PVS or MCS. Some companies are not even at the point where they can order that VM as a service or provision it from the image provisioning (PVS/MCS) service; everything is manual and they still call it an IaaS service… is it then a real IaaS service? My answer would be no… but let’s get back to the point I was trying to make!
These HW, hypervisor, network and storage (and sometimes orchestration) components are often managed by different teams. Each team is also, most of the time, not really up to date on what a virtualization/VDI service will require from them and their components. They are very competent at understanding the traditional workload of running a web server VM or similar, but not at dealing with boot storms from hundreds to thousands of VDIs booting up, people logging in at the same time, and the whole pattern of IOPS generated over these VMs’ life-cycle.
This is where I’d suggest everyone challenge their traditional view on building the virtualization and storage services that Hosted Shared Desktops (XenApp/RDS) and Hosted Virtual Desktops (VDI/XenDesktop) run on!
You can reduce the complexity, reduce your operational costs and integrate Nutanix as a real compute powerhouse in your internal/private cloud service!
One thing that is also kind of cool is the integration possibilities of the Nutanix product with OpenStack and other cloud management products through its REST APIs. And it supports running Hyper-V, VMware ESXi and KVM as hypervisors in this lovely bundled product.
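Just to give a feel for what that REST integration can look like from an orchestration or automation layer, here is a minimal sketch. The endpoint path, port, credentials and JSON field names are assumptions for illustration only; check the Nutanix API documentation (or the Nutanix Bible linked below) for the exact interface in your version:

```python
# Minimal sketch: listing VMs from a Nutanix cluster over its REST API.
# URL path, port and JSON fields below are illustrative assumptions.
import requests

CLUSTER = "https://nutanix-cluster.example.com:9440"   # hypothetical cluster address
AUTH = ("admin", "password")                           # placeholder credentials

resp = requests.get(
    f"{CLUSTER}/PrismGateway/services/rest/v1/vms",
    auth=AUTH,
    verify=False,   # appliances often ship self-signed certs; handle this properly in production
    timeout=30,
)
resp.raise_for_status()

for vm in resp.json().get("entities", []):
    print(vm.get("vmName"), vm.get("powerState"))
```

Calls of this kind are what a cloud management layer such as OpenStack would use under the hood to provision and query resources on the cluster.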
If you want the nitty gritty details about this product I highly recommend that you read the Nutanix Bible post by Steven Poitras here.
A little update below from Joe Panettieri, good reading! And thanks Oded Nahum for sharing!
And I must agree that I would not count out CloudStack! 😉
OpenStack vs CloudStack: The Latest Score
OpenStack remains the largest and most active open source cloud computing project, Network World notes. But research from Chinese blogger Qingye “John” Jiang suggests that momentum is building for CloudStack, and interest in Eucalyptus and OpenNebula remains strong. For cloud services providers (CSPs) and consultants, it’s critically important to track each of the four open source cloud platforms. Here’s why.
During Q4 2012, interest in CloudStack grew faster than rival open source cloud platforms. But Jiang’s data shows that:
Partner Views
Those findings are important to CSPs and consultants that are selecting cloud platforms upon which to build services. OpenStack has been the poster child for open source cloud computing for more than a year now but the bandwagon has some challenges.
Dell, for one, says OpenStack lacks maturity and the hardware giant won’t launch its public cloud (based on OpenStack) for roughly a year. Dell also alleges that Hewlett-Packard’s own public cloud uses a “dramatically forked” version of OpenStack containing proprietary HP technology.
Still, OpenStack consulting opportunities seem to be emerging rapidly. Mirantis, for instance, has emerged as the largest OpenStack systems integrator. The company’s clientele apparently includes Cisco, Dell, GE, Agilent, NASA, HP, AT&T, The Gap, Axcient and Nexenta.
Moreover, Rackspace (NYSE: RAX) continues to see OpenStack progress. CTO John Engates recently offered his perspectives — including some OpenStack milestones — to ZDnet.
Here Comes CloudStack
Meanwhile, it has been a while since I’ve heard from the CloudStack community. But the CloudStack chatter will likely grow very loud…
Continue reading here!
//Richard
“If you can’t make it to San Diego but you still want to participate in the sessions that will shape the future roadmap of OpenStack ‘Grizzly’ you can join the live audio streaming events on Webex. Kindly donated by Cisco Webex team (users of OpenStack themselves), the Webex session will run for the whole day from 9:30am to 6:00pm for each of the rooms Emma AB, Emma C, Windsor BC, Annie AB where the Design Summit will happen. Webex will be used to stream the audio of all conversations in the room where there will be enough microphones: remote participants will use the Webex chat to ask questions and people in the rooms will see the chat stream on one of the two projectors in each room.
The sessions are ready for you to register, they’re identified by topic (Nova, Quantum, Cinder, Documentation, Common, Process, Swift):
Also, each entry on the official schedule of the Design Summit has a link to its proper Webex audio streaming session. There will also be a live video streaming for the general sessions on Tuesday and Wednesday. Keep an eye on the website and twitter for details.
Known issue: Webex doesn’t support Java 64bit on Linux. If you try to join the voip conference Webex will complain that “The Audio Device is Unaccessible Now”. The most common workaround is to install 32bit Java environment alongside the 64bit one.”
Read more here!
//Richard
This was an interesting article, I recommend reading it! And thx Ruben for the tip!
“Recommendations:
http://www.gartner.com/technology/reprints.do?id=1-1C3IGID&ct=120919&st=sb
//Richard
Ok, so what are your thoughts, findings and views on which will become, or already is, the best solution out there for IaaS/PaaS services?
I must admit that this is not my area of expertise, but it’s an area of interest and I like reading about it to get more up to date on where these platforms are from a service readiness perspective. Are they ready for enterprise usage, or are enterprises stuck in a mindset that keeps them from adopting the open source initiatives and technologies that exist around them? If so, why? Is it because it doesn’t fit the existing way they buy or deliver IT services, or is the technology not ready from an ITSM point of view, with the SD, SLA, SLO and delivery models that we have with the “old” traditional technologies like vSphere, XenServer and Hyper-V, once you put a large enterprise organization and governance on top of it?