Archive

Posts Tagged ‘cost’

Why huge IaaS/PaaS/DaaS providers don’t use Dell and HP, and why they can do VDI cheaper than you! – via @brianmadden

February 3, 2014

Yes, why do people and organisations still think that they can build IaaS/PaaS/DaaS services within their own enterprises using the “same old architecture” and components as before? The result is never going to be comparable to the bigger players, who are using newer, more scalable architectures built on cheaper components.

Enterprises just don’t have the innovation power that companies like Google, Facebook and Amazon have! And if they do, they are most of the time stuck in their old way of doing things from a service delivery point of view, which stops them from thinking outside the box because the service delivery organisation isn’t ready for it.

This is a great blog post from Brian on exactly this topic – great work!!

Last month I wrote that it’s not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. (“Equivalent” is the key word there. Some have claimed they can do it cheaper, but they’re achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)
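As a rough illustration of that point, here is a minimal back-of-envelope sketch in Python. Every number in it is a hypothetical placeholder rather than Amazon’s or any vendor’s actual pricing; the point is only that per-desktop cost has both an amortised capital part and an ongoing operational part, and a provider operating at scale can undercut both.

```python
# Back-of-envelope DaaS vs. in-house VDI comparison.
# All figures are hypothetical placeholders -- substitute numbers from
# your own TCO exercise and the provider's real price list.

def inhouse_cost_per_desktop_month(
    capex_per_desktop=600.0,       # servers, storage, licences per desktop
    amortisation_months=36,        # depreciation period
    opex_per_desktop_month=25.0,   # power, floor space, support staff
):
    """Monthly cost of one in-house virtual desktop."""
    return capex_per_desktop / amortisation_months + opex_per_desktop_month

def daas_cost_per_desktop_month(list_price=35.0):
    """Monthly cost of one provider-hosted desktop (placeholder price)."""
    return list_price

if __name__ == "__main__":
    inhouse = inhouse_cost_per_desktop_month()
    daas = daas_cost_per_desktop_month()
    print(f"In-house VDI: {inhouse:6.2f} per desktop/month")
    print(f"DaaS:         {daas:6.2f} per desktop/month")
    print(f"Difference:   {inhouse - daas:6.2f} per desktop/month")
```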

One of the reasons huge providers can build VDI cheaper than you is because they’re doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there’s more to it than that when it comes to huge cloud providers. Their datacenters are not crammed full of HP’s or Dell’s latest rack mount, blade, or Moonshot servers; rather, they’re stacked floor-to-ceiling with heaps of circuit boards you’d hardly recognize as “servers” at all.

Building Amazon’s, Google’s, and Facebook’s “servers”

For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They’re efficient in that they’re modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!

Read more…

#Gartner report – How to Choose Between #Hyper-V and #vSphere – #IaaS

November 19, 2013

The constant battle between hypervisors and IaaS orchestration is of course continuing! But I must say it’s really fun to see that Microsoft is getting more and more mature with its offerings in this space – great job!

One of the things I think about most is the cost, scalability and flexibility of the infrastructure that we build, and how we build it. I often see that we keep doing what we’ve done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that’s the latest and coolest), and then we try to squeeze it all into some sort of POD/FlexPod/UCS or whatever we like to call it, to find the optimal “volume of Compute, Network and Storage” that we can scale. But is this scalable in the way the bigger cloud players like Google, Amazon etc. scale? Is this 2013 state of the art? I think we’re fooling ourselves a bit, building whatever we’ve built all these years without really providing the business with anything new… but that’s my view. I know what I’d look at, and most of you who have read my earlier blog posts know that I love scaling out and doing it more like the big players, using something like Nutanix, and making sure you choose the right IaaS components as part of that stack as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you’ve done your homework).

Back to the topic a bit: I’d say that the hypervisor is of no importance anymore – that’s why everyone is giving it away for free or to the open source community! Vendors are instead going after the IaaS/PaaS orchestration layer, because if they win that business they have nested their way into your business processes. That is where the value is ultimately delivered, as automated IT services once you’ve got your business services and processes in place – and at that point it’s harder to make a change, and they will live fat and happy on you for some years to come! 😉

Read more…

Solving the Compute and Storage scalability dilemma – #Nutanix, via @josh_odgers

October 24, 2013

Compute, Network and STORAGE is a hot topic, as I’ve written in blog posts before this one (How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras)… and still a lot of colleagues and customers are struggling to find better solutions and architectures.

How can we ensure that we get the same or better performance from our new architecture? How can we scale in a simpler and more linear manner? How can we ensure that we don’t have a single point of failure for all of our VMs? How are others scaling and doing this in a better way?

I’m not a storage expert, but I do know and read that many companies out there are working on finding the optimal solution for Compute and Storage – how they can get the cost down and be left with a simpler architecture to manage…

This is a topic that most organisations need to address now that more and more of them are starting to build their private clouds: how are you going to scale it, and how can you get closer to the delivery that the big players provide? Gartner even had Software-Defined Storage (SDS) as the number 2 trend going forward: #Gartner Outlines 10 IT Trends To Watch – via @MichealRoth, #Nutanix, #VMWare

Right now I see Nutanix as the leader here! They rock! Just have a look at this linear scalability:
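In plain terms, linear scalability means that aggregate performance and capacity grow in proportion to node count. A minimal sketch of that idea, using made-up per-node figures rather than Nutanix’s published numbers:

```python
# Illustration of linear scale-out: aggregate performance and capacity
# grow in proportion to node count. Per-node figures are made up.

IOPS_PER_NODE = 25_000   # hypothetical per-node IOPS
TB_PER_NODE = 5          # hypothetical usable terabytes per node

for nodes in (1, 4, 8, 16, 32):
    print(f"{nodes:3d} nodes -> {nodes * IOPS_PER_NODE:>9,} IOPS, "
          f"{nodes * TB_PER_NODE:>4} TB usable")
```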

If you want to learn more about how Nutanix can bring great value, please contact us at EnvokeIT!

For a 2-minute intro to Nutanix, have a look at these videos:

Overview:

Read more…

Windows #Intune – Toyota rolls out to more than 3000 clients

Automotive Retailer Avoids $1.3 Million in IT Costs with Cloud-Based PC Management Tool

Toyota Motor Europe (TME) had no tools to manage 3,500 car-diagnostic PCs running outside the corporate domain at 3,000 dealerships. TME chose Windows Intune to manage the PCs remotely from a web-based console. With it, TME can standardize software deployments to ensure consistent customer service and enhance the security of managed computers to reduce downtime at dealerships. Remote assistance capabilities will also help reduce on-site support costs.

Business Needs
Toyota Motor Europe (TME) manages a network of 30 national marketing and sales companies (NMSC) across Europe. These organizations oversee more than 3,000 dealerships.

In early 2012, TME replaced its stand-alone car-diagnostic tool called IT2 with 3,500 new PCs running more up-to-date software, including Tech Stream and Picoscope. The PCs also store technical documentation. Mechanics attach the PCs to a Vehicle Information Module that connects to a vehicle’s engine to provide critical maintenance information, such as how to reprogram and update a vehicle’s computer chip. The PCs were installed by an external company. The computers are not joined to the domain and operate outside the corporate firewall.

TME did not have a management solution for these 3,500 computers. “We wanted everyone to use the new tools, but we had no visibility into how the dealerships were working with the PCs,” says Niels Svaerke, Manager, Business Process Office, After Sales at Toyota Motor Europe. 

NMSC staff downloaded diagnostic software to the PCs from a Toyota intranet site. However, there was no way for headquarters to verify that all dealerships received and installed the software updates concurrently. “It was difficult to ensure that everyone was providing the same level of service by using the same corporate systems and auto diagnostics,” says Dirk Christiaens, Manager of Enterprise Architecture at Toyota Motor Europe. “Also, the head office had no way of knowing if the dealerships deployed an antivirus solution for their PCs, a worrying scenario as they were connected directly to the Internet.”

NMSC employees performed on-site support for mechanics, which often entailed travel time. Sometimes, NMSC staff called an external company to reinstall all the software on the PC. Either scenario incurred wasteful downtime at the dealerships.

Solution
To solve these issues, Toyota Motor Europe decided to evaluate Windows Intune, the cloud-based PC management service from Microsoft. Staff at the NMSC can use the web-based Administration console in Windows Intune to run PC management tasks remotely, including software distribution. All that is required is a standard Internet connection, a browser running Microsoft Silverlight, and the Windows Intune client software installed on the PCs at the dealerships. The client returns information on the PC, including software and hardware inventory, and endpoint protection and update status, to the Administration console. “We wanted to move into cloud computing, so Windows Intune met our needs perfectly,” says Christiaens. “Windows Intune had a more flexible, pay-as-you-go model, with no additional bandwidth or server costs.”

Read the whole case study here!

//Richard

#Sanbolic Brings Public Cloud Economics to the Enterprise – #Melio

March 18, 2013

Ok, I must say that this product is great!!! If you haven’t looked at it before then please do! And contact us at EnvokeIT if you want more details!

Sanbolic Enables Distributed Flash, SSD and HDD to Achieve Enterprise Systems Capability and Scale-Out In Server-Side and Commodity Storage Deployments

Waltham, MA – (March 18, 2013) – Sanbolic® today announced the general availability of its Melio version 5 (Melio5™) software – delivering distributed scale-out, high availability and enterprise data services through software. Server-side flash has seen rapid adoption for applications such as hyperscale web serving, but limited adoption in general purpose enterprise applications. With the launch of Melio5, Sanbolic enables enterprise customers to dramatically improve their storage infrastructure economics by using server-side flash, SSD and HDD as primary persistent storage. Melio5 aggregates across nodes for scale-out and availability while providing RAID, remote replication, Quality of Service (QoS), snapshots and systems functionality through a software layer on commodity hardware. This gives customers the ability to deploy a commodity, server-based storage architecture with economics and flexibility similar to those of public cloud data centers such as Google’s and Facebook’s.

With validation by hundreds of enterprise and government organizations running it in production, Melio volume management and file system technology addresses the need for high-performing, cost-effective storage infrastructure on-premises. Melio5’s architecture is designed to scale up to 2,048 nodes and up to 65,000 storage devices, enabling linear performance scalability in a cluster.

Melio5 also eliminates the need to deploy a redundant flash caching layer in front of legacy storage area network (SAN) hardware by directly incorporating flash into hybrid volumes and intelligently placing data based on file system access profiles. A hybrid volume will place random-access data, such as file system metadata, on flash sectors while placing sequential data on low-cost hard disk drives to greatly reduce the cost of capacity. The result is a highly scalable, high-performance storage system with a much lower cost than legacy storage arrays.
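Conceptually, that placement policy comes down to a per-extent decision: random-access data and metadata go to flash, sequential bulk data goes to HDD. The sketch below illustrates only the idea; it is not Sanbolic’s actual heuristics or implementation.

```python
# Conceptual sketch of hybrid-volume tiering: random-access data
# (e.g. file system metadata) is placed on flash, sequential bulk
# data on low-cost HDD. Illustration only, not Melio5's real logic.

from dataclasses import dataclass

@dataclass
class Extent:
    name: str
    is_metadata: bool
    access_pattern: str  # "random" or "sequential", as seen by the file system

def place(extent: Extent) -> str:
    """Return the tier an extent should live on."""
    if extent.is_metadata or extent.access_pattern == "random":
        return "flash"
    return "hdd"

if __name__ == "__main__":
    for e in (Extent("fs-metadata", True, "random"),
              Extent("database-index", False, "random"),
              Extent("video-archive", False, "sequential")):
        print(f"{e.name:15s} -> {place(e)}")
```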

“Typically, server and disk drive vendors operate on gross margins in the 20-30% range. Storage array vendors, on the other hand, are often twice that or more,” said Eric Slack, Senior Analyst, Storage Switzerland. “Sanbolic’s approach leverages the architecture that the big social media and public cloud companies use, to fix this problem. By replacing storage arrays (and storage array margins) with commodity server and disk drive hardware and enabling it with intelligence through software, companies can significantly reduce storage infrastructure costs.”

Terri McClure, Senior Analyst, Enterprise Strategy Group (ESG), stated, “Sanbolic’s Melio5 software enables corporate users to take advantage of flash and SSD in conjunction with commodity hardware to create an intelligent, cost effective, and high performance storage architecture like the huge public cloud companies run, while still ensuring enterprise workload scalability and high availability.”

“Melio5 lets us solve one of the biggest challenges for our customers today – the upfront and management cost for storage – without sacrificing systems capability or performance. The Lego-like modular capability of Melio allows our customers to scale out their storage and servers based on off-the-shelf commodity components, without downtime,” said Mattias Tornblom, CEO, EnvokeIT.

“LSI and Sanbolic’s shared vision and complementary products help customers to dramatically improve the performance, flexibility and economics of their on-premise storage infrastructure,” said Brent Blanchard, Senior Director of Worldwide Channel Sales and Marketing, LSI Corporation. “LSI’s Nytro™ family of server-side flash acceleration cards and leading SAS-based server storage connectivity solutions…

Continue reading here or here!

//Richard
