Why huge IaaS/PaaS/DaaS providers don’t use Dell and HP, and why they can do VDI cheaper than you! – via @brianmadden
Yes, why do people and organisations still think they can build IaaS/PaaS/DaaS services within their enterprises using the “same old architecture” and components as before? The result is never going to be comparable to the bigger players, who are using newer, more scalable architectures built from cheaper components.
Enterprises just don’t have the innovation power that companies like Google, Facebook and Amazon have! And even when they do, most of the time they are stuck in their old way of doing things from a service delivery point of view, which stops them from thinking outside the box because the service delivery organisation isn’t ready for it.
This is a great blog post from Brian on exactly this topic, great work!!
Last month I wrote that it’s not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. (“Equivalent” is the key word there. Some have claimed they can do it cheaper, but they’re achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)
One of the reasons huge providers can build VDI cheaper than you is because they’re doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there’s more to it than that when it comes to huge cloud providers. Their datacenters are not crammed full of HP or Dell’s latest rack mount, blade, or Moonshot servers; rather, they’re stacked floor-to-ceiling with heaps of circuit boards you’d hardly recognize as “servers” at all.
Building Amazon’s, Google’s, and Facebook’s “servers”
For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They’re efficient in that they’re modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!
But what if you worked at Amazon, and your boss just told you to pick the hardware to run a VDI environment for 100,000 users? What would you buy? Sure, you can do the back-of-the-napkin calculation to see that you’re looking at 20,000 CPU cores and 400,000 gigabytes of memory, but how do you get that? Do you go out and buy 2,000 servers from Dell or HP? Probably not. 2,000 1U servers take up a lot of space—they’d be almost 300 feet high if stacked one on top of the other.
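The back-of-the-napkin math above can be sketched out explicitly. The per-user ratios here (5 users per core, 4 GB of RAM per user, 10 cores and 200 GB per 1U server) are assumptions chosen to match the figures in the text, not specs from any real deployment:

```python
# Capacity math for a hypothetical 100,000-user VDI environment.
users = 100_000
users_per_core = 5      # assumed consolidation ratio
gb_per_user = 4         # assumed RAM allocation per desktop

cores_needed = users // users_per_core       # 20,000 CPU cores
memory_gb_needed = users * gb_per_user       # 400,000 GB of RAM

# If a commodity 1U server offers roughly 10 cores and 200 GB of RAM:
cores_per_server = 10
servers_needed = cores_needed // cores_per_server   # 2,000 servers

# A 1U server is 1.75 inches tall; stack all 2,000 of them:
stack_height_feet = servers_needed * 1.75 / 12      # ~292 feet

print(cores_needed, memory_gb_needed, servers_needed,
      round(stack_height_feet, 1))
```

Stacked one on top of the other, those 2,000 boxes really do come out just shy of 300 feet—which is the point: most of that height is sheet metal and air, not compute.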
Instead you take a commercial off-the-shelf 1U server and look at it—I mean really look at it. It has a lot of nice features for customers buying a few dozen at a time. But you’re buying a few thousand at a time. So you open it up. What do you find? A lot of air. You’re paying your server vendor to enclose a lot of air into a 1.75″ tall metal box, stacks of which you’ll place into an even larger metal box (which, conveniently, you’ll also buy from them).
So the first things to go are those metal boxes. No server chassis and no racks. You just need the guts.
Next up are the power supplies. Why does each server need its own? Power supplies are expensive in every sense of the word: they cost money, they take up space, and they waste power, since they’re not too efficient at converting AC to DC. Let’s take a fresh look at this. The power coming into your datacenter is, what, 480 volts? Maybe 270? Then it’s cut down to 110 and run down your aisle, where a pair of power supplies (in every server!) converts it to DC with outputs at 12, 5, and 3.3 volts. Why are we going through all that conversion effort, thousands and thousands of times, in tiny, inefficient circuits in metal boxes inside other metal boxes inside bigger metal boxes that we bought from our server vendor?
Instead why don’t we just install a big power supply at the end of the aisle that takes 270 volts AC and converts it directly to 12vdc with enough current to directly power a few thousand of our motherboards. Then we can design (or buy) our custom box-less server motherboards that only require a single 12vdc input.
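To get a feel for what that consolidation buys you, here’s a rough sketch of the wasted power. The efficiency figures and the 300 W per-board load are illustrative assumptions, not measured values—a cheap commodity PSU might sit around 80% efficiency, while a large centralized rectifier might reach 95%:

```python
# Compare wall draw for 2,000 boards: per-server PSUs vs. one big
# aisle-level AC-to-12VDC supply. All figures are assumptions.
servers = 2_000
watts_per_server = 300          # assumed DC load per motherboard

psu_efficiency = 0.80           # assumed per-server supply efficiency
central_efficiency = 0.95       # assumed centralized rectifier efficiency

dc_load_w = servers * watts_per_server               # 600 kW of useful load

ac_per_server_psus_w = dc_load_w / psu_efficiency    # 750 kW from the wall
ac_central_w = dc_load_w / central_efficiency        # ~631.6 kW from the wall

savings_kw = (ac_per_server_psus_w - ac_central_w) / 1000
print(f"Wasted power avoided: ~{savings_kw:.0f} kW")  # ~118 kW
```

Under these assumptions that’s on the order of 118 kW of pure conversion loss eliminated per aisle—power you’d otherwise pay for twice, once at the meter and again in cooling.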
Oh yeah, I should mention at this point that we’re designing or buying our own custom motherboards… It’s not as daunting as it sounds, because 99% of a motherboard’s design is already done by Intel, and everything outside of what they provide is just bloat we don’t need anyway.
So now that we’re looking at custom-built motherboards, let’s see what else we don’t need. For example, do our servers need VGA ports? We’re running thousands of servers…will we ever be in a situation where we need to plug a monitor into one? Does our DaaS…
Continue reading here!
//Richard