Archive

Author Archive

Single File Restore – Fairy Tale Ending Going Down History Lane – via @Nutanix and @dlink7

November 21, 2013 Leave a comment

Great blog post by Dwayne Lessner!

If I go back to my earliest sysadmin days, where I had to restore a file from a network share, I was happy just to get the file back. Where I worked we only had tape, and it was a crapshoot at the best of times. Luckily, 2007 brought me a SAN to play with.

The SAN certainly made it easier to go back in time, find that file, and pull it back from the clutches of death using hardware-based snapshots. Mounting the snapshot to the guest was no big deal, but fighting with the MS iSCSI initiator got pretty painful, partly because I had a complex password for CHAP authentication, and partly because clean-up and logging out of the iSCSI sessions was problematic. I always had a ton of errors, both in the Windows guest and in the SAN console, which seemed to cause more grief than good.

Shortly after the SAN showed up, VMware entered my world. It was great that I didn’t have to mess with MS iSCSI initiators any more, but it really just moved my problem to the ESXi host. Now that VMware had the LUN with all my VMs, I had to worry about resignaturing the LUN so it wouldn’t conflict with the rest of the production VMs. This whole process was short-lived because we couldn’t afford all the space the snapshots were taking up. Since we had to use LUNs, we had to take snapshots of all the VMs even though only a handful really needed the extra protection. Before virtualization we were already reserving over 50% of the total LUN space, because snapshots were backed by large block sizes and ate through space. Because we had to snapshot all of the VMs on the LUN, we had to change the snap reserve to 100%. We quickly ran out of space and turned off snapshots for our virtual environment.

When a snapshot is taken on Nutanix, we don’t copy the data, nor do we copy the metadata. The metadata and data diverge on an as-needed basis: as new writes happen against the active parent snapshot, we just track the changes. Changes operate at the byte level, which is a far cry from the 16 MB block size I had to live with in the past.
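That divergence idea can be sketched in a few lines of Python. This is purely illustrative (the class and names are mine, not Nutanix’s actual implementation): a snapshot copies nothing up front, and each vdisk only tracks the extents written since it was created, with reads falling through the snapshot chain.

```python
# Minimal sketch of snapshots whose data and metadata diverge only as
# writes happen. Illustrative only -- not Nutanix's actual implementation.

class Vdisk:
    def __init__(self, parent=None):
        self.parent = parent   # snapshot chain; reads fall through to it
        self.extents = {}      # offset -> data written since the snapshot

    def snapshot(self):
        # Taking a snapshot copies no data and no metadata: the new active
        # vdisk starts empty and simply points at its (now frozen) parent.
        return Vdisk(parent=self)

    def write(self, offset, data):
        self.extents[offset] = data   # track only the changed extent

    def read(self, offset):
        node = self
        while node is not None:       # walk up the chain to find the data
            if offset in node.extents:
                return node.extents[offset]
            node = node.parent
        return b"\x00"                # region never written anywhere

snap = Vdisk()
snap.write(0, b"v1")
active = snap.snapshot()   # instant: nothing is copied
active.write(0, b"v2")     # diverges only where written
print(snap.read(0), active.read(0))
```

The snapshot stays intact while the active vdisk diverges one extent at a time, which is why there is no up-front space or copy cost.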

Due to the above-mentioned life lessons in LUN-based snapshots, I am very happy to show Nutanix customers the benefits of per-VM snapshots and how easy it is to restore a file.

To restore a file from a VM living on Nutanix, you just need to make sure you have a protection domain set up with a proper RPO schedule. For this example, I created a protection domain called RPO-High. This is great, as you could have 2,000 VMs all on one volume with Nutanix. You just slide over which VMs you want to protect; in this example, I am protecting my FileServer. Note that you can have more than one protection domain if you want to assign different RPOs to different VMs. Create a new protection domain and add one or more VMs based on the application grouping.

Read more…

10TB of free #cloud #storage from China’s #Tencent

November 20, 2013 Leave a comment

Wow, that really puts some pressure on the “other” guys!!! 10TB is a lot!

I’ve talked to people, and some are concerned that it’s a Chinese-owned company; even though the storage will be located in the US and Southeast Asia, it’s still the company that has the “power” to manage and control it. But let us set that aside for now: 10TB of free storage! 🙂

Chinese Internet giant Tencent is looking to dole out a jaw-dropping 10TB worth of free cloud storage to its international users soon, as it seeks to roll out an English version of its cloud storage product next year, PandoDaily reports.

Peter Zheng, the Shenzhen-based vice president of Tencent’s social network group, tells PandoDaily that the company will bring its cloud storage offering to the US in early 2014. It is also launching an English version of Story Camera, an Instagram-like watermark-based photo app already popular in China, in the next two to three weeks.

In August, Tencent started courting Chinese users with its very generous gift of space — as it sought to out-do its rivals Baidu and Qihoo, which started giving away 1TB worth of free storage.

As Tencent gets into the game with an English version of its cloud storage service though, it looks set to attract way more users around the world who want to get their hands on this colossal amount of space — which, to be honest, seems almost impossible to finish using.

The whopping 10TB puts the free space given by Dropbox, Box and Microsoft to shame. At their maximum amounts, Dropbox has offered free space amounts ranging from 25-50GB as part of promotional deals with Samsung and HTC, Box has offered 50GB of free storage with file-size limitations before, and Microsoft upped the storage space for its SkyDrive Pro offering from 7GB to 25GB.

However, just as with the Chinese rollout, Tencent will likely kick in the same caveat for its cloud storage offering — that is, you won’t get the whole 10TB worth of space at one go. Instead, Tencent will top up your storage space as you deplete it.

To put privacy concerns at ease, Zheng tells PandoDaily that the international data will probably be stored on servers outside of China — in the same way that Tencent’s messaging service WeChat uses servers in the US and Southeast Asia.

As Tencent is taking big strides…

Continue reading here!

//Richard

#Gartner report – How to Choose Between #Hyper-V and #vSphere – #IaaS

November 19, 2013 Leave a comment

The constant battle between the hypervisor and the orchestration of IaaS etc. is of course continuing! But I must say it’s really fun that Microsoft is getting more and more mature with its offerings in this space. Great job!

One of the things I tend to think about most is the cost, scalability and flexibility of the infrastructure we build, and how we build it. I often see that we keep doing what we’ve done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that’s the latest and coolest), and then we try to squeeze that into some sort of POD/FlexPod/UCS or whatever we like to call it, to find our optimal “volume of compute, network and storage” that we can scale. But is this scalable like the bigger cloud players, Google, Amazon etc.? Is this 2013 state of the art? I think we’re just fooling ourselves a bit, building whatever we’ve built for all these years without really providing the business with anything new... but that’s my view. I know what I’d look at, and most of you who have read my earlier blog posts know that I love scaling out and doing it more like the big players, using something like Nutanix, and ensuring that you choose the right IaaS components as part of that stack, as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you’ve done your homework).

Back to the topic: I’d say the hypervisor is of no importance anymore; that’s why everyone is giving it away for free or to the open source community! Vendors are after the IaaS/PaaS orchestration layer, because if they get that business they have nested their way into your business processes. That’s where the value will ultimately be delivered, as IT services in an automated way, once you’ve got your business services and processes in place. And then it’s harder to make a change, and they will live fat and happy on you for some years to come! 😉

Read more…

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013 Leave a comment

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last ditch effort is to create special-purpose architectures that bundle flash into arrays, and connect it to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share with all servers? This last stand will be a very short lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-io flash card that writes at 2.5GB/s throughput and supports 1,100,000 IOPS with just 15-microsecond latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25GB/s streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X of that of each server, or else we end up artificially limiting the bandwidth that each server can get based on how the storage array is shared.
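A quick back-of-the-envelope sketch makes the sharing argument concrete. The per-card figures come from the post above; the 10 GbE per-server link speed is my own assumption for illustration:

```python
# Back-of-the-envelope check of the aggregate-flash vs. shared-fabric numbers.
cards = 10
write_gb_s = 2.5            # GB/s streaming writes per Fusion-io card (from the post)
iops_per_card = 1_100_000   # IOPS per card (from the post)

aggregate_write = cards * write_gb_s       # GB/s if the cards scaled linearly
aggregate_iops = cards * iops_per_card     # IOPS if the cards scaled linearly

servers = 10
link_gb_s = 1.25            # assumed ~10 GbE per server, expressed in GB/s
fabric_cap = servers * link_gb_s           # most the shared array can ever deliver

print(aggregate_write, aggregate_iops, fabric_cap)
```

Even with every one of the ten servers on 10 GbE, the fabric tops out at 12.5 GB/s against 25 GB/s of raw flash, which is exactly the artificial per-server limit the author describes.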

No networking technology, whether it be Infiniband, Ethernet, or fibre channel can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons, as well as for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts below illustrate the dramatic speed up from server-attached flash.

Continue reading here!

//Richard

#Netscaler Insight and Integration with #XenDesktop Director – via @msandbu

November 15, 2013 Leave a comment

Great blog post by Marius! 🙂

This is another one of Citrix’s hidden gems, Netscaler Insight. This product has been available from Citrix for some time now, but with the latest update it became a lot more useful. Insight is a virtual appliance from Citrix which gathers AppFlow data and statistics from Netscaler to show performance data, kinda like the old EdgeSight. (NOTE: In order to use this functionality against Netscaler it requires at least Netscaler Enterprise or Platinum.)

Insight has two specific functions, called Web Insight and HDX Insight.
Web Insight shows web traffic, for instance how many users, which IP addresses, what kind of content, etc.
HDX Insight is tied to the Access Gateway functionality of Citrix and shows, for instance, how many users have accessed the solution, what kind of applications they have used, and what kind of latency the clients had to the Netscaler.

You can download this VPX from MyCitrix under Netscaler downloads. It is important to note that as of now it is only supported on VMware and XenServer. (They haven’t mentioned any support coming for Hyper-V, but I’m guessing it’s coming.)

The setup is pretty simple: like a regular Netscaler, we need to define an IP address and subnet mask. (Note that the VPX does not require a license, since it will only gather data from Netscaler appliances that have a platform license; it does not work on regular Netscaler Gateways.)

After we have set up the Insight VPX we can access it via the web GUI; the username and password here are the same as on a Netscaler: nsroot / nsroot.


After this is set up we need to enable the Insight features. We can start by setting up HDX Insight; here we need to define an expression that allows all Gateway traffic to be gathered.
Here we just need to set VPN equals true. We can also add multiple Netscalers here; if you have a cluster or HA setup we need to add both nodes.


After we have added the node, just choose Configure on the node, choose VPN from the list, and choose the expression true.

Read more…

#Amazon WorkSpaces – “#VDI” cloud service – #VDI, #BYOD

November 15, 2013 Leave a comment

This is an interesting offering from Amazon! However, I don’t like that everyone talks about the “VDI” concept all the time: this is based on Windows Server with the Desktop Experience, not a client OS.

Amazon WorkSpaces is a fully managed desktop computing service in the cloud. Amazon WorkSpaces allows customers to easily provision cloud-based desktops that allow end-users to access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions.

WorkSpace Bundles

Amazon WorkSpaces offers a choice of service bundles providing different hardware and software options to meet your needs. You can choose from the Standard or Performance family of bundles that offer different CPU, memory, and storage resources, based on the requirements of your users. If you would like to launch WorkSpaces with more software already pre-installed (e.g., Microsoft Office, Trend Micro Anti-Virus, etc.), you should choose the Standard Plus or Performance Plus options. If you don’t need the applications offered in those bundles or you would like to use software licenses for some of the applications in the Standard Plus or Performance Plus options that you’ve already paid for, we recommend the Standard or Performance bundles. Whichever option you choose, you can always add your own software whenever you like.

Each WorkSpaces bundle is listed below with its hardware resources, applications, and monthly price:

  • Standard: 1 vCPU, 3.75 GiB memory, 50 GB user storage; Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $35/month
  • Standard Plus: 1 vCPU, 3.75 GiB memory, 50 GB user storage; Microsoft Office Professional 2010, Trend Micro Anti-Virus, Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $50/month
  • Performance: 2 vCPU, 7.5 GiB memory, 100 GB user storage; Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $60/month
  • Performance Plus: 2 vCPU, 7.5 GiB memory, 100 GB user storage; Microsoft Office Professional 2010, Trend Micro Anti-Virus, Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $75/month

All WorkSpaces Bundles provide the Windows 7 Experience to users (provided by Windows Server 2008 R2). Microsoft Office 2010 Professional includes Microsoft Excel 2010, Microsoft OneNote 2010, Microsoft PowerPoint 2010, Microsoft Word 2010, Microsoft Outlook 2010, Microsoft Publisher 2010 and Microsoft Access 2010.

Read more…

#Citrix #Receiver for Linux 13 released

November 13, 2013 Leave a comment

Finally Citrix has released a Receiver version for Linux that, for instance, has StoreFront support! Can’t wait to try it out and see if it gives the same user experience as the one on OS X and Windows!

Here you have some details about it and links to the product documentation:

Access Windows applications and virtual desktops, as well as web and SaaS applications. Enable anywhere access from your Linux thin client/desktop or use web access.

What’s new

The following new features are available in this release:

  • Support for XenDesktop 7 features – Receiver supports many of the new features and enhancements in XenDesktop 7, including Windows Media client-side content fetching, HDX 3D Pro, HDX RealTime webcam compression, Server-rendered Rich Graphics, and IPv6 support.
    Note: Link-local network addresses are not supported in IPv6 environments. You must have at least one global or unique-local address assigned to your network interface.
  • VDI-in-a-Box support – You can use Receiver to connect to virtual desktops created with Citrix VDI-in-a-Box.
  • Self-service UI – A new graphical user interface (UI), like that in other Citrix Receivers, replaces the configuration manager, wfcmgr. After they are set up with an account, users can subscribe to desktops and applications, and then start them.
  • Deprecated and removed utilities – The pnabrowse command-line utility is deprecated in favor of the new storebrowse command-line utility. The icabrowse and wfcmgr utilities have been removed.
  • StoreFront support – You can now connect to StoreFront stores as well as Citrix XenApp sites (also known as Program Neighborhood Agent sites).
  • UDP audio support – Most audio features are transported using the ICA stream and are secured in the same way as other ICA traffic. User Datagram Protocol (UDP) Audio uses a separate, unsecured, transport mechanism, but is more consistent when the network is busy. UDP Audio is primarily designed for Voice over IP (VoIP) connections and requires that audio traffic is of medium quality (that is Speex wideband) and unencrypted.
  • Packaging – An armhf (hard float) Debian package and tarball are now included in the download packages. In addition, the Debian package for Intel systems uses multiarch (a Debian feature) for installations on 32- and 64-bit systems. 32-bit binaries are also available in RPM packages.
  • System Flow Control – Video display has been enhanced on low-performance user devices that connect to high-performance servers. In such setups, System Flow Control prevents sessions becoming uncontrollable and unusable.
  • Localization – Receiver is now available in German, Spanish, French, Japanese, and Simplified Chinese.
  • Keyboard improvements – You can now specify which local key combination (Ctrl+Alt+End or Ctrl+Alt+Enter) generates the Ctrl+Alt+Delete combination on a remote Windows desktop. In addition, a new option supports Croatian keyboard layouts.
  • Deferred XSync – While one frame is still on screen, Receiver can now decode tiles for the next frame. This provides a performance improvement compared with previous releases, in which Receiver waited for a frame to finish being displayed before decoding the next frame.
  • Audio and webcam playback improvements – Various changes are implemented that conserve CPU cycles and reduce latency.
  • Audio settings – Several new audio settings are now available in module.ini.

For more product and release info read here!

//Richard

#Windows server 2012 Storage Spaces – using PowerShell – via LazyWinAdmin

November 12, 2013 Leave a comment

Very good work on this blog post about Windows Storage Spaces!

WS2012 Storage – Creating a Storage Pool and a Storage Space (aka Virtual Disk) using PowerShell

 

In my previous posts I talked about how to use NFS and iSCSI technologies hosted on Windows Server 2012 and how to deploy those to my Home Lab ESXi servers.

One point I did not cover was how to do the initial setup: the physical disks, storage pooling, and creating the virtual disk(s).

The cost to acquire and manage highly available and reliable storage can represent a significant part of the IT budget. Windows Server 2012 addresses this issue by delivering a sophisticated virtualized storage feature called Storage Spaces as part of the WS2012 storage platform. This provides an alternative option for companies that require advanced storage capabilities at a lower price point.

Overview

  • Terminology
  • Storage Virtualization Concept
  • Deployment Model of a Storage Space
  • Quick look at Storage Management under Windows Server 2012
  • Identifying the physical disk(s)
    • Server Manager – Volumes
    • PowerShell – Module Storage
  • Creating the Storage Pool
  • Creating the Virtual Disk
  • Initializing the Virtual Disk
  • Partitioning and Formating

Terminology

Storage Pool: An abstraction of multiple physical disks into a logical construct with a specified capacity.
Grouping physical disks into a container, the so-called storage pool, lets the total capacity collectively presented by those physical disks appear, and become manageable, as a single and seemingly continuous space.

There are two primary types of pools which are used in conjunction with Storage Spaces, as well as the management API in Windows Server 2012: Primordial Pool and Concrete Pool.

Primordial Pool: The Primordial pool represents all of the disks that Storage Spaces is able to enumerate, regardless of whether they are currently being used for a concrete pool. Physical Disks in the Primordial pool have a property named CanPool equal to “True” when they meet the requirements to create a concrete pool.

 

Concrete Pool: A Concrete pool is a specific collection of Physical Disks that was formed by the user to allow creating Storage Spaces (aka Virtual Disks).
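Putting the terminology together, the whole flow from primordial pool to formatted volume can be sketched with the Storage module cmdlets on WS2012. The friendly names ("Pool01", "VDisk01", "Data") and the mirror/size choices are example values of mine; run this on a server with spare, unpooled disks attached:

```powershell
# Disks still in the primordial pool report CanPool = True
$disks = Get-PhysicalDisk -CanPool $true

# Create a concrete pool from those disks
New-StoragePool -FriendlyName "Pool01" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# Carve a mirrored Storage Space (aka virtual disk) out of the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
    -ResiliencySettingName Mirror -Size 100GB

# Initialize, partition and format it like any other disk
Get-VirtualDisk -FriendlyName "VDisk01" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"
```

Each step maps directly onto the overview above: enumerate the primordial pool, create a concrete pool, create the virtual disk, then initialize, partition and format.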

#Ericsson Mobility Report – November – via @Ericsson, #mobility

November 12, 2013 Leave a comment

This is really interesting to read! Great job Ericsson!

On the pulse of the Networked Society

Ericsson has performed in-depth data traffic measurements since the early days of mobile broadband from a large base of live networks covering all regions of the world.

The aim of this report is to share analysis based on these measurements, internal forecasts and other relevant studies to provide insights into the current traffic and market trends.

We will continue to share traffic and market data, along with our analysis, on a regular basis. We hope you find it engaging and valuable.

In the last issue of the Ericsson Mobility Report we described app coverage. In the November 2013 edition we take this analysis a step further by using the radio characteristics of a WCDMA/HSPA network to predict coverage area and indoor penetration for popular smartphone applications such as streaming music and video, video telephony, and circuit-switched voice. We also apply the same app type requirements on downlink speed to compare network performance in 17 cities globally.

Download the November report here!

What’s also cool is that Ericsson has published a Traffic Exploration tool! 🙂

Create your own graph

Traffic Exploration tool

Create your own graphs, tables and data using our Traffic Exploration tool that contains data from the Ericsson Mobility Report.

To read more about the Ericsson mobility reports do so here!

//Richard

#Windows #Azure Desktop Hosting Deployment Guide – #RDS, #BYOD – via @michael_keen

November 12, 2013 Leave a comment

This is great! Have a look at this guide!

Hello everyone, this is Clark Nicholson from the Remote Desktop Virtualization Team. I’m writing today to let you know that we have just published the Windows Azure Desktop Hosting Deployment Guide. This document provides guidance for deploying a basic desktop hosting solution based on the Windows Azure Desktop Hosting Reference Architecture Guide. This document is intended to provide a starting point for implementing a Desktop Hosting service on Windows Azure virtual machines. A production environment will need additional deployment steps to provide advanced features such as high availability, customized desktop experience, RemoteApp collections, etc.

For more information, please see Remote Desktop Services and Windows Azure Infrastructure Services.

Continue reading here!

//Richard