Archive

Posts Tagged ‘Server’

#XenApp 7.5 is launching! – #Citrix, #HSD, #DaaS, #VDI

January 25, 2014 2 comments

Wow… this is really interesting and, I must say, a bit “weird”…

XenApp is back! 🙂

And of course AppDNA is in there as well to simplify software/application management on this platform.

Description

New Citrix XenApp 7.5 makes it simple to deliver any Windows app to an increasingly mobile workforce, while leveraging the cost savings and elasticity of hybrid clouds and the security of mobile device management. Learn more at http://www.citrix.com/xenapp

Hear more about it in this video!

The video above was removed because it was accidentally published too early, but you can find it on YouTube here:

//Richard

Single File Restore – Fairy Tale Ending Going Down History Lane – via @Nutanix and @dlink7

November 21, 2013 Leave a comment

Great blog post by Dwayne Lessner!

If I go back to my earliest sysadmin days, when I had to restore a file from a network share, I was happy just to get the file back. Where I worked we only had tape, and it was a crapshoot at the best of times. Luckily, 2007 brought me a SAN to play with.

The SAN certainly made it easier to go back in time, find that file, and pull it back from the clutches of death using hardware-based snapshots. It was no big deal to mount the snapshot to the guest, but fighting with the MS iSCSI initiator got pretty painful, partly because I had a complex password for the CHAP authentication, and partly because clean-up and logging out of the iSCSI session was problematic. I always had a ton of errors, both in the Windows guest and in the SAN console, which seemed to cause more grief than good.

Shortly after the SAN showed up, VMware entered my world. It was great that I didn’t have to mess with the MS iSCSI initiator any more, but it really just moved my problem to the ESXi host. Now that VMware had the LUN with all my VMs, I had to worry about resignaturing the LUN so it wouldn’t conflict with the rest of the production VMs. This whole process was short-lived because we couldn’t afford all the space the snapshots were taking up. Since we had to use LUNs, we had to take snapshots of all the VMs even though only a handful really needed the extra protection. Before virtualization we were already reserving over 50% of the total LUN space, because snapshots were backed by large block sizes and ate through space. Because we had to snapshot all of the VMs on the LUN, we had to raise the snap reserve to 100%. We quickly ran out of space and turned off snapshots for our virtual environment.

When a snapshot is taken on Nutanix, we don’t copy data, nor do we copy the metadata. The metadata and data diverge on an as-needed basis; as new writes happen against the active parent snapshot, we just track the changes. Changes operate at the byte level, which is a far cry from the 16 MB block sizes I had to live with in the past.

Due to the above-mentioned life lessons in LUN-based snapshots, I am very happy to show Nutanix customers the benefits of per-VM snapshots and how easy it is to restore a file.

To restore a file from a VM living on Nutanix, you just need to make sure you have a protection domain set up with a proper RPO schedule. For this example, I created a protection domain called RPO-High. This is great because you could have 2,000 VMs all on one volume with Nutanix. You just slide over the VMs you want to protect; in this example, I am protecting my FileServer. Note that you can have more than one protection domain if you want to assign different RPOs to different VMs. Create a new protection domain and add one or more VMs based on the application grouping.

Read more…

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013 Leave a comment

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last-ditch effort is to create special-purpose architectures that bundle flash into arrays and connect it to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share it with all servers? This last stand will be a very short-lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-IO flash card that writes at 2.5 GB/s throughput and supports 1,100,000 IOPS at just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25 GB/s of streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X that of each server, or else we end up artificially limiting the bandwidth each server can get based on how the storage array is shared.

No networking technology, whether it be InfiniBand, Ethernet, or Fibre Channel, can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons and for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts in the full post illustrate the dramatic speed-up from server-attached flash.

Continue reading here!

//Richard

#Amazon WorkSpaces – “#VDI” cloud service – #VDI, #BYOD

November 15, 2013 Leave a comment

This is an interesting offering from Amazon! However, I don’t like that everyone talks about the “VDI” concept all the time: this is based on Windows Server with the Desktop Experience feature, not a client OS.

Amazon WorkSpaces is a fully managed desktop computing service in the cloud. Amazon WorkSpaces allows customers to easily provision cloud-based desktops that allow end-users to access the documents, applications and resources they need with the device of their choice, including laptops, iPad, Kindle Fire, or Android tablets. With a few clicks in the AWS Management Console, customers can provision a high-quality desktop experience for any number of users at a cost that is highly competitive with traditional desktops and half the cost of most virtual desktop infrastructure (VDI) solutions.

WorkSpace Bundles

Amazon WorkSpaces offers a choice of service bundles providing different hardware and software options to meet your needs. You can choose from the Standard or Performance family of bundles that offer different CPU, memory, and storage resources, based on the requirements of your users. If you would like to launch WorkSpaces with more software already pre-installed (e.g., Microsoft Office, Trend Micro Anti-Virus, etc.), you should choose the Standard Plus or Performance Plus options. If you don’t need the applications offered in those bundles or you would like to use software licenses for some of the applications in the Standard Plus or Performance Plus options that you’ve already paid for, we recommend the Standard or Performance bundles. Whichever option you choose, you can always add your own software whenever you like.

WorkSpaces bundles, their hardware resources, included applications, and monthly price:

  • Standard – 1 vCPU, 3.75 GiB memory, 50 GB user storage; Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $35/month
  • Standard Plus – 1 vCPU, 3.75 GiB memory, 50 GB user storage; Microsoft Office Professional 2010, Trend Micro Anti-Virus, Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $50/month
  • Performance – 2 vCPU, 7.5 GiB memory, 100 GB user storage; Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $60/month
  • Performance Plus – 2 vCPU, 7.5 GiB memory, 100 GB user storage; Microsoft Office Professional 2010, Trend Micro Anti-Virus, Utilities (Adobe Reader, Internet Explorer 9, Firefox, 7-Zip, Adobe Flash, JRE); $75/month

All WorkSpaces Bundles provide the Windows 7 Experience to users (provided by Windows Server 2008 R2). Microsoft Office 2010 Professional includes Microsoft Excel 2010, Microsoft OneNote 2010, Microsoft PowerPoint 2010, Microsoft Word 2010, Microsoft Outlook 2010, Microsoft Publisher 2010 and Microsoft Access 2010.
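
If you prefer scripting to console clicks, provisioning can be sketched with the AWS Tools for PowerShell. This is purely an illustrative assumption on my part: these WorkSpaces cmdlets are not part of the announcement above, and the directory, user, and bundle values below are made-up placeholders.

    # List the available bundles (Standard, Standard Plus, Performance, ...)
    Get-WKSWorkspaceBundle | Select-Object Name, BundleId

    # Build a provisioning request and create one WorkSpace for a user
    $request = New-Object Amazon.WorkSpaces.Model.WorkspaceRequest
    $request.DirectoryId = "d-1234567890"   # hypothetical directory id
    $request.UserName    = "jdoe"           # hypothetical user
    $request.BundleId    = "wsb-abcdef123"  # hypothetical bundle id
    New-WKSWorkspace -Workspace $request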

Read more…

#Windows server 2012 Storage Spaces – using PowerShell – via LazyWinAdmin

November 12, 2013 Leave a comment

Very good work on this blog post about Windows Storage Spaces!

WS2012 Storage – Creating a Storage Pool and a Storage Space (aka Virtual Disk) using PowerShell


In my previous posts I talked about how to use NFS and iSCSI technologies hosted on Windows Server 2012 and how to deploy those to my Home Lab ESXi servers.

One point I did not cover was how to do the initial setup: the physical disks, storage pooling, and creating the Virtual Disk(s).

The cost to acquire and manage highly available and reliable storage can represent a significant part of the IT budget. Windows Server 2012 addresses this issue by delivering a sophisticated virtualized storage feature called Storage Spaces as part of the WS2012 Storage platform. This provides an alternative option for companies that require advanced storage capabilities at a lower price point.

Overview

  • Terminology
  • Storage Virtualization Concept
  • Deployment Model of a Storage Space
  • Quick look at Storage Management under Windows Server 2012
  • Identifying the physical disk(s)
    • Server Manager – Volumes
    • PowerShell – Module Storage
  • Creating the Storage Pool
  • Creating the Virtual Disk
  • Initializing the Virtual Disk
  • Partitioning and Formatting

Terminology

Storage Pool: an abstraction of multiple physical disks into a logical construct with a specified capacity.
Physical disks are grouped into a container, the so-called storage pool, so that the total capacity collectively presented by those disks appears, and can be managed, as a single and seemingly continuous space.

There are two primary types of pools used in conjunction with Storage Spaces and the management API in Windows Server 2012: the Primordial pool and the Concrete pool.

Primordial Pool: The Primordial pool represents all of the disks that Storage Spaces is able to enumerate, regardless of whether they are currently being used for a concrete pool. Physical Disks in the Primordial pool have a property named CanPool equal to “True” when they meet the requirements to create a concrete pool.
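
As a quick, minimal sketch with the WS2012 Storage module (the output columns are just chosen for illustration), this is how you ask the Primordial pool which disks are eligible:

    # List the physical disks that meet the requirements for a concrete pool
    Get-PhysicalDisk -CanPool $true |
        Select-Object FriendlyName, Size, CanPool, OperationalStatus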


Concrete Pool: a Concrete pool is a specific collection of physical disks that was formed by the user to allow the creation of Storage Spaces (aka Virtual Disks).
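
Tying the terminology together, here is a rough end-to-end sketch using the Storage module cmdlets. The pool, disk, and volume names, the mirror resiliency setting, and the GPT/NTFS choices are illustrative assumptions on my part, not values from the original post:

    # Grab every poolable disk from the Primordial pool
    $disks = Get-PhysicalDisk -CanPool $true

    # Create a concrete pool from those disks
    $subsystem = Get-StorageSubSystem | Where-Object FriendlyName -like "*Storage Spaces*"
    New-StoragePool -FriendlyName "Pool01" `
        -StorageSubSystemFriendlyName $subsystem.FriendlyName `
        -PhysicalDisks $disks

    # Carve a mirrored Storage Space (virtual disk) out of the pool
    New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VDisk01" `
        -ResiliencySettingName Mirror -UseMaximumSize

    # Initialize, partition, and format the new disk
    Initialize-Disk -VirtualDisk (Get-VirtualDisk -FriendlyName "VDisk01") -PartitionStyle GPT -PassThru |
        New-Partition -AssignDriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false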

#XenDesktop 7.1 Service Template Tech Preview for System Center 2012 Virtual Machine Manager – #SCVMM

November 5, 2013 Leave a comment

This is interesting! Really good and can’t wait to try it out!

Introduction

Let’s face it, installing distributed, enterprise-class virtual desktop and server-based computing infrastructure is time-consuming and complex.  The infrastructure consists of many components that are installed on individual servers and then configured to work together.  Traditionally this has largely been a manual, error-prone process.

The Citrix XenDesktop 7.1 Service Template for System Center 2012 Virtual Machine Manager (SCVMM) leverages the rich automation capabilities available in Microsoft’s private cloud offering to significantly streamline and simplify the installation experience.  The XenDesktop 7.1 Service Template enables rapid deployment of virtual app and desktop infrastructure on Microsoft System Center 2012 private clouds.  This Tech Preview is available now and includes the latest 7.1 version of XenDesktop that supports Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager.

Key Benefits:

  • Rapid Deployment – A fully configured XenDesktop 7.1 deployment that adheres to Citrix best practices is automatically installed in about an hour; a manual installation can take a day or more.
  • Reduction of human errors and their unwanted consequences – IT administrators answer 9 questions about the XenDesktop deployment, including the VM Network to use, the domain to join, the SQL server used to host the database, the SCVMM server to host the desktops, and the administrative service accounts to connect to each of these resources.  Once this information is entered, the Service Template automation installs the XenDesktop infrastructure the same way, every time, ensuring consistency and correctness.
  • Reduction in cost of IT Operations – XenDesktop infrastructure consistently configured with automation is less costly to support because the configuration adheres to best practice standards.
  • Free highly skilled and knowledgeable staff from repetitive and mundane tasks – A Citrix administrator’s time is better spent ensuring that users get access to the applications they need, rather than on lengthy production installation tasks.
  • Simplified Eval to Retail Conversion – Windows Server 2012 and later, as well as XenDesktop 7.1, support conversion of evaluation product keys to retail keys.  This means that a successful POC deployment of the XenDesktop 7.1 Service Template is easily converted to a fully supported and properly configured production deployment.
  • Easy Scale-Out for greater capacity – SCVMM Service Templates support a scale-out model to increase user capacity.  For example, as user demand increases additional XenDesktop Controllers and StoreFront servers are easily added with a few clicks and are automatically joined to the XenDesktop site.

The XenDesktop Service Templates were developed and tested with the support of our friends and partners at Dell, who, in support of the release of XenDesktop 7.1 and the Service Template technical preview, are expected to launch new and innovative solutions that include these and other automation capabilities this quarter.  These solutions are based on the Dell DVS Enterprise for Citrix XenDesktop solutions.

Simplification of Distributed Deployments

The XenDesktop 7.1 in-box installation wizard is a fantastic user experience that automatically installs all the required prerequisites and XenDesktop components in under 30 minutes.  The result is a fully installed XenDesktop deployment, all on a single server, that is excellent for POCs and product evaluations.  The installation and configuration challenges occur when you want to install XenDesktop in production, with enterprise-class scalability, distributed across multiple servers.
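
To make the contrast concrete, deploying an imported Service Template from SCVMM comes down to a handful of PowerShell calls. This is only a sketch: the template, host group, and service names below are placeholders, and a real run would first supply the nine configuration answers mentioned above.

    # Find the imported XenDesktop service template in the VMM library
    Import-Module virtualmachinemanager
    $template  = Get-SCServiceTemplate -Name "XenDesktop 7.1"

    # Stage a service configuration against a host group, run placement, deploy
    $hostGroup = Get-SCVMHostGroup -Name "All Hosts"
    $config    = New-SCServiceConfiguration -ServiceTemplate $template `
                     -Name "XenDesktop-Site" -VMHostGroup $hostGroup
    Update-SCServiceConfiguration -ServiceConfiguration $config | Out-Null
    New-SCService -ServiceConfiguration $config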

Manual Installation Steps

[Screenshot: XenDesktop 7 manual installation steps]

Read more…

#Microsoft Desktop Hosting Reference Architecture Guides

October 28, 2013 Leave a comment

Wow, these are some compelling guides that Microsoft has delivered!! Have a look at them! But of course there’s always something more you want: let service providers provide DaaS services based on client OSes as well!!!

Microsoft has released two papers related to Desktop Hosting. The first is called “Desktop Hosting Reference Architecture Guide” and the second “Windows Azure Desktop Hosting Reference Architecture Guide”. Both documents provide a blueprint for creating secure, scalable, multi-tenant desktop hosting solutions using Windows Server 2012 and System Center 2012 SP1 Virtual Machine Manager, or using Windows Azure Infrastructure Services.

The documents are targeted at hosting providers that deliver desktop hosting via the Microsoft Service Provider Licensing Agreement (SPLA). Desktop hosting in this case is based on Windows Server with the Windows Desktop Experience feature enabled, not Microsoft’s client operating systems like Windows 7 or Windows 8.

For some reason, Microsoft still doesn’t want service providers to provide Desktops as a Service (DaaS) running on top of a Microsoft client OS, as outlined in the “Decoding Microsoft’s VDI Licensing Arcanum” paper, which virtualization.info covered in September this year.

The Desktop Hosting Reference Architecture Guide provides the following sections:

  • Desktop Hosting Service Logical Architecture
  • Service Layer
    • Tenant Environment
    • Provider Management and Perimeter Environments
  • Virtualization Layer
    • Hyper-V and Virtual Machine Manager
    • Scale-Out File Server
  • Physical Layer
    • Servers
    • Network
  • Tenant On-Premises Components
    • Clients
    • Active Directory Domain Services


The Windows Azure Desktop Hosting Reference Architecture covers the following topics:
