Archive

Posts Tagged ‘network’

#Gartner Magic Quadrant for Application Delivery Controllers – #ADC, #NetScaler, #Citrix

November 26, 2013

Citrix is keeping up the good work and is placed with F5 in the Leader quadrant!

Citrix is positioned in the Leaders Quadrant for Application Delivery Controllers for the seventh consecutive year: the Gartner Magic Quadrant report focuses on vendors' ability to solve complex application deployment challenges. Don't miss this chance to learn from Gartner's independent research.

NetScaler is well established as the industry’s leading internet delivery system, touching an estimated 75 percent of internet users each day. Citrix builds on this leadership to provide the world’s most advanced cloud networking platform, giving customers a single, integrated solution that brings the elasticity, simplicity and expandability of the cloud to any network. This combination helps customers deliver public and private cloud services with the best performance, security and reliability to any device. Learn more about the importance of this recognition by reading this recent press release.

Figure 1. Magic Quadrant for Application Delivery Controllers

Source: Gartner (October 2013). The full 2013 Gartner Application Delivery Controller Magic Quadrant report can be viewed on the Gartner website.

//Richard

#Citrix #ShareFile StorageZone controller 2.2 released – #BYOD

November 21, 2013

If you haven't seen this yet, have a look at what 2.2 now has to offer!

  • StorageZones for ShareFile Data — You can store ShareFile data in either Windows Azure cloud storage or a private single-tenant storage system that you maintain. You specify a storage option when you configure StorageZones for ShareFile Data. 
    Diagram of on-premises data storage

What’s new

StorageZones Controller 2.2 provides the following enhancements:

Support for Windows Azure storage containers — If you have a Windows Azure account, you can use an Azure storage container for your private data storage instead of a locally maintained share.

To get started, create a new zone and choose the Azure option when you configure StorageZones for ShareFile Data.

Click here to learn more
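
If you want to sanity-check the storage account and container before pointing a new zone at them, a short script can confirm the credentials and endpoint work. Below is a minimal Python sketch using the azure-storage-blob SDK; the connection string and container name are placeholders, and StorageZones Controller naturally handles the real storage access once the zone is configured.

```python
# Minimal sketch: sanity-check that an Azure storage container is reachable
# before configuring it as the back end of a ShareFile StorageZone.
# The connection string and container name are placeholders; StorageZones
# Controller itself performs the real storage access once configured.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = (
    "DefaultEndpointsProtocol=https;"
    "AccountName=<storage-account>;"
    "AccountKey=<account-key>;"
    "EndpointSuffix=core.windows.net"
)
CONTAINER_NAME = "sharefiledata"  # hypothetical container name

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client(CONTAINER_NAME)

# Listing blobs (even an empty result) proves the account key and container
# name are valid and the endpoint is reachable from this machine.
for blob in container.list_blobs():
    print(blob.name)
print("Container is reachable.")
```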

Connectors to SharePoint root-level sites — You can now create a StorageZones Connector for a SharePoint root-level site or site collection, enabling users to navigate all of the subsites and document libraries in the site. To provide more limited access, you can continue to create connectors to individual SharePoint document libraries.

Click here to learn more

Connectors to user home drives based on Active Directory — You can now create a Connector for network file shares that reliably points to user home drives. To create a connector for user home drives, set the UNC path to the variable %homedrive%. StorageZones Controller will then create connectors based on the user home folder path property in Active Directory.
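
If you want to see what such a connector would resolve to for a given user, you can query the home folder attributes in Active Directory yourself. The following is a hedged Python sketch using the ldap3 library; the domain controller, service account, base DN and user name are all placeholders, and the controller reads these attributes on its own anyway.

```python
# Sketch: look up the Active Directory home folder that a %homedrive%
# connector would resolve to for a given user. Server, account, base DN
# and user name are placeholders; the StorageZones Controller reads these
# attributes itself, this is only for verification.
from ldap3 import ALL, Connection, Server

server = Server("dc01.example.com", get_info=ALL)
conn = Connection(
    server,
    user="EXAMPLE\\svc-sharefile",  # hypothetical service account
    password="<password>",
    auto_bind=True,
)

conn.search(
    search_base="dc=example,dc=com",
    search_filter="(sAMAccountName=jdoe)",      # the user to check
    attributes=["homeDirectory", "homeDrive"],  # UNC path and drive letter
)

for entry in conn.entries:
    print("Home drive: ", entry.homeDrive)
    print("Home folder:", entry.homeDirectory)  # e.g. \\fileserver\home\jdoe
```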

Installation on non-English operating systems — You can install the English version of StorageZones Controller on the following operating system versions: French, German, Japanese, Simplified Chinese, and Spanish.

Read more here!

//Richard

#Gartner report – How to Choose Between #Hyper-V and #vSphere – #IaaS

November 19, 2013

The constant battle between the hypervisor and the orchestration of IaaS etc. is of course continuing! But I must say it is really fun that Microsoft is getting more and more mature with its offerings in this space. Great job!

One of the things I tend to think about most is the cost, scalability and flexibility of the infrastructure we build, and how we build it. I often see that we tend to do what we've done for so many years now: we buy our SAN/NAS storage, we buy our servers (leaning towards blade servers because we think that's the latest and coolest), and then we try to squeeze that into some sort of POD/FlexPods/UCS or whatever we like to call it, to find our optimal “volume of Compute, Network and Storage” that we can scale. But is this scalable in the way the bigger cloud players like Google, Amazon etc. scale? Is this 2013 state of the art? I think we're just fooling ourselves a bit and building whatever we've built for all these years, without really providing the business with anything new… but that's my view… I know what I'd look at, and most of you who have read my earlier blog posts know that I love scaling out and doing it more like the big players, using something like Nutanix, and making sure you choose the right IaaS components as part of that stack, as well as the orchestration layer (OpenStack, System Center, CloudStack, Cloud Platform or whatever you prefer after you've done your homework).

Back to the topic a bit: I'd say that the hypervisor is of no importance anymore, which is why everyone is giving it away for free or to the open source community! Vendors are now after the IaaS/PaaS orchestration layer, because if they get that business they have nested their way into your business processes. That's where the value will ultimately be delivered, as IT services in an automated way, once you've got your business services and processes in place. By then it's harder to make a change, and they will live fat and happy on you for some years to come! 😉

Read more…

#XenDesktop 7.1 Service Template Tech Preview for System Center 2012 Virtual Machine Manager – #SCVMM

November 5, 2013

This is interesting! Really good and can’t wait to try it out!

Introduction

Let's face it, installing distributed, enterprise-class virtual desktop and server-based computing infrastructure is time-consuming and complex. The infrastructure consists of many components that are installed on individual servers and then configured to work together. Traditionally this has largely been a manual, error-prone process.

The Citrix XenDesktop 7.1 Service Template for System Center 2012 Virtual Machine Manager (SCVMM) leverages the rich automation capabilities available in Microsoft’s private cloud offering to significantly streamline and simplify the installation experience.  The XenDesktop 7.1 Service Template enables rapid deployment of virtual app and desktop infrastructure on Microsoft System Center 2012 private clouds.  This Tech Preview is available now and includes the latest 7.1 version of XenDesktop that supports Windows Server 2012 R2 and System Center 2012 R2 Virtual Machine Manager.

Key Benefits:

  • Rapid Deployment – A fully configured XenDesktop 7.1 deployment that adheres to Citrix best practices is automatically installed in about an hour; a manual installation can take a day or more.
  • Reduction of human errors and their unwanted consequences – IT administrators answer 9 questions about the XenDesktop deployment, including the VM network to use, the domain to join, the SQL Server used to host the database, the SCVMM server to host the desktops, and the administrative service accounts used to connect to each of these resources. Once this information is entered, the Service Template automation installs the XenDesktop infrastructure the same way, every time, ensuring consistency and correctness.
  • Reduction in cost of IT Operations – XenDesktop infrastructure consistently configured with automation is less costly to support because the configuration adheres to best practice standards.
  • Free highly skilled and knowledgeable staff from repetitive, mundane tasks – A Citrix administrator's time is better spent ensuring that users get access to the applications they need, rather than on lengthy production installation tasks.
  • Simplified Eval to Retail Conversion – Windows Server 2012 and later, as well as XenDesktop 7.1, support conversion of evaluation product keys to retail keys.  This means that a successful POC deployment of the XenDesktop 7.1 Service Template is easily converted to a fully supported and properly configured production deployment.
  • Easy Scale-Out for greater capacity – SCVMM Service Templates support a scale-out model to increase user capacity.  For example, as user demand increases additional XenDesktop Controllers and StoreFront servers are easily added with a few clicks and are automatically joined to the XenDesktop site.

The XenDesktop Service Templates were developed and tested with the support of our friends and partners at Dell, who, in support of the release of XenDesktop 7.1 and the Service Template technical preview, are expected to launch new and innovative solutions that include these and other automation capabilities this quarter.  These solutions are based on the Dell DVS Enterprise for Citrix XenDesktop solutions.

Simplification of Distributed Deployments

The XenDesktop 7.1 in-box installation wizard is a fantastic user experience that automatically installs all the required prerequisites and XenDesktop components in under 30 minutes.  The result is a fully installed XenDesktop deployment, all on a single server, that is excellent for POCs and product evaluations.  The installation and configuration challenges occur when you want to install XenDesktop in production, with enterprise-class scalability, distributed across multiple servers.

Manual Installation Steps

XenDesktop 7 manual installation steps

Read more…

Making #OpenStack Grizzly Deployments Less Hairy – #Puppet, #PuppetLabs

October 29, 2013
Interesting! OpenStack needs a bit more “simplicity”! 😉
October 24, 2013 by Chris Hoge in OpenStack

Today, I’m excited to announce a new module from Puppet Labs for OpenStack Grizzly. I’ve been working on this module with the goal of demonstrating how to simplify OpenStack deployments by identifying their independent components and customizing them for your environment.

The puppetlabs-grizzly module is a multi-node deployment of OpenStack built on the puppetlabs-openstack modules. There are two core differences in how it handles deploying OpenStack resources. First, it uses a “roles and profiles” model. Roles allow you to identify a node’s function, and profiles are the components that describe that role. For example, a typical controller node is composed of messaging, database and API profiles. Roles and profiles allow you to clearly define what a node does with a role, while being flexible enough to mix profiles to compose new roles.

The second difference is that the module leverages Hiera, a database that allows you to store configuration settings in a hierarchy of text files. Hiera can use Facter facts about a given node to set values for module parameters, rather than storing those values in the module itself. If you have to change a network setting or password, Hiera allows you to change it in your Hiera text file hierarchy, rather than changing it in the module.
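
To make the idea concrete, here is a small conceptual Python sketch of a Hiera-style lookup. This is not Hiera or Puppet itself, just the principle that the most specific layer wins; the file paths and the example key are hypothetical.

```python
# Conceptual sketch of a Hiera-style hierarchical lookup (not Hiera itself).
# Each YAML file is one layer; more specific layers override common defaults.
import yaml  # PyYAML

HIERARCHY = [
    "hieradata/nodes/controller01.yaml",  # most specific: this exact node
    "hieradata/roles/controller.yaml",    # the node's role
    "hieradata/common.yaml",              # defaults shared by everything
]

def lookup(key):
    """Return the value for `key` from the first layer that defines it."""
    for path in HIERARCHY:
        try:
            with open(path) as f:
                data = yaml.safe_load(f) or {}
        except FileNotFoundError:
            continue
        if key in data:
            return data[key]
    raise KeyError(key)

# A module parameter (hypothetical key) is resolved from the text files
# instead of being hard-coded inside the Puppet module:
print(lookup("mysql::server::root_password"))
```

The point is that a password or network setting then lives in one text file in the hierarchy rather than inside the module, so changing it is a one-file edit.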

Check out parts 1 and 2 of the demo, which walks you through how to deploy OpenStack with the puppetlabs-grizzly module.

Multi-node OpenStack Grizzly with Puppet Enterprise: Deployment (Part 1 of 2)

#Microsoft Desktop Hosting Reference Architecture Guides

October 28, 2013

Wow, these are some compelling guides that Microsoft has delivered!! Have a look at them! But of course there's always something more you want: let service providers provide DaaS services based on client OSes as well!!!

Microsoft has released two papers related to desktop hosting: the "Desktop Hosting Reference Architecture Guide" and the "Windows Azure Desktop Hosting Reference Architecture Guide". Both documents provide a blueprint for creating secure, scalable, multi-tenant desktop hosting solutions using Windows Server 2012 and System Center 2012 SP1 Virtual Machine Manager, or using Windows Azure Infrastructure Services.

The documents are targeted at hosting providers that deliver desktop hosting via the Microsoft Service Provider Licensing Agreement (SPLA). Desktop hosting in this case is based on Windows Server with the Windows Desktop Experience feature enabled, not on Microsoft's client operating systems like Windows 7 or Windows 8.

For some reason, Microsoft still doesn’t want service providers to provide Desktops as a Service (DaaS) running on top of a Microsoft Client OS, as outlined in the “Decoding Microsoft’s VDI Licensing Arcanum” paper which virtualization.info covered in September this year.

The Desktop Hosting Reference Architecture Guide provides the following sections:

  • Desktop Hosting Service Logical Architecture
  • Service Layer
    • Tenant Environment
    • Provider Management and Perimeter Environments
  • Virtualization Layer
    • Hyper-V and Virtual Machine Manager
    • Scale-Out File Server
  • Physical Layer
    • Servers
    • Network
  • Tenant On-Premises Components
    • Clients
    • Active Directory Domain Services


The Windows Azure Desktop Hosting Reference Architecture covers the following topics:

How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras

September 13, 2013

Here we are again… a lot of companies and Solution Architects are scratching their heads thinking about how we’re going to do it “this time”.

Most of you out there have something today, probably running XenApp on your VMware or XenServer hypervisor with an FC SAN or something, perhaps provisioned using PVS or just managed individually. Most likely there is also a “problem” with talking to the Storage team, because the storage service they manage for the IaaS service isn't built for the type of workloads that XenApp and XenDesktop (VDI) require.

So how are you going to do it this time? Are you going to challenge the Storage and Server/IaaS services, be innovative, and review the new, cooler products and capabilities that now exist out there? They are totally changing the way we build Virtual Cloud Computing solutions, where business agility, simplicity, cost savings, performance and simple scale-out are important!

There is no one solution for everything… but I'm getting more and more impressed by some of the “new” players on the market when it comes to providing simple yet powerful, high-performing Virtual Cloud Computing products. One in particular is Nutanix, which EnvokeIT has partnered with, and it is a truly stunning product.

But as many great blog posts about choosing a storage solution for your VDI environment have pointed out, you truly need to understand what your service will require from the underlying dependency services. And is it really worth doing it the old way? You have your team that manages the IaaS service, and most of the time it just provides a way of ordering/provisioning VMs; the “VDI” team then leverages that using PVS or MCS. Some companies are not even at the point where they can order that VM as a service or provision it from the image provisioning (PVS/MCS) service; everything is manual, yet they call it an IaaS service… is it then a real IaaS service? My answer would be no… but let's get back to the point I was trying to make!

These HW, hypervisor, network and storage (and sometimes orchestrator) components are often managed by different teams. Each team is also, most of the time, not really up to date on what a virtualization/VDI service will require from them and their components. They are very competent at understanding the traditional workload of running a web server VM or similar, but not at dealing with boot storms from hundreds or thousands of VDIs booting up, people logging in at the same time, and the whole pattern of IOPS generated across these VMs' life cycle.
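
To put rough numbers on why this matters, here is a quick back-of-envelope sketch in Python. Every figure in it is an illustrative assumption, not sizing guidance for any particular product.

```python
# Back-of-envelope boot-storm estimate. Every figure below is an
# illustrative assumption, not sizing guidance for any product.
desktops = 1000              # VDI instances in the environment
boot_iops_per_vm = 300       # peak IOPS one desktop can demand while booting (assumption)
steady_iops_per_vm = 10      # typical steady-state IOPS per desktop (assumption)
boot_overlap = 0.25          # share of desktops booting at the same moment (assumption)

peak_boot_iops = desktops * boot_iops_per_vm * boot_overlap
steady_state_iops = desktops * steady_iops_per_vm

print(f"Peak boot-storm IOPS: {peak_boot_iops:,.0f}")     # 75,000 in this sketch
print(f"Steady-state IOPS:    {steady_state_iops:,.0f}")  # 10,000 in this sketch
```

Even with a conservative overlap factor, the boot storm lands close to an order of magnitude above steady state, which is exactly the pattern a storage service sized for ordinary server VMs tends to miss.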

This is where I'd suggest everyone challenge their traditional view on how to build the virtualization and storage services that Hosted Shared Desktops (XenApp/RDS) and Hosted Virtual Desktops (VDI/XenDesktop) run on!

You can reduce the complexity, reduce your operational costs and integrate Nutanix as a real power compute part of your internal/private cloud service!

One thing that is also kind of cool is the integration possibilities of the Nutanix product with OpenStack and other cloud management products through its REST APIs. And it supports running Hyper-V, VMware ESXi and KVM as hypervisors in this lovely bundled product.
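
As a taste of what that REST integration can look like, here is a hedged Python sketch that lists the VMs a cluster knows about. The cluster address, credentials, endpoint path and field names are assumptions on my part for illustration; check the Nutanix REST API documentation for the actual resources.

```python
# Illustrative sketch of talking to a Nutanix cluster over its REST API.
# The cluster address, credentials, endpoint path and field names are
# placeholders/assumptions; consult the Nutanix REST API documentation
# for the actual resources.
import requests

CLUSTER = "https://nutanix-cluster.example.com:9440"
AUTH = ("admin", "<password>")  # hypothetical credentials

resp = requests.get(
    f"{CLUSTER}/PrismGateway/services/rest/v1/vms",  # endpoint assumed for illustration
    auth=AUTH,
    verify=False,  # appliances often use self-signed certs; verify properly in production
)
resp.raise_for_status()

for vm in resp.json().get("entities", []):
    print(vm.get("vmName"), vm.get("powerState"))
```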

If you want the nitty gritty details about this product I highly recommend that you read the Nutanix Bible post by Steven Poitras here.


Read more…

Organizational Challenges with #VDI – #Citrix

And yet another good blog post from Citrix, this one by Wayne Baker. This is an interesting topic, and I must say the post still goes into a lot of the technical aspects, but there are also more “soft” organisational aspects to look into, such as the service delivery/governance model and the process changes that are often missed. And as Wayne also highlights below, and it is worth mentioning again, there is the impact on the network, which was also covered well in this previous post: #Citrix blog post – Get Up To Speed On #XenDesktop Bandwidth Requirements

Back to the post itself:

One of the biggest challenges I repeatedly come across when working with large customers attempting desktop transformation projects, is the internal structure of the organisation. I don’t mean that the organisation itself is a problem, rather that the project they are attempting spans so many areas of responsibility it can cause significant friction. Many of these customers undertake the projects as a purely technical exercise, but I’m here to tell you it’s also an exercise in organisational change!

One of the things I see most often is a “Desktop” team consisting of all the people who traditionally manage the end-points, and a totally disparate “Server” team who handle all the server virtualization and back-end work. There's also the “Networks” team to worry about, and often the “Storage” team are in the mix too! Bridging those gaps can be one of the areas where friction begins to show. In my role I tend to be involved across all the teams, and having discussions with all of those people alerts me to where weaknesses may lie in the project. For example, the requirements for server virtualization tend to be significantly different from the requirements for desktop virtualization, yet when I point out the differing resource allocations for XenApp and XenDesktop deployments to the server virtualization team, one of the questions they most often ask is, “Why would you want to do THAT?!”

Now that’s not to say that all teams are like this and – sweeping generalizations aside – I have worked with some incredibly good ones, but increasingly there are examples where the integration of teams causes massive tension. The only way to overcome this situation is to address the root cause – organizational change. Managing desktops was (and in many places still is) a bit of a black art, combining vast organically grown scripts and software distribution mechanisms into an intricately woven (and difficult to unpick!) tapestry. Managing the server estate has become an exercise in managing workloads and minimising/maximising the hardware allocations to provide the required level of service and reducing the footprint in the datacentre. Two very distinct skill-sets!

The other two teams which tend to get a hard time during these types of projects are the networks and storage teams – this usually manifests itself when discussing streaming technologies and their relative impacts on the network and storage layers. What is often overlooked however is that any of the teams can have a significant impact on the end-user experience – when the helpdesk takes the call from an irate user it’s going to require a good look at all of the areas to decipher where the issue lies. The helpdesk typically handle the call as a regular desktop call and don’t document the call in a way which would help the disparate teams discover the root cause, which only adds to the problem! A poorly performing desktop/application delivery infrastructure can be caused by any one of the interwoven areas, and this towering of teams makes troubleshooting very difficult, as there is always a risk that each team doesn’t have enough visibility of the other areas to provide insight into the problem.

Organizations that do not take a wholesale look at how they are planning to migrate that desktop tapestry into the darkened world of the datacentre are the ones who, as the project trundles on, come to realise that the project will never truly be the amazing place that the sales guy told them it would be. Given the amount of time, money and political will invested in these projects, it is a fundamental issue that organizations need to address.

So what are the next steps? Hopefully everyone will have a comprehensive set of requirements defined which can drive forward a design, something along the lines of:

1) Understand the current desktop estate:

Read more…

#Microsoft finds a new way to deliver a private #cloud in a box – #Azure via @maryjofoley

Interesting!!!! 🙂

It took three years from when it was first announced, but Microsoft may have found a way to deliver a private cloud in a box.


The company’s vision and strategy for doing this has gone through many twists and turns.

Microsoft's original plan was to provide its largest partners and even a few select enterprise users a so-called Azure Appliance. Announced in 2010, the Azure Appliances were to be carried by Dell, Fujitsu and HP. These OEMs were to provide the servers, which could be installed in partner and select enterprise customers' datacenters. Microsoft was supposed to provide and maintain Windows Azure as a service to these servers.

The only partner that ever delivered an Azure Appliance was Fujitsu, which announced availability in August 2011. But some time in the past few months, Microsoft ended up dropping its Azure Appliance plans, without ever officially announcing it was dead.

Read more…

#Citrix #NetScaler 10 on Amazon Web Services – #AWS

Yes, it’s here! 🙂

Mainstream IT is fast embracing the enterprise cloud transformation, and selecting the right cloud networking technologies has thus quickly emerged as an imperative. As mainstream IT adopts IaaS (Infrastructure as a Service) cloud services, it will require a combination of the elasticity and flexibility expected of cloud offerings and the powerful advanced networking services used within emerging enterprise cloud datacenters.

Citrix® NetScaler® 10 delivers elasticity, simplicity and expandability of the cloud to enterprise cloud datacenters and already powers the largest and most successful public clouds in the world. With NetScaler 10, Citrix delivers a comprehensive cloud network platform that mainstream enterprises can leverage to fully embrace a cloud-first network design. 

Citrix and Amazon Web Services (AWS) have come together to deliver industry-leading application delivery controller technology. NetScaler on AWS delivers the same services used to ensure the availability, scalability and security of the largest public and private clouds for AWS environments. Whether the need is to optimize, secure or control delivery of enterprise and cloud services, NetScaler for AWS can help accomplish these initiatives economically, and according to business demands. 

The full suite of NetScaler capabilities such as availability, acceleration, offload and security functionality is available in AWS, enabling users to leverage tried-and-true NetScaler functionality such as rewrites and redirects, content caching, Citrix Access Gateway™ Enterprise SSL VPN, and application firewall within their AWS deployments. Additional benefits include usage of Citrix CloudBridge™ and Citrix Branch Repeater™ as a joint solution. 
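
As an illustration of getting the same NetScaler functionality in AWS, a VPX instance can be configured over the NITRO REST interface just as it would be on premises. The sketch below creates a basic load-balancing vserver; the management address, credentials and payload fields are placeholders, and the NITRO API reference remains the authority on resource schemas.

```python
# Sketch: create a basic load-balancing vserver on a NetScaler VPX running
# in AWS via the NITRO REST interface. The management address, credentials
# and payload fields are placeholders; the NITRO API reference is the
# authority on the exact resource schemas.
import requests

NSIP = "https://10.0.1.10"     # management (NSIP) address of the VPX instance (placeholder)
HEADERS = {
    "X-NITRO-USER": "nsroot",  # admin user name (assumption)
    "X-NITRO-PASS": "<password>",
    "Content-Type": "application/json",
}

payload = {
    "lbvserver": {
        "name": "lb_vip_web",
        "servicetype": "HTTP",
        "ipv46": "10.0.1.100",  # VIP inside the VPC (placeholder)
        "port": 80,
    }
}

resp = requests.post(
    f"{NSIP}/nitro/v1/config/lbvserver",
    json=payload,
    headers=HEADERS,
    verify=False,  # example only; use proper certificates in production
)
resp.raise_for_status()
print("lbvserver created, HTTP status:", resp.status_code)
```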

Citrix NetScaler transforms the cloud into an extension of the datacenter by eliminating the barriers to enterprise-class cloud deployments. Together, NetScaler and AWS deliver a broad set of capabilities for enterprise IT:

Hybrid Cloud Environment 

Hybrid clouds that span enterprise datacenters and extend into AWS can benefit from the same cloud networking platform, significantly easing…

Continue reading here!

//Richard
