Archive
Solving the Compute and Storage scalability dilemma – #Nutanix, via @josh_odgers
Compute, Network and Storage is a hot topic, as I’ve written in blog posts before this one (How to pick virtualization (HW, NW, Storage) solution for your #VDI environment? – #Nutanix, @StevenPoitras) … and still a lot of colleagues and customers are struggling to find better solutions and architectures.
How can we ensure that we get the same or better performance from our new architecture? How can we scale in a simpler and more linear manner? How can we ensure that we don’t have a single point of failure for all of our VMs? How are others scaling and doing this in a better way?
I’m not a storage expert, but I do read that many companies out there are working on finding the optimal solution for Compute and Storage, on getting the cost down, and on being left with a simpler architecture to manage…
This is a topic most organisations need to address now that more and more of them are starting to build their private clouds: how are you going to scale it, and how can you get closer to the delivery that the big players provide? Gartner even had Software-Defined Storage (SDS) as the number 2 trend going forward: #Gartner Outlines 10 IT Trends To Watch – via @MichealRoth, #Nutanix, #VMWare
Right now I see Nutanix as the leader here! They rock! Just have a look at this linear scalability:
If you want to learn more how Nutanix can bring great value please contact us at EnvokeIT!
For a two-minute intro to Nutanix, have a look at these videos:
Overview:
How To: #XenMobile #MDM 8.5 Deployment Part 3: Policies – #Citrix
And here you have part 3 of Adam’s great blog post series!

In this 3rd part of my 7 part series on XenMobile MDM 8.5 we will focus on policies. Policies within MDM allow you to control a multitude of features on your end users’ mobile devices, including WiFi, email, VPN, location services, most functionality of the device (camera, FaceTime, etc.) and App Store access. Most of the configuration you do to control and limit/restrict/configure your end users’ devices will be done from this tab. This tab is also where you can create automated actions, including notifying your users when they have fallen out of compliance.
If you would like to read the other parts in this article series please go to:
- How To: XenMobile MDM 8.5 Deployment Part 1: Installation
- How To: XenMobile MDM 8.5 Deployment Part 2: Basic Configuration
In this article I want to cover a “base” set of policy configurations that will give you a feel for how the policies work in general. By no means does this cover the breadth of what you can do with MDM, but it at least gives you a glimpse.
I want to accomplish the following in this article:
- Set a passcode policy on the device
- Block iCloud from syncing documents
- Preconfigure a WiFi network on my device (so that your users can come into the office with WiFi already configured, without ever having been given the password)
- Blacklist Dropbox, Box, and SkyDrive applications
- Notify the user that their device is Out of Compliance (OoC) if those apps are installed
- Mark the device as OoC in the dashboard
Configure a Passcode Policy
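To get a feel for what a passcode policy ultimately becomes on an iOS device – an Apple configuration profile containing a passcode payload – here is a minimal sketch in Python. The identifiers and values below are illustrative assumptions, not what XenMobile actually emits; the MDM server builds the real payload from the values you enter in the policy form.

```python
import plistlib

# Illustrative passcode payload; the PayloadType is Apple's documented
# type for passcode policies, everything else here is an assumed example.
passcode_payload = {
    "PayloadType": "com.apple.mobiledevice.passwordpolicy",
    "PayloadIdentifier": "com.example.mdm.passcode",  # assumed identifier
    "PayloadUUID": "00000000-0000-0000-0000-000000000001",
    "PayloadVersion": 1,
    "forcePIN": True,         # require a passcode at all
    "minLength": 6,           # minimum passcode length
    "maxFailedAttempts": 10,  # wipe the device after ten failed tries
    "maxInactivity": 5,       # auto-lock after five minutes idle
}

profile = {
    "PayloadType": "Configuration",
    "PayloadIdentifier": "com.example.mdm.profile",  # assumed identifier
    "PayloadUUID": "00000000-0000-0000-0000-000000000002",
    "PayloadVersion": 1,
    "PayloadDisplayName": "Passcode policy",
    "PayloadContent": [passcode_payload],
}

# Serialize to the XML plist format iOS expects in a .mobileconfig file
print(plistlib.dumps(profile).decode())
```

The same payload model underpins the WiFi and restriction policies in the list above; only the payload type and its keys change.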
#Ericsson to build three Global #ICT Centers
This is really cool!
- High-tech, sustainable global ICT Centers to support R&D and Services organizations to bring innovation faster to the market
- Two centers located in Europe; one in North America
- Another step in providing industry leading cloud-enabled technology
- Also establishing a new R&D hardware design building in Stockholm
Ericsson (NASDAQ:ERIC) is planning to invest approximately SEK 7 billion in the coming five years to build three global ICT Centers. Two will be located in Sweden, in Stockholm and Linköping, while the third one, in North America, will be located in Canada, in Montreal, Quebec.
The centers will be located close to Ericsson’s main R&D hubs and will be the new platform for more than 24,000 Ericsson R&D engineers around the world, supporting their lean and agile ways of working. Teams of experts will be able to collaborate beyond borders more easily and efficiently.
Ericsson’s customers will also be able to connect remotely for interoperability testing and trials, and will have early access to innovation in new business services in real time from the comfort of their own locations.
The three ICT Centers combined will be up to 120,000 square meters, approximately the size of 14 football fields. The new centers will house the company’s complete portfolio, enabling the R&D organization to develop and verify solutions, creating the foundation for the next generation technology and cloud-based services.
Hans Vestberg, President and CEO, Ericsson, says: “The new ICT Centers are examples of Ericsson’s passion for driving the development of the industry. Great ideas come from collaboration, and at these centers we will push the boundaries of possibility on next generation technology and services. Flexibility enabled by new ways of working will realize innovation faster to the market and to our customers.”
The centers will have a leading-edge design, built in a modular and scalable way, securing an efficient use of resources and space adaptable to the business needs. Ericsson estimates that the combination of architecture, design and locations will reduce energy consumption up to 40 percent. This significant reduction in carbon footprint is instrumental in Ericsson’s vision of a more sustainable future.
The two ICT Centers in Sweden will begin initial operations from the end of 2013 and the end of 2014 respectively, and the North American ICT Center from early 2015.
The new hardware design building in Stockholm, Sweden, will provide similar benefits to the global ICT Centers in its use of equipment and energy savings. It will enable R&D hardware design activities in Stockholm to consolidate into one modern, creative environment…
Continue reading here!
#Microsoft to acquire #Nokia’s devices & services business
This is interesting, but I must admit that I’m not that surprised…
Microsoft to acquire Nokia’s devices & services business, license Nokia’s patents and mapping services
REDMOND, Washington and ESPOO, Finland – Sept. 3, 2013 – Microsoft Corporation and Nokia Corporation today announced that the Boards of Directors for both companies have decided to enter into a transaction whereby Microsoft will purchase substantially all of Nokia’s Devices & Services business, license Nokia’s patents, and license and use Nokia’s mapping services.
Under the terms of the agreement, Microsoft will pay EUR 3.79 billion to purchase substantially all of Nokia’s Devices & Services business, and EUR 1.65 billion to license Nokia’s patents, for a total transaction price of EUR 5.44 billion in cash. Microsoft will draw upon its overseas cash resources to fund the transaction. The transaction is expected to close in the first quarter of 2014, subject to approval by Nokia’s shareholders, regulatory approvals and other closing conditions.
Building on the partnership with Nokia announced in February 2011 and the increasing success of Nokia’s Lumia smartphones, Microsoft aims to accelerate the growth of its share and profit in mobile devices through faster innovation, increased synergies, and unified branding and marketing. For Nokia, this transaction is expected to be significantly accretive to earnings, strengthen its financial position, and provide a solid basis for future investment in its continuing businesses. Read more…
Are #Microsoft Losing Friends and Alienating IT Pros? – via @andyjmorgan, @stevegoodman
This is a great blog post by Steve Goodman!
Regular readers of my blog will know I’m a big fan of Microsoft products. As well as being an Exchange MVP, I’m very much a cloud fan – you’ll find me at Exchange Connections in a few weeks’ time talking about migrating to Exchange Online, amongst other subjects. What I’m about to write doesn’t change any of that, and I hope the right people will read this and have a serious re-think.
Microsoft’s “Devices and Services” strategy is leaving many in the industry very confused at the moment.
If you’ve been living under a rock – I’ll give you an overview. They’ve dropped MCSM, the leading certification for their Server products. They’ve dropped TechNet subscriptions, the benchmark for how a vendor lets its IT pros evaluate and learn about their range of products. And they’ve been very lax with the quality of updates for their on-premises range of products, Exchange included, whilst at the same time releasing features only in their cloud products.
A range of MCMs and MCSMs – Microsoft employees included – have been expressing their opinions here, here, here, here and in numerous other places. We’ve discussed the TechNet subscriptions on The UC Architects’ podcast.
One thing is key – this kind of behaviour absolutely destroys trust in Microsoft. After the last round of anti-trust issues, it took a long time for Microsoft to gain a position of trust along with many years of incrementally releasing better and better products. A few years ago Microsoft was just about “good enough” to let into your datacentre; now it’s beginning to lead the way, especially with Hyper-V, Exchange and Lync.
Before I get started on Microsoft’s cloud strategy, let’s take a jovial look at what (from my experience) is Google’s strategy:
- Tell the customer their internal IT sucks (tactfully), ideally without IT present so they can talk about the brilliance of being “all in” the cloud without a dose of reality getting in the way.
- Class all line of business apps as irrelevant – the sales person was probably still in nursery when they were deployed. Because those apps are old, they must be shit.
- Show a picture of something old and irrelevant – like a mill generating its own energy. Tell them that’s what their IT is! You, the customer, don’t run a power station, so why would you run your own IT? If you do run your own IT you are irrelevant and getting left behind.
- Make out the customer’s own IT is actually less reliable than it is. Don’t mention that recent on-premises products cost less, are easy for the right people to implement and from a user perspective are often more reliable than an overseas cloud service.
- Only provide your products in the cloud so once you’re in… you’re in.
- Don’t let anyone from the outside be a real expert on the technology. You don’t need a Google “MVP”, because 99% of Google server products can only be provided by one company.
- Once you’ve signed up a customer, remember: you don’t need to give them good support. They can’t go anywhere without spending money on a third-party solution to get their data out.
From a Microsoft MVP point of view, Google’s strategy is brilliant. It means that although we like a lot of their products, it drives away customers in their droves. Microsoft’s traditional approach to the cloud – and its partner ecosystem – would be a breath of fresh air to someone who’s been through the Google machine.
Unfortunately, based on recent experiences by myself and others – the above is actually looking pretty similar to Microsoft’s new strategy….
Continue reading here!
//Richard
#Citrix #XenMobile 8.5 MAM upgrade! Part 1 – #StoreFront, #AppController, #NetScaler
In this little blog series you’ll follow a little upgrade process to XenMobile 8.5 for Mobile Application Management (previously known as CloudGateway).
Ok, I don’t exactly know where to begin. I must first say that Citrix is THE master when it comes to renaming products, updating/changing the architecture and changing consoles (claiming to reduce the number of them every year while at the same time introducing new ones).
How hard can it be to make crystal clear documentation and upgrade processes that work and are easy? I can already feel that my tone in this blog post is “a bit” negative… but I think that Citrix actually deserves it this time.
I must now take a step back and calm down and point out that Citrix is delivering some MAJOR changes and good news/features in the new XenMobile 8.5 release though! It’s great (when you’ve got it up and running) and I must say that I don’t see anyone that is near them in delivering all these capabilities in a nice end-to-end delivery!! 🙂
Have a look at everything that is new, the deployment scenarios etc. here before you even start thinking about upgrading or changing your current NetScaler, StoreFront and AppController environment!
Once you’ve started to read the different design scenarios you’ll see that App Controller can be placed in front of StoreFront, behind StoreFront or entirely without StoreFront… all the options just make your head spin! Citrix doesn’t make it clear in a simple way how all of this should work with Receiver and Worx Home, depending on whether the device is on the internal network or coming in externally through NetScaler, or which scenarios support the capabilities you need – there is just text that tries to explain it. And I find the pictures and text a bit misleading:

As you can see above, the App Controller is added as a “Farm” just as in 2.6, but is that still true in version 2.8 of App Controller?
If you have a look at the text on this page it gets even more confusing: Read more…
#Gartner Magic Quadrant for Cloud Infrastructure as a Service – #IaaS
Market Definition/Description
Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, software as a service (SaaS) providers, cloud services brokerages or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.
In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and, optionally, an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center.
We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as a technology platform; we call the latter cloud-enabled system infrastructure (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services. (See “Technology Overview for Cloud-Enabled System Infrastructure” and “Don’t Be Fooled by Offerings Falsely Masquerading as Cloud Infrastructure as a Service” for more on this distinction.)
This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and “virtual data centers” (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both “cloud-native”….
Figure 1. Magic Quadrant for Cloud Infrastructure as a Service
Source: Gartner (August 2013)
Continue reading here!
//Richard
True or False: Always use Provisioning Services – #Citrix, #PVS, #MCS
Another good blog post from Daniel Feller:
Test your Citrix muscle…
True or False: Always use Provisioning Services
Answer: False
There has always been this aura around Machine Creation Services that it could not hold a candle to Provisioning Services; that you would be completely insane to implement this feature in anything but the simplest/smallest deployments.
How did we get to this myth? Back in March of 2011 I blogged about deciding between MCS and PVS. I wanted to help people decide between using Provisioning Services and the newly released Machine Creation Services. Back in 2011, MCS was an alternative to PVS in that it was easy to set up, but had some limitations when compared to PVS. My blog and decision tree were used to steer people down the PVS route except for the use cases where MCS made sense.
Two and a half years have passed, and over that time MCS has grown up. Unfortunately, I got very busy and didn’t keep this decision matrix updated. I blame the XenDesktop product group. How dare they improve our products. Don’t they know this causes me more work?
It’s time to make some updates based on the improvements in XenDesktop 7 (and these improvements aren’t just on the MCS side, but on the PVS side as well).

So let’s break it down:
- Hosted VDI desktops only: MCS in XenDesktop 7 now supports XenApp hosts. This is really cool, and I am very happy about this improvement, as so many organizations understand that XA plays a huge part in any successful VDI project.
- Dedicated Desktops: Before PVD, I was no fan of doing dedicated VDI desktops with PVS. With PVD, PVS dedicated desktops are now much more feasible, as they always were with MCS.
- Boot/Logon Storms: PVS, if configured correctly, caches many of the reads in system memory, helping to reduce the read IOPS. Hypervisors have also improved over the past two years to help with the large number of read disk operations, which lessens the impact of boot/logon storms when using MCS.
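The read-caching point can be made concrete with some back-of-the-envelope arithmetic. A quick sketch in Python – the desktop count, per-desktop read IOPS and cache hit ratio are assumed numbers for illustration, not Citrix-published figures:

```python
def storm_read_iops(desktops, read_iops_per_desktop, cache_hit_ratio):
    """Read IOPS that still reach shared storage once cache hits
    are absorbed in RAM (by the PVS server or the hypervisor)."""
    total = desktops * read_iops_per_desktop
    return total * (1 - cache_hit_ratio)

# 500 desktops booting at ~50 read IOPS each (assumed numbers):
print(storm_read_iops(500, 50, 0.0))   # no read cache at all
print(storm_read_iops(500, 50, 0.9))   # 90% of reads served from memory
```

With a 90% hit ratio the storage array sees roughly a tenth of the read load, which is why both a well-tuned PVS server and a caching hypervisor soften the boot storm.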
Organizational Challenges with #VDI – #Citrix
And yet another good blog post from Citrix and Wayne Baker. This is an interesting topic, and I must say that the blog post still goes into a lot of the technical aspects, but there are also “softer” organisational aspects to look into, like the service delivery/governance model and process changes, that are often missed. And as Wayne also highlights below, one thing worth mentioning again is the impact on the network, which was also covered well in this previous post: #Citrix blog post – Get Up To Speed On #XenDesktop Bandwidth Requirements
Back to the post itself:
One of the biggest challenges I repeatedly come across when working with large customers attempting desktop transformation projects, is the internal structure of the organisation. I don’t mean that the organisation itself is a problem, rather that the project they are attempting spans so many areas of responsibility it can cause significant friction. Many of these customers undertake the projects as a purely technical exercise, but I’m here to tell you it’s also an exercise in organisational change!
One of the things I see most often is a “Desktop” team consisting of all the people who traditionally manage all the end-points, and a totally disparate “Server” team who handle all the server virtualization and back-end work. There’s also the “Networks” team to worry about, and often the “Storage” team are in the mix too! Bridging those gaps can be one of the areas where friction begins to show. In my role I tend to be involved across all the teams, and having discussions with all of those people alerts me to where weaknesses may lie in the project. For example, the requirements for server virtualization tend to be significantly different from the requirements for desktop virtualization, but when discussing these changes with the server virtualization team, one of the most often asked questions is, “Why would you want to do THAT?!” when pointing out the differing resource allocations for both XenApp and XenDesktop deployments.
Now that’s not to say that all teams are like this and – sweeping generalizations aside – I have worked with some incredibly good ones, but increasingly there are examples where the integration of teams causes massive tension. The only way to overcome this situation is to address the root cause – organizational change. Managing desktops was (and in many places still is) a bit of a black art, combining vast organically grown scripts and software distribution mechanisms into an intricately woven (and difficult to unpick!) tapestry. Managing the server estate has become an exercise in managing workloads and minimising/maximising the hardware allocations to provide the required level of service and reducing the footprint in the datacentre. Two very distinct skill-sets!
The other two teams which tend to get a hard time during these types of projects are the networks and storage teams – this usually manifests itself when discussing streaming technologies and their relative impacts on the network and storage layers. What is often overlooked however is that any of the teams can have a significant impact on the end-user experience – when the helpdesk takes the call from an irate user it’s going to require a good look at all of the areas to decipher where the issue lies. The helpdesk typically handle the call as a regular desktop call and don’t document the call in a way which would help the disparate teams discover the root cause, which only adds to the problem! A poorly performing desktop/application delivery infrastructure can be caused by any one of the interwoven areas, and this towering of teams makes troubleshooting very difficult, as there is always a risk that each team doesn’t have enough visibility of the other areas to provide insight into the problem.
Organizations that do not take a wholesale look at how they are planning to migrate that desktop tapestry into the darkened world of the datacentre are the ones who, as the project trundles on, come to realise that the project will never truly be the amazing place that the sales guy told them it would be. Given the amount of time, money and political will invested in these projects, it is a fundamental issue that organizations need to address.
So what are the next steps? Hopefully everyone will have a comprehensive set of requirements defined which can drive forward a design, something along the lines of:
1) Understand the current desktop estate:
Today is the RTM for #Windows Server 2012 R2! – #Microsoft
Microsoft blog post about the RTM release of Windows Server 2012 R2:
As noted in my earlier post about the availability dates for the 2012 R2 wave, we are counting the days until our partners and customers can start using these products. Today I am proud to announce a big milestone: Windows Server 2012 R2 has been released to manufacturing!
This means that we are handing the software over to our hardware partners for them to complete their final system validations; this is the final step before putting the next generation of Windows Server in your hands.
While every release milestone provides ample reason to celebrate (and trust me, there’s going to be a party here in Redmond), we are all particularly excited this time around because we’ve delivered so much in such a short amount of time. The amazing new features in this release cover virtualization, storage, networking, management, access, information protection, and much more.
By any measure, this is a lot more than just one year’s worth of innovation since the release of Windows Server 2012!
As many readers have noticed, this release is being handled a bit differently than in years past. With previous releases, shortly after the RTM Microsoft provided access to software through our MSDN and TechNet subscriptions. Because this release was built and delivered at a much faster pace than past products, and because we want to ensure that you get the very highest quality product, we made the decision to complete the final validation phases prior to distributing the release. It is enormously important to all of us here that you have the best possible experience using R2 to build your private and hybrid cloud infrastructure.
We are all incredibly proud of this release and, on behalf of the Windows Server engineering team, we are honored to share this release with you. The opportunity to deliver such a wide range of powerful, interoperable R2 products is a powerful example of the Common Engineering Criteria that I’ve written about before.
Also of note: The next update to Windows Intune will be available at the time of GA, and we are also on track to deliver System Center 2012 R2.
Thank you to everyone who provided feedback during….
Continue reading here!
//Richard