Archive
#Ericsson to build three Global #ICT Centers
This is really cool!
- High-tech, sustainable global ICT Centers to support R&D and Services organizations to bring innovation faster to the market
- Two centers located in Europe; one in North America
- Another step in providing industry leading cloud-enabled technology
- Also establishing a new R&D hardware design building in Stockholm
Ericsson (NASDAQ:ERIC) is planning to invest approximately SEK 7 billion in the coming five years to build three global ICT Centers. Two will be located in Sweden, in Stockholm and Linköping, while the third one, in North America, will be located in Canada, in Montreal, Quebec.
The centers will be located close to Ericsson’s main R&D hubs and will be the new platform for more than 24,000 Ericsson R&D engineers around the world, supporting their lean and agile ways of working. Teams of experts will be able to collaborate across borders more easily and efficiently.
Ericsson’s customers will also be able to connect remotely for interoperability testing and trials, and will have early access to innovation on new business services in real time from the comfort of their own locations.
The three ICT Centers combined will be up to 120,000 square meters, approximately the size of 14 football fields. The new centers will house the company’s complete portfolio, enabling the R&D organization to develop and verify solutions, creating the foundation for the next generation technology and cloud-based services.
Hans Vestberg, President and CEO, Ericsson, says: “The new ICT Centers are examples of Ericsson’s passion for driving the development of the industry. Great ideas come from collaboration, and at these centers we will push the boundaries of possibility on next generation technology and services. Flexibility enabled by new ways of working will realize innovation faster to the market and to our customers.”
The centers will have a leading-edge design, built in a modular and scalable way, securing an efficient use of resources and space adaptable to the business needs. Ericsson estimates that the combination of architecture, design and locations will reduce energy consumption up to 40 percent. This significant reduction in carbon footprint is instrumental in Ericsson’s vision of a more sustainable future.
The two ICT Centers in Sweden will begin initial operations at the end of 2013 and the end of 2014 respectively, and the North American ICT Center from early 2015.
The new hardware design building in Stockholm, Sweden, will provide benefits similar to those of the global ICT Centers in terms of equipment use and energy savings. It will enable R&D hardware design activities in Stockholm to be consolidated into one modern, creative environment…
Continue reading here!
#Microsoft to acquire #Nokia’s devices & services business
This is interesting, but I must admit that I’m not that surprised…
Microsoft to acquire Nokia’s devices & services business, license Nokia’s patents and mapping services
REDMOND, Washington and ESPOO, Finland – Sept. 3, 2013 – Microsoft Corporation and Nokia Corporation today announced that the Boards of Directors for both companies have decided to enter into a transaction whereby Microsoft will purchase substantially all of Nokia’s Devices & Services business, license Nokia’s patents, and license and use Nokia’s mapping services.
Under the terms of the agreement, Microsoft will pay EUR 3.79 billion to purchase substantially all of Nokia’s Devices & Services business, and EUR 1.65 billion to license Nokia’s patents, for a total transaction price of EUR 5.44 billion in cash. Microsoft will draw upon its overseas cash resources to fund the transaction. The transaction is expected to close in the first quarter of 2014, subject to approval by Nokia’s shareholders, regulatory approvals and other closing conditions.
Building on the partnership with Nokia announced in February 2011 and the increasing success of Nokia’s Lumia smartphones, Microsoft aims to accelerate the growth of its share and profit in mobile devices through faster innovation, increased synergies, and unified branding and marketing. For Nokia, this transaction is expected to be significantly accretive to earnings, strengthen its financial position, and provide a solid basis for future investment in its continuing businesses. Read more…
Are #Microsoft Losing Friends and Alienating IT Pros? – via @andyjmorgan, @stevegoodman
This is a great blog post by Steve Goodman!
Regular readers of my blog will know I’m a big fan of Microsoft products. As well as being an Exchange MVP, I’m very much a cloud fan – you’ll find me at Exchange Connections in a few weeks time talking about migrating to Exchange Online amongst other subjects. What I’m about to write doesn’t change any of that, and I hope the right people will read this and have a serious re-think.
Microsoft’s “Devices and Services” strategy is leaving many in the industry very confused at the moment.
If you’ve been living under a rock – I’ll give you an overview. They’ve dropped MCSM, the leading certification for their Server products. They’ve dropped TechNet subscriptions, the benchmark for how a vendor lets its IT pros evaluate and learn about their range of products. And they’ve been very lax with the quality of updates for their on-premises range of products, Exchange included, whilst at the same time releasing features only in their cloud products.
A range of MCMs and MCSMs – Microsoft employees included – have been expressing their opinions here, here, here, here and in numerous other places. We’ve discussed the TechNet Subscriptions on The UC Architects’ podcast.
One thing is key – this kind of behaviour absolutely destroys trust in Microsoft. After the last round of anti-trust issues, it took a long time for Microsoft to gain a position of trust along with many years of incrementally releasing better and better products. A few years ago Microsoft was just about “good enough” to let into your datacentre; now it’s beginning to lead the way, especially with Hyper-V, Exchange and Lync.
Before I get started on Microsoft’s cloud strategy, let’s take a jovial look at what (from my experience) is Google’s strategy:
- Tell the customer their internal IT sucks (tactfully), ideally without IT present so they can talk about the brilliance of being “all in” the cloud without a dose of reality getting in the way.
- Class all line of business apps as irrelevant – the sales person was probably still in nursery when they were deployed. Because those apps are old, they must be shit.
- Show a picture of something old and irrelevant – like a mill generating its own energy. Tell them that’s what their IT is! You, the customer, don’t run a power station, so why would you run your own IT? If you do run your own IT you are irrelevant and getting left behind.
- Make out the customer’s own IT is actually less reliable than it is. Don’t mention that recent on-premises products cost less, are easy for the right people to implement and from a user perspective are often more reliable than an overseas cloud service.
- Only provide your products in the cloud so once you’re in… you’re in.
- Don’t let anyone from the outside be a real expert on the technology. You don’t need a Google “MVP”, because 99% of Google server products can only be provided by one company.
- Once you’ve signed up a customer, remember: you don’t need to give them good support. They can’t go anywhere without spending money on a third-party solution to get their data out.
From a Microsoft MVP point of view, Google’s strategy is brilliant. It means that although we like a lot of their products, it drives customers away in droves. Microsoft’s traditional approach to the cloud – and its partner ecosystem – would be a breath of fresh air to someone who’s been through the Google machine.
Unfortunately, based on recent experiences of my own and of others, the above is actually looking pretty similar to Microsoft’s new strategy…
Continue reading here!
//Richard
#Citrix #XenMobile 8.5 MAM upgrade! Part 1 – #StoreFront, #AppController, #NetScaler
In this little blog series you’ll follow an upgrade process to XenMobile 8.5 for Mobile Application Management (previously known as CloudGateway).
Ok, I don’t exactly know where to begin. I must first say that Citrix is THE master when it comes to renaming products, updating/changing the architecture and changing consoles (claiming to reduce the number of them every year while at the same time introducing new ones).
How hard can it be to produce crystal-clear documentation and upgrade processes that work and are easy to follow? I can already feel that my tone in this blog post is “a bit” negative… but I think that Citrix actually deserves it this time.
I must now take a step back, calm down and point out that Citrix is delivering some MAJOR changes and good news/features in the new XenMobile 8.5 release though! It’s great (once you’ve got it up and running) and I must say that I don’t see anyone who comes near them in delivering all these capabilities in a nice end-to-end delivery!! 🙂
Have a look at everything that is new, the deployment scenarios etc. here before you even start thinking about upgrading or changing your current NetScaler, StoreFront and AppController environment!
Once you’ve started reading the different design scenarios you’ll see that App Controller can be placed in front of StoreFront, behind StoreFront or entirely without StoreFront… all the options just make your head spin! Citrix doesn’t really make it clear, in a simple way with plain text that explains it, how all of this should work with Receiver and Worx Home depending on whether the device is on the internal network or coming in externally through NetScaler, or which of the capabilities you need are supported in the different scenarios. And I find the pictures and text a bit misleading:

As you can see above, App Controller is added as a “Farm” just as in 2.6, but is that still the case in version 2.8 of App Controller?
If you have a look at the text on this page it gets even more confusing: Read more…
#Gartner Magic Quadrant for Cloud Infrastructure as a Service – #IaaS
Market Definition/Description
Cloud computing is a style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet technologies. Cloud infrastructure as a service (IaaS) is a type of cloud computing service; it parallels the infrastructure and data center initiatives of IT. Cloud compute IaaS constitutes the largest segment of this market (the broader IaaS market also includes cloud storage and cloud printing). Only cloud compute IaaS is evaluated in this Magic Quadrant; it does not cover cloud storage providers, platform as a service (PaaS) providers, software as a service (SaaS) providers, cloud services brokerages or any other type of cloud service provider, nor does it cover the hardware and software vendors that may be used to build cloud infrastructure. Furthermore, this Magic Quadrant is not an evaluation of the broad, generalized cloud computing strategies of the companies profiled.
In the context of this Magic Quadrant, cloud compute IaaS (hereafter referred to simply as “cloud IaaS” or “IaaS”) is defined as a standardized, highly automated offering, where compute resources, complemented by storage and networking capabilities, are owned by a service provider and offered to the customer on demand. The resources are scalable and elastic in near-real-time, and metered by use. Self-service interfaces are exposed directly to the customer, including a Web-based UI and, optionally, an API. The resources may be single-tenant or multitenant, and hosted by the service provider or on-premises in the customer’s data center.
We draw a distinction between cloud infrastructure as a service, and cloud infrastructure as a technology platform; we call the latter cloud-enabled system infrastructure (CESI). In cloud IaaS, the capabilities of a CESI are directly exposed to the customer through self-service. However, other services, including noncloud services, may be delivered on top of a CESI; these cloud-enabled services may include forms of managed hosting, data center outsourcing and other IT outsourcing services. In this Magic Quadrant, we evaluate only cloud IaaS offerings; we do not evaluate cloud-enabled services. (See “Technology Overview for Cloud-Enabled System Infrastructure” and “Don’t Be Fooled by Offerings Falsely Masquerading as Cloud Infrastructure as a Service” for more on this distinction.)
This Magic Quadrant covers all the common use cases for cloud IaaS, including development and testing, production environments (including those supporting mission-critical workloads) for both internal and customer-facing applications, batch computing (including high-performance computing [HPC]) and disaster recovery. It encompasses both single-application workloads and “virtual data centers” (VDCs) hosting many diverse workloads. It includes suitability for a wide range of application design patterns, including both “cloud-native”….
Figure 1. Magic Quadrant for Cloud Infrastructure as a Service
Source: Gartner (August 2013)
Continue reading here!
//Richard
True or False: Always use Provisioning Services – #Citrix, #PVS, #MCS
Another good blog post from Daniel Feller:
Test your Citrix muscle…
True or False: Always use Provisioning Services
Answer: False
There has always been this aura around Machine Creation Services in that it could not hold a candle to Provisioning Services; that you would be completely insane to implement this feature in any but the simplest/smallest deployments.
How did we get to this myth? Back in March of 2011 I blogged about deciding between MCS and PVS. I wanted to help people decide between using Provisioning Services and the newly released Machine Creation Services. Back in 2011, MCS was an alternative to PVS in that MCS was easy to set up, but had some limitations when compared to PVS. My blog and decision tree were used to help steer people down the PVS route except for the use cases where MCS made sense.
Two and a half years passed and over that time, MCS has grown up. Unfortunately, I got very busy and didn’t keep this decision matrix updated. I blame the XenDesktop product group. How dare they improve our products. Don’t they know this causes me more work?
It’s time to make some updates based on the improvements in XenDesktop 7 (and these improvements aren’t just on the MCS side but on the PVS side as well).

So let’s break it down:
- Hosted VDI desktops only: MCS in XenDesktop 7 now supports XenApp hosts. This is really cool, and I am very happy about this improvement, as so many organizations understand that XA plays a huge part in any successful VDI project.
- Dedicated Desktops: Before PVD, I was no fan of doing dedicated VDI desktops with PVS. With PVD, PVS dedicated desktops are now much more feasible, as they always were with MCS.
- Boot/Logon Storms: PVS, if configured correctly, caches many of the reads in system memory, helping to reduce the read IOPS. Hypervisors have also improved over the past two years at handling large numbers of read operations, which lessens the impact of boot/logon storms when using MCS (a rough sketch of this effect follows the list).
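To make the read-caching point a bit more concrete, here is a minimal back-of-the-envelope sketch (in Python) of how a RAM read cache dampens the boot-storm read IOPS that actually reach shared storage. The desktop count, per-desktop read figure and hit ratios are purely illustrative assumptions of mine, not Citrix numbers.

```python
# Back-of-the-envelope estimate of how a RAM-based read cache (as PVS uses,
# and as modern hypervisor read caches can provide for MCS) reduces the read
# IOPS that actually reach shared storage during a boot storm.
# All input figures below are illustrative assumptions.

def storage_read_iops(desktops, boot_read_iops_per_desktop, cache_hit_ratio):
    """Read IOPS hitting storage after the cache absorbs its share."""
    total_reads = desktops * boot_read_iops_per_desktop
    return total_reads * (1.0 - cache_hit_ratio)

if __name__ == "__main__":
    desktops = 200                # assumed number of concurrently booting desktops
    boot_reads = 80               # assumed read IOPS per desktop during boot
    for hit in (0.0, 0.7, 0.95):  # no cache vs. increasingly warm cache
        print(f"hit ratio {hit:.0%}: "
              f"{storage_read_iops(desktops, boot_reads, hit):,.0f} read IOPS to storage")
    # hit ratio 0%: 16,000 / 70%: 4,800 / 95%: 800
```

The exact numbers don’t matter; the point is that the higher the cache hit ratio, the less the boot/logon storm is felt by the storage back end.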
Organizational Challenges with #VDI – #Citrix
And yet another good blog post by Citrix and Wayne Baker. This is an interesting topic, and I must say that the blog post still goes into a lot of the technical aspects, but there are also more “soft” organisational aspects to look into, like the service delivery/governance model and the process changes that are often missed. Something Wayne also highlights below, and which is worth mentioning again, is the impact on the network, which was also covered well in this previous post: #Citrix blog post – Get Up To Speed On #XenDesktop Bandwidth Requirements
Back to the post itself:
One of the biggest challenges I repeatedly come across when working with large customers attempting desktop transformation projects, is the internal structure of the organisation. I don’t mean that the organisation itself is a problem, rather that the project they are attempting spans so many areas of responsibility it can cause significant friction. Many of these customers undertake the projects as a purely technical exercise, but I’m here to tell you it’s also an exercise in organisational change!
One of the things I see most often is a “Desktop” team consisting of all the people who traditionally manage all the end-points, and a totally disparate “Server” team who handle all the server virtualization and back-end work. There’s also the “Networks” team to worry about and often the “Storage” team are in the mix too! Bridging those gaps can be one of the areas where friction begins to show. In my role I tend to be involved across all the teams, and having discussions with all of those people alerts me to where weaknesses may lie in the project. For example, the requirements for server virtualization tend to be significantly different to the requirements for desktop virtualization, but when discussing these changes with the server virtualization team, one of the most often asked questions is, “Why would you want to do THAT?!” when pointing out the differing resource allocations for both XenApp and XenDesktop deployments.
Now that’s not to say that all teams are like this and – sweeping generalizations aside – I have worked with some incredibly good ones, but increasingly there are examples where the integration of teams causes massive tension. The only way to overcome this situation is to address the root cause – organizational change. Managing desktops was (and in many places still is) a bit of a black art, combining vast organically grown scripts and software distribution mechanisms into an intricately woven (and difficult to unpick!) tapestry. Managing the server estate has become an exercise in managing workloads and minimising/maximising the hardware allocations to provide the required level of service and reducing the footprint in the datacentre. Two very distinct skill-sets!
The other two teams which tend to get a hard time during these types of projects are the networks and storage teams – this usually manifests itself when discussing streaming technologies and their relative impacts on the network and storage layers. What is often overlooked however is that any of the teams can have a significant impact on the end-user experience – when the helpdesk takes the call from an irate user it’s going to require a good look at all of the areas to decipher where the issue lies. The helpdesk typically handle the call as a regular desktop call and don’t document the call in a way which would help the disparate teams discover the root cause, which only adds to the problem! A poorly performing desktop/application delivery infrastructure can be caused by any one of the interwoven areas, and this towering of teams makes troubleshooting very difficult, as there is always a risk that each team doesn’t have enough visibility of the other areas to provide insight into the problem.
Organizations that do not take a wholesale look at how they are planning to migrate that desktop tapestry into the darkened world of the datacentre are the ones who, as the project trundles on, come to realise that the project will never truly be the amazing place that the sales guy told them it would be. Given the amount of time, money and political will invested in these projects, it is a fundamental issue that organizations need to address.
So what are the next steps? Hopefully everyone will have a comprehensive set of requirements defined which can drive forward a design, something along the lines of:
1) Understand the current desktop estate:
Today is the RTM for #Windows Server 2012 R2! – #Microsoft
Microsoft blog post about the RTM release of Windows Server 2012 R2:
As noted in my earlier post about the availability dates for the 2012 R2 wave, we are counting the days until our partners and customers can start using these products. Today I am proud to announce a big milestone: Windows Server 2012 R2 has been released to manufacturing!
This means that we are handing the software over to our hardware partners for them to complete their final system validations; this is the final step before putting the next generation of Windows Server in your hands.
While every release milestone provides ample reason to celebrate (and trust me, there’s going to be a party here in Redmond), we are all particularly excited this time around because we’ve delivered so much in such a short amount of time. The amazing new features in this release cover virtualization, storage, networking, management, access, information protection, and much more.
By any measure, this is a lot more than just one year’s worth of innovation since the release of Windows Server 2012!
As many readers have noticed, this release is being handled a bit differently than in years past. With previous releases, shortly after the RTM Microsoft provided access to software through our MSDN and TechNet subscriptions. Because this release was built and delivered at a much faster pace than past products, and because we want to ensure that you get the very highest quality product, we made the decision to complete the final validation phases prior to distributing the release. It is enormously important to all of us here that you have the best possible experience using R2 to build your private and hybrid cloud infrastructure.
We are all incredibly proud of this release and, on behalf of the Windows Server engineering team, we are honored to share this release with you. The opportunity to deliver such a wide range of powerful, interoperable R2 products is a powerful example of the Common Engineering Criteria that I’ve written about before.
Also of note: The next update to Windows Intune will be available at the time of GA, and we are also on track to deliver System Center 2012 R2.
Thank you to everyone who provided feedback during….
Continue reading here!
//Richard
Microsoft is progressing quickly! – SkyDrive Pro updated to 25GB and improved sharing – via @BasvanKaam
I must say this once again: Microsoft looks to be on the right track when it comes to getting back as one strong supplier of services in the future/present “BYOD” world. As I wrote in my post #Microsoft – On the right track! – #Windows, #BYOD, #Citrix, Microsoft is now actually targeting many of the gaps that we see with today’s services for BYOx scenarios. For instance, how to manage just what you want on top of the device (Azure, Intune, SkyDrive, Work Folders etc.) in a controllable fashion, rather than a fully managed device that costs you a fortune to manage… ShareFile, Box and others are great solutions with many features that SkyDrive doesn’t have. But there is one thing that they all lack (or please enlighten me!!):
Encryption at rest on Windows, OS X and Linux OS’s/distributions: here all providers lean on you already having hard drive encryption like BitLocker etc. But who manages that then? Can you then say that your service is “BYOD-compliant”? I wouldn’t say so… It’s not only smartphones and tablet devices that we lose… but here Microsoft and SkyDrive may be the first to come with encryption on at least Windows 8.1 devices, in a somewhat manageable way…
But again back to the announcement from Microsoft and SkyDrive:
Microsoft announced today that it is giving business users more storage space and a better way to share files across multiple devices. As first reported by TechCrunch, through its SkyDrive Pro accounts, employees will now receive 25GB of storage to start out with, a sharp increase from 7GB — and even this capacity can be increased to 50GB or even 100GB. Additionally, using SkyDrive’s Shared with Me view, users can share files with their friends and co-workers securely and in real-time.
According to Microsoft Senior Product Managers Mark Kashman and Tejas Mehta, the new storage space limits will be available for both new and existing customers.
This certainly makes the service standout among its competitors, namely Dropbox and Box. It was only about a week or so ago when the latter heralded in the launch of a new pricing plan aiming to increase the number of small businesses using its service. For personal users, Box also wound up doubling the amount of free storage they received.
Here’s how you can figure out the overall storage for each user:
With Office 365, you get 25 GB of SkyDrive Pro storage + 25 GB of email storage + 5 GB for each site mailbox you create + your total available tenant storage, which for every Office 365 business customer starts at 10 GB + (500 MB x number of users).
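To make the arithmetic in that quoted formula a bit more concrete, here is a minimal Python sketch of how the pieces add up. The example inputs (100 users, 3 site mailboxes) are purely illustrative assumptions, and the split into per-user versus pooled storage is my own reading of the formula, not Microsoft’s wording.

```python
# Rough sketch of the SkyDrive Pro / Office 365 storage formula quoted above.
# Example inputs and the per-user vs. pooled split are illustrative assumptions.

def per_user_storage_gb():
    """Storage each user gets directly: SkyDrive Pro allowance + mailbox."""
    skydrive_pro_gb = 25   # per-user SkyDrive Pro storage
    email_gb = 25          # per-user email storage
    return skydrive_pro_gb + email_gb

def pooled_tenant_storage_gb(num_users, num_site_mailboxes):
    """Shared pool: 10 GB base + 500 MB per user + 5 GB per site mailbox."""
    base_gb = 10
    per_user_gb = 0.5              # 500 MB expressed in GB
    site_mailbox_gb = 5 * num_site_mailboxes
    return base_gb + per_user_gb * num_users + site_mailbox_gb

users, site_mailboxes = 100, 3     # assumed example values
print(f"Per user: {per_user_storage_gb()} GB")
print(f"Tenant pool: {pooled_tenant_storage_gb(users, site_mailboxes)} GB")
# -> Per user: 50 GB, Tenant pool: 75.0 GB
```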
While Dropbox, Box, and Hightail certainly are some of the popular services out there today, SkyDrive isn’t something to be trifled with either. Through its integration with the Surface, Windows Phone, and other Microsoft products, along with iOS and Android devices, it has the potential to be a very powerful service.
As for the new sharing feature, just like you would perhaps see in Google Drive or any other cloud storage service, SkyDrive Pro is now offering a Shared with Me view that lets you take a shared document and view, edit, re-share, download, and more — all as if it were in your own storage bin.
But Microsoft isn’t stopping there, as it is adding several minor, but interesting enhancements to SkyDrive. The company has also increased the overall file upload limit to its SharePoint Online service to 2GB per file. Files placed into the recycle bin will now remain…
Continue reading here!
//Richard
#Citrix #PVS vs. #MCS Revisited – #Nutanix, #Sanbolic
Another good blog post from Citrix and Nick Rintalan around the famous topic of whether to go for PVS or MCS! If you’re thinking about this topic then don’t miss this article. Also make sure you talk to someone who has implemented an image mgmt/provisioning service like this to get some details on lessons learnt etc.; with the changes in the hypervisor layer and the cache features this is getting really interesting…
AND don’t forget the really nice storage solutions that exist out there, like Nutanix and Melio, that really solve some of these challenges!!
http://go.nutanix.com/rs/nutanix/images/TG_XenDesktop_vSphere_on_Nutanix_RA.pdf
Melio Solutions – Virtual Desktop Infrastructure
Back to the Citrix blog post:
It’s been a few months since my last article, but rest assured, I’ve been keeping busy and I have a ton of stuff in my head that I’m committed to getting down on paper in the near future. Why so busy? Well, our Mobility products are keeping me busy for sure. But I also spent the last month or so preparing for 2 different sessions at BriForum Chicago. My colleague, Dan Allen, and I co-presented on the topics of IOPS and Folder Redirection. Once Brian makes the videos and decks available online, I’ll be sure to point people to them.
So what stuff do I want to get down on paper and turn into a future article? To name a few… MCS vs. PVS (revisited), NUMA and XA VM Sizing, XenMobile Lessons Learned “2.0”, and Virtualizing PVS Part 3. But let’s talk about that first topic of PVS vs MCS now.
Although BriForum (and Synergy) are always busy times, I always try to catch a few sessions by some of my favorite presenters. One of them is Jim Moyle and he actually inspired this article. If you don’t know Jim, he is one of our CTPs and works for Atlantis Computing – he also wrote one of the most informative papers on IOPS I’ve ever read. I swear there is not a month that goes by that I don’t get asked about PVS vs. MCS (pros and cons, what should I use, etc.). I’m not going to get into the pros and cons or tell you what to use since many folks like Dan Feller have done a good job of that already, even with beautiful decision trees. I might note that Barry Schiffer has an updated decision tree you might want to check out, too. But I do want to talk about one of the main reasons people often cite for not using MCS – it generates about “1.6x or 60% more IOPS compared to PVS“. And ever since Ken Bell sort of “documented” this in passing about 2-3 years ago, that’s sort of been Gospel and no one had challenged it. But our CCS team was seeing slightly different results in the field and Jim Moyle also decided to challenge that statement. And Jim shared the results of his MCS vs. PVS testing at BriForum this year – I think many folks were shocked by the results.
What were those results? Here is a summary of the things I thought were most interesting (a quick worked example follows the list):
- MCS generates 21.5% more average IOPS compared to PVS in the steady-state (not anywhere near 60%)
- This breaks down to about 8% more write IO and 13% more read IO
- MCS generates 45.2% more peak IOPS compared to PVS (this is closer to the 50-60% range that we originally documented)
- The read-to-write (R/W) IO ratio for PVS was 90%+ writes in both the steady-state and peak (nothing new here)
- The R/W ratio for MCS at peak was 47/53 (we’ve long said it’s about 50/50 for MCS, so nothing new here)
- The R/W ratio for MCS in the steady-state was 17/83 (this was a bit of a surprise, much like the first bullet)
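To put those percentages in perspective, here’s a quick worked example (a rough Python sketch) that applies the quoted deltas and ratios to an assumed PVS baseline. The baseline figures of 10 steady-state and 60 peak IOPS per desktop, and the 500-desktop count, are my own illustrative assumptions, not numbers from Jim Moyle’s testing.

```python
# Quick sanity-check applying the quoted MCS-vs-PVS deltas and ratios to an
# assumed PVS baseline of 10 steady-state IOPS and 60 peak IOPS per desktop.
# The baseline and desktop count are illustrative assumptions only.

pvs_steady_iops = 10.0
pvs_peak_iops = 60.0

mcs_steady_iops = pvs_steady_iops * 1.215   # "21.5% more average IOPS"
mcs_peak_iops = pvs_peak_iops * 1.452       # "45.2% more peak IOPS"

# Split MCS steady-state traffic using the quoted 17/83 read/write ratio.
mcs_steady_reads = mcs_steady_iops * 0.17
mcs_steady_writes = mcs_steady_iops * 0.83

print(f"MCS steady-state: {mcs_steady_iops:.1f} IOPS "
      f"({mcs_steady_reads:.1f} read / {mcs_steady_writes:.1f} write)")
print(f"MCS peak: {mcs_peak_iops:.1f} IOPS")

desktops = 500                              # assumed desktop count
extra = (mcs_steady_iops - pvs_steady_iops) * desktops
print(f"Extra steady-state IOPS vs PVS for {desktops} desktops: {extra:.0f}")
# -> ~12.2 IOPS (2.1 read / 10.1 write), peak ~87.1, extra ~1075
```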
So how can this be?!?
I think it’s critical to understand where our initial “1.5-1.6x” or “50-60%” statement comes from – that takes into account not just the steady-state, but also the boot and logon phases, which are mostly read IOPS and absolutely drive up the numbers for MCS. If you’re unfamiliar with the typical R/W ratios for a Windows VM during the various stages of its “life” (boot, logon, steady-state, idle, logoff, etc.), then this picture, courtesy of Project VRC, always does a good job explaining it succinctly:
We were also looking at peak IOPS and average IOPS in a single number – we didn’t provide two different numbers or break it down like Jim and I did above in the results, and a single IOPS number can be very misleading in itself. You don’t believe me? Just check out my BriForum presentation on IOPS and I’ll show you several examples of how…
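To tie the last two paragraphs together, here is a tiny blended-average sketch of why folding the read-heavy boot and logon phases into one number inflates the MCS figure. Every per-phase IOPS value and the measurement-window length below are invented assumptions for illustration only, not data from Citrix, Jim Moyle or Project VRC.

```python
# Illustration of how blending phases into a single "average IOPS" number can
# inflate the MCS figure when the measurement window is dominated by the
# read-heavy boot and logon phases. Every value below is an invented
# assumption for illustration only.

phases = {
    # phase: (minutes in the measurement window, pvs_iops, mcs_iops)
    "boot":         (2,  40, 90),   # read-heavy: PVS serves reads from cache,
    "logon":        (5,  30, 55),   # MCS reads go to storage
    "steady-state": (30, 10, 12),   # roughly the ~21.5% steady-state delta quoted above
}

def blended_average(column):
    """Time-weighted IOPS average over the window; column 0 = PVS, 1 = MCS."""
    total_minutes = sum(minutes for minutes, _, _ in phases.values())
    weighted = sum(minutes * values[column] for minutes, *values in phases.values())
    return weighted / total_minutes

pvs_avg, mcs_avg = blended_average(0), blended_average(1)
print(f"PVS blended average: {pvs_avg:.1f} IOPS")
print(f"MCS blended average: {mcs_avg:.1f} IOPS ({(mcs_avg / pvs_avg - 1):.0%} higher)")
# A short, boot/logon-heavy window lands the blend in roughly 1.5x territory
# even though the steady-state difference alone is far smaller; stretch the
# steady-state portion out and the blended gap shrinks toward ~20%.
```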
Continue reading here!
//Richard