Archive
Gartner Identifies the Top 10 Strategic Technology Trends for 2015 – #Nutanix, #WebScale, #Dell, #EnvokeIT, #Gartner
As usual it’s very interesting when Gartner takes a look at the trends for the coming year. I must say that I agree with many of them. One trend in particular is very close to my heart and, in my view, should have been on the agenda of most CIOs well before 2015: Web-Scale IT.
Why haven’t more enterprise and solution architects looked earlier at how to simplify the delivery of the “commodity” service that IaaS should be in today’s IT world? Yes, I know that most enterprises have a “legacy” environment that is hard to simply transform: they have a service delivery organisation with certain competences, and they are being bombarded by salesmen from the older legacy providers telling them that this new way is scary (up until they come up with a web-scale story of their own, of course). But it’s time to wake up and look at how you can change your compute, network and storage components to reduce complexity, increase flexibility and agility, focus on your core business (the apps and services on top) and reduce your TCO.
One way is of course to move to the cloud and let someone else worry about this, but I don’t yet see the larger enterprises doing that. There is hesitation, mostly because they haven’t gotten to the point of understanding the TCO model and how to compare their as-is costs with the figures they get from the costing tools of Azure, Amazon and others. Why is this? My view is that most don’t have a clear understanding of their own as-is TCO. They understand what a server costs and what storage costs, but not the full TCO: facility/datacenter costs, power and cooling, hardware costs, support and operational costs and license costs, brought together in a model they can understand and compare with “the cloud”.
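To make that comparison concrete, here is a minimal sketch of how an as-is TCO model could be structured. The cost categories are the ones listed above; every figure and the `annual_tco` helper are hypothetical placeholders of my own, not real benchmarks, and a real comparison should use your own numbers together with the official Azure or Amazon pricing calculators.

```python
# Hypothetical sketch of an annual as-is TCO model for on-premises IaaS.
# The cost categories mirror the ones mentioned above; every number is a
# placeholder to be replaced with figures from your own environment.

def annual_tco(costs: dict) -> float:
    """Sum all yearly cost components into a single annual TCO figure."""
    return float(sum(costs.values()))

as_is_costs = {
    "facility_datacenter": 120_000,  # floor space, racks, physical security
    "power_and_cooling":    80_000,  # energy for servers and HVAC
    "hardware":            250_000,  # compute, network and storage HW (yearly depreciation)
    "support_contracts":    60_000,  # vendor support and maintenance
    "operations":          300_000,  # the service delivery organisation
    "licenses":            150_000,  # hypervisor, OS and management tooling
}

# A yearly figure you would get from a public cloud costing tool (placeholder).
cloud_estimate_per_year = 850_000

on_prem = annual_tco(as_is_costs)
print(f"As-is annual TCO: {on_prem:>12,.0f}")
print(f"Cloud estimate:   {cloud_estimate_per_year:>12,.0f}")
print(f"Difference:       {on_prem - cloud_estimate_per_year:>12,.0f}")
```

The point is not the numbers themselves; it is that you cannot compare anything with “the cloud” until every one of those categories has a figure in it.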
Ok, as usual I’m getting a bit sidetracked, but I love this topic, so I must encourage you to contact EnvokeIT if you need help understanding the Web-Scale IT concept and how it can add value to you and your business. We work with Nutanix and Dell and can assist in assessing your existing as-is solution and in forming the to-be target architecture, and the strategy to get there, based on your requirements and needs. Of course we’re not locked into Dell or Nutanix; we also have experience with Azure and other public cloud providers, as well as other hardware vendor solutions such as HP, NetApp and more.
If you’d like to see a really cool solution that is coming, have a look at my previous post, which includes a short and cool video: Dell + Nutanix = awesome!
Here are the top 10 trends for 2015 that Gartner has identified:
Analysts Examine Top Industry Trends at Gartner Symposium/ITxpo 2014, October 5-9 in Orlando
Gartner, Inc. today highlighted the top 10 technology trends that will be strategic for most organizations in 2015. Analysts presented their findings during the sold out Gartner Symposium/ITxpo, which is taking place here through Thursday.
Gartner defines a strategic technology trend as one with the potential for significant impact on the organization in the next three years. Factors that denote significant impact include a high potential for disruption to the business, end users or IT, the need for a major investment, or the risk of being late to adopt. These technologies impact the organization’s long-term plans, programs and initiatives.
Read more…
#Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7) – Introduction
This is a great blog post series! Good job Cristian Edwards!
Hi Virtualization gurus,
For the past six months I have been working on internal readiness around Hyper-V networking in 2012 R2, covering all the options and functionalities that exist and how to make them work together, and I have realized that a common question in our team and from our customers is what the best practices or best approaches are when defining the Hyper-V network architecture of your private cloud or virtualization farm. Hence I decided to write this series of posts, which I think might be helpful, at least for brainstorming the best approach for each particular scenario. The reality is that every environment is different and uses different hardware, but at least I can help you identify five common scenarios for squeezing the most performance out of your hardware.
I want to make clear that there is not just one right answer or configuration, and your hardware will help you determine the best configuration for a robust, reliable and performant Hyper-V network architecture. Please note that I will make some personal recommendations based on my experience. These recommendations might or might not match the official, generic recommendations from Microsoft, so please call your support contact in case of any doubt.
The series will contain these posts:
1. Hyper-V 2012 R2 Network Architectures Series (Part 1 of 7 ) – Introduction (This Post)
5. Hyper-V 2012 R2 Network Architectures Series (Part 5 of 7) – Converged Networks using Dynamic QoS
6. Hyper-V 2012 R2 Network Architectures Series (Part 6 of 7 ) – Converged Network using CNAs
7. Hyper-V 2012 R2 Network Architectures Series (Part 7 of 7 ) – Conclusions and Summary
8. Hyper-V 2012 R2 Network Architectures (Part 8 of 7) – Bonus
Continue reading here!
//Richard
#Ericsson to build three Global #ICT Centers
This is really cool!
- High-tech, sustainable global ICT Centers to support R&D and Services organizations to bring innovation faster to the market
- Two centers located in Europe; one in North America
- Another step in providing industry leading cloud-enabled technology
- Also establishing a new R&D hardware design building in Stockholm
Ericsson (NASDAQ:ERIC) is planning to invest approximately SEK 7 billion in the coming five years to build three global ICT Centers. Two will be located in Sweden, in Stockholm and Linköping, while the third one, in North America, will be located in Canada, in Montreal, Quebec.
The centers will be located close to Ericsson’s main R&D hubs and will be the new platform for more than 24,000 Ericsson R&D engineers around the world, supporting their lean and agile ways of working. Teams of experts will be able to collaborate across borders more easily and efficiently.
Ericsson’s customers will also be able to connect remotely for interoperability testing and trials, and will have early access to innovation in new business services in real time, from the comfort of their own locations.
The three ICT Centers combined will be up to 120,000 square meters, approximately the size of 14 football fields. The new centers will house the company’s complete portfolio, enabling the R&D organization to develop and verify solutions, creating the foundation for the next generation technology and cloud-based services.
Hans Vestberg, President and CEO, Ericsson, says: “The new ICT Centers are examples of Ericsson’s passion for driving the development of the industry. Great ideas come from collaboration, and at these centers we will push the boundaries of possibility on next generation technology and services. Flexibility enabled by new ways of working will realize innovation faster to the market and to our customers.”
The centers will have a leading-edge design, built in a modular and scalable way, securing an efficient use of resources and space adaptable to the business needs. Ericsson estimates that the combination of architecture, design and locations will reduce energy consumption up to 40 percent. This significant reduction in carbon footprint is instrumental in Ericsson’s vision of a more sustainable future.
The two ICT Centers in Sweden will begin initial operations at the end of 2013 and the end of 2014 respectively, and the North American ICT Center in early 2015.
The new hardware design building in Stockholm, Sweden, will provide similar benefits as the global ICT Centers in terms of equipment use and energy savings. It will enable R&D hardware design activities in Stockholm to be consolidated into one modern, creative environment…
Continue reading here!
Designing a virtual desktop environment? – #XenDesktop, #Citrix
This is a good blog post by Niraj Patel.
Questions: How do you successfully design a virtual desktop solution for 1,000 users? How about 10,000 users? What about 50,000 users? What are the questions you should be asking? Most importantly, where do you start?
Answer: Hire Citrix Consulting for your next virtual desktop project! OK, that is one right answer, but not the only way to do it. The successful way to design a virtual desktop environment is to follow a modular approach using the 5 layers defined within the Citrix Virtual Desktop Handbook. Breaking a virtual desktop project apart into different layers provides a modular approach that reduces risk and increases the chances of your project’s success, no matter how large your planned deployment is. What are the 5 layers, and what are some examples of the decisions defined within them? (See the sketch after the list for one way of keeping track of them.)
- User Layer: Recommended end-points and the required user functionality.
- Access Layer: How the user will connect to their desktop hosted in the desktop layer. Decisions for local vs. remote access, firewalls and SSL-VPN communications are addressed within this layer.
- Desktop Layer: The desktop layer contains the user’s virtual desktop and is subdivided into three components: image, applications, and personalization. Decisions related to FlexCast model, application requirements, policy, and profile design are addressed in this layer.
- Control Layer: Within the control layer decisions surrounding the management and maintenance of the overall solution are addressed. The control layer is comprised of access controllers, desktop controllers and infrastructure controllers. Access controllers support the access layer, desktop controllers support the desktop layer, and infrastructure controllers provide the underlying support for each component within the architecture.
- Hardware Layer: The hardware layer contains the physical devices required to support the entire solution, and includes servers, processors, memory and storage devices.
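To show how this modular approach keeps design decisions separated by layer, here is a small illustrative sketch that models the five layers and their decision areas as plain data. The layer names and decision topics come from the list above; the tracking structure itself is just a hypothetical illustration, not something taken from the Citrix Virtual Desktop Handbook.

```python
# Illustrative sketch: tracking design decisions per layer of the 5-layer
# virtual desktop model described above. The decision topics are taken from
# the list; the status tracking is purely hypothetical.

from dataclasses import dataclass, field

@dataclass
class Layer:
    name: str
    # Maps a decision topic to the chosen option; an empty string means the
    # decision is still open.
    decisions: dict = field(default_factory=dict)

    def open_decisions(self):
        return [topic for topic, choice in self.decisions.items() if not choice]

design = [
    Layer("User",     {"end-points": "", "user functionality": ""}),
    Layer("Access",   {"local vs. remote access": "", "firewalls": "", "SSL-VPN": ""}),
    Layer("Desktop",  {"FlexCast model": "", "application requirements": "",
                       "policy design": "", "profile design": ""}),
    Layer("Control",  {"access controllers": "", "desktop controllers": "",
                       "infrastructure controllers": ""}),
    Layer("Hardware", {"servers": "", "processors": "", "memory": "", "storage": ""}),
]

# Example: record one decision in the Desktop layer, then list what is still open.
design[2].decisions["FlexCast model"] = "Hosted Shared"
for layer in design:
    print(f"{layer.name} layer, open decisions: {layer.open_decisions()}")
```

Working top-down and closing out each layer’s decisions before moving on is one simple way to apply the modular approach.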
Want to know how to get started? Try the Citrix Project Accelerator. Input criteria around your business requirements, technical expertise, end-user requirements, applications and so on to get a starting point for your architecture based on the 5-layer model.
Lastly, don’t forget to come see SYN318…
Continue reading here!
//Richard
Delivering #Citrix #XenApp on #Hyper-V with PVS and #McAfee – via @TonySanchez_CTX
Good Citrix blog post from Tony Sanchez!
Architectures, whether physical or virtual, should be flexible enough to adapt to different workloads, allowing them to support changing business needs. Although implementing a new IT architecture takes time and careful planning, the process of testing and validating an architecture should be easy. In the case of a virtual desktop architecture, test engineers should be able to follow a repeatable pattern, step by step, simply swapping out the workload to validate the architecture under different anticipated user densities, application workloads and configuration assumptions. The procedure should be as easy as learning a new series of dance steps (think PSY’s Gangnam Style, the most-watched dance video on YouTube). Which leads me, as a test engineer, to ask: in the case of VDI, why can’t a hypervisor simply learn a new workload just like I might learn a new sequence of dance steps?
Luckily for test engineers, Citrix FlexCast® provides the ability to learn and deliver any workload type by leveraging the power of Citrix Provisioning Services® (PVS). Recently I worked with engineers from Citrix and Dell, collaborating to build a FlexCast reference architecture for deploying XenApp® and XenDesktop® on Hyper-V on a Dell infrastructure. Testing of this reference architecture looked at how XenApp and XenDesktop performed under various workloads, altering hypervisor configuration settings and examining the overall user experience and user densities. At a moment’s notice, FlexCast and PVS enabled a simple switch of the architecture to a new workload.
Based on that reference architecture effort, we recently began a Single Server Scalability (SSS) test using the latest hardware and software releases available. This blog covers that effort, what I call the “XenApp dance step, FlexCast style”, and how XenApp workloads perform on Hyper-V. (A follow-on blog article will cover an alternate “dance” sequence for XenDesktop.) The particular focus here is how the configuration of the McAfee virus scanning software can impact performance and scaling.
In previous blogs I described the testing process and methodology, which leverage the Login VSI test harness, along with key tips for success. Since those same methods and recommendations apply here, let’s review the configurations we used for this scalability testing, as well as the workloads and the actual test results.
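As a rough illustration of what such a comparison boils down to, the sketch below tabulates single-server session density for a few antivirus configurations. The configuration names and figures are invented placeholders for illustration only, not results from the tests described in this post; the real numbers come out of the Login VSI runs.

```python
# Hypothetical sketch: comparing single-server XenApp session density under
# different antivirus configurations. All names and numbers below are
# placeholders, not actual Login VSI results.

baseline_sessions = 200  # assumed density with no antivirus installed

test_runs = {
    "no antivirus":                  200,
    "McAfee, default settings":      160,
    "McAfee, tuned scan exclusions": 190,
}

print(f"{'Configuration':32} {'Sessions':>8} {'vs. baseline':>13}")
for config, sessions in test_runs.items():
    delta_pct = (sessions - baseline_sessions) / baseline_sessions * 100
    print(f"{config:32} {sessions:>8} {delta_pct:>12.1f}%")
```

Whether and how much the scanner configuration actually costs in density is exactly what the Login VSI results in the full post are meant to answer.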
For background reading, I highly recommend that you review Frank Anderson’s post on XenApp physical versus virtual testing results with Hyper-V. Frank is my colleague and a great resource for insights about testing, including implementation tips and general best practices. In addition, the related Dell and Citrix white paper describing the FlexCast reference architecture for deploying XenApp and XenDesktop on Hyper-V is available here.
Continue reading here!
//Richard
Performance Tuning Guidelines for #Windows Server 2012
This is a whitepaper that every techie out there dealing with Windows Server 2012 should read!
About This Download
This guide describes important tuning parameters and settings that you can adjust to improve the performance and energy efficiency of the Windows Server 2012 operating system. It describes each setting and its potential effect to help you make an informed decision about its relevance to your system, workload, and performance goals.
The guide is for information technology (IT) professionals and system administrators who need to tune the performance of a server that is running Windows Server 2012.
Included in this white paper:
- Choosing and Tuning Server Hardware
- Performance Tuning for the Networking Subsystem
- Performance Tools for Network Workloads
- Performance Tuning for the Storage Subsystem
- Performance Tuning for Web Servers
- Performance Tuning for File Servers
- Performance Tuning for a File Server Workload (FSCT)
- Performance Counters for SMB 3.0
- Performance Tuning for File Server Workload (SPECsfs2008)
- Performance Tuning for Active Directory Servers
- Performance Tuning for Remote Desktop Session Host (Formerly Terminal Server)
- Performance Tuning for Remote Desktop Virtualization Host
- Performance Tuning for Remote Desktop Gateway
- Performance Tuning Remote Desktop Services Workload for Knowledge Workers
- Performance Tuning for Virtualization Servers
- Performance Tuning for SAP Sales and Distribution
- Performance Tuning for OLTP Workloads
Download here!
//Richard