
Magic Quadrant for Enterprise File Synchronization and Sharing – #ShareFile, #Citrix, #EMC, #Box, #Microsoft

October 10, 2014

It’s not new, but it’s something I discussed the other day with a customer: who is the market leader when it comes to “corporate Dropbox” solutions for enterprises? Gartner has updated its Magic Quadrant for Enterprise File Synchronization and Sharing services/solutions, and it’s a good read, I must say.

You know I am a Citrix fan; I like their story, and I think that, looking across the stack of virtual workplace offerings, they are far superior to the other players: from providing “legacy” services like Windows apps and desktops, to Enterprise Mobility Management capabilities, to all the networking capabilities needed for end-to-end service delivery. So it’s really nice to see that they are picking up in ability to execute and are competing with EMC in the Leaders quadrant!

I just hope that Citrix can stay in the lead and, price- and capacity-wise, keep pace with the newcomers that are starting to offer really large storage capacity as part of their cloud offerings. I still think the capabilities and features of ShareFile are really great, though in some respects others like Box and Microsoft are coming with nice features as well. So let’s see who will rule this market. Currently I think ShareFile is a really strong player for enterprises, but Microsoft will continue to grow, and I just wish they would add the additional features around security and so on that enterprises often require, so they can get into the bigger companies as well.

Figure 1. Magic Quadrant for Enterprise File Synchronization and Sharing

Source: Gartner (July 2014)

Market Definition/Description

This document was revised on 14 July 2014. The document you are viewing is the corrected version. For more information, see the Corrections page on gartner.com.

EFSS refers to a range of on-premises or cloud-based capabilities that enable individuals to synchronize and share documents, photos, videos and files across multiple devices, such as smartphones, tablets and PCs. File sharing can be within the organization, as well as externally (e.g., with partners and customers) or on a mobile device as data sharing among apps. Security and collaboration support are critical capabilities of EFSS to address enterprise priorities.

Beyond file synchronization, sharing and access, EFSS offerings may include different levels of support for:

  • Mobility, with native apps for a variety of mobile smartphones, tablets, notebooks and desktops, as well as Web browser support.
  • Security, for protection of data on the device, in transit and in cloud services (or servers), such as password protection, remote wipe, data encryption, data loss prevention, digital rights management (DRM), access tracking and reporting. Mature products ensure that files leaving the sharing location are DRM-encrypted and only readable by those authorized to access the data. Audit and compliance support are also present in complete products.
  • Administration and management, including integration with an Active Directory and Lightweight Directory Access Protocol (LDAP) policy enforcement.
  • Back-end server integration, e.g., with SharePoint and other corporate platforms. Integration is achieved through connectors (e.g., based on the Content Management Interoperability Services [CMIS] standard and APIs).
  • Content manipulation, such as file editing, PDF annotations and note taking.
  • Collaboration, such as cooperative editing on a shared document using change tracking and comments; and document-based workflow process support.
  • Simplicity and usability, with optimized UIs and interactions, such as file drag and drop and file open in applications.
  • Storage, i.e., cloud-based EFSS services often include cloud storage as part of the bundle to implement the EFSS repository. Software EFSS products, instead, may integrate with repositories on-premises or be implemented with a separate repository on-site.

Typical architectures for EFSS offerings are:

  • Cloud: Corporate files are accessed via mobile devices, or shared and are stored in the provider’s cloud. Organizations that want to replace the personal cloud services adopted by employees with an enterprise-class alternative under IT control, while preserving the user experience and enhancing mobile collaboration, prefer the cloud method.
  • On-premises: The remote access, synchronization and sharing component is deployed on-premises and integrates with corporate data repositories, without file replicas. This method is preferred by organizations under strict regulations about data storage.
  • Hybrid: The user and device authentication, security and search mechanisms are implemented in the provider’s cloud. Files and documents are kept in their original location, or can be in third-party clouds. Organizations that want to simplify mobile users’ access to corporate data through the cloud, without creating data replicas in someone else’s cloud, prefer the hybrid method.

There are two types of EFSS offerings:

  • Destinations — Stand-alone products with file sync and share as a core capability, which represents a new purchase for an organization.
  • Extensions — File sync and share capabilities added, and wrapped around established products or applications — e.g., for collaboration, content management or storage. Organizations can use extensions as part of the broader platform (see “Destinations and Wraparounds Will Reshape the Enterprise File Synchronization and Sharing Market”).

Continue reading here!

//Richard


Bug in Citrix Receiver 13 for Linux – cannot connect with multiple STAs – @CitrixSupport, @CitrixReceiver, #Citrix

OK, we’ve had some issues with Citrix Receiver version 13 for Linux… and it’s not just ONE issue. I found one that I thought I just have to share… so it’s lab Saturday for me at the office, in true geek manner, with two XenClient machines and my favourite MacBook!


I guess that some of you have tried the Linux Receiver and know how hard it is to get working, especially on a 64-bit distribution of Linux like Ubuntu 12.04 LTS or 13.10.

If you follow these instructions you can get it onto the device and then log in through a browser (the local Receiver UI may still not be fully functional!):

https://help.ubuntu.com/community/CitrixICAClientHowTo

What I’m about to show you is that it’s not just a matter of getting Receiver onto the device and ensuring that the SSL certificates are trusted. You then have to be able to use it externally as well, through a NetScaler Gateway (NSG) into StoreFront and your XenApp/XenDesktop VDAs.
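The Ubuntu how-to above largely boils down to installing the 32-bit icaclient package and then making Receiver trust the same CA roots as the rest of the system, since Receiver 13 keeps its own certificate keystore. A minimal sketch of that last step (Python for illustration only; the paths in the comment are the usual Ubuntu and Receiver 13 defaults, so adjust for your install):

```python
import os

def link_ca_certs(system_certs: str, receiver_keystore: str) -> int:
    """Symlink system CA certificates into the Receiver keystore so
    Receiver trusts the same roots as the browser (the usual fix for
    'SSL error 61'). Returns the number of links created."""
    os.makedirs(receiver_keystore, exist_ok=True)
    linked = 0
    for name in os.listdir(system_certs):
        if not name.endswith((".pem", ".crt")):
            continue  # skip anything that isn't a certificate file
        target = os.path.join(receiver_keystore, name)
        if not os.path.exists(target):
            os.symlink(os.path.join(system_certs, name), target)
            linked += 1
    return linked

# Typical defaults on Ubuntu with Receiver 13 (run with sufficient rights):
# link_ca_certs("/etc/ssl/certs", "/opt/Citrix/ICAClient/keystore/cacerts")
```

Running it a second time is a no-op, since existing links are left alone.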

Assume that you have a production environment consisting of a NetScaler Gateway and a StoreFront server. If you have configured your NetScaler Gateway correctly in StoreFront, with the appropriate STA configuration (with MULTIPLE STAs), you will notice that you can’t launch a session.

BTW, the recommendation from Citrix is to use multiple STAs, right? See this from eDocs:

For all deployments, if you are making resources provided by XenDesktop, XenApp, or VDI-in-a-Box available in the store, list on the Secure Ticket Authority (STA) page URLs for servers running the STA. Add URLs for multiple STAs to enable fault tolerance, listing the servers in order of priority to set the failover sequence. If you configured a grid-wide virtual IP address for your VDI-in-a-Box deployment, you need only specify this address to enable fault tolerance.

Important: VDI-in-a-Box STA URLs must be entered in the form https://serveraddress/dt/sta in the Add Secure Ticket Authority URL dialog box, where serveraddress is the FQDN or IP address of the VDI-in-a-Box server, or the grid-wide virtual IP address.

The STA is hosted on XenDesktop, XenApp, and VDI-in-a-Box servers and issues session tickets in response to connection requests. These session tickets form the basis of authentication and authorization for access to XenDesktop, XenApp, and VDI-in-a-Box resources.

If you want XenDesktop, XenApp, and VDI-in-a-Box to keep disconnected sessions open while Citrix Receiver attempts to reconnect automatically, select the Enable session reliability check box. If you configured multiple STAs and want to ensure that session reliability is always available, select the Request tickets from two STAs, where available check box. Read more…
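The failover behaviour described above — try each listed STA in priority order and move to the next when one fails — can be sketched roughly like this (illustrative only; `fetch` stands in for whatever performs the real HTTPS ticket request against a STA URL):

```python
def request_ticket(sta_urls, fetch):
    """Try each STA URL in priority order; return the first ticket
    obtained, or raise if every STA in the list fails."""
    errors = []
    for url in sta_urls:
        try:
            return fetch(url)          # first reachable STA wins
        except Exception as exc:       # unreachable or failed STA
            errors.append((url, exc))  # remember why, then fail over
    raise RuntimeError(f"all STAs failed: {errors}")
```

The “Request tickets from two STAs” option goes one step further: a ticket is requested from two STAs up front, so session reliability still works if one STA disappears mid-session.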

#Citrix #ShareFile StorageZone Controller logs – #BYOD

January 23, 2014

This is a great blog post by Dan about where to find that little extra info when you’re having issues with StorageZone Controllers!

Citrix ShareFile StorageZone Controller logs

When troubleshooting issues with the Citrix ShareFile StorageZone Controller, the first place to look is the Monitoring page of the StorageZone Controller configuration page. If there are items in red on that page, I’d next recommend looking in the logs for more detailed information.

StorageZone Controller Monitoring page

The logs for the StorageZone Controller are located by default at c:\inetpub\wwwroot\Citrix\StorageCenter\SC\logs.

Within this directory multiple log files exist. The main ones you should check are:

  • cfgsrv_%date%.txt – StorageZone configuration logs
  • sc_%date%.txt – ShareFile StorageZone “ShareFile Data” logs
  • CIFS_%date%.txt – ShareFile StorageZone Connectors for Network File Shares logs
  • sharepoint_%date%.txt – ShareFile StorageZone Connectors for SharePoint logs

You can enable extended logging for…
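If you check these logs often, a small script can save some clicking. A rough sketch (my own helper, not part of ShareFile; it assumes the `%date%` part of the filename is `YYYY-MM-DD`, so adjust the pattern to whatever your controller actually writes):

```python
import glob
import os
from datetime import date

def scan_logs(log_dir, prefixes=("cfgsrv", "sc", "CIFS", "sharepoint")):
    """Count ERROR and WARN lines in today's StorageZone Controller
    logs, returning {filename: {"errors": n, "warnings": n}}."""
    stamp = date.today().isoformat()  # assumed filename date format
    counts = {}
    for prefix in prefixes:
        pattern = os.path.join(log_dir, f"{prefix}_{stamp}*.txt")
        for path in glob.glob(pattern):
            with open(path, encoding="utf-8", errors="replace") as fh:
                text = fh.read()
            counts[os.path.basename(path)] = {
                "errors": text.count("ERROR"),
                "warnings": text.count("WARN"),
            }
    return counts

# e.g. scan_logs(r"c:\inetpub\wwwroot\Citrix\StorageCenter\SC\logs")
```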

Continue reading here!

//Richard

Single File Restore – Fairy Tale Ending Going Down History Lane – via @Nutanix and @dlink7

November 21, 2013

Great blog post by Dwayne Lessner!

If I go back to my earliest sysadmin days, where I had to restore a file from a network share, I was happy just to get the file back. Where I worked we only had tape, and it was a crapshoot at the best of times. Luckily, 2007 brought me a SAN to play with.

The SAN certainly made it easier to go back in time, find that file and pull it back from the clutches of death using hardware-based snapshots. It was no big deal to mount the snapshot to the guest, but fighting with the MS iSCSI initiator got pretty painful, partly because I had a complex password for CHAP authentication, and partly because clean-up and logging out of iSCSI was problematic. I always had a ton of errors, both in the Windows guest and in the SAN console, which seemed to cause more grief than good.

Shortly after the SAN showed up, VMware entered my world. It was great that I didn’t have to mess with MS iSCSI initiators any more, but it really just moved my problem to the ESXi host. Now that VMware had the LUN with all my VMs, I had to worry about resignaturing the LUN so it wouldn’t conflict with the rest of the production VMs. This whole process was short-lived because we couldn’t afford all the space the snapshots were taking up. Since we had to use LUNs, we had to take snapshots of all the VMs, even though only a handful really needed the extra protection. Before virtualization we were already reserving over 50% of the total LUN space, because snapshots were backed by large block sizes and ate through space. Because we had to snapshot all of the VMs on the LUN, we had to change the snap reserve to 100%. We quickly ran out of space and turned off snapshots for our virtual environment.

When a snapshot is taken on Nutanix, we don’t copy data, nor do we copy the meta-data. The meta-data and data diverge on a need basis; as new writes happen against the active parent snapshot we just track the changes. Changes operate at the byte level which is a far cry from the 16 MB I had to live with in the past.
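That divergence-on-need behaviour can be illustrated with a toy model (my own simplification for illustration, not Nutanix code): a snapshot copies nothing up front, and each copy only records the extents written after the branch point, falling back to its parent for everything else.

```python
class Vdisk:
    """Toy redirect-on-write model: a snapshot copies no data and no
    metadata, only a reference to its parent; writes after the branch
    land in the child's own map, so state diverges only as needed."""

    def __init__(self, parent=None):
        self.parent = parent
        self.extents = {}  # offset -> data written since the branch

    def write(self, offset, data):
        self.extents[offset] = data  # track only the change

    def read(self, offset):
        node = self
        while node is not None:      # walk back through the snapshot chain
            if offset in node.extents:
                return node.extents[offset]
            node = node.parent
        return b"\x00"               # never-written region

    def snapshot(self):
        """Freeze the current state; the returned child becomes the
        active copy and starts with an empty change map."""
        return Vdisk(parent=self)
```

After a snapshot, only offsets actually rewritten consume new space, which is the contrast with reserving large fixed-size snapshot blocks up front.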

Due to the above-mentioned life lessons in LUN-based snapshots, I am very happy to show Nutanix customers the benefits of per-VM snapshots and how easy it is to restore a file.

To restore a file from a VM living on Nutanix, you just need to make sure you have a protection domain set up with a proper RPO schedule. For this example, I created a protection domain called RPO-High. This is great, as you could have 2,000 VMs all on one volume with Nutanix. You just slide over the VMs you want to protect; in this example, I am protecting my file server. Note that you can have more than one protection domain if you want to assign different RPOs to different VMs. Create a new protection domain and add one or more VMs based on the application grouping.

Read more…

#Microsoft Desktop Hosting Reference Architecture Guides

October 28, 2013

Wow, these are some compelling guides that Microsoft has delivered!! Have a look at them! But of course there’s always something more you want: let service providers provide DaaS services based on client OSes as well!!!

Microsoft has released two papers related to Desktop Hosting. The first is called: “Desktop Hosting Reference Architecture Guide” and the second is called: “Windows Azure Desktop Hosting Reference Architecture Guide“. Both documents provide a blueprint for creating secure, scalable, multi-tenant desktop hosting solutions using Windows Server 2012 and System Center 2012 SP1 Virtual Machine Manager or using Windows Azure Infrastructure Services.

The documents are targeted at hosting providers which deliver desktop hosting via the Microsoft Service Provider Licensing Agreement (SPLA). Desktop hosting in this case is based on Windows Server with the Windows Desktop Experience feature enabled, and not on Microsoft’s client operating systems like Windows 7 or Windows 8.

For some reason, Microsoft still doesn’t want service providers to provide Desktops as a Service (DaaS) running on top of a Microsoft Client OS, as outlined in the “Decoding Microsoft’s VDI Licensing Arcanum” paper which virtualization.info covered in September this year.

The Desktop Hosting Reference Architecture Guide provides the following sections:

  • Desktop Hosting Service Logical Architecture
  • Service Layer
    • Tenant Environment
    • Provider Management and Perimeter Environments
  • Virtualization Layer
    • Hyper-V and Virtual Machine Manager
    • Scale-Out File Server
  • Physical Layer
    • Servers
    • Network
  • Tenant On-Premises Components
    • Clients
    • Active Directory Domain Services


The Windows Azure Desktop Hosting Reference Architecture covers the following topics:

True Scale Out Shared Nothing Architecture – #Compute, #Storage, #Nutanix via @josh_odgers

October 26, 2013

This is yet another great blog post by Josh! Great work and keep it up! 😉

I love this statement:

I think this really highlights what VMware and players like Google, Facebook & Twitter have been saying for a long time, scaling out not up, and shared nothing architecture is the way of the future.

At VMware vForum Sydney this week I presented “Taking vSphere to the next level with converged infrastructure”.

Firstly, I wanted to thank everyone who attended the session, it was a great turnout and during the Q&A there were a ton of great questions.

I got a lot of feedback at the session and when meeting people at vForum about how the Nutanix scale out shared nothing architecture tolerates failures.

I thought I would summarize this capability, as I believe it’s quite impressive and should put everyone’s mind at ease when moving to this kind of architecture.

So let’s take a look at a 5-node Nutanix cluster; for this example, we have one running VM. The VM has all its data locally, represented by “A”, “B” and “C”, and this data is also distributed across the Nutanix cluster to provide data protection/resiliency.


So, what happens when an ESXi host fails, taking the Nutanix Controller VM (CVM) offline and making the storage locally connected to that CVM unavailable?

Firstly, VMware HA restarts the VM onto another ESXi host in the vSphere Cluster and it runs as normal, accessing data both locally where it is available (in this case, the “A” data is local) and remotely (if required) to get data “B” and “C”.


Secondly, when data which is not local (in this example “B” and “C”) is accessed via other Nutanix CVMs in the cluster, it will be “localized” onto the host where the VM resides for faster future access.

It is important to note that if data which is not local is never accessed by the VM, it remains remote, as there is no benefit in relocating it; this reduces the workload on the network and cluster.

The end result is the VM restarts the same as it would using traditional storage, then the Nutanix cluster “curator” detects if any data only has one copy, and replicates the required data throughout the cluster to ensure full resiliency.

The cluster will then look like a fully functioning four-node cluster.


The process of repairing the cluster after a failure is commonly, and incorrectly, compared to a RAID pack rebuild. In a RAID rebuild, a small number of disks, say eight, are under heavy load re-striping data onto a hot spare or a replacement drive. During this time the performance of everything on the RAID pack is significantly impacted.

With Nutanix, the data is distributed across the entire cluster, which even in a 5-node cluster means at least 20 SATA drives, with all new data being written to SSD and then sequentially offloaded to SATA.

The impact of this process is much less than a RAID…
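The rough arithmetic behind that claim, with illustrative numbers rather than benchmarks: the rebuild traffic each surviving disk must serve shrinks as more disks participate.

```python
def rebuild_share_gb(data_to_rebuild_gb, participating_disks):
    """Rebuild traffic each surviving disk must serve, assuming the
    re-protected data is spread evenly across the participants."""
    return data_to_rebuild_gb / participating_disks

# A failed 1 TB drive in a RAID pack of 8 (7 survivors do all the work):
raid = rebuild_share_gb(1000, 7)   # ~143 GB per disk
# The same data re-protected across ~20 drives in a 5-node cluster:
dist = rebuild_share_gb(1000, 20)  # 50 GB per disk
```

The per-disk burden drops by roughly the ratio of participating disks, which is why the distributed repair barely registers compared with a busy RAID rebuild.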

Continue reading here!

//Richard

#Sanbolic Brings Public Cloud Economics to the Enterprise – #Melio

March 18, 2013

Ok, I must say that this product is great!!! If you haven’t looked at it before then please do! And contact us at EnvokeIT if you want more details!

Sanbolic Enables Distributed Flash, SSD and HDD to Achieve Enterprise Systems Capability and Scale-Out In Server-Side and Commodity Storage Deployments

Waltham, MA – (March 18, 2013) – Sanbolic® today announced the general availability of its Melio version 5 (Melio5™) software – delivering distributed scale-out, high availability and enterprise data services through software. Server-side flash has seen rapid adoption for applications such as hyperscale web serving, but limited adoption in general-purpose enterprise applications. With the launch of Melio5, Sanbolic enables enterprise customers to dramatically improve their storage infrastructure economics by enabling server-side flash, SSD and HDD as primary persistent storage. Melio5 aggregates across nodes for scale-out and availability while providing RAID, remote replication, quality of service (QoS), snapshots and systems functionality through a software layer on commodity hardware. This provides customers with the ability to deploy commodity and server-based storage architecture with similar economics and flexibility as public cloud data centers such as Google and Facebook.

With validation by hundreds of enterprise and government organizations running in production, Melio volume management and file system technology addresses the needs of high performing cost effective storage infrastructure on-premise. Melio5’s architecture is designed to scale up to 2,048 nodes and up to 65,000 storage devices enabling linear performance scalability in a cluster.

Melio5 also eliminates the need to deploy a redundant flash caching layer in front of legacy storage area network (SAN) hardware by directly incorporating flash into hybrid volumes and intelligently placing data based on file system access profiles. A hybrid volume will place random access data such as file system metadata on flash sectors while placing sequential data on low cost hard disk drives to greatly reduce the cost of capacity. The result is a highly scalable, high performance storage system, with a much lower cost than legacy storage arrays.

“Typically, server and disk drive vendors operate on gross margins in the 20-30% range. Storage array vendors, on the other hand, are often twice that or more,” said Eric Slack, Senior Analyst, Storage Switzerland. “Sanbolic’s approach leverages the architecture that the big social media and public cloud companies use, to fix this problem. By replacing storage arrays (and storage array margins) with commodity server and disk drive hardware and enabling it with intelligence through software, companies can significantly reduce storage infrastructure costs.”

Terri McClure, Senior Analyst, Enterprise Strategy Group (ESG), stated, “Sanbolic’s Melio5 software enables corporate users to take advantage of flash and SSD in conjunction with commodity hardware to create an intelligent, cost effective, and high performance storage architecture like the huge public cloud companies run, while still ensuring enterprise workload scalability and high availability.”

“Melio5 lets us solve one of the biggest challenges for our customers today – the upfront and management cost of storage – without sacrificing systems capability or performance. The Lego-like modular capability of Melio allows our customers to scale out their storage and servers based on off-the-shelf commodity components, without downtime,” said Mattias Tornblom, CEO, EnvokeIT.

“LSI and Sanbolic’s shared vision and complementary products help customers to dramatically improve the performance, flexibility and economics of their on-premise storage infrastructure,” said Brent Blanchard, Senior Director of Worldwide Channel Sales and Marketing, LSI Corporation. “LSI’s Nytro™ family of server-side flash acceleration cards and leading SAS-based server storage connectivity solutions…

Continue reading here or here!

//Richard
