Archive

Posts Tagged ‘Server’

Microsoft announcing SQL Server on Linux – #SQL, #Microsoft, #DB, #PaaS

This is so cool, and further shows how Microsoft has changed over the past few years!

SQL-Loves-Linux

It’s been an incredible year for the data business at Microsoft and an incredible year for data across the industry. This Thursday at our Data Driven event in New York, we will kick off a wave of launch activities for SQL Server 2016 with general availability later this year. This is the most significant release of SQL Server that we have ever done, and brings with it some fantastic new capabilities. SQL Server 2016 delivers:

  • Groundbreaking security encryption capabilities that enable data to always be encrypted at rest, in motion and in-memory to deliver maximum security protection
  • In-memory database support for every workload with performance increases up to 30-100x
  • Incredible Data Warehousing performance with the #1, #2 and #3 TPC-H 10 Terabyte benchmarks for non-clustered performance, and as of March 7, the #1 SAP SD Two-Tier performance benchmark on Windows
  • Business Intelligence for every employee on every device – including new mobile BI support for iOS, Android and Windows Phone devices
  • Advanced analytics using our new R support that enables customers to do real-time predictive analytics on both operational and analytic data
  • Unique cloud capabilities that enable customers to deploy hybrid architectures that partition data workloads across on-premises and cloud based systems to save costs and increase agility

These improvements, and many more, are all built into SQL Server and bring you not just a new database but a complete platform for data management, business analytics and intelligent apps – one that can be used in a consistent way across both on-premises and the cloud. In fact, over the last year we’ve been using the SQL Server 2016 code-base to run in production more than 1.4 million SQL Databases in the cloud using our Azure SQL Database as a Service offering, and this real-world experience has made SQL Server 2016 an incredibly robust and battle-hardened data platform.

Gartner recently named Microsoft as leading the industry in their Magic Quadrant for Operational Database Management Systems in both execution and vision. We’re also a leader in Gartner’s Magic Quadrant for Data Warehouse and Data Management Solutions for Analytics, and Magic Quadrant for Business Intelligence and Analytics Platforms, as well as leading in vision in the Magic Quadrant for Advanced Analytics Platforms.

Gartner MQs

Extending SQL Server to Also Now Run on Linux

Today I’m excited to announce our plans to bring SQL Server to Linux as well. This will enable SQL Server to deliver a consistent data platform across Windows Server and Linux, as well as on-premises and cloud. We are bringing the core relational database capabilities to preview today, and are targeting availability in mid-2017. Read more…

Microsoft Ignite 2015 summary – #MSIgnite, #EnvokeIT, #Azure, #Office365, #OneDrive, #EMM, #PaaS, #IaaS

Hi all,

We at EnvokeIT participated and collaborated at Microsoft Ignite 2015 in Chicago. It was one of the most intense events I've visited in years, with a lot happening in the business, and Microsoft really showed that they are the leading innovator in many areas!

I hope that you enjoy my report and that it gives you a condensed overview of what happened and please contact us at EnvokeIT if you want assistance within any area below! And thank you Microsoft for such a great event and also all you bloggers out there that I’ve linked to in this material.

I must say that this event was positive and a bit scary at the same time. Microsoft is certainly pushing ahead as a visionary and innovator in a lot of areas, and I think that competitors will have a hard time competing in the coming years.

These are the areas where a lot has already been released and where Microsoft, in my opinion, will increase its market share significantly:

  • Cloud and Mobile services – and with this I don't mean IaaS services for just running a VM in their public Azure cloud or building a hybrid cloud with connectivity to on-premises datacenters. They are delivering so many capabilities now as PaaS and SaaS services. Just look at the sections below: it's everything from Enterprise Mobility Management (EMM), Business Intelligence, Database, Storage, Web Apps/services, Service Availability services (DR, Monitoring/Reporting, Backup etc.), Development, Source Control, Visual Studio Online etc. It's amazing!!
  • Open Source/Linux support – It's so cool how much Microsoft has shifted towards supporting open source technologies and ways of thinking compared to just a couple of years ago! Just have a look at all the Linux support they have in Azure, the Linux support they now have in System Center, the Docker support to deliver more DevOps capabilities, and all the other services in Azure. It's amazing and so much fun! Microsoft has opened its eyes and realized that it can't ignore this anymore, just like Citrix has with its addition of XenDesktop for Linux with SUSE and Red Hat support!

The first day kicked off with a bombardment of product announcements aimed at helping IT pros secure and manage the new Universal Windows Platform.

CEO Satya Nadella presided over a three-hour keynote, which focused on how Microsoft’s new wave of software and cloud services will enable IT and business transformations that are in line with the ways people now work. Nadella talked up Microsoft’s focus on “productivity and platforms” and how it’s tied with the shift to cloud and mobility. He also highlighted the need for better automation of systems and processes, and better management of the vast amounts of data originating from new sources such as sensors and other Internet-of-Things-type nodes.

As mentioned, there were a lot of updates. Below I've tried to gather these, and I hope it gives you a good insight into the information we received, as well as guidance on how to get more information about each topic.

Included below are links to detailed overviews of each of the demos (from Microsoft blog post) – including information about how to use them, where to learn more, and what you’ll need to get started.

The following picture is a sketch of the keynote and is also quite good at summarizing the mobile-first, cloud-first message!

 

vNiklas also created a great PowerShell script that automates downloading all MS Ignite content from Channel 9 with BITS, which you can find here!

Enterprise Mobility Management (EMM) – MDM, MAM, MCSM/MIM etc…

Microsoft's next chapter in Enterprise Mobility – a great blog post on where Microsoft is going: http://blogs.technet.com/b/enterprisemobility/archive/2015/05/04/ignite-microsofts-next-chapter-in-enterprise-mobility.aspx

Windows 10 Continuum – this is cool: think about docking your smartphone to your external screen, keyboard and mouse! That's true mobility of your device. This looks really cool and is something that I'd like to try out once released!

Have a look at the feature demo at Ignite in the video below.

What’s New and Upcoming with Microsoft Intune and System Center Configuration Manager | Microsoft Ignite 2015

This session outlines the latest enhancements in enterprise mobility management using Microsoft Intune and System Center Configuration Manager. See the newest Microsoft Intune improvements for managing mobile productivity without compromising compliance, and learn about the futures of Microsoft Intune and Configuration Manager, including new Windows 10 management scenarios.


https://channel9.msdn.com/Events/Ignite/2015/BRK3861/player

Enterprise Mobility Management table of contents:

Office 2016 public preview available!

Over the last 12 months, we’ve transformed Office from a suite of desktop applications to a complete, cross-platform, cross-device solution for getting work done. We’ve expanded the Office footprint to iPad and Android tablets. We’ve upgraded Office experiences on the Mac, the iPhone and on the web. We’ve even added new apps to the Office family with Sway and Office Lens. All designed to keep your work moving, everywhere. But that doesn’t mean we’ve forgotten where we came from. While you’ve seen us focus on tuning Office for different platforms over the last year, make no mistake, Office on Windows desktop is central to our strategy.

In March we introduced an IT Pro and Developer Preview for the 2016 release of our Office desktop apps on Windows, and now—as a next step—we’re ready to take feedback from a broader audience. Today we’re expanding the Office 2016 Preview, making it available to Office users everywhere in preparation for general availability in Fall 2015.

Office 2016 previewers will get an early look at the next release of Office on Windows desktop, but more importantly they’ll help to shape and improve the future of Office. Visit the Office 2016 Preview site to learn more about the Preview program and if it’s right for you.

New in Office 2016

Since March, we’ve shared some glimpses of what’s to come in Office 2016. Today, we’d like to give a more holistic view of what customers at home and work can expect in the next release. In Office 2016, we’re updating the Office suite for the modern workplace, with smart tools for individuals, teams, and businesses.

Read more…

Synergy 2015 – A condensed recap of everything you need to know – via @gkuruvilla, #Citrix, #CitrixSynergy

This is a great summary recap that George Kuruvilla has done of Citrix Synergy 2015! Great work, and enjoy this blog post!

For those of you who were not able to attend Citrix Synergy this year and don't have the time to sit through the keynote recordings, I decided to put together a condensed version of some of the key announcements. So here goes!

Citrix Workspace Cloud

  • Citrix hosted control plane that enables customers to deliver a comprehensive mobile workspace to end users.
  • Gives customers the flexibility to host workloads on premises, in public or private clouds.
  • Control plane also provides end to end monitoring of user connections.
  • Evergreen infrastructure since Citrix maintains all core infrastructure components.
  • Workspace Cloud Connector installed on premises on a Win 2k12 server that establishes SSL communication between control plane and customer environment. Used to talk to infrastructure components like Active Directory and hypervisors hosting workload

I wrote a blog on CWC and the value proposition a month back that you can find here.

SYN 217 –  Workspace Cloud – Technical Overview [Video]

 

Citrix Lifecycle Management

  • Comprehensive cloud based service that can be used to design, deploy and manage both Citrix and other enterprise applications.
  • Based on the ScaleXtreme technology.
  • Lifecycle Management enables customers/partners to deploy infrastructure not only on premises but also public/private clouds (resource locations)
  • Customers/Partners have the ability to create blueprints to automate infrastructure deployments end to end. Examples include an XD deployment where you not only install all the XD infrastructure but also automate the installation of supporting infrastructure like Active Directory, SQL etc.
  • Vendors have the ability to create blueprints as well that can then be consumed by customers and partners alike.
  • Customers/Partners also have the ability to incorporate scripts (new/existing) into the deployment.
  • Once a blueprint is developed, it's added to a library. Any resource within the library can then be deployed to a resource location (on premises, public/private cloud).
  • Another key benefit of the Lifecycle Management technology is the ability to automate application upgrades.

XenApp/XenDesktop

  • XenApp 6.5 maintenance extended until the end of 2017, EOL extended until 06/2018. Details here
  • New Feature Pack for XA 6.5 (enhanced storage performance, Lync support enhancements, UPM enhancements, Director "Help Desk" troubleshooting, StoreFront 3.0, Receiver.next)
  • XenApp/XenDesktop 7.6 FP2  (End of Q2)
    • New Receiver X1
    • Lync 2013 on Mac
    • Touch ID Support
    • HDX with Framehawk
    • Native Receiver for Linux
    • Linux Apps and Desktops (Redhat and SUSE support)
    • Desktop Player for Mac 2.0 (June)
    • Desktop Player for Windows (Tech Preview)

SYN 233 – Whats new in XenApp and XenDesktop [Video]

SYN 319 – Tech Update for XenApp and XenDesktop  [Video]

Read more…

How to monitor your Internet facing service globally – #Azure, #ApplicationInsights, #Citrix, #NetScaler, #EnvokeIT

Hi again all!

It's been quite a long time since I wrote a blog post… I've just been too busy working! 🙂

But this is a really cool capability that I think many of you will like. How often does your company or service provider have a good way of monitoring availability, performance etc. from the public Internet? And if they do, most of the time the larger service providers build a service, install their own probes in different geographical locations, and then charge quite a lot for this service; and every time you change your application they charge you again for modifying the scripts that the probes use.

What I've tried, and now think is going to be great for both smaller and larger organisations, is the Azure Application Insights service. It's really great and can assist with just this: it's a service that Microsoft provides from their locations globally, where you can test your apps in Azure of course, but also any web site out there on the Internet. And it doesn't stop there: you can also use the server installer to send metrics from your Windows IIS server up to Azure, giving you more detailed statistics about the web server itself, its requests etc.

Just think about how much it would take for you to set up monitoring from APAC, the Americas and Europe for your NetScaler environment… that would not be done in 10 minutes if you talk to your standard service provider. It took me 10 minutes to set up this reporting to ensure that the NetScaler is available from different locations around the world:

Availability

 

And this is just a simple URL ping test to ensure that we get a proper 200 OK response from our EnvokeIT Lab environment, which my colleague Björn has set up and modified so nicely with the X1 StoreFront look & feel.

NetScaler_StoreFront_x1_look_and_feel

 

URL_ping_test_netscaler_bear_lab_envokeit

Of course you can create a more thorough test than just a URL ping test like in this case; the service supports multi-step tests and content matching etc. It's also very easy to create one application/service that consists of multiple locations that you want to monitor, for instance if you're using GSLB FQDNs as well as regional ones, to ensure that you get the full picture.
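To illustrate what such a URL ping test with content matching boils down to, here is a minimal Python sketch; the probe URL in the comment is a hypothetical placeholder, not our actual lab address, and the real service of course runs these probes from its own global locations:

```python
import urllib.request

def url_ping_test(url, expected_status=200, must_contain=None, timeout=10):
    """Probe a URL and report availability: check the HTTP status code
    and optionally that the response body contains an expected string,
    roughly what a simple URL ping test with content matching does."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            body = resp.read().decode("utf-8", errors="replace")
            status_ok = resp.status == expected_status
            content_ok = must_contain is None or must_contain in body
            return status_ok and content_ok
    except OSError:
        # Connection refused, DNS failure, timeout, HTTP error, etc.
        return False

# Hypothetical endpoint; replace with your own NetScaler/StoreFront URL:
# url_ping_test("https://lab.example.com/Citrix/StoreWeb/", must_contain="StoreFront")
```

The same function could be called from several regions to approximate the multi-location view the service gives you out of the box.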

More information about what can be done can be found on the Azure Application Insights page. Read more…

Cloud Platform Integration Framework–Overview – #Microsoft, #IaaS, #PaaS, #CPIF

January 11, 2015

Another great blog series from Thomas W Shinder – MSFT and contributors!

The Cloud Platform Integration Framework (CPIF) provides workload integration guidance for onboarding applications into a Microsoft Cloud Solution. CPIF describes how organizations, Microsoft Partners and Solution Integrators should design and deploy Cloud-targeted workloads utilizing the hybrid cloud platform and management capabilities of Azure, System Center and Windows Server


Table of Contents

1 Introduction

     1.1 Cloud Platform Integration Framework (CPIF) Overview

     1.2 CPIF Architecture

2 Azure Architectural Pattern Concepts

     2.1 Overview of Azure Architectural Patterns

          2.1.1 Pattern Guide Use

3 Summary


Prepared by:
Joel Yoker – Microsoft  

David Ziembicki – Microsoft  
Tom Shinder – Microsoft


Cloud Platform Integration Framework Overview and Patterns:

Cloud Platform Integration Framework – Overview and Architecture

Modern Datacenter Architecture Patterns-Hybrid Networking

Modern Datacenter Architectural Patterns-Azure Search Tier

Modern Datacenter Architecture Patterns-Multi-Site Data Tier

Modern Datacenter Architecture Patterns – Offsite Batch Processing Tier

Modern Datacenter Architecture Patterns-Global Load Balanced Web Tier


Introduction

1.1 Cloud Platform Integration Framework (CPIF) Overview

The Cloud Platform Integration Framework (CPIF) provides workload integration guidance for onboarding applications into a  Microsoft Cloud Solution. CPIF describes how organizations, Microsoft Partners and Solution Integrators should design and deploy Cloud-targeted workloads utilizing the hybrid cloud platform and management capabilities of Azure, System Center and Windows Server. The CPIF domains have been decomposed into the following functions:


Figure 1: Cloud Platform Integration Framework

By integrating these functions directly into workloads….

Continue reading here!

//Richard

Performance Tuning Citrix Storefront 2.x – #Citrix, #StoreFront via @PeterSmali

February 3, 2014

Another great blog post from my colleague Peter Smali!

Performance Tuning Citrix Storefront 2.x

First of all I would like to thank Sandbu, who came up with an extra performance tuning trick that I have been testing for a while now.
In this post I'll be demonstrating an updated version of Sandbu's trick, due to some small changes since the introduction of Citrix StoreFront 2.x.

As we are all aware, Citrix StoreFront is fully dependent on IIS to work, but it really suffers from some performance issues that most of us who have been testing or implementing it have noticed. So let's give StoreFront a performance boost by doing the following.
Attention! Take a backup of all files you are going to modify before doing this! And remember that Citrix Systems does not support this!!

1. Enable Socket Pooling (pooledSockets=”on”)

Open your C:\inetpub\wwwroot\Citrix\StorenameWeb\web.config file as administrator and change pooledSockets="off" to pooledSockets="on".
By enabling socket pooling, StoreFront maintains a pool of sockets instead of creating a new socket each time a user connects to StoreFront, which gives better performance for SSL-based traffic.
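The edit can also be scripted. Below is a minimal Python sketch under the assumption that pooledSockets appears as a plain attribute in the web.config text; the sample <communication> element is illustrative only, and you should still back up the real file first:

```python
import re

def enable_socket_pooling(web_config_text):
    # Flip the pooledSockets attribute from "off" to "on".
    # Plain text substitution; back up the original file before writing.
    return re.sub(r'pooledSockets="off"', 'pooledSockets="on"', web_config_text)

# Illustrative fragment; the real element carries more attributes.
sample = '<communication attempts="2" timeout="00:01:00" pooledSockets="off" />'
print(enable_socket_pooling(sample))
# prints: <communication attempts="2" timeout="00:01:00" pooledSockets="on" />
```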

2. Changing the application pool to always running (Windows Server 2008 R2)

If you are running StoreFront on Windows Server 2012, there is already a feature in IIS called Always Running for application pools, but if you are still on Windows Server 2008 R2, as I am, then you need to make some manual changes…

But if you are still running Windows Server 2008, then you need to do the following:

2.1 Download and install Application Initialization Module for IIS 7.5. A reboot may be required to finish the installation process…

2.2 Open C:\Windows\System32\inetsrv\config\applicationHost.config on the StoreFront server as administrator, locate the <configuration><system.applicationHost><applicationPools> section, and add the always-running parameter startMode="AlwaysRunning" to each of the following application pools:

• Citrix Delivery Services Authentication
• Citrix Delivery Services Resources
• Citrix Receiver for Web
• Citrix Delivery Services

The result may look like this:

<add name="Citrix Delivery Services Authentication" autoStart="true" managedRuntimeVersion="v2.0" managedPipelineMode="Integrated" startMode="AlwaysRunning">
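For many servers, the same edit can be scripted. Here is a minimal Python sketch, assuming the pools appear as <add> elements inside an applicationPools fragment; the fragment below is illustrative, and you should back up applicationHost.config before touching the real file:

```python
import xml.etree.ElementTree as ET

POOLS = [
    "Citrix Delivery Services Authentication",
    "Citrix Delivery Services Resources",
    "Citrix Receiver for Web",
    "Citrix Delivery Services",
]

def set_always_running(config_xml):
    """Add startMode="AlwaysRunning" to the Citrix application pools
    in an applicationPools XML fragment and return the modified XML."""
    root = ET.fromstring(config_xml)
    for pool in root.iter("add"):
        if pool.get("name") in POOLS:
            pool.set("startMode", "AlwaysRunning")
    return ET.tostring(root, encoding="unicode")

# Illustrative fragment; the real file has many more attributes and pools.
fragment = (
    '<applicationPools>'
    '<add name="Citrix Receiver for Web" autoStart="true" />'
    '<add name="DefaultAppPool" autoStart="true" />'
    '</applicationPools>'
)
print(set_always_running(fragment))
```

Only the Citrix pools in the list are touched; other pools such as DefaultAppPool are left unchanged.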

2.3 Now locate <configuration>…

Continue reading here!

And you can also check this tuning blog post:

Finetuning a Citrix StoreFront deployment

And also ensure that you intelligently load balance your XML brokers, my suggestion is to use content switching in combination with load balancing to get a more optimal solution in place.

Ensure that you DON'T use FQDNs when you add the XML broker name into the Delivery Controllers config of the StoreFront store!! Use NetBIOS names: not farm1.company.com, but rather "farm1", and then ensure that the StoreFront server can resolve "farm1" to your CS VIP. That will speed up enumeration a lot, because StoreFront first checks via NetBIOS/WINS, which isn't optimal!

Content Switching instead of Load balancing of XenApp XML brokers? – #XenApp #NetScaler #CS #LB

Happy StoreFront’ing!

//Richard

Why huge IaaS/PaaS/DaaS providers don’t use Dell and HP, and why they can do VDI cheaper than you! – via @brianmadden

February 3, 2014

Yes, why do people and organisations still think that they can build IaaS/PaaS/DaaS services within their enterprises and believe that they will be able to do so with the "same old architecture" and components used before? It's not going to be comparable to the bigger players that are using newer and more scalable architectures with cheaper components.

Enterprises just don't have the innovation power that companies like Google, Facebook and Amazon have! And if they do, then most of the time they are stuck in their old way of doing things from a service delivery point of view, stopping them from thinking outside the box because the service delivery organisation isn't ready for it…

This is a great blog post on this from Brian, great work!!

Last month I wrote that it’s not possible for you to build VDI cheaper than a huge DaaS provider like Amazon can sell it to you. Amazon can literally sell you DaaS and make a profit all for less than it costs you to actually build and operate an equivalent VDI system on your own. (“Equivalent” is the key word there. Some have claimed they can do it cheaper, but they’re achieving that by building in-house systems with lower capabilities than what the DaaS providers offer.)

One of the reasons huge providers can build VDI cheaper than you is because they're doing it at scale. While we all understand the economics of buying servers by the container instead of by the rack, there's more to it than that when it comes to huge cloud providers. Their datacenters are not crammed full of HP or Dell's latest rack mount, blade, or Moonshot servers; rather, they're stacked floor-to-ceiling with heaps of circuit boards you'd hardly recognize as "servers" at all.

Building Amazon’s, Google’s, and Facebook’s “servers”

For most corporate datacenters, rack-mounted servers from vendors like Dell and HP make sense. They’re efficient in that they’re modular, manageable, and interchangeable. If you take the top cover off a 1U server, it looks like everything is packed in there. On the scale of a few dozen racks managed by IT pros who have a million other things on their mind, these servers work wonderfully!

Read more…

#XenApp 7.5 is launching! – #Citrix, #HSD, #DaaS, #VDI

January 25, 2014

Wow… this is really interesting and “weird” I must say…

XenApp is back! 🙂

And of course AppDNA is in there as well to simplify software/application management on this platform.

Description

New Citrix XenApp 7.5 makes it simple to deliver any Windows app to an increasingly mobile workforce, while leveraging the cost saving and elasticity of hybrid clouds and the security of mobile device management. Learn more at http://www.citrix.com/xenapp

Hear more about it in this video!

The video above was removed because it was accidentally published too early… but you can find it on YouTube here:

//Richard

Single File Restore – Fairy Tale Ending Going Down History Lane – via @Nutanix and @dlink7

November 21, 2013

Great blog post by Dwayne Lessner!

If I go back to my earliest sysadmin days, where I had to restore a file from a network share, I was happy just to get the file back. Where I worked we only had tape, and it was a crapshoot at the best of times. Luckily, 2007 brought me a SAN to play with.

The SAN made it easier for sure to go back in time, find that file, and pull it back from the clutches of death using hardware-based snapshots. It was no big deal to mount the snapshot to the guest, but fighting with the MS iSCSI initiator got pretty painful: partly because I had a complex password for the CHAP authentication, and partly because clean-up and logging out of the iSCSI was problematic. I always had a ton of errors, both in the Windows guest and in the SAN console, which caused more grief than good it seemed.

Shortly after the SAN showed up, VMware entered my world. It was great that I didn't have to mess with MS iSCSI initiators any more, but it really just moved my problem to the ESXi host. Now that VMware had the LUN with all my VMs, I had to worry about re-signaturing the LUN so it wouldn't conflict with the rest of the production VMs. This whole process was short-lived because we couldn't afford all the space the snapshots were taking up. Since we had to use LUNs, we had to take snapshots of all the VMs, even though there were only a handful that really needed the extra protection. Before virtualization we were already reserving over 50% of the total LUN space, because snapshots were backed by large block sizes and ate through space. Due to the fact that we had to snapshot all of the VMs on the LUN, we had to change the snap reserve to 100%. We quickly ran out of space and turned off snapshots for our virtual environment.

When a snapshot is taken on Nutanix, we don’t copy data, nor do we copy the meta-data. The meta-data and data diverge on a need basis; as new writes happen against the active parent snapshot we just track the changes. Changes operate at the byte level which is a far cry from the 16 MB I had to live with in the past.

Due to the above-mentioned life lessons in LUN-based snapshots, I am very happy to show Nutanix customers the benefits of per-VM snapshots and how easy it is to restore a file.

To restore a file from a VM living on Nutanix, you just need to make sure you have a protection domain set up with a proper RPO schedule. For this example, I created a protection domain called RPO-High. This is great, as you could have 2,000 VMs all on one volume with Nutanix. You just slide over the VMs you want to protect; in this example, I am protecting my FileServer. Note that you can have more than one protection domain if you want to assign different RPOs to different VMs. Create a new protection domain and add one or more VMs based on the application grouping.

Read more…

There was a big flash, and then the dinosaurs died – via @binnygill, #Nutanix

November 15, 2013

Great blog post by @binnygill! 😉

This is how it was supposed to end. The legacy SAN and NAS vendors finally realize that Flash is fundamentally different from HDDs. Even after a decade of efforts to completely assimilate Flash into the legacy architectures of the SAN/NAS era, it’s now clear that new architectures are required to support Flash arrays. The excitement around all-flash arrays is a testament to how different Flash is from HDDs, and its ultimate importance to datacenters.

Consider what happened in the datacenter two decades ago: HDDs were moved out of networked computers, and SAN and NAS were born. What is more interesting, however, is what was not relocated.

Although it was feasible to move DRAM out with technology similar to RDMA, it did not make sense. Why move a low latency, high throughput component across a networking fabric, which would inevitably become a bottleneck?

Today Flash is forcing datacenter architects to revisit this same decision. Fast near-DRAM-speed storage is a reality today. SAN and NAS vendors have attempted to provide that same goodness in the legacy architectures, but have failed. The last ditch effort is to create special-purpose architectures that bundle flash into arrays, and connect it to a bunch of servers. If that is really a good idea, then why don’t we also pool DRAM in that fashion and share with all servers? This last stand will be a very short lived one. What is becoming increasingly apparent is that Flash belongs on the server – just like DRAM.

For example, consider a single Fusion-io flash card that writes at 2.5 GB/s throughput and supports 1,100,000 IOPS with just 15 microseconds of latency (http://www.fusionio.com/products/iodrive2-duo/). You can realize these speeds by attaching the card to your server and throwing your workload at it. If you put 10 of these cards in a 2U-3U storage controller, should you expect 25 GB/s streaming writes and 11 million IOPS at sub-millisecond latencies? To my knowledge no storage controller can do that today, and for good reasons.
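The arithmetic behind that claim can be sketched quickly; the per-card figures are the ones quoted above, and the 10 GbE payload rate is a rough assumption that ignores protocol overhead:

```python
# Aggregate performance of ten cards, using the quoted per-card figures.
CARD_WRITE_GBS = 2.5           # GB/s sequential write per card
CARD_IOPS = 1_100_000          # IOPS per card
NUM_CARDS = 10

aggregate_gbs = CARD_WRITE_GBS * NUM_CARDS   # 25.0 GB/s
aggregate_iops = CARD_IOPS * NUM_CARDS       # 11,000,000 IOPS

# A 10 GbE link carries at most ~1.25 GB/s of payload (ignoring overhead),
# so matching the aggregate write bandwidth alone needs many such links.
TEN_GBE_GBS = 10 / 8
links_needed = aggregate_gbs / TEN_GBE_GBS   # 20.0 links

print(aggregate_gbs, aggregate_iops, links_needed)  # 25.0 11000000 20.0
```

Twenty 10 GbE links per storage controller just to match raw write bandwidth, before any IOPS or latency considerations, is the kind of speed-matched network cost the next paragraphs argue against.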

Networked storage has the overhead of networking protocols. Protocols like NFS and iSCSI are not designed for massive parallelism, and end up creating bottlenecks that make crossing a few million IOPS on a single datastore an extremely hard computer science problem. Further, if an all-flash array is servicing ten servers, then the networking prowess of the all-flash array should be 10X of that of each server, or else we end up artificially limiting the bandwidth that each server can get based on how the storage array is shared.

No networking technology, whether it be Infiniband, Ethernet, or fibre channel can beat the price and performance of locally-attached PCIe, or even that of a locally-attached SATA controller. Placing flash devices that operate at almost DRAM speeds outside of the server requires unnecessary investment in high-end networking. Eventually, as flash becomes faster, the cost of a speed-matched network will become unbearable, and the datacenter will gravitate towards locally-attached flash – both for technological reasons, as well as for sustainable economics.

The right way to utilize flash is to treat it as one would treat DRAM — place it on the server where it belongs. The charts below illustrate the dramatic speed up from server-attached flash.

Continue reading here!

//Richard
