
The circle of life

The (r)evolution of the end-user device.


Johannesburg, 30 Oct 2014

The early 90s was a great time to start in IT. The World Wide Web as we know it today did not exist and bulletin boards were the order of the day. US Robotics made the best modems known to mankind, and ISDN hadn't even been conceived yet.

It was a time of great change and paradigm-shift moments, when we moved from centralised mainframe environments to decentralised compute farms distributed throughout the organisation's environment and geography.

Simple Diginet links provided connectivity, and each site was essentially a scaled-down copy of the head-office blueprint. Mainframes still provided virtualisation solutions, allowing end-users distributed throughout the organisational network to connect to the main beast, maintained, nurtured and protected at corporate head-office, using TN3270 emulators running on DOS or Windows 3, or just plain dumb terminals.

The main corporate applications were still running on mainframes and desktops were deployed to provide simple user functions like e-mail, file and print services and general productivity applications like MS Office. This was the dawn of the age of the corporate desktop (and later notebooks) as an end-user device.

Then companies started implementing server-based workloads such as databases and other custom-developed back-end applications. They also developed "fat" client applications that required persistent connectivity to these back-ends. Slowly, applications were ported from the mainframe onto Wintel-based server architectures. Diginet links simply could not cope with the bandwidth-hungry desktop client applications, and a new solution had to be found to feed this distributed compute architecture: bigger links with more bandwidth, which was not always possible and extremely expensive to begin with (oops, not much has changed in this arena over the past 25 years in SA). That's when desktop virtualisation became the de facto standard for addressing the desktop client-server connectivity challenge.

Citrix WinFrame to the rescue. Citrix developed a high-latency-friendly protocol called ICA, which enabled users at local or remote offices to connect their front-end applications to server-based back-end applications as if they were running on the server LAN: break-through technology. Things developed further, and Microsoft later introduced its RDP protocol to provide similar functionality. Personally, I always found ICA the better bandwidth-optimised protocol for desktop and application virtualisation in those years.

Many years later, a company called Teradici developed a new protocol, PC-over-IP (PCoIP), which opened up entirely new possibilities in end-user device virtualisation.

Server sprawl, bandwidth and operational costs forced companies to start adopting server virtualisation strategies in the 2005 to 2006 time-frame. Centralising as many back-end applications as possible became the goal everybody was aiming for.

I believe we have now reached a point where server re-centralisation and virtualisation are a no-brainer. If server virtualisation is not part of your current strategy, your neighbour is most likely Fred Flintstone, and maybe you even secretly glance over at Wilma every morning when you leave for work.

The last remaining bastion is the end-user device: the desktop and the laptop. Virtualising the desktop has been relatively easy and plausible, since the device is stationary and has a persistent connection to the corporate LAN. Mobility introduced the challenges associated with a holistic end-user virtual environment. With the tablet, Apple introduced another of those "time-space-continuum" disruptions we encounter every so often in information technology's evolution. People have become extremely mobile, yet they still require access to corporate applications wherever and whenever. This is where I believe the greatest challenge used to lie: providing a secure method of making corporate information assets available to the mobile end-user.

This last behemoth has been slain by VMware's Horizon (with View) Suite, which brings full end-user virtualisation and connectivity to notebook, tablet and smartphone users, no matter where they find themselves. A comprehensive, holistic virtual strategy now lies before us like a ripe wheat field, ready to be harvested and enjoyed.

But the wheat field still requires some labouring before the harvest. We need to size our hosting environments extremely carefully. Guesstimates and industry rules of thumb just won't do it accurately enough. Time must be taken to perform a proper, scientific assessment of the environment intended for virtualisation. If not, the project will most surely end in failure.

The use of assessment tools is vital in gathering performance, application and resource metrics of the environment. Each environment will exhibit different "personalities" in terms of its metrics profile and will require a unique design approach. No two solutions will look the same, and that's because each company has a unique worker, application and performance profile that needs to be taken into account when designing your end-user compute virtualisation blueprint or prototype.

Some metrics that require careful scrutiny and design consideration include:

* What user profiles require support?
  * Kiosks, developers, power users, task-based workers, etc.
* What is the scope of each user type?
* How many simultaneous users will connect?
* Assess the peak resource requirements for each user over a 30-day period:
  * CPU
  * RAM
  * Network traffic
  * Disk:
    * IOPS
    * Throughput
    * Read and write ratios
* Evaluate and create baselines for both the physical (before) and virtual (after) environments:
  * Login and logout times
  * Virtual machine boot time
  * Response times
* Compare the baselines
* Determine the end-user experience index (before and after) and compare
* Consider the possible end-user device states in your underlying storage design, to accommodate:
  * Boot storms
  * Logon storms
  * Steady state
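To illustrate how such metrics feed into a sizing exercise, the sketch below aggregates per-user 30-day peak figures into cluster-level capacity estimates with headroom for boot and logon storms. The profile names, per-user figures and headroom factor are all illustrative assumptions for the sake of the example, not vendor guidance; real numbers must come from your own assessment tooling.

```python
# Sketch: turn per-user 30-day peak metrics into cluster capacity estimates.
# All figures below are illustrative placeholders, not measured values.
from dataclasses import dataclass

@dataclass
class UserProfile:
    name: str
    count: int          # simultaneous users of this type
    peak_cpu_mhz: int   # 30-day peak CPU per user
    peak_ram_mb: int    # 30-day peak RAM per user
    peak_iops: int      # 30-day peak disk IOPS per user

def size_cluster(profiles, headroom=1.25):
    """Sum peak demand across all profiles, then add headroom
    to absorb boot storms, logon storms and growth."""
    total = {"cpu_mhz": 0, "ram_mb": 0, "iops": 0}
    for p in profiles:
        total["cpu_mhz"] += p.count * p.peak_cpu_mhz
        total["ram_mb"] += p.count * p.peak_ram_mb
        total["iops"] += p.count * p.peak_iops
    return {k: int(v * headroom) for k, v in total.items()}

profiles = [
    UserProfile("task worker", 200, 300, 1024, 10),
    UserProfile("power user", 50, 1200, 4096, 30),
    UserProfile("developer", 20, 2000, 8192, 50),
]

demand = size_cluster(profiles)
print(demand)  # aggregate CPU (MHz), RAM (MB) and IOPS, incl. 25% headroom
```

In practice the per-profile peaks would be exported from your assessment tool rather than typed in by hand, and the headroom factor tuned to the boot- and logon-storm behaviour you observe.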

So, in conclusion: by taking the above design guidelines into careful consideration when designing your virtual EUC (end-user compute) cluster, I believe we have come full circle, back to a centralised environment where every corporate resource and asset can be managed, protected and distributed centrally, where it can run efficiently and optimally, and where it is available to any end-user device imaginable, leveraging the latest advances in VMware's virtualisation technologies.

So, when you glance over at Wilma tomorrow, take out your stone tablet and chisel the following note to her: "Goodbye, the attraction was always just physical" - and go virtual!

For additional information or any further assistance, contact Sarel Naudé at snaude@ubuntusa.co.za.


Editorial contacts

Sarel Naude
Ubuntu Technologies
snaude@ubuntusa.co.za