Endpoint Virtualization Community Blog

On the future of VDI

Created: 18 Aug 2009 • Updated: 29 Jul 2010

So first of all, I'm as biased as can be when it comes to the future of VDI, and second of all, this is my view on the world of VDI and virtualization in general. You may or may not agree with me.

For the past year I've been working with Erik Westhovens of DinamiQs to architect and develop the fastest, most scalable and best-performing Virtual Desktop platform in existence today.

That's a bold statement. Intentionally, of course. Like I said, I'm biased. However, this bias is not just me rooting for the home team. I have in fact researched many technologies for one of my customers (the training center of a globally operating business-software vendor), who was struggling with the challenge of deploying a 5,000-user environment, with future growth into the corporate environment of another 40,000 desktops.

Challenges and Demands

The challenges of the traditional VDI solutions are many:

  • deploying desktop images takes a lot of time
  • automating deployment is not trivial (think Sysprep)
  • managing images is like managing fat clients; nothing changes (DLL hell)
  • traditional VDI TCO is not compelling versus regular fat clients
  • storage for traditional VDI becomes huge, unmanageable and expen$ive (5,000 times 30 GB; you can do the math)
  • density of virtual machines on physical servers is constrained by memory, network and IO
  • updates and upgrades of applications cause downtime (i.e. boot storms, virus updates, Patch Tuesday), especially in large-scale environments
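Since that storage bullet invites you to do the math, here's a quick back-of-envelope sketch using the figures above (5,000 users, a full ~30 GB desktop image each; decimal units assumed, and real-world thin provisioning or dedupe would change the picture):

```python
# Back-of-envelope storage math for traditional VDI, using the figures
# quoted above: 5,000 users, each owning a full ~30 GB desktop image.
# Illustrative only; thin provisioning and dedupe vary in practice.

def traditional_vdi_storage_tb(users: int, image_gb: float) -> float:
    """Total raw storage in TB when every user gets a full image copy."""
    return users * image_gb / 1000  # decimal: 1 TB = 1,000 GB

print(f"{traditional_vdi_storage_tb(5_000, 30):.0f} TB")  # 150 TB
```

Roughly 150 TB of raw desktop images, before you even think about profiles, snapshots or growth.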

In the industry today you will find many solutions that aim to improve on some of the items mentioned above, but unfortunately none addresses all of them. In many cases you trade off one advantage against a disadvantage. In fact, most solutions revolve around three SBC concepts: sharing, streaming and isolation, and most will provide you with a hybrid that is neatly encapsulated in an 'intuitive' management environment. So far so good, but for any solution to actually scale, you need to ask yourself a couple of questions and see whether, and to what extent, it meets a particular set of demands.

Actually, these were the demands my customer had for his environment (and remember, this was the end of 2007):

  • deploying desktops in (semi-)real time
  • one user per desktop, with full admin rights for that desktop (for training on software installs)
  • persistent, roaming profiles without the usual problems
  • simple automation and pre-provisioning of desktops
  • managing both the image and the applications inside the image
  • improving installation and de-installation of applications
  • reducing the storage requirements of desktops versus traditional VDI
  • reducing OS importance and focusing on applications (i.e. making any Windows an application launch platform)
  • increasing density of virtual desktops on servers through IO and memory optimizations
  • reducing management effort to the extreme (1 admin for 10,000 desktops, or better)
  • making it as 'green' as possible
  • making it as economically attractive as possible

As you may have noticed, my customer was not at all demanding.

So we looked at Microsoft products (App-V/Softricity: slightly suboptimal packaging, lots of work and too much streaming; and of course Terminal Server: shared, One BSOD To Drop Them All). We looked at Symantec products (the standalone desktop product SVS, or the AppStream thing which, well... streamed). We looked at Citrix products (though we weren't sure what message Citrix was sending out). We looked at VMware products (nice packaging with Thinstall/ThinApp, but we weren't sure about the streaming and slightly worried about linked clones). And we investigated a whole host of known and less-known startups. We found... nothing.

You see, the tradeoffs become visible when you ask yourself a simple question each time you look at a solution. That question is: "What about 10,000? Simultaneous?" All of a sudden you start finding bottlenecks when you apply those 10,000 users to the demands my customer gave me. If you share in his environment, you'll have many servers to manage. If you stream applications to (virtual) desktops, you run into severe IO problems, both on the network and on the disk, and also in the number of provisioning servers required for specific, popular applications. Isolation, of course, is completely unmanageable for 10,000 individual clients.
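To make that streaming bottleneck concrete, here's a rough bandwidth sketch. The input figures are my hypothetical assumptions, not numbers from the customer case: say each streamed application pulls ~200 MB before it is usable, and everyone logs in within a 15-minute morning window:

```python
# Aggregate throughput needed to stream an application payload to every
# user inside a login window. All input figures below are hypothetical.

def required_gbit_per_s(users: int, payload_mb: float, window_s: float) -> float:
    """Network throughput (Gbit/s) needed to push payload_mb to each of
    `users` desktops within window_s seconds."""
    total_bits = users * payload_mb * 8 * 1_000_000  # MB -> bits (decimal)
    return total_bits / window_s / 1_000_000_000     # bits/s -> Gbit/s

# 10,000 users, ~200 MB each, 15-minute login window:
print(f"{required_gbit_per_s(10_000, 200, 15 * 60):.1f} Gbit/s")  # 17.8 Gbit/s
```

Nearly 18 Gbit/s sustained, for a single popular application, in a 2009-era data center. That's the kind of number that makes per-desktop streaming hard to scale.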

A small piece of history

One day in August of 2008, a colleague of mine at the time, Roger Kellerman, invited me to join him for a presentation of DVS (DinamiQs Virtualization Solution) at a company called DeltaISIS. When I saw SVS on the demo screen it held no surprise for me: I had already looked at it, and since it virtualized for only a single desktop, there was no way I could use it. However, the DVS demo showed me that it was possible to manage multiple SVS stations in a network. In my mind I clicked that together with a picture of a centralized virtual infrastructure, and I knew that this was the way to go.

A few months later I introduced Erik Westhovens to my customer, Jean Paul Beerens, let him demo his solution, and then we talked about the way I had envisioned the concept that we now call VirtualStorm. At the end of an interesting session we had a detailed plan for setting up a scalable, extremely virtualized, centrally managed Virtual Desktop Infrastructure that would meet or exceed my customer's demands. The environment we designed would not share at the OS level, would not stream applications, would not be isolated, and would be simple to manage from a single console. Best of all, it would have only one Windows image to maintain and one high-performance application repository for provisioning. And did I mention the end-user experience yet?
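The storage win of that single-image design can be sketched with a simple comparison. Note that this is a generic single-image-plus-per-user-delta model, a common pattern in single-image VDI; the 0.5 GB per-user delta is my hypothetical figure, not a VirtualStorm specification:

```python
# Compare full per-user image clones against one shared base image plus
# a small per-user writable delta. The 0.5 GB delta is hypothetical.

def full_clone_storage_gb(users: int, image_gb: float) -> float:
    """Every user owns a complete copy of the image."""
    return users * image_gb

def shared_image_storage_gb(users: int, base_gb: float, delta_gb: float) -> float:
    """One shared base image; each user only stores a writable delta."""
    return base_gb + users * delta_gb

users = 5_000
print(full_clone_storage_gb(users, 30))         # 150000 GB
print(shared_image_storage_gb(users, 30, 0.5))  # 2530.0 GB
```

Under these assumptions the per-user footprint drops by roughly two orders of magnitude, which is what makes the 5,000-desktop storage problem tractable.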

The implications were staggering... Because of that, we called the concept VirtualStorm.
The first VirtualStorm image was live a few days later, and since then every demand on my customer's list has been met.

The current state of the Virtual World

So what is VirtualStorm, really? It's a best-of-breed solution, leveraged by a few pieces of code that glue all these components together into the concept we call VirtualStorm. The components are VMware ESX, Microsoft Windows (yes, any Windows since 2000), Symantec's SVS, and the agents and applications that together form the DVS4VDI product suite. In effect, VirtualStorm is a highly scalable, well-behaved, low-IO, high-density environment that virtualizes desktops and applications to the extreme. And with everything we do, we ask ourselves: "Will this scale? What if I want to double the workload within a short period of time? How do we keep it manageable? How can we distribute workloads properly?"

We're currently working with many large customers to set up demos and proofs of concept, and considering the pace at which our reseller and customer base is growing, there are very, very, very busy times ahead for us. Fortunately, the group we're part of has sufficient resources to support us in these efforts.

Now, about that future...

This article is about the future of VDI, so what will that future look like? Again, this is my opinion, you can agree or disagree with any or all parts:

  • Your desktop will become a screen. No more than that: a window on your actual desk into the data center. The best example of a simple screen device is Sun Microsystems' Sun Ray. There will be others, no doubt.
  • Your Operating System will become a launch platform for your applications. After all, what do you use an OS for, really?
  • Density of desktops on physical server hardware will increase drastically. In general we're not doing a lot with all the power at our disposal, so we can increase density as long as Moore's Law is in effect.
  • Reduction of centralized storage for desktop OS and Applications. Why provision many times when you only have to once?
  • Real time deployment of Operating System and applications. It should just be there.
  • Strongly reduced data transport: only information moves, resulting in instant availability of profiles and applications. Why use IO if you don't have to?
  • For management: one desktop to maintain, one central unlimited repository, scaling to millions, but managed by few.

Sounds too good to be true? Maybe a bit too far into the future?

Well, I have to disappoint you, because this is NOT the future.

This is what is available right now.

Don't even get me started on the real future of VDI as I see it in my mind...
But that's another article, so I'll leave it at this. For now.

Mike Jansen