
2010. The Year VDI Really Starts

Created: 26 May 2010 • Updated: 29 Jul 2010 | 6 comments
By erikw

In 2010 many companies worldwide are evaluating and eventually deploying VDI. VDI will grow tremendously over the next few years, and there are many business cases for choosing it.

But there are also many problems to solve before you move to VDI.

Not all of these problems are VDI related. If you or your company is evaluating VDI, you should look at these 10 questions before you even start your implementation.

1) What Operating platform to use?

First of all you have to decide what platform you want to use. Most companies do not want to manage multiple platforms, so the choice is often easy. But what do we mean by platform?

There are actually two platforms to choose in this step. The first is the operating system. Currently over 99% of workforce desktops run Microsoft Windows, so you will probably choose Windows. That's an easy one. But there are multiple versions available. The real choice, if you go for Windows, is between XP, Vista and Windows 7. Most companies do not want Vista for their critical desktops, so that part is easily resolved. Now we have XP and Windows 7 to choose from.

Windows XP is still the most used and best-known operating system, but Windows 7 is quickly building a bigger install base. Companies that understand the value of VDI often see it as the right moment to upgrade to the newest platform, so the choice for Windows 7 is easily made.

Financial institutions especially are having huge problems moving to Windows 7. The reason is that their applications, especially legacy applications, are designed for Internet Explorer 6. Windows 7 ships with Internet Explorer 8 by default, and it is not possible to install older versions. DinamiQs developed a great way to run IE6 on Windows 7.

2) What Hypervisor to use?

At this moment there are three hypervisors that can be used to run your virtual desktops on. All of them have benefits and disadvantages.

VMware ESX or ESXi

  • VMware ESX and/or ESXi are the most commonly used hypervisors. Most companies already use VMware for their virtual servers, and it is easy to choose VMware to run your virtual desktops on because those companies already have a lot of knowledge of the platform; no new training for administrators is necessary. VMware has also created several very compelling additions that arguably make it the best hypervisor: think of technologies such as vMotion, memory overcommit and disaster recovery solutions. To manage VMware ESX and/or ESXi you also need VirtualCenter.

Microsoft Hyper V

  • Microsoft Hyper-V is a newer solution, and one of its benefits is that it runs on Microsoft Windows underneath. Where VMware gives administrators a text-based console where they have to work with command lines, Microsoft Hyper-V has a graphical interface that lets administrators click through configuration. Hyper-V has no memory overcommit: if your server has 64 GB of memory, you can run at most 64 virtual machines with 1 GB of memory each. If you are deploying Windows 7, every image needs about 2 GB, and then you can run 32 virtual machines on each server.

Citrix XenServer or XenSource open source

  • A couple of years ago Citrix bought XenSource, and XenServer allows administrators to also use tools from the XenSource open source community. XenServer has no memory overcommit, but Citrix will very shortly ship a technology like VMware's vMotion that allows administrators to automatically migrate machines to other servers for maintenance.

3) Density

Density means how many machines can be run on each server. When you start deploying VDI, you have to weigh density per server against performance on each desktop: the higher the density, the slower the performance.

Now we should do some math. When we buy servers for virtual desktops, there is a good reason to consider blade servers. Blade servers slot into a blade rack; a traditional blade rack contains 8 or 16 blade servers.

So suppose we buy a blade rack with 16 blade servers, each containing 64 GB of memory and dual-socket quad-core Nehalem processors. This configuration costs around $5,000 for the backplane and $6,000 for each server, so the total hardware cost would be around $100,000. In the previous section we saw three hypervisors; in the example below we look at four solutions and the densities they achieve with good performance. When you deploy more images, performance of the virtual desktops goes down.

VMware ESX and/or ESXi

On the hardware in this example you are able to run 80 to 100 virtual machines with performance qualified as good: end users get good performance, and in the usual situation applications respond as quickly as, or quicker than, on their traditional fat client. With our 16 blade servers this means you are able to run roughly 1,300 to 1,600 virtual machines, so the hardware price per desktop would be between $62 and $77. Running more virtual machines is possible, because VMware allows you to overcommit memory: if your end users run lightweight applications, you can easily add more machines and still have good performance.

Microsoft Hyper-V

With Microsoft Hyper-V you will be able to run between 55 and 60 virtual machines on the same hardware. Keep in mind that no memory overcommit is possible, so deploying more machines means less memory for each machine. In this example you can deploy between 880 and 960 virtual machines, where every virtual machine would cost between $104 and $113 on the same hardware.

Citrix Xenserver or Xensource

The density on XenServer or XenSource is almost identical to that of Hyper-V, so the total number of virtual machines, and therefore the price, is almost the same. In this example you can deploy between 880 and 960 virtual machines, where every virtual machine would cost between $104 and $113 on the same hardware.

DinamiQs VirtualStorm

DinamiQs VirtualStorm runs on VMware ESX and ESXi and contains two patent-pending technologies that enable much higher densities. First, VirtualStorm includes a disk I/O driver that lets images touch files and data through the disk driver instead of using CPU-intensive network traffic. Second, a program called MES (Memory Enhancement Stack) inside the image allows administrators to give images as little as 384 MB of memory for Windows XP and 512 MB for Windows 7. Still, every Windows XP image using the MES effectively has at least 1,256 MB of memory available (2,256 MB in Windows 7) without taking valuable resources on the hardware. The MES is a defragmented pagefile that administrators can configure completely, running kernel processes and drivers in the server's memory and applications in paging memory. VirtualStorm allows administrators to deploy and configure over 270 Windows XP desktops or over 230 Windows 7 desktops on the same hardware configuration. On the 16 blades in this calculation, VirtualStorm runs between 3,680 and 4,320 virtual machines, so the hardware cost per desktop would be between $23 and $27. That is almost four times as many virtual desktops as in a straight VMware environment, and almost 40 dollars per desktop cheaper.
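To make the math concrete, the cost arithmetic from this section fits in a few lines of code. This is only a sketch using the round numbers quoted above (the roughly $100,000 rack price and the per-blade density ranges); real pricing and densities will of course vary:

```python
# Hardware cost per virtual desktop for the 16-blade rack example above.
blades = 16
hw_cost = 100_000  # ~$5,000 backplane + 16 x $6,000 blades, rounded

# Density per blade (VMs with "good" performance), as quoted in the text.
densities = {
    "VMware ESX/ESXi":        (80, 100),
    "Microsoft Hyper-V":      (55, 60),
    "Citrix XenServer":       (55, 60),
    "VirtualStorm (Win7/XP)": (230, 270),
}

for name, (low, high) in densities.items():
    vms_low, vms_high = low * blades, high * blades
    # The cheapest per-desktop price comes at the highest density.
    print(f"{name}: {vms_low}-{vms_high} VMs, "
          f"${hw_cost / vms_high:.0f}-${hw_cost / vms_low:.0f} per desktop")
```

Running this reproduces the per-desktop ranges discussed above, give or take a dollar of rounding.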

4) Applications

Many people have already blogged and written about VDI, and yes, it is all about the applications. To successfully deploy VDI, you should take a hard look at your applications and your application deployment mechanism. One of the benefits of VDI is to use one master image as the master for all other images. But if you install all applications in that image, you need a license for every user. For Office that is no problem, as your users probably all use Office. But what to do with an application like Adobe Creative Suite 4? Only a few users need it, and it is very expensive. So installing all applications in the master is a costly approach when you have applications like that.

Most administrators therefore consider software virtualization together with locally installed applications. But then we get another question: which application virtualization technology do we choose?

  • Microsoft App-V
  • VMware Thinapp
  • Symantec Workspace Virtualization

Microsoft App-V has a packaging success ratio between 50 and 60%, although there are companies that strive for 100%. The biggest advantage of App-V is that it is built by Microsoft, which also created Windows, so applications should work well with both technologies coming from the same vendor. But App-V does not allow you to virtualize drivers and services, and you also have to take care with applications that have to work together, like Office with Visio or with Microsoft Project. When you use Office and Visio, the best method is to install them both in the same package so that they can work together completely; virtualizing them into two packages requires you to create pointers so that both programs can cooperate. In the case of Adobe Creative Suite you will not be able to virtualize it with App-V at all: App-V does not allow you to package applications exceeding a specific size, and Adobe Creative Suite 4 is almost 19 GB. So when you choose App-V you also need an application distribution mechanism to install such applications locally.

VMware ThinApp is a virtualization technology that packages all files and registry keys for an application into a single executable, which the end user can run almost as if the application were installed locally. The agent is inside the executable, so there is no need to install an agent in the base image. The success ratio of packaging is somewhat higher than with App-V, between 60 and 75%, but again very dependent on drivers and services: if there is a driver or service in the software, you cannot virtualize it. Although there are several services and drivers in Adobe Creative Suite 4, you will be able to create packages with only one of the functionalities in Creative Suite 4, so one package contains Macromedia Flash and another contains Adobe Photoshop. In that scenario you need to consider how the packages work together, which demands a lot of customization inside them before you deploy. Using ThinApp also requires you to watch memory inside your virtual desktops, as most of the application runs in memory instead of on disk.

Symantec Workspace Virtualization has the highest success rate: Symantec speaks of 90%, but packagers are getting over 98%. It is the only virtualization technology that enables you to package drivers and/or services, and the only one that makes applications work together just as if they were locally installed, without changing them or adding scripting. With Symantec Workspace Virtualization you are able to run completely empty images, with only Windows in the base, and add every application afterwards.

5) Full clones? Linked clones? Embedded clones?

When deploying your images there are three methods:

  • Full Clone
  • Linked Clone
  • Embedded Clone

Full clones are complete copies of every image. If your base Windows desktop is 10 GB, then every full clone will be 10 GB; running 1,000 images of 10 GB demands at least 10 TB of storage. Next to that you also need storage for the swap file, which is as big as the virtual memory: if you give every image 1 GB of memory, every image also needs 1 GB on disk. Keep in mind as well that if your images are non-persistent, changes to the base are written to separate storage files. In a 1,000-image calculation you need at least 15 GB for every full clone. When you use App-V or VMware ThinApp as the virtualization technology and your images are non-persistent, you need additional space at least the size of the applications you stream into every image.

It is best to buy storage at twice the size of the base image for every clone.

Linked clones contain only part of the base. If your base is 10 GB, every linked clone starts at around 400 MB. The size grows as you touch applications, DLLs or other files in the master image, because every file you use is copied to the linked clone. This makes your storage demands grow, and we have multiple examples of companies that ran out of space very quickly and had to buy more storage. Keep in mind that you should size storage at 1.5 times the size of the base image.

Embedded clones are a patent-pending technology of DinamiQs VirtualStorm. With embedded clones, every clone is only 2.6 KB in size. When the image is started, it creates a few log files and a SWVP file the size of the virtual memory. Next to that there is a redo file on storage containing the user settings and the memory enhancement stack. The total size of an embedded clone will be between 1.2 and 1.6 GB per Windows XP image and between 2.4 and 2.8 GB per Windows 7 image, depending on the size of the memory enhancement stack you configure.

In general you should size storage at between 0.2 and 0.4 times the size of the base image.
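Putting the three sizing rules side by side makes the difference obvious. The multipliers below are the per-clone rules of thumb from this section (2x the base for full clones, 1.5x for linked clones, 0.2-0.4x for embedded clones); treat them as illustrations, not guarantees:

```python
# Storage sizing for 1,000 clones of a 10 GB base image,
# using the per-clone rules of thumb from the section above.
base_gb = 10
images = 1000

rules = {
    "full clones":     (2.0, 2.0),   # 2x base per clone
    "linked clones":   (1.5, 1.5),   # 1.5x base per clone
    "embedded clones": (0.2, 0.4),   # 0.2-0.4x base per clone
}

for method, (low, high) in rules.items():
    tb_low = images * low * base_gb / 1000    # GB -> TB
    tb_high = images * high * base_gb / 1000
    if low == high:
        print(f"{method}: ~{tb_low:.0f} TB total")
    else:
        print(f"{method}: ~{tb_low:.0f}-{tb_high:.0f} TB total")
```

So for the same 1,000 desktops: about 20 TB with full clones, 15 TB with linked clones, and only 2 to 4 TB with embedded clones.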

6) Persistent or non-persistent?

When deploying virtual images you have to decide whether to make them persistent or non-persistent. A persistent image is like a physical desktop: you deploy the image, assign it to the end user, and every time the end user logs on, he or she gets the same image. When your image is pretty big, say 10 to 20 GB, and you use full clones, every desktop takes at least 10 to 15 minutes to roll out, so deploying hundreds of images takes a lot of time. You also have to keep in mind that you need traditional technology to maintain those images: every image needs to be updated, so you need a software deployment solution like Altiris Deployment Solution in place. The persistent full clone is also known as a virtual fat client, so treat it like a fat client: if the image gets corrupted by a patch or software installation, settings and data in the image get lost. Frankly, if you choose persistent full clones, I believe you might as well just buy fat clients and leave your environment as it is.

Non-persistent images do not stay around. In a VMware or Xen desktop environment the image is restarted, and during the restart everything is lost: when the end user logs off, the image loses all changes and is back where it began.

In a DinamiQs embedded environment, when the end user logs off, all user settings are saved to the network and the image is simply deleted. The DinamiQs resource pool allocator keeps the number of freshly available images at the level you configure, for the time of day you configure it. So if you want 10 images available, the system keeps 10 images available during the whole day. But from 7 AM to 9 AM, when the majority of users start working, you may want 100 images available: the system deploys 10 images continuously, at 7 AM deploys 90 more to reach 100, keeps it there until 9 AM, and then goes back to 10. Every image is completely deleted when its end user logs off.
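The pool behavior described above amounts to a simple schedule lookup. This is only an illustrative sketch of the idea using the numbers from my example; the function names and the two-level schedule are my own, not DinamiQs' actual API:

```python
# Sketch of a time-based spare-image pool target: keep 10 spare images
# most of the day, 100 during the 7-9 AM login rush (example values).
from datetime import time

SCHEDULE = [
    (time(7, 0), time(9, 0), 100),  # (start, end, target)
]
DEFAULT_TARGET = 10

def pool_target(now: time) -> int:
    """Return how many spare images should be kept ready at this time."""
    for start, end, target in SCHEDULE:
        if start <= now < end:
            return target
    return DEFAULT_TARGET

def images_to_deploy(now: time, available: int) -> int:
    """Deploy enough new images to top the pool back up to its target."""
    return max(0, pool_target(now) - available)

print(images_to_deploy(time(7, 30), available=10))   # rush hour: deploy 90
print(images_to_deploy(time(14, 0), available=10))   # normal day: deploy 0
```

A real allocator would run this check continuously and delete surplus images again after the rush hour.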

7) Storage

As you have seen above, storage is very important. For smaller deployments you can run on local storage on each server, but as soon as you have more desktops, local storage will no longer suffice.

In general you should buy a NAS that lets you run all your images from shared storage. This NAS can contain SCSI, SATA, SAS or even solid state drives. Make sure your storage is connected with at least 1 Gbit network ports, and remember: the faster the disks, the faster the images. I could fill multiple blogs about storage alone, so I will not go deeper into the matter here.


8) IOPS

A lot of people are talking about IOPS. In a VDI environment the images are actually stored on disks, and every disk can handle a maximum number of input/output operations per second. There are three kinds of IOPS to keep in mind: read IOs, write IOs and random IOs.

A complete document about IOs and IOPS can be found at this link: It's all about IOPS
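As a back-of-the-envelope illustration of why IOPS matter for VDI sizing, here is a hypothetical calculation. The per-spindle and per-VM IOPS figures are common rules of thumb that I am assuming for the example, not measurements from this article; always measure your own workload:

```python
# Hypothetical IOPS sizing: how many spindles does a VDI pool need?
# Rule-of-thumb figures (assumptions); measure before sizing for real.
DISK_IOPS = {        # rough steady-state IOPS per spindle
    "SATA 7.2k": 80,
    "SAS 15k": 180,
}
iops_per_vm = 10     # a light steady-state desktop workload (assumption)
vms = 1000

total_iops = vms * iops_per_vm
for disk, iops in DISK_IOPS.items():
    disks_needed = -(-total_iops // iops)   # ceiling division
    print(f"{disk}: about {disks_needed} disks for {vms} VMs")
```

The point is the order of magnitude: a thousand desktops can need over a hundred slow spindles just for steady-state IO, before you even consider boot storms.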

9) Broker and End User Devices

Of course you need some kind of device or software to connect to a VDI image. This connection consists of two pieces: the device and the protocol.

The device is what the user actually connects with. This can be a Sun Ray device (which I will post a new blog about later this week), but a Wyse thin client, a traditional fat client, or an internet cafe PC with Internet Explorer also works, and even an iPad works great.

All those devices use a specific protocol to connect to the backbone where the images are running.

Protocols available now are:

  • RDP (Remote Desktop Protocol). The most commonly used protocol; fast, but a bad experience with multimedia.
  • Sun AIP. Used by Sun Ray ultra-thin clients, which connect to a Sun Ray server that in turn connects to the VDI desktop. With the current multimedia redirection technology this is a very good choice, but it will not show full-HD movies.
  • PCoIP. VMware's PCoIP protocol has very good multimedia support, but it also uses a lot of bandwidth. When used over the internet, the image sometimes seems very slow or even loses connectivity.
  • Citrix HDX. I have not tested this protocol enough to give my honest opinion on it.

10) User personality

Last but not least, this is probably one of the most important points: what about the user personality?

When end users start using a VDI desktop they make specific configurations, like their Outlook settings and signature, but also the background and color scheme inside Windows.

All this together is called the profile. There are three profile methods.

  • Mandatory
  • Roaming
  • Local

With a mandatory profile, every end user always gets a brand-new, clean profile, but the settings you need, like Outlook settings, are not kept. You need scripting to connect to Exchange servers and so on; whatever the end user tries to keep, he loses everything when he logs off.

With a roaming profile, the end user's settings are stored on a network server by adding the path to Active Directory. When the end user logs on, the settings are downloaded into the image. When end users place Excel files or documents on the desktop or in their My Documents folder, the profile grows; and the bigger the profile, the more storage it takes and the longer the logon times get.

Local profiles are generally used on laptops and all data is on the image or desktop and stays there. The local profile is the worst choice in a VDI environment.
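To illustrate how profile size drives the roaming-profile logon times mentioned above, here is a hypothetical calculation. The link speed, concurrency, and profile sizes are illustrative assumptions, not measurements:

```python
# Hypothetical: time to download a roaming profile at logon over a
# 1 Gbit/s link shared by many simultaneous logons (assumed numbers).
link_mbit = 1000          # 1 Gbit/s network
concurrent_logons = 50    # users logging on at the same time
share_mbit = link_mbit / concurrent_logons   # ~20 Mbit/s each

for profile_mb in (50, 500, 2000):
    seconds = profile_mb * 8 / share_mbit    # MB -> Mbit, then divide
    print(f"{profile_mb} MB profile: ~{seconds:.0f} s to download")
```

Even under these generous assumptions, a profile bloated to 2 GB by documents on the desktop turns a 20-second logon into more than 13 minutes, which is exactly why folder redirection or a pointer-based approach matters.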

There are several companies offering profile solutions for your environment. The two best known and most used are RTO Soft and the DinamiQs Unified Profile.

RTO was OEM'ed by Symantec in the past and is now part of VMware. Integration of the RTO software will probably come in View 4.5 Service Pack 1 or 2, as it is not integrated in View 4.5.

The DinamiQs Unified Profile manager, or DUP, uses a hybrid model. It keeps a roaming profile for every end user on the network, but instead of downloading it into the image, the DVS agent creates a pointer to the profile directory, so all settings stay on the server. The best example of what this adds for the end user: open a Word document and start typing. If, after typing hundreds of words, the desktop blue-screens without the document having been saved, the work would normally be lost. But when the end user connects to a brand-new desktop and starts Word, an autosave popup tells him there is a previously edited, unsaved document, and he can save it without losing everything. He or she only loses the last 4 to 5 minutes, depending on the settings the administrator configured.

When the end user connects with a portable, the DUP sees that it is a portable and downloads the image locally.


Although I have not mentioned every issue you should consider before starting with VDI, VDI is a new and great way to deliver desktops. VDI adds a lot of security to your network, as all data and desktop images, including all settings and software, are in the datacenter and stay in the datacenter. If an end user's portable device is stolen, the thief only has the hardware, not the software and data; there is simply nothing on that device.

Big, well-known companies are providing solutions, but getting everything from them costs a lot of money and demands a lot of training.

New companies like DinamiQs are gaining a lot of market share right now by using best-of-breed technology:

ESX as the hypervisor. DVS manager manages the images, deployment and servers, and also handles large deployments: where VMware vCenter cannot exceed 3,000 to 4,000 VMs in a single managed environment, DinamiQs goes over 100,000 images in one environment.

Microsoft as the desktop operating system. DVS manager manages the desktops, adds and removes software, and maintains the user personality, independent of Windows XP or Windows 7.

Symantec Workspace Virtualization as the virtualization solution. DVS manager manages the virtual applications, maintains the read/write part of the layers, and adds or removes software based on Active Directory membership.

DinamiQs brings down the price of VDI tremendously, and that makes the TCO and ROI work.

If you want to know more about DinamiQs and/or VirtualStorm, please visit the website.

Comments (6)

Scot Curry:

Rock on VirtualStorm.  Thanks for all of this info.

MrLeV:

You can also, instead of cloning your desktop VMs, stream the disk to them.

I used to do that with HP Image Manager.
Streaming a 10 GB XP image to 1,000 VMs would then use only... 10 GB of image space.
You need a little temporary space (really temporary), say a maximum of 2 GB per VM, plus the virtual memory size (from the host perspective, not the guest perspective). 3 to 4 GB of disk space per VM is a good approximation.
Note that my VMs are configured not to have any "hard disk drive". They PXE boot to HP Image Manager server.
It's fast and efficient.
And when I need to update the image on 1000 VMs, I just do it once, on the reference image, and it gets populated to all the VMs as soon as my updated virtual disk image is shared and the guest VMs are rebooted.

As for user data, I use folder redirection and roaming profiles.

Very efficient indeed, I would not use VDI without an OS/Disk streaming solution.

erikw:


Thanks for your insight.
The method you are using is definitely a good way to go, but I believe it also takes a lot of memory resources. My questions to you would be:

1) How do you handle bootstorms when many machines start at once?

2) How does it scale? That means how many machines are you able to deploy on a server with what processors and memory?

3) Do you use application virtualization and how do you manage them?

Again, the method you use is absolutely not a wrong one, but I think it will use lots of resources on the server.
I hope to get your answer so that others can learn.

Regards, Erik. DinamiQs is the home of VirtualStorm.


MrLeV:

Erikw -

Good questions indeed.

You can actually optimize the total amount of memory used by your VMs: enough/more memory for the "virtual disk server" (which I run in a VM or even at the hypervisor level), restricted memory size for the desktop VMs.

Actually, you can start several machines at once, whereas this is not recommended with traditional VDI, where you are supposed to start one machine after the other (or only a few at a time).
The reason is that with traditional VDI, you have a lot of concurrent (host) disk accesses when booting several VMs simultaneously. With OS streaming, there is only one disk drive (the virtual one) in use, and it can be in a VM on the same host as the client VMs or on another (server-class?) host. And if your "virtual disk server VM" is well configured (enough RAM), most of the disk sectors needed for a VM to boot will be in the virtual disk server's disk cache as soon as the first VM has booted completely. Then there is practically no HDD access at all.
Furthermore, virtualized network I/O within the same host/cluster is more efficient than virtualized disk I/O.

You get exactly the same number of VMs as in the non-streaming scenario, or even a few more, because of the improved "disk" performance when using OS streaming in a VDI environment. The extra VM you need to run your virtual disk server is compensated by the fact that you can run more "diskless" client VMs on the same host.
For the rest, CPU and memory are used in much the same fashion as with traditional VDI.

Application virtualization:
I handle it exactly the same way as with non-streamed VDI.
But here I'd like better integration of OS streaming and application streaming. Wyse Streaming Manager claims to do that, but it seems to be a mix of an old version of Ardence and Endeavors Application Streaming, and the claimed integration is not really different from using HP Image Manager with AppStream, for instance (which I do).
What I'd like is to be able to manage the "DesktopVM-StreamedDisk-User-Applications" sets from a single admin console. Things like:
User A (or anyone belonging to a certain OU in AD) gets a VM of type X that boots off virtual disk D and a set of streamed applications containing App1, App2, etc.
I made my own simple set of tools, but they are script-based and not very user friendly...

I hope you found these answers useful.
And you can try it yourself, since HP gives away a free evaluation version of Image Manager for 90 days and 20 clients.

The product is quite raw and sometimes not very user friendly, but very effective when you have set it up.
I saw that some decent community support is provided in one of the HP forums.

Khalid H Mashayekh:

VMware View 4.0 is the best, yes.

LikesIT:

Regarding multimedia performance in VDI, you might want to take a look at Ericom Blaze, a software product that accelerates AND compresses standard Microsoft RDP, so it speeds RDP while conserving bandwidth. Blaze accelerates RDP performance by as much as 10-25 times, and helps deliver higher frame rates and reduce screen freezes and choppiness, while significantly reducing network bandwidth consumption especially over low-bandwidth/high latency connections.

Blaze can also work with VMware View.  It's true that PCoIP is certainly a great display protocol.  However in some scenarios of high latency/low bandwidth remote connections (like over certain WANs), you may need to complement the VMware View deployment with Ericom Blaze.  You can use VMware View with PCoIP for your LAN and fast WAN users, and at the same time use VMware View with Blaze over RDP for your slow WAN users.  This combined solution can provide enhanced performance in both types of environments, letting you get the best out of VMware View for your users.

Read more about Blaze and download a free evaluation at:

