Endpoint Management Community Blog

Computer Build Statistics

Created: 31 May 2012 • 2 comments
ianatkin

Metrics, metrics, and more metrics.

I've been analysing the results from the last few months of deployments (encompassing nearly 300 machines across several departments with varying hardware). Binning the deployment times gives the graph below.

The blue diamonds are real results, and the two Gaussian curves are trend fits. The total deployment time is the sum of the time to:

  • Deploy the image with rdeploy
  • Run Windows mini-setup
  • Install latest patches (Flash, Adobe Reader, Java etc)
  • Install department-specific software (i.e. business-critical software not present in the main image)
  • Install the Altiris Agent and get its GUID
  • Domain Join

These statistics are generated from our deployment reports, a custom addition to our DS environment. We tag our job sets in ImageInvoker with a dummy job called "(START)". The last job in our deployment jobsets generates a deployment report detailing (among other things) the time and duration of each job executed on the machine since the start marker. These reports are emailed to a log server for archiving. A VBScript can then be run on demand to extract the total deployment times from the archive for histogramming in Excel.
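The extract-and-bin step can be sketched in a few lines. This is an illustrative Python equivalent of the on-demand VBScript described above; the comma-separated report format and job names here are assumptions, as the post doesn't show the real archive layout.

```python
from collections import Counter

# Hypothetical report lines: "job_name,start_time,duration_minutes"
# (the real archive format isn't shown in the post).
def total_deployment_minutes(report_lines):
    """Sum the per-job durations in one deployment report."""
    return sum(float(line.split(",")[2]) for line in report_lines)

def bin_totals(totals, bin_width=5):
    """Bin total deployment times into buckets for histogramming."""
    return Counter((int(t) // bin_width) * bin_width for t in totals)

report = ["rdeploy,08:00,22.5", "mini-setup,08:23,6.0",
          "patches,08:29,5.5", "agent+GUID,08:35,3.0",
          "domain-join,08:38,1.5"]
total = total_deployment_minutes(report)      # 38.5 minutes for this machine
hist = bin_totals([36, 38, 41, 44, 58, 62])   # bucket -> count, ready to chart
```

The bucket width trades resolution against noise; with ~300 machines, 5-minute bins keep each bucket populated.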

The First Peak

The probability peaks show that a machine is most likely to be deployed in 38 minutes. This highest peak corresponds to the majority of department builds, where the software deployed post-rdeploy is restricted to web plug-in updates and small departmental packages. The standard deviation here is quite small at 7 minutes. The primary causes of this spread are:

  1. The time it takes the agent to get its GUID
  2. Vagaries of network performance
  3. Desktop hardware variations
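Taking the trend fit at face value (mean 38 minutes, standard deviation 7), the Gaussian model lets us put numbers on the spread. A minimal sketch, assuming normality holds for the first peak:

```python
import math

# First-peak model from the post: mean 38 min, std dev 7 min.
# The normality assumption comes from the Gaussian trend fit.
def fraction_between(lo, hi, mu=38.0, sigma=7.0):
    """P(lo <= deployment time <= hi) under a normal model."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

within_one_sigma = fraction_between(31, 45)  # ~68% of builds in 31-45 min
under_45 = fraction_between(0, 45)           # ~84% finish within 45 min
```

Figures like "84% of first-peak builds finish inside 45 minutes" are a handy way to turn the histogram into a service-level statement.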

The Second Peak

This peak is lower and fatter, with twice the standard deviation of the first. I think that if we'd been able to harvest more data, we'd see a bit more structure. This peak is mostly due to departmental deployments that include a major package deployment as a post-image job. These packages come down over the network, and we see a wider spread here due to the extra time required to download them on an intermittently busy network.

Another reason for this peak is our image deployment methodology: USB flash drives. Sometimes a helpdesk tech will select, in ImageInvoker, an image which is not present on the USB flash drive. This makes our accelerated deployment option (using a local image) unavailable, and the deployment automatically falls back to a server download, which is lengthy. These results merge into one peak because the time taken to download from the server is about the same as the time taken for a fast flash deployment followed by a large departmental package download and install.

What's Next

Ideally, we want more imaging sessions to fall within the boundary of the first peak.

Whilst we can't accelerate the deployment of the large post-install packages, we can get more imaging sessions directed from the attached USB device. I had a chat with one of the techs, and implementing an option to update a USB stick with a missing image before engaging a server-based imaging session looks like a winner. That way, if they haven't updated their drive before imaging, only their first imaging session will be delayed (in order to update the USB drive); subsequent imaging sessions will be at the top-notch deployment rate.
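The proposed source-selection logic can be sketched as a small state machine. This is a hypothetical illustration only; the function and return labels are made up, and the real decision would live inside ImageInvoker's job logic.

```python
# Illustrative sketch of the proposed imaging-source decision.
# Names are hypothetical; the real logic lives in ImageInvoker jobs.
def choose_source(image, usb_images, update_usb_first=True):
    """Return the deployment path for a requested image."""
    if image in usb_images:
        return "usb"                     # fast local deployment
    if update_usb_first:
        usb_images.add(image)            # one-off delay to copy image down
        return "server-then-usb-update"  # slow this time, fast next time
    return "server"                      # slow every time

usb = {"std-build"}
first = choose_source("finance-build", usb)   # pays the update cost once
second = choose_source("finance-build", usb)  # now served from the stick
```

The payoff is exactly the behaviour described above: only the first session for a missing image is slow, and every subsequent session lands in the first peak.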

Comments (2)

Ian_C.

Good to see somebody analyse what they do and how they do it.

More data would really help to validate what you are seeing as an early trend.

Would also be interesting to know what data you collect. Maybe you could slice it per machine models and / or OS deployed.

PS As our machines are spread throughout the world and we can't send a technician everywhere, we are looking at deploying the WinPE/PXE image onto the local hard drive. No need for USB and less network traffic, but still large image downloads.

Please mark the post that best solves your problem as the answer to this thread.
ianatkin

Gathering this type of data is pretty important in my mind. It's the only way of getting a good overview of how the technology (and your processes) are working for you.

Our stats collect the duration of each job run on the computer, as well as the total deployment time. Although hardware data is also available, we don't analyse by machine model. That's purely because the diversity of hardware models we support means the data captured per model would be light, so drawing conclusions would be unreliable.

And for sure, slicing up by OS deployed is going to be key once our Win7 build goes live. All the build stats shown above are for XP SP3.

Ian Atkin, IT Services, Oxford University, UK

Connect Etiquette: "Mark as Solution" those posts which assist you most in resolving your problem, and give a thumbs up to useful articles and downloads
