
Create image job keeps re-running

Created: 10 Apr 2014 | 9 comments

We are using Altiris 7.5. I had to create a new image due to a new agent and plug-ins. My image is built off of a VM. I made a mistake in my Create Image job: I put a reboot task before the Prepare for Image task, and ran the job before I noticed the mistake. I stopped the job, removed the job from the VM, and removed the Reboot To task. Now I have the job set up correctly:

1. prepare for image capture

2. create image

3. reboot to production

I'm trying to re-run the job in the correct order, but the old Reboot To task still runs no matter what! The job was stopped and removed from the VM; I even removed the VM's workstation name from the console so the VM could re-establish itself with the server. The task still runs. Where and how is the task stuck on the VM? Is there a file in Windows under Altiris Agent\TaskManagement somewhere?


9 Comments

Thomas Baird:

Bounce the task management services on the site server the client connects to (the Object Host service), and on the client (while it's disconnected, so it doesn't reboot again), clear the task management cache folder completely.  I'm assuming the system simply keeps rebooting - that is, it boots into Windows and immediately shuts down and restarts - meaning that PXE is not the culprit; it's something in the agent.

You COULD remove the agent completely and reinstall it if all else fails, but I'd try the two steps above first.
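For reference, the cache-clearing step can be scripted once the agent service is stopped. Below is a minimal Python sketch of the idea; the path is an assumption based on a default Altiris Agent install and the exact cache folder location should be verified against the KB article for your version:

```python
import shutil
from pathlib import Path

# Assumed default location - verify on your own install, and stop the
# Altiris agent service before clearing anything.
CACHE_DIR = Path(r"C:\Program Files\Altiris\Altiris Agent\TaskManagement\cache")

def clear_task_cache(cache_dir: Path) -> int:
    """Delete everything inside cache_dir (but not the folder itself).

    Returns the number of top-level entries removed. Run only while the
    agent service is stopped, so the agent can't re-read a stale task.
    """
    removed = 0
    if not cache_dir.is_dir():
        return removed
    for entry in cache_dir.iterdir():
        if entry.is_dir():
            shutil.rmtree(entry)
        else:
            entry.unlink()
        removed += 1
    return removed
```

The same effect can be had by deleting the folder contents by hand in Explorer; the point is simply that the cache must be emptied while the agent is down.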

Thomas Baird
Enthusiast for making things better!

MJammer:

I recently tried the Advanced button within the job on the console and changed the 'End task' timeout from 90 to 5 minutes. Maybe during those 90 minutes the job kept recycling, and then finally stopped on its own once the 90 minutes were up?

Your assumption is right: Windows boots up > I log in > as soon as the agent talks to the server, the task runs again.

Where is the task management cache folder? Is it Program Files\Altiris\Altiris Agent\TaskManagement?

Thomas Baird:

Yes, you found the folder, and there's a KB article on clearing it.  Bouncing those services on the site server prevents a deleted job from being re-sent.  So there are three places a job can be "run from": the server (you deleted it there), the site server (the task server, where the Object Host service lives, or the NS if it performs that function), and the client (if the cache folder is doing weird things).

GL

PS: Be careful with that short a timeout.  It can sometimes take that long just to GET the task.  And remember, when the timeout hits, the task will show failed in the console even if it is still running - even if it runs to completion.  E.g., you have an MS Office deployment that takes 10 minutes to install, a 15-minute timeout, and it takes 10 minutes for the client to get the task.  Here's what'll happen:

1) You schedule the task.

2) The client gets it at the 10-minute mark.

3) The task is marked failed at the 15-minute mark.

4) The client finishes installing at 20 minutes and reports success.

5) You check the task: it shows failed on one screen and successful in the drill-down.

Yup, that's what happens.
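The timeline above can be captured in a few lines. This is a toy sketch of the status bookkeeping being described - not Altiris code - showing why the summary and the drill-down can disagree:

```python
def console_status(delivered_at, runtime, timeout):
    """Toy model of the timeout behavior described above (not Altiris code).

    delivered_at: minutes until the client receives the task
    runtime:      minutes the task takes to run on the client
    timeout:      the 'End task' setting, in minutes
    """
    finished_at = delivered_at + runtime
    # The console summary flips to failed as soon as the timeout elapses,
    # regardless of whether the client is still working on the task.
    summary = "failed" if finished_at > timeout else "success"
    # The client still reports completion when it finishes, so the
    # drill-down can show success even when the summary says failed.
    detail = "success"
    return summary, detail

# Thomas's Office example: 10 min to get the task, 10 min install, 15 min timeout.
print(console_status(delivered_at=10, runtime=10, timeout=15))
# → ('failed', 'success')
```

With a 90-minute timeout the same job would report success on both screens, which is why a 5-minute setting is asking for trouble.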

Thomas Baird
Enthusiast for making things better!

MJammer:

I understand what you are saying about changing the timeouts. I should set it back to its default of 90 minutes.

I do not handle the server-related responsibilities; the admin will take care of that if necessary.

Not sure if this deserves a separate post, but it seems related. Lately, our application package delivery statuses are coming up the same way my image jobs are. The app delivery status shows '... Not Started', and the Status Detail shows 'Start Pending'. Back on 7.1, these statuses would show the delivery moving along with up-to-the-second detail. On the client PC the delivery goes fine, but the console status for that client says it hasn't started.

Thomas Baird:

Remember that a task's status only changes when the task starts or finishes, not while it's running.  And remember that the Client Task Dataloader service is what reports statuses up, so if it's not working, the console will never show progress.

Get to the task server and troubleshoot it.  Restart that service at least.

Thomas Baird
Enthusiast for making things better!

MJammer:

I'll get the admin to look at the Client Task Dataloader. The tasks I mentioned ran this morning; it was a delivery of MS Visio 2007 and MS Access 2007. Between detection checks, file downloading, and the install, they each take about 10 minutes, give or take. As of right now (10:30 AM EST) they finished over an hour and a half ago. Looking at the console right now, the status is still "Not Started" and "Start Pending".

Hopefully the restart of that service will make it work.

Thomas Baird:

Another suggestion - do NOT use tasks to start software deliveries.  Schedule them and use policies.  Tasks are WAY over-used in SMP v7.  Policies are more reliable, lighter on bandwidth, have better reporting, etc., etc.  You've already built the package and detection rules - don't make a Quick Delivery, make a policy.

Trust me, it'll pay off BIG TIME over the long haul.  And I mean really big.

Thomas Baird
Enthusiast for making things better!

MJammer:

Under Managed Software Delivery I go to my policies; I have a policy for every software release I build. I do not have a right-click option to deliver a policy. For example, with Access 2007 I have a software release and a policy built to deliver the release. I can right-click the software release and choose Quick/Managed Delivery; policies do not give me that right-click option. That is why I delivered it that way today.

Thomas Baird:

Just remember that a "Quick Delivery" relies on the unreliable task engine.  It sounds convenient, but it's better to set a scheduled time stamp, send out an Update Configuration request instead, and simply let the policies handle it.

Even assuming that Task is working at 100% (that is, statuses are reporting up, things don't time out, etc., which right now for you they are not), Task is still less reliable.  It has the advantage of "doing things now" - or rather, of TRYING to do things now - but it has definite weaknesses, which is why it should be used very sparingly.

For instance, the task history stores 200K rows and then wipes the extra rows.  That's right - task history GONE.  No alternate store, nothing.  And if a task expires before completion, it always shows failed, whether it actually succeeded or not.

Contrast this with policies, which don't time out until you tell them to, can run even when the computer is disconnected, will report success hours later if need be (e.g., if they ran while disconnected), keep their data in the logs up to 1M rows or more, and have mature built-in reports that gather success rates...

Using tasks for almost anything outside of DS is futile.  They should be used for emergencies only (which I'll grant this might have been).  For 0-day exploits, for instance, you might not want to wait on policies.  :P

Quick Delivery and tasks are very seductive, just not very efficient.  Stick with the slower policy method, plan just a bit ahead, and it will pay off well over the long haul.

Thomas Baird
Enthusiast for making things better!