
VCS failovers and copies the crontabs

Created: 25 Sep 2013 • Updated: 25 Sep 2013 | 3 comments

Hello,

I am using VCS on Oracle M9000 machines in a three-node cluster. The question is: when I fail over services from one node to another, I want all the crontabs to be copied to the other live node as well, which does not seem to be working in my domain at the moment. Can you please help me with where to define this 'copy cron' procedure, so that every time one environment fails over to another node, it also copies the crontabs from the previous system?

Alternatively, is there a procedure that copies every user's crontab to all cluster nodes daily? I need to know whether this can be configured in VCS. All useful replies are welcome.

Best Regards,

Mohammad Ali Sarwar

3 Comments

mikebounds:

If you have a script that copies crontabs from their usual location on local disk, this is not going to work when a server fails, because the failover node cannot copy from a server that is down.

Therefore, if you want to fail over crontabs, you need to put them on shared storage.

So suppose you have a disk group shared between the nodes, with a volume mounted on /data1. You could then move the crontabs from /var/spool/cron/crontabs to /data1/crontabs and create a symbolic link from /var/spool/cron/crontabs to /data1/crontabs. The problem with this is that no crontabs will run on inactive nodes, and you may need local root crontabs to run on inactive nodes.
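A minimal sketch of that relocation as a POSIX shell function. The paths and the `.local` backup name are illustrative assumptions; on Solaris the per-user crontabs live under /var/spool/cron/crontabs, so adjust for your OS:

```shell
#!/bin/sh
# Sketch (assumed paths): move a crontab directory onto shared storage
# and leave a symlink behind so cron keeps reading the same location.
relocate_crontabs() {
    crondir=$1      # e.g. /var/spool/cron/crontabs (Solaris)
    shared=$2       # e.g. /data1/crontabs on the shared volume
    mkdir -p "$shared"
    cp -p "$crondir"/* "$shared"/ 2>/dev/null   # copy existing crontabs
    mv "$crondir" "$crondir.local"              # keep a local backup copy
    ln -s "$shared" "$crondir"                  # cron now reads the shared copy
}
# Usage (run once, on the node that currently mounts the shared volume):
# relocate_crontabs /var/spool/cron/crontabs /data1/crontabs
```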

So you could link each user's crontab to shared storage and leave root's local, but there can still be issues, especially in your case with three nodes. Suppose you have /data1 on node1 and /data2 on node2, and both fail over to node3. If /data1 and /data2 can both run on node3 at the same time, then crontabs won't work where the same users exist on both, because you can't link one crontab to two files. This will only work if you have rules so that node3 can't run both service groups, or if each service group has its own set of unique users. It will also involve a script (in an Application resource or a postonline trigger) to change the link, so that /var/spool/cron/crontabs on node3 points to the users on either /data1 or /data2.
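A rough sketch of what such a link switch in a postonline trigger might look like. The group names (sg_data1/sg_data2), the group-to-path mapping, and the environment-variable overrides are assumptions for illustration; the real script would live at /opt/VRTSvcs/bin/triggers/postonline, which VCS invokes with the system and group names as arguments:

```shell
#!/bin/sh
# Sketch only: repoint the crontab symlink at whichever shared volume
# just came online on this node. Group names and paths are assumed.
repoint_cron_link() {
    group=$1
    link=${CRONLINK:-/var/spool/cron/crontabs}    # overridable for testing
    case "$group" in
        sg_data1) target=${DATA1:-/data1/crontabs} ;;
        sg_data2) target=${DATA2:-/data2/crontabs} ;;
        *)        return 0 ;;   # not a group whose crontabs we manage
    esac
    rm -f "$link"               # remove the old link, not the data behind it
    ln -s "$target" "$link"
}
# In the real trigger, VCS calls postonline <system> <group>, so the
# script body would end with: repoint_cron_link "$2"
```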

There are other possibilities, but it depends on what you have so you need to explain your environment:

  1. What service groups do you have on what nodes
  2. Where are the service groups allowed to fail to and are multiple service groups allowed to fail to the same node
  3. Are crontab entries being created frequently or are they pretty static 
  4. Do any service groups have the same username associated with them as another service group

Mike


UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows


g_lee:

An example of using a wrapper script that tests the service group's status to determine whether to run the command:

https://www-secure.symantec.com/connect/forums/bes...

Specifically, this comment:

https://www-secure.symantec.com/connect/forums/bes...
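The wrapper idea can be sketched roughly as below. The group name, wrapper path, and crontab line are assumptions, and the HAGRP override exists only so the sketch can be exercised without a cluster; `hagrp -state <group> -sys <system>` is the standard VCS query for a group's state on a given node:

```shell
#!/bin/sh
# Sketch only: run a cron job only on the node where the service group
# is online, so identical crontabs can safely exist on every node.
run_if_online() {
    group=$1; shift
    hagrp=${HAGRP:-/opt/VRTSvcs/bin/hagrp}      # overridable for testing
    state=$("$hagrp" -state "$group" -sys "$(uname -n)" 2>/dev/null)
    case "$state" in
        *ONLINE*) "$@" ;;    # group online on this node: run the job
        *)        : ;;       # inactive node: silently do nothing
    esac
}
# Every node could then carry the identical crontab entry, e.g.
# (hypothetical paths): 0 2 * * * /usr/local/bin/cronwrap.sh sg_app1 /path/to/job.sh
```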

Other considerations to take into account if attempting to fail over crontabs (i.e. if you choose to take a different approach from Gene's suggestion above):

http://mailman.eng.auburn.edu/pipermail/veritas-ha...

Note: the original post was edited to remove the OP's phone number, per the Community Etiquette / code of conduct:

https://www-secure.symantec.com/connect/sitehelp/s...


Ali Sarwar:

I am not sure a shared file system would be of any help for a three-node cluster. However, I have figured out something else: I found the postonline and preonline trigger files in /opt/VRTSvcs/bin/triggers/ and am going to experiment with them. I will define a script in the postonline trigger that takes a copy of the latest crontab and copies it to the node the service is failing over to.

If it works, I will come back here and post more comments on it.

Thank you for your help.