
Add ability for SFCFSHA Rolling Upgrade to Unmount filesystems as needed

Created: 31 Oct 2013 | 4 comments
Logan Doux:

Would it be possible to add the ability for the SFCFSHA script to automatically unmount filesystems as needed during a rolling upgrade?  We have many clusters that need to have disks under Veritas control mounted at boot, and would like to be able to use the rolling upgrade feature to minimize the impact on these clusters during patching.

Comments (4)

AHerr:

Hi Logan,

If you have VCS controlling CFS mount points, they should be started during the cluster online process.  It should not be in the OS boot automount list (different place on each UNIX OS).
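For reference, a CFS mount kept under VCS control typically lives in a parallel service group in main.cf rather than in the OS automount list. This is only an illustrative sketch; the group, node, disk group, and volume names are made up:

```
// Illustrative main.cf fragment: a shared CFS mount under cluster control.
// All names (cfs_data_sg, node01/node02, datadg, datavol) are hypothetical.
group cfs_data_sg (
    SystemList = { node01 = 0, node02 = 1 }
    Parallel = 1
    AutoStartList = { node01, node02 }
    )

    CVMVolDg data_dg (
        CVMDiskGroup = datadg
        CVMVolume = { datavol }
        CVMActivation = sw
        )

    CFSMount data_mnt (
        MountPoint = "/data"
        BlockDevice = "/dev/vx/dsk/datadg/datavol"
        )

    data_mnt requires data_dg
```

With the mount defined this way, it comes online as part of the cluster startup and is taken offline by the installer during a rolling upgrade.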

For Rolling Upgrades, the cluster mount points should be unmounted while the rolling upgrade is occurring.  We are open to your concerns.  What workflow are you looking for during the upgrade process?


Thanks,
Anthony

Logan Doux:

Our issue is more with disks that aren't under VCS control.  We have multiple applications that use Vormetric encryption, and those disks are required to be mounted at boot.  The install script has no issue unmounting disks that are under VCS control, but it won't unmount Veritas disks that are not under VCS, which forces us to intervene and unmount the disks manually.  As a result we have to take longer outages for the applications.


Our hope was that Rolling Upgrades would not require us to have long outage times, but if we have to manually unmount disks and run an upgrade, we haven't gained anything.

AHerr:

I agree with your sentiment that extended outages, or having to work outside the framework, are a problem.  Is there a reason the disks are mounted outside of the cluster?  If they are required for an application, shouldn't they be included in the VCS Service Group that controls the startup/shutdown of the application?

When rolling upgrades occur, we evacuate the applications from the node to be upgraded.  For resources outside of the cluster control we do not take action.  We would not unmount the boot file systems or bring down the production IP address.  The assumption is that resources that need to be taken offline during an upgrade are within the cluster. 

The other option could be to include the mount points in a local-only Service Group that is not set to auto-start.  When the box goes offline, the mount points will be taken down as well.  Would that help your situation?
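A sketch of what such a group might look like in main.cf; this is an assumption about your layout, and the group, node, disk group, volume, and mount-point names here are purely illustrative:

```
// Illustrative main.cf fragment: a failover group pinned to one node,
// with auto-start disabled, wrapping a local VxFS mount.
// All names (local_mounts_sg, node01, localdg, oravol, /apps/oracle)
// are hypothetical.
group local_mounts_sg (
    SystemList = { node01 = 0 }
    AutoStart = 0
    )

    Mount vormetric_mnt (
        MountPoint = "/apps/oracle"
        BlockDevice = "/dev/vx/dsk/localdg/oravol"
        FSType = vxfs
        FsckOpt = "-y"
        )
```

Because the mount is then a VCS resource, the installer can take it offline during a rolling upgrade instead of requiring a manual unmount.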

Logan Doux:

After consulting with a co-worker:

Reason 1) We run multiple applications on the server, and not all of them are defined in the cluster.  We have standardized our agents into the same directory, which is on SAN storage and uses VxVM for ease of growth.  We have run into issues in the past with local storage filling up and forcing us to move to SAN.

Reason 2) We run Vormetric encryption, which requires binaries to be trusted at boot time.  The cluster software is the last thing to come up, so the binaries do not get trusted in time and the Oracle databases do not work.  This is why the Oracle binary mounts are performed outside of the cluster.
