
Blue Disk Icon in SFWHA against disk

Created: 11 Jun 2014 • Updated: 18 Jun 2014 | 5 comments
This issue has been solved. See solution.

I would like to know whether the blue information icon that I am seeing on the passive node of an SFW HA cluster will clear when a failover is initiated.

The issue arose following a recent disk expansion, which went well apart from the odd issue described here. The steps were as follows:

  • Break the mirror on the EMC CX4 MirrorView group in Unisphere
  • Remove the secondary image LUN in Unisphere
  • Delete the MirrorView group in Unisphere
  • Expand the LUN attached to the live server on the CX4 within Unisphere
  • Go into VEA on the active server and do a rescan
  • Run the resize wizard to expand the disk, which then shows the new size in Windows Explorer (done on the fly, without downtime to the production system)
  • Remove the secondary LUN from the storage group on the passive node of the CX4 array
  • Expand the secondary LUN on the CX4 within Unisphere (encountered an error on the CX4 and ended up deleting the LUN and recreating it at the new size)
  • Recreate the MirrorView group using the active LUN from the active node of the array in Unisphere
  • Add the secondary LUN back into the MirrorView group in Unisphere
  • This starts the mirroring from the active LUNs to the passive LUNs
  • Lastly, add the secondary LUN back into the storage group within Unisphere on the passive node of the array
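
For reference, the VEA rescan and resize steps above can also be driven from the SFW command line. This is only a sketch: the drive letter G: and the 10240 MB growth amount are placeholder values I have made up, the exact `vxassist` syntax can vary between SFW versions, and the snippet guards on the CLI actually being installed.

```shell
#!/bin/sh
# Sketch of the rescan + resize steps via the SFW CLI instead of the
# VEA wizard.  G: and 10240 (MB) are placeholders; check "vxassist"
# help on your SFW version for the exact growby syntax.
if command -v vxassist >/dev/null 2>&1; then
    vxassist rescan            # pick up the expanded LUN
    vxassist growby G: 10240   # grow the volume into the new space
else
    echo "vxassist not found (SFW CLI not installed on this host)"
fi
```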

Once the above was done, a final rescan was performed on the passive server within VEA to ensure the disk could be seen. It is visible, but with a blue informational icon against it, and the only option available is to reactivate. The same is true of all the other disks in VEA on the passive node as well. The only thing I can think of is that we had an issue on the CX4 when trying to expand the LUN, so I ended up deleting the LUN and recreating it at the new size.

It is not asking for a signature to be written, but I am wondering if the issue will right itself once a failover has been initiated in VCS from the active to the passive server, since there must be information within the dynamic disk cluster group that this disk is a part of on the live server. Information on this in the administrator guides is as per the attached. Does the passive node need a restart? I am assured there is no data loss, but I want to be sure that I can fail over in an emergency.

Any guidance would be appreciated.


5 Comments

mikebounds:

Usually this means the disk is in a deported diskgroup and, as your extract from the manual says, this symbol means:

No degradation or loss of data; the system can still function normally.

I think if you run "vxdisk list" on the passive node, it will show the disk is in a deported diskgroup, and it shouldn't show a "Failed" or "No Signature" state.

Once you fail over, the diskgroup should import, and then the other node will have the blue symbols.
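
When you do test the failover, it can be done as a controlled switch from the VCS command line. A sketch, assuming the standard `hagrp` tool: SQL_SG and NODE2 are placeholder names for your service group and passive node.

```shell
#!/bin/sh
# Controlled failover of a service group to the passive node, followed
# by a state check.  SQL_SG and NODE2 are placeholders for your own
# service group and system names.
if command -v hagrp >/dev/null 2>&1; then
    hagrp -switch SQL_SG -to NODE2
    hagrp -state SQL_SG        # group should come ONLINE on NODE2
else
    echo "hagrp not found (VCS not installed on this host)"
fi
```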


UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

slick:

Thank you for the update.

The disk is not showing failed, and I agree that, this being the passive node and a disk which is part of a dynamic disk group, it will be in a deported state since the dynamic disk group is imported on the active server.

None of the other disks which are part of the deported diskgroup have a blue icon status, therefore I am wondering if this will correct itself once a failover is done. It is not normal for there to be a blue icon status; I have checked the other Veritas clusters which we have running and this is the first.

I just want to be sure that if and when we do need to fail over it will work, since from the SAN side all looks to be working and in order. I am also getting a call logged with support to seek guidance.

mikebounds:

I only have a 1-node cluster in my VMware lab setup, and I have a blue "i" on the disks, volumes and diskgroups of a deported diskgroup. However, for a 1-node cluster, if the diskgroup is deported it is not imported elsewhere (i.e. the hostid in the private region is blank, as no system owns the diskgroup), whereas your diskgroup is imported on the active node and deported on the passive node (i.e. the hostid in the private region contains the hostname of the active node).

For diskgroups that are shared between 2 or more hosts, I can't remember whether a deported diskgroup shows a blue "i" symbol when it is imported on other nodes, but it should be consistent. In any case, I guess you don't have a shared disk; you have a replicated disk using MirrorView (i.e. I assume now that you have MirrorView between your 2 cluster nodes, as opposed to a SAN shared disk between your 2 cluster nodes with MirrorView replicating to a 3rd DR site).

With replicated disks, the disk is not usually completely accessible to the passive node, in that MirrorView is controlling access rather than just SFW, and in this case I have found that SFW sometimes shows the last state it knew if the disks are only partially accessible. So, as you re-created the LUN, SFW may have seen the disk during your process and recorded it as deported, which is why it shows differently to the others.



Marianne:

"None of the other disks which are part of the deported diskgroup have a blue icon status"

I have often seen this; it is really nothing to be concerned about.

If you perform a 'rescan' on the inactive node, all the disks in the deported diskgroup should look the same.

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

slick:

Thank you Mike/Marianne,

The issue was resolved by removing this disk from the storage group in Unisphere, doing a rescan, and then adding it back into the storage group in Unisphere, followed by a server reboot.

A strange issue which, although not a problem as such, had to be resolved to ensure the passive node was ready should we need to fail over in an emergency.