
vxddladm shows DMP state as not active

Created: 14 Feb 2014 | 18 comments
Remco Etten:

Good morning,

I have an issue that I can't seem to solve, and I'm in dire need of assistance.

I have a Veritas cluster 5.1 running on Solaris 10, connected to a 6180 storage array. The array is directly connected to 2 hosts (no switch!).

Controller port 1A is connected to host A.

Controller port 1B is connected to host A.

Controller port 2A is connected to host B.

Controller port 2B is connected to host B.

 

DMP is taking care of the multipathing bit and it looks OK; however, I see that the state is set to Not-Active.

Output from vxddladm listsupport libname=libvxlsiall.so:

LIB_NAME            ASL_VERSION         Min. VXVM version
==========================================================
libvxlsiall.so      vm-5.1.100-rev-1    5.1

 

The output of vxdmpadm listapm dmpEngenio:

Filename:                dmpEngenio
APM name:                dmpEngenio
APM version:             1
Feature:                 VxVM
VxVM version:            51
Array Types Supported:   A/PF-LSI
Depending Array Types:   A/P
State:                   Not-Active

 

Output from vxdctl mode:

mode: enabled

 

Both hosts show the same state: Not-Active.

So my question is: how do I set the state to Active? Bear in mind that this is a full production system, so I have to make sure that any commands given will not disrupt production. I will schedule downtime if that is necessary.

 

Can someone assist me?
Many thanks!

Remco


18 Comments

Gaurav Sangamnerkar:

Hi,

ASLs should become active automatically; I'm not aware of any command that specifically sets an ASL to the "Not-Active" state.

Are you sure that this is the ASL in use? Also, is there anything excluded? Please paste the output of:

# vxddladm listexclude all

# vxdmpadm listexclude all

# vxddladm listsupport all

# vxddladm list devices
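If pasting the output is awkward, a small wrapper can collect all four read-only queries into one labelled file for attaching. This is only a sketch: it assumes the VxVM binaries are on root's PATH, and it skips (with a note) any command that is not installed rather than failing.

```shell
# Label and run each read-only DDL/DMP query; skip commands that are
# not installed (e.g. when trying the script on a non-VxVM host).
collect_ddl_diag() {
    for c in "vxddladm listexclude all" \
             "vxdmpadm listexclude all" \
             "vxddladm listsupport all" \
             "vxddladm list devices"; do
        printf '### %s\n' "$c"
        set -- $c                        # split the string into words
        if command -v "$1" >/dev/null 2>&1; then
            "$@"
        else
            printf '%s: not installed on this host\n' "$1"
        fi
    done
}

collect_ddl_diag > /tmp/ddl_diag.txt 2>&1
```

Attaching /tmp/ddl_diag.txt then gives everything in one file.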

 

G

PS: If you are happy with the answer provided, please mark the post as a solution. You can do so by clicking the "Mark as Solution" link below the answer provided.
 

Gaurav Sangamnerkar:

Also, is there any impact from this being set to Not-Active?


Remco Etten:

Hello Gaurav,

I will gather the requested information.

The impact seems to be that I'm getting hundreds of messages with SCSI write errors, so it seems that write commands are going over both paths.

As soon as I have my information, I will let you know, thanks in advance.

Remco Etten:

For some reason I cannot seem to paste the data in here... very annoying.


Attachment: output.txt (9.81 KB)

Remco Etten:

Is there anyone else who can assist me with this?

Thanks in advance

Hari Krishna Vemuri:

The APM would be active only if it is required. I think in your case the array is ALUA-compliant and hence would be using the generic ALUA APM.

Please provide the output of "vxdmpadm listenclosure all". The Engenio APM is required only if the array type in the output matches that of the APM (A/PF-LSI).

 

Remco Etten:

Hello Hari,

I do not know what is meant by ALUA etc., but I have attached all the information from the explorer in the vxvm.zip file. I believe you will find the requested information there.
Thanks for your assistance.

Attachment: vxvm.zip (1.32 MB)

Gaurav Sangamnerkar:

Hi,
 

I believe you are hitting the issue described in the technote below:

http://www.symantec.com/docs/TECH74115

The ASL supports this particular array in various modes - A/P-C, A/PF-LSI, A/A. However, the APM would become Active only when the array is configured in A/PF-LSI mode. For the other two modes (A/P-C and A/A), the APM will be marked as Not-Active and the default APMs (dmpaa and dmpap) will be used respectively.

You will need to run vxasldebug to confirm the array's current setting at this point, as per the technote.

 

G

PS: If you are happy with the answer provided, please mark the post as solution. You can do so by clicking link "Mark as Solution" below the answer provided.
 

Remco Etten:

Thanks Gaurav,

I will ask the customer to run the command on both cluster nodes and let you know the output.

Regards

Remco

 

Remco Etten:

Hello Gaurav,

 

I have attached the requested files.

Thanks in advance for having a look!

Remco

 

Gaurav Sangamnerkar:

I've looked at the output in the tar file, MIRTL02.info.0217152740.19910.log , I see its set to A/P

libvxlsiall.so:claim_device()          : CLAIMED
    VID                                    : SUN
    PID                                    : SUN_6180
    ANAME                                  : SUN6180-
    ATYPE                                  : A/P  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    CAB_SERIAL_NO                          : 60080E500018C9B4000000004E0B9255
    LUN_SERIAL_NO                          : 60080E500018345E000002394E37D4B2
    ARRAY_CTLR_ID                          : B
    PORT_SERIAL_NO                         : B-2
    REVISION                               : 0777
    SCSI_VERSION                           : 5
    UDID                                   : SUN_SUN_6180_60080E500018C9B4000000004E0B9255_60080E500018345E000002394E37D4B2
    LUN_OWNER                              : N
    CUR_OWNER                              : N

    claim_device() New attribute           : ARRAY_CTLR_ID

    ARRAY_CTLR_ID                          : B
=====================================================================

libvxlsiall.so:claim_device()          : CLAIMED
    VID                                    : SUN
    PID                                    : SUN_6180
    ANAME                                  : SUN6180-
    ATYPE                                  : A/P  <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
    CAB_SERIAL_NO                          : 60080E500018C9B4000000004E0B9255
    LUN_SERIAL_NO                          : 60080E500018345E000002394E37D4B2
    ARRAY_CTLR_ID                          : A
    PORT_SERIAL_NO                         : A-2
    REVISION                               : 0777
    SCSI_VERSION                           : 5
    claim_device() New attribute           : CUR_OWNER

    CUR_OWNER                              : N
 

 

So I believe the technote is pointing in the right direction: you need to get the array configured in A/PF-LSI mode for the ASL to come to the active state.
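As a quick sanity check, the claimed mode can be pulled straight out of a vxasldebug log with awk. The here-document below is only a stand-in, with lines copied from the log above; against the real file you would point awk at the log itself.

```shell
# Print the value of every ATYPE line in a claim_device() log.
# Fields are whitespace-separated, so $1=ATYPE, $2=:, $3=the mode.
# Real usage: awk '/ATYPE/ {print $3}' MIRTL02.info.0217152740.19910.log
awk '/ATYPE/ {print $3}' <<'EOF'
    ANAME                                  : SUN6180-
    ATYPE                                  : A/P
    ANAME                                  : SUN6180-
    ATYPE                                  : A/P
EOF
# prints "A/P" once per claimed path; anything other than A/PF-LSI
# means the dmpEngenio APM stays Not-Active
```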

G


Remco Etten:

Thanks Gaurav,

I have attached the output from vxdmpadm listapm.

If the array is upgraded to higher-level firmware that supports ALUA, will this be recognized automatically by vxdmp, or does it require additional steps?

Attachment: vxdmpadm-listapm.txt (3.85 KB)

Gaurav Sangamnerkar:

It should be detected automatically by DMP's device discovery.

 

G


Remco Etten:

All right, thanks. However, since this is a clustered environment, I think the upgrade of the array would require total downtime? Would you agree, or could we simply shut down node A, perform the firmware upgrade on the 6180 to FW 7.84.xx (which enables ALUA), bring node A back up once the upgrade is successful, switch the cluster to node A, and then reboot node B?

Does this sound like a correct action plan?

Thanks

Gaurav Sangamnerkar:

Well, I won't completely agree with the above plan, because it's shared storage: if the array firmware upgrade is going to cause an impact, it will impact both nodes.

I would suggest checking with the vendor. If it's an outage change, take a complete shutdown of the environment, upgrade the firmware, and then start up both nodes.

If you are using I/O fencing, make sure that the array firmware upgrade does not touch the I/O fencing keys on the coordinator and data disks. A graceful shutdown of the cluster wouldn't impact I/O fencing because, with the shutdown of the environment, the keys will be removed from the data disks; once the array firmware is upgraded and I/O fencing starts, the keys will be registered again.
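For peace of mind, the fencing state and keys can be recorded before and after the change and compared. A minimal read-only sketch, assuming the standard vxfenadm CLI and the default /etc/vxfentab coordinator-disk file:

```shell
# fencing_snapshot: dump the current I/O fencing state and the SCSI-3
# registration keys on the coordinator disks. Both vxfenadm calls only
# read state; run it before and after the upgrade and diff the output.
fencing_snapshot() {
    if ! command -v vxfenadm >/dev/null 2>&1; then
        echo 'vxfenadm: not installed on this host'
        return 0
    fi
    echo '### fencing state'
    vxfenadm -d                        # fencing mode and cluster membership
    echo '### coordinator disk keys'
    vxfenadm -s all -f /etc/vxfentab   # keys registered on coordinator disks
}

fencing_snapshot
```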

Also, the firmware upgrade will enable ALUA mode; however, make sure the array setting puts it in A/PF-LSI mode.

 

G


Remco Etten:



Thanks for the elaborate answer; I will forward this to our customer.

One question: what do you mean by the last sentence, that I have to make sure the array setting puts it in A/PF-LSI mode?

From what point of view? Once I have done the upgrade on the 6180 array, it will automatically 'work' in ALUA mode. Do I have to change a setting on the DMP side?

Thanks

Gaurav Sangamnerkar:

Not on the DMP side; it should be set on the array so that DMP will detect it. I would assume the array has options to set various modes like ALUA, A/A-A, and A/PF-LSI. If the array has an A/PF-LSI setting, set it to that, and Veritas should detect it; if an A/PF-LSI setting is not there, try ALUA on the array.

 

G
