
Remove LUN from Solaris 10

Created: 07 Mar 2012 | 7 comments
giacomopane wrote:

Hello,

I need help removing a LUN on Solaris 10 with Storage Foundation.

System details:

______________________________________________________________________

OS Solaris  5.10 Generic_142900-03 sun4v sparc SUNW,T5240

SF # pkginfo -l VRTSvxvm
   PKGINST:  VRTSvxvm
      NAME:  Binaries for VERITAS Volume Manager by Symantec
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  5.0,REV=05.11.2006.17.55
   BASEDIR:  /
    VENDOR:  Symantec Corporation
      DESC:  Virtual Disk Subsystem
    PSTAMP:  Veritas-5.0-MP3RP1-2008-12-07
  INSTDATE:  Jun 04 2009 17:26

 

Multipathing: vxdmp.

The LUN is on an IBM DS8000 series array and has 2 paths.
_____________________________________________________________________

I have already done the following:

vxdisksetup -C device
vxdisk rm device
removed the device nodes from /dev/vx/dmp and /dev/vx/rdmp
removed the LUN from the storage array
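The sequence above can be sketched as a small script. This is not verbatim from the thread: the device name is a placeholder for the real DMP device, and RUN=echo keeps it a dry run that only prints each command.

```shell
#!/bin/sh
# Sketch of the removal steps already performed. The device name is a
# placeholder -- substitute your own. RUN=echo makes this a dry run.
RUN=${RUN:-echo}

remove_lun() {
    dev=$1                              # e.g. c4t500507630A03033Bd0
    $RUN vxdisksetup -C "$dev"          # clear the VxVM configuration
    $RUN vxdisk rm "$dev"               # remove the disk from VxVM
    $RUN rm -f "/dev/vx/dmp/$dev"*      # remove DMP block device nodes
    $RUN rm -f "/dev/vx/rdmp/$dev"*     # remove DMP raw device nodes
    # ...then unmap the LUN on the storage array itself.
}

remove_lun c4t500507630A03033Bd0
```

Set RUN="" to execute for real; run the steps in this order so VxVM releases the disk before the device nodes disappear.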

Now both paths should be in the unusable state, but unfortunately I find one path unusable and the other failing.

# cfgadm -al -o show_FCP_dev c4
Ap_Id                          Type         Receptacle   Occupant     Condition
c4                             fc-fabric    connected    configured   unknown
c4::500507630a03033b,0         disk         connected    configured   unusable
c4::500507630a03033b,1         disk         connected    configured   unknown
c4::500507630a03033b,2         disk         connected    configured   unknown
c4::500507630a03033b,3         disk         connected    configured   unknown
c4::500507630a03033b,4         disk         connected    configured   unknown
c4::50060e8015265a10,0         disk         connected    configured   unknown
c4::50060e8015265a10,1         disk         connected    configured   unknown
c4::50060e8015265a10,2         disk         connected    configured   unknown
c4::50060e8015265a10,3         disk         connected    configured   unknown
c4::50060e8015265a10,4         disk         connected    configured   unknown
c4::50060e8015265a10,5         disk         connected    configured   unknown
c4::50060e8015265a10,6         disk         connected    configured   unknown
c4::50060e80164da510,0         disk         connected    configured   unknown
c4::50060e80164da510,1         disk         connected    configured   unknown
 

# cfgadm -al -o show_FCP_dev c5
Ap_Id                          Type         Receptacle   Occupant     Condition
c5                             fc-fabric    connected    configured   unknown
c5::500507630a08433b,0         disk         connected    configured   failing
c5::500507630a08433b,1         disk         connected    configured   unknown
c5::500507630a08433b,2         disk         connected    configured   unknown
c5::500507630a08433b,3         disk         connected    configured   unknown
c5::500507630a08433b,4         disk         connected    configured   unknown
c5::50060e8015265a00,0         disk         connected    configured   unknown
c5::50060e8015265a00,1         disk         connected    configured   unknown
c5::50060e8015265a00,2         disk         connected    configured   unknown
c5::50060e8015265a00,3         disk         connected    configured   unknown
c5::50060e8015265a00,4         disk         connected    configured   unknown
c5::50060e8015265a00,5         disk         connected    configured   unknown
c5::50060e8015265a00,6         disk         connected    configured   unknown
c5::50060e80164da500,0         disk         connected    configured   unknown
c5::50060e80164da500,1         disk         connected    configured   unknown

Can someone help me?

Regards

7 Comments

Arojasbe wrote:

Hi, good morning.

We have two methods that we tested (the example uses one of our disks).

The first:

1. The removed devices show up as "drive not available" in the output of the format command:

# format
Searching for disks...done
................
     255. c1t50000974082CCD5Cd249 <drive not available>
          /pci@3,700000/SUNW,qlc@0/fp@0,0/ssd@w50000974082ccd5c,f9
................

     529. c3t50000974082CCD58d249 <drive not available>
          /pci@7,700000/SUNW,qlc@0/fp@0,0/ssd@w50000974082ccd58,f9

2. After the LUNs are unmapped using Array management or the command line, Solaris also displays the devices as either unusable or failing.

# cfgadm -al -o show_SCSI_LUN | grep -i unusable
#
# cfgadm -al -o show_SCSI_LUN | grep -i failing
c1::50000974082ccd5c,249       disk         connected    configured   failing
c3::50000974082ccd58,249       disk         connected    configured   failing
#

3. Take each path offline. This kicks the device from failing to unusable and also removes it from the format output.

# luxadm -e offline /dev/rdsk/c1t50000974082CCD5Cd249s0
# luxadm -e offline /dev/rdsk/c3t50000974082CCD58d249s0

4. Check the state

# cfgadm -al -o show_SCSI_LUN | grep -i unusable
c1::50000974082ccd5c,249       disk         connected    configured   unusable
c3::50000974082ccd58,249       disk         connected    configured   unusable

5. To remove the device from the cfgadm database, run the following commands on the HBA:

# cfgadm -c unconfigure -o unusable_SCSI_LUN c1::50000974082ccd5c
# cfgadm -c unconfigure -o unusable_SCSI_LUN c3::50000974082ccd58

6. Repeat step 2 to verify that the LUNs have been removed.

7. Clean up the device tree. The following command removes the /dev/rdsk... links to /devices.

# devfsadm -Cv
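Steps 3 through 7 of the first method can be combined per path. This is only a sketch, not something tested on the poster's system: the /dev/rdsk slice and the cfgadm Ap_Id below are placeholders, and RUN=echo keeps it a dry run.

```shell
#!/bin/sh
# Sketch of method one as one sequence per path. The arguments are
# placeholders -- substitute your own. RUN=echo makes this a dry run.
RUN=${RUN:-echo}

offline_and_unconfigure() {
    rdsk=$1   # raw slice, e.g. /dev/rdsk/c1t50000974082CCD5Cd249s0
    apid=$2   # HBA Ap_Id,  e.g. c1::50000974082ccd5c
    $RUN luxadm -e offline "$rdsk"    # kick the device: failing -> unusable
    $RUN cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"
    $RUN devfsadm -Cv                 # clean up dangling /dev links
}

offline_and_unconfigure /dev/rdsk/c1t50000974082CCD5Cd249s0 c1::50000974082ccd5c
```

Repeat the call for each controller path (c1, c3, ...) before the final devfsadm cleanup.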

The second method, if the first does not work, cleans up the device tree after you remove the LUNs:

# vxdisk list

# vxdisk rm c2t500604844A375F48d101s2

# vxdmpadm getsubpaths

# vxdmpadm exclude vxvm dmpnodename=c2t500604844A375F48d101s2

# vxdmpadm exclude vxdmp dmpnodename=c2t500604844A375F48d101s2

# luxadm -e offline /dev/rdsk/c2t500604844A375F48d101s2

# vxdctl enable

# vxdisk rm c2t500604844A375F48d101s2

# luxadm -e offline /dev/rdsk/c2t500604844A375F48d101s2

# cfgadm -al -o show_SCSI_LUN | grep -i unusable

# cfgadm -c unconfigure -o unusable_SCSI_LUN c4::500604844a375f47

# cfgadm -c unconfigure -o unusable_SCSI_LUN c2::500604844a375f48

# devfsadm -Cv

 

Alfredo Rojas

giacomopane wrote:

Hello,

My device is in the failing state:

# cfgadm -al -o show_FCP_dev c5
Ap_Id                          Type         Receptacle   Occupant     Condition
c5                             fc-fabric    connected    configured   unknown
c5::500507630a08433b,0         disk         connected    configured   failing
c5::500507630a08433b,1         disk         connected    configured   unknown
c5::500507630a08433b,2         disk         connected    configured   unknown
c5::500507630a08433b,3         disk         connected    configured   unknown
c5::500507630a08433b,4         disk         connected    configured   unknown
c5::50060e8015265a00,0         disk         connected    configured   unknown
c5::50060e8015265a00,1         disk         connected    configured   unknown
c5::50060e8015265a00,2         disk         connected    configured   unknown
c5::50060e8015265a00,3         disk         connected    configured   unknown
c5::50060e8015265a00,4         disk         connected    configured   unknown
c5::50060e8015265a00,5         disk         connected    configured   unknown
c5::50060e8015265a00,6         disk         connected    configured   unknown
c5::50060e80164da500,0         disk         connected    configured   unknown
c5::50060e80164da500,1         disk         connected    configured   unknown
 

Then I run:

 # luxadm -e offline /dev/rdsk/c5t500507630A08433Bd0s2
devctl: I/O error

That devctl: I/O error does not sound good :(

I tried running vxdmpadm exclude vxvm dmpnodename and then ran luxadm -e offline again, but I get the same error.

Any ideas?

Giacomo

Arojasbe wrote:

Try:

# vxdmpadm exclude vxvm dmpnodename=c5t500507630A08433Bd0s2

# vxdmpadm exclude vxdmp dmpnodename=c5t500507630A08433Bd0s2

Run both commands.

If that does not work, I do not know what else to do.
giacomopane wrote:

Yes, I had already run both commands.

Any ideas?

Gaurav Sangamnerkar wrote:

Hi All,

First question: is this failing state causing any actual impact?

One thing is for sure: this has nothing to do with Veritas DMP; the focus needs to be on the OS layer.

So, having said that, the question is how to get a "failing" device into the "unusable" state.

1. Ideally the device would land in the "unusable" state automatically; however, for some unknown reason the state is stuck in the kernel.

2. You can try to force the "unusable" state using:

cfgadm -c unconfigure -o unusable

Try option 2 first. If it doesn't work, you can remove the device entries from /dev/dsk and /dev/rdsk and then re-run "devfsadm -Cv" to regenerate the device tree and see if anything changes. If that also doesn't help, I would suggest logging a case with Oracle.
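The manual cleanup described above might look like the following. Purely a sketch under assumptions: the device name and Ap_Id are placeholders, and RUN=echo keeps it a dry run that only prints the commands.

```shell
#!/bin/sh
# Sketch of the forced cleanup: unconfigure, remove stale /dev links,
# rebuild the device tree. Arguments are placeholders; RUN=echo makes
# this a dry run.
RUN=${RUN:-echo}

force_unusable_cleanup() {
    dev=$1    # e.g. c5t500507630A08433Bd0
    apid=$2   # e.g. c5::500507630a08433b
    $RUN cfgadm -c unconfigure -o unusable_SCSI_LUN "$apid"   # option 2
    $RUN rm -f "/dev/dsk/$dev"s* "/dev/rdsk/$dev"s*           # stale links
    $RUN devfsadm -Cv                                         # rebuild tree
}

force_unusable_cleanup c5t500507630A08433Bd0 c5::500507630a08433b
```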

I would not be surprised if Oracle comes back and suggests a reconfiguration reboot.

Thanks

Gaurav

PS: If you are happy with the answer provided, please mark the post as solution. You can do so by clicking link "Mark as Solution" below the answer provided.
 

giacomopane wrote:

Hello all,
Sorry for the late reply; I was busy with another customer.
First of all, many thanks for your thoughts. I now have to remove a LUN on another Sun system, so I will try this procedure and see what result we get.

 

al_from_indiana wrote:

If it's in a failing state, the only way to clear and decommission the LUN is to reboot the server. This seems to be an OS-specific issue, per Oracle CR 6758053.

If you have an Oracle support account, take a look at Doc ID 1412745.1.