
vxdisk list on 2nd machine is not successful

Created: 23 Aug 2012 • Updated: 26 Aug 2012 | 6 comments
shakeel-qureshi
This issue has been solved. See solution.

Hi team,

 

I am trying to set up a VxVM environment for testing that mirrors an actual live environment.

 

A) On ESXi Server 5:

   1) Two shared disks created via:

vmkfstools -c 5G -d eagerzeroedthick -a lsilogic disk1.vmdk

   2) Two Solaris 10 u9 machines; both shared disks are presented to both machines on separate virtual SCSI controllers.

---> format on both machines shows the same controllers for the shared disks.

      Passwordless SSH is configured between the machines.

   3) Installed SF on both machines.

   4) vxdisk list shows the disks, but the issue I am facing is:

When I create a disk group on machine 1, the disk group is created; I can make a volume on it and can even mount it.

BUT

The 2nd machine remains unaware of what has happened to the shared disks. When I ran vxdisk list on the 2nd machine, it did not show me that any disk group had been created on the shared disks.
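For reference, the disk group and volume creation on machine 1 was along these lines (names and sizes here are illustrative):

vxdisksetup -i disk_3                        # initialize the disk for VxVM
vxdg init dogg-dg doggdisk=disk_3            # create the disk group
vxassist -g dogg-dg make dogvol 2g           # create a volume in it
mkfs -F vxfs /dev/vx/rdsk/dogg-dg/dogvol     # put a VxFS filesystem on it
mount -F vxfs /dev/vx/dsk/dogg-dg/dogvol /testdog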

 

Any idea why the shared disks are not behaving as shared between both machines when creating the disk group?
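The separate virtual SCSI controllers mentioned in step 2 correspond to .vmx entries like these on each VM (controller and device numbers are examples; check the VMware documentation for your ESXi version):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsilogic"
scsi1.sharedBus = "virtual"          # share the virtual SCSI bus between VMs
disk.locking = "FALSE"               # let both VMs open the same vmdk
scsi1:0.present = "TRUE"
scsi1:0.fileName = "disk1.vmdk"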

Comments (6)

mikebounds

I would guess that the VMware shared disks are not set up correctly. To confirm, try creating a Solaris slice using the format command, then create a filesystem on it (using newfs), and see if you can mount that filesystem on the other node. If you can't, then the issue is the VMware setup, not VxVM.
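The test above can be sketched as follows (the device name c1t1d0 is a placeholder; substitute a slice on one of your shared disks):

# On node 1: partition the disk with format, then create a filesystem
newfs /dev/rdsk/c1t1d0s0
# On node 2: mount the same slice and check it is visible
mount /dev/dsk/c1t1d0s0 /mnt
ls /mnt
umount /mnt

If the mount fails on node 2, the disk is not really shared at the VMware layer.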

Mike

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

Marianne

Which version of SF? 

Depending on the SF version, you may need to first run 'vxdctl enable' on the 2nd machine, then run 'vxdisk -o alldgs list'.

Supporting Storage Foundation and VCS on Unix and Windows as well as NetBackup on Unix and Windows
Handy NBU Links

shakeel-qureshi

mikebounds@ the shared disks are fine; I followed your suggestion, created a slice on one of the shared disks, ran newfs on it, and mounted it from the other node. It worked.

I am attaching the log, please go through it:

Marianne@ SF is 5.1. I reinstalled SF Enterprise only, created a disk group from machine 1, made a filesystem, and mounted it. On the 2nd machine I performed:

 

-bash-3.00# vxdctl enable
-bash-3.00# vxdisk list
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:none       -            -            online invalid
disk_1       auto:none       -            -            online invalid
disk_2       auto:none       -            -            online invalid
disk_3       auto:cdsdisk    -            -            online
disk_4       auto:cdsdisk    -            -            error
-bash-3.00# vxdisk -o alldgs list  
DEVICE       TYPE            DISK         GROUP        STATUS
disk_0       auto:none       -            -            online invalid
disk_1       auto:none       -            -            online invalid
disk_2       auto:none       -            -            online invalid
disk_3       auto:cdsdisk    -            (dogg-dg)    online
disk_4       auto:cdsdisk    -            -            error
-bash-3.00# 
 
It showed me (dogg-dg) against disk_3, but how do I import/deport it?
 
Please correct me if I'm wrong: if machine 1 is down, should the 2nd machine automatically bring this volume up, since the shared disks are still connected to the 2nd machine?
 
Also, can I mount that volume (dogvol) on both machines at the same time?
Attachment: vxvm.txt (22.78 KB)
shakeel-qureshi

 

Another question: machine 1 is down, and
 
--> on the 2nd machine, when I performed the disk group import, it was imported successfully; I can see it via vxdisk list, vxprint -hrt, etc.
 
The question is: how can I mount this on the 2nd machine? When I performed this step I encountered:
 
-bash-3.00# mount /dev/vx/dsk/dogg-dg/dogvol /testdog/
mount: /dev/vx/dsk/dogg-dg/dogvol no such device
 
Kindly suggest! I am also uploading the log.
Attachment: vxvm1.txt (9.11 KB)
mikebounds

From your last message it sounds like you can import the diskgroup on the second node, so to clarify things:

  1. With SF you can only import a diskgroup on one node at a time.  If you want to import on multiple nodes simultaneously then you need SFCFS, which is an additional license that gives you Cluster Volume Manager (CVM) and Cluster File System (CFS).
     
  2. When a node with a diskgroup imported crashes, the other node will not automatically import the diskgroup; you need cluster software like VCS for this to happen.
     
  3. After you import the diskgroup, you need to start the volumes. To do this you can use "vxrecover -g diskgroup -s" or "vxvol -g diskgroup startall".  Until you start the volumes, the device will not exist in /dev/vx/dsk/diskgroup.
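Putting points 1-3 together, the manual takeover on the second node (using the names from this thread) would be:

vxdg import dogg-dg
vxvol -g dogg-dg startall            # start the volumes in the diskgroup
mount -F vxfs /dev/vx/dsk/dogg-dg/dogvol /testdog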

Mike


SOLUTION
shakeel-qureshi

mike@ your explanation has solved the query.

I was missing vxvol -g diskgroup startall

and then performing mount -F vxfs /dev/vx/dsk/.../...

It has worked, and the manual import/deport is now happening successfully in a VMware environment.
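For anyone hitting the same issue, the full manual move between the two nodes looks roughly like this:

# On the node that currently has the diskgroup imported:
umount /testdog
vxvol -g dogg-dg stopall
vxdg deport dogg-dg

# On the other node:
vxdg import dogg-dg
vxvol -g dogg-dg startall
mount -F vxfs /dev/vx/dsk/dogg-dg/dogvol /testdog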