
VCS 6.0PR1 on Solaris11 zone resource not coming up

Created: 30 Apr 2013 | 32 comments

Here is my main.cf:

 

include "OracleASMTypes.cf"
include "types.cf"
include "Db2udbTypes.cf"
include "OracleTypes.cf"
include "SybaseTypes.cf"

cluster s11cluster (
        UserNames = { admin = hOPhOJoLPkPPnXPjOM,
                 administrator = aPQiPKpMQlQQoYQkPN,
                 z_zone_res_solaris11-1 = fMNfMHmJNiNNlVNhMK }
        ClusterAddress = "192.168.0.40"
        Administrators = { admin }
        )

system solaris11-1 (
        )

system solaris11-2 (
        )

group ClusterService (
        SystemList = { solaris11-1 = 0, solaris11-2 = 1 }
        AutoStartList = { solaris11-1, solaris11-2 }
        OnlineRetryLimit = 3
        OnlineRetryInterval = 120
        )

        IP webip (
                Device = ipmp0
                Address = "192.168.0.40"
                NetMask = "255.255.255.0"
                )

        NIC csgnic (
                Device = ipmp0
                )

        webip requires csgnic

        // resource dependency tree
        //

        //      group ClusterService
        //      {
        //      IP webip
        //          {
        //          NIC csgnic
        //          }
        //      }

group zpoolgrp (
        SystemList = { solaris11-1 = 0, solaris11-2 = 1 }
        ContainerInfo @solaris11-1 = { Name = z1, Type = Zone, Enabled = 1 }
        ContainerInfo @solaris11-2 = { Name = z1, Type = Zone, Enabled = 1 }
        AutoStartList = { solaris11-1, solaris11-2 }
        Administrators = { z_zone_res_solaris11-1 }
        )

        Zone zone_res (
                )

        Zpool zpool_oradata (
                PoolName = oradata
                )

        Zpool zpool_orahome (
                PoolName = orahome
                )

        Zpool zpool_zoneroot (
                PoolName = zoneroot
                )

        zone_res requires zpool_oradata
        zone_res requires zpool_orahome
        zone_res requires zpool_zoneroot
        zpool_oradata requires zpool_zoneroot
        zpool_orahome requires zpool_zoneroot

        // resource dependency tree
        //
        //      group zpoolgrp
        //      {
        //      Zone zone_res
        //          {
        //          Zpool zpool_zoneroot
        //          Zpool zpool_oradata
        //              {
        //              Zpool zpool_zoneroot
        //              }
        //          Zpool zpool_orahome
        //              {
        //              Zpool zpool_zoneroot
        //              }
        //          }
        //      }
 


Comments (32)

dariuszz:

Updated main.cf ... still does not work; nothing in the error logs, either engine_A.log or Zone_A.log.

Attachment: main.txt (2.12 KB)
mikebounds:

A couple of extracts from the bundled agents guide that may help:

Extract from Monitor Agent function of Zpool agent:

 

If the ZFS pool contains a ZFS file system that a non-global zone uses, then
you need to import the pool before the zone boots up. After the zone boots
up, if the mountpoint property for this ZFS file system that the non-global
zone uses is not set to legacy, it mounts after the zone boots up.
If you have enabled the ChkZFSMounts in the Zpool resource, delay the
check inside the Monitor agent function because the zone resource is not
up yet, and the file systems are not mounted until the zone boots up.
The Zone resource depends on the Zpool resource for the non-global zone
scenario. In this case, you need to provide the ZoneResName attribute,
which indicates the name of the Zone resource. When the Zone resource is
in an ONLINE state, then ChkZFSMounts starts to check the mount status of
the ZFS file system pool that the non-global zone uses.
 
Also from the Zpool agent:
Limitations
The agent does not support the use of logical volumes in ZFS. If ZFS logical
volumes are in use in the pool, the pool cannot be exported, even with the -f
option. Sun does not recommend the use of logical volumes in ZFS due to
performance and reliability issues.
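 
To make the ZoneResName mechanism from the extract concrete, here is a minimal, hypothetical main.cf sketch of a Zpool resource for a pool whose file systems are mounted inside a non-global zone. Resource and pool names are illustrative only; zone_res would be the name of your Zone resource, not the zone itself:

        Zpool zpool_oradata (
                PoolName = oradata
                AltRootPath = "/"
                ChkZFSMounts = 1
                ZoneResName = zone_res
                )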
 
If this still doesn't help, can you provide an extract from the engine log from when you try to online the service group?
 
Mike

 

UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

dariuszz:

2013/05/01 02:42:45 VCS ERROR V-16-10001-5586 (solaris11-2) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:43:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:43:44 VCS ERROR V-16-10001-5586 (solaris11-2) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:43:47 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_orahome:monitor:The value altroot for zpool orahome is not set. Resource state is UNKNOWN
2013/05/01 02:43:53 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_oradata:monitor:The value altroot for zpool oradata is not set. Resource state is UNKNOWN
2013/05/01 02:44:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:44:44 VCS ERROR V-16-10001-5586 (solaris11-2) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:45:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:45:42 VCS ERROR V-16-2-13067 (solaris11-2) Agent is calling clean for resource(zpool_zoneroot) because the resource became OFFLINE unexpectedly, on its own.
2013/05/01 02:45:42 VCS WARNING V-16-10001-20003 (solaris11-2) Zpool:zpool_zoneroot:clean:zpool export zoneroot failed. Try again using the force export -f option
2013/05/01 02:45:43 VCS INFO V-16-2-13716 (solaris11-2) Resource(zpool_zoneroot): Output of the completed operation (clean)
==============================================
cannot open 'zoneroot': no such pool
cannot open 'zoneroot': no such pool
==============================================

2013/05/01 02:45:43 VCS INFO V-16-2-13068 (solaris11-2) Resource(zpool_zoneroot) - clean completed successfully.
2013/05/01 02:45:44 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:45:44 VCS ERROR V-16-1-54031 Resource root_mount (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-2
2013/05/01 02:45:44 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-2
2013/05/01 02:45:44 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 02:45:45 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 02:46:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:47:15 VCS INFO V-16-1-10307 Resource root_mount (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:47:25 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:47:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:48:38 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:48:47 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_orahome:monitor:The value altroot for zpool orahome is not set. Resource state is UNKNOWN
2013/05/01 02:48:53 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_oradata:monitor:The value altroot for zpool oradata is not set. Resource state is UNKNOWN
2013/05/01 02:49:39 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:50:39 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.

 

 

 

2013/05/01 02:45:41 VCS WARNING V-16-10001-20003 (solaris11-2) Zpool:zpool_zoneroot:clean:zpool export zoneroot failed. Try again using the force export -f option
2013/05/01 02:45:41 VCS INFO V-16-2-13716 (solaris11-2) Resource(zpool_zoneroot): Output of the completed operation (clean)
==============================================
cannot open 'zoneroot': no such pool
cannot open 'zoneroot': no such pool
==============================================

2013/05/01 02:45:41 VCS INFO V-16-2-13068 (solaris11-2) Resource(zpool_zoneroot) - clean completed successfully.
2013/05/01 02:45:43 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:45:43 VCS ERROR V-16-1-54031 Resource root_mount (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-2
2013/05/01 02:45:43 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-2
2013/05/01 02:45:43 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 02:45:43 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 02:46:37 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:47:14 VCS INFO V-16-1-50135 User root fired command: hares -clear root_mount  from localhost
2013/05/01 02:47:14 VCS INFO V-16-1-10307 Resource root_mount (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:47:24 VCS INFO V-16-1-50135 User root fired command: hares -clear zpool_zoneroot  from localhost
2013/05/01 02:47:24 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 02:47:37 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:48:11 VCS INFO V-16-1-50135 User root fired command: hagrp -online zpoolgrp  solaris11-1  from localhost
2013/05/01 02:48:32 VCS INFO V-16-1-50135 User root fired command: hagrp -online zpoolgrp  solaris11-1  from localhost
2013/05/01 02:48:37 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:48:46 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_orahome:monitor:The value altroot for zpool orahome is not set. Resource state is UNKNOWN
2013/05/01 02:48:52 VCS ERROR V-16-10001-20014 (solaris11-1) Zpool:zpool_oradata:monitor:The value altroot for zpool oradata is not set. Resource state is UNKNOWN
2013/05/01 02:49:37 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
2013/05/01 02:50:37 VCS ERROR V-16-10001-5586 (solaris11-1) Mount:root_mount:monitor:Mount point /zoneroot is busy with some other block device.
 

dariuszz:

root@solaris11-2:~# hatype -list | grep -i ZoneResName
root@solaris11-2:~#
 

mikebounds:

From the logs, no attempt is made to online the zone resource, as it depends on the Zpool and Mount resources, so these are the resources you need to fix.

ZoneResName is an attribute, not a type, so you would need to run "hatype -display Zpool", or you can also use "hares -display | grep ZoneResName".
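
For example (the resource name is assumed from your main.cf):

# hatype -display Zpool | grep -i ZoneResName
# hares -display zpool_zoneroot | grep -i ZoneResName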

I have not used ZFS before, but looking at the bundled agent guide it says:

 

ZFS’s automount feature mounts all its file systems by setting the mountpoint
property to something other than legacy
 
and the dependency diagram shows that Mount resources are NOT used. So you may not need a Mount resource if the file systems are automounted, but if you think you need the Mount resource, then try to mount it manually, as the VCS logs say:
Mount point /zoneroot is busy with some other block device
 
You also need to investigate the error:
The value altroot for zpool orahome is not set
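
For example, a quick way to check the altroot on all three pools (pool names as in your config):

# zpool get altroot zoneroot oradata orahome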
 
Mike


dariuszz:

zpools are being mounted

root@solaris11-2:/etc/VRTSvcs/conf/config# zoneadm list -civ
  ID NAME             STATUS     PATH                           BRAND    IP
   0 global           running    /                              solaris  shared
   - z1               configured /zoneroot/zoneroot             solaris  excl
root@solaris11-2:/etc/VRTSvcs/conf/config# zpool list
NAME       SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oradata   33.5G   150K  33.5G   0%  1.00x  ONLINE  /
orahome   33.5G   150K  33.5G   0%  1.00x  ONLINE  /
rpool       68G  20.0G  48.0G  29%  1.00x  ONLINE  -
zoneroot  33.5G   466M  33.0G   1%  1.00x  ONLINE  /
root@solaris11-2:/etc/VRTSvcs/conf/config#
 

zone is not starting

mikebounds:

The zone is not starting for your latest config because the Mount resource is failing, so VCS does not try to start the zone. I noticed that your first config did not have a Mount resource, so that is probably the more correct config to try. Also, I think you should be using ZoneResName, so you have:

 

Zpool zpool_zoneroot (
                PoolName = zoneroot
                ZoneResName = z1
                )
 
If it is still not working, then please provide new logs
 
Mike


dariuszz:

VCS WARNING V-16-1-10575 Attribute ZoneResName not defined
root@solaris11-2:/var/VRTSvcs/log# tail -f engine_A.log
2013/05/01 08:52:44 VCS INFO V-16-2-13071 (solaris11-1) Resource(zpool_zoneroot): reached OnlineRetryLimit(0).
2013/05/01 08:52:45 VCS ERROR V-16-1-54031 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-1
2013/05/01 08:52:45 VCS ERROR V-16-1-10205 Group zpoolgrp is faulted on system solaris11-1
2013/05/01 08:52:45 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-1
2013/05/01 08:52:45 VCS INFO V-16-1-10493 Evaluating solaris11-1 as potential target node for group zpoolgrp
2013/05/01 08:52:45 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-1
2013/05/01 08:52:45 VCS INFO V-16-1-10493 Evaluating solaris11-2 as potential target node for group zpoolgrp
2013/05/01 08:52:45 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 08:52:45 VCS INFO V-16-6-15015 (solaris11-1) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 08:52:45 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_zoneroot:online:The AltRootPath attribute is not set for the resource zpool_zoneroot, setting the default value to /
2013/05/01 08:53:01 VCS ERROR V-16-10001-20011 (solaris11-2) Zpool:zpool_zoneroot:monitor:The ZoneResName attribute is set with an invalid value. Resource state is UNKNOWN
2013/05/01 08:53:02 VCS INFO V-16-2-13716 (solaris11-2) Resource(zpool_zoneroot): Output of the completed operation (monitor)
==============================================
VCS WARNING V-16-1-10260 Resource does not exist: z1
==============================================

 

also see attached

Attachment: main.txt (2.14 KB)
mikebounds:

Apologies, ZoneResName should be zone_res, not z1, as this should be the Zone RESOURCE name, not the zone name.

What command produced:

VCS WARNING V-16-1-10575 Attribute ZoneResName not defined

You still have the Mount resource in your main.cf - this is the cause of the group faulting - you need to remove or fix this resource; otherwise, online of the zone resource will never be attempted.

Mike


dariuszz:

I figured as much re: zone_res and made the changes accordingly.

 

2013/05/01 09:30:07 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_zoneroot:online:The AltRootPath attribute is not set for the resource zpool_zoneroot, setting the default value to /
2013/05/01 09:30:23 VCS INFO V-16-1-10298 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 09:30:23 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:30:23 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:30:23 VCS NOTICE V-16-1-10301 Initiating Online of Resource root_mount (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:30:23 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_orahome:online:The AltRootPath attribute is not set for the resource zpool_orahome, setting the default value to /
2013/05/01 09:30:23 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_oradata:online:The AltRootPath attribute is not set for the resource zpool_oradata, setting the default value to /
2013/05/01 09:30:23 VCS WARNING V-16-10001-5574 (solaris11-2) Mount:root_mount:online:The MountPoint </zoneroot> is already mounted
2013/05/01 09:30:42 VCS INFO V-16-1-10298 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 09:30:42 VCS INFO V-16-1-10298 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 09:32:24 VCS ERROR V-16-2-13066 (solaris11-2) Agent is calling clean for resource(root_mount) because the resource is not up even after online completed.
2013/05/01 09:32:25 VCS INFO V-16-2-13068 (solaris11-2) Resource(root_mount) - clean completed successfully.
2013/05/01 09:32:25 VCS INFO V-16-2-13071 (solaris11-2) Resource(root_mount): reached OnlineRetryLimit(0).
2013/05/01 09:32:25 VCS ERROR V-16-1-54031 Resource root_mount (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-2
2013/05/01 09:32:25 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:32:25 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:32:26 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 09:32:28 VCS INFO V-16-1-10305 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:32:28 VCS INFO V-16-1-10305 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:32:28 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:32:31 VCS INFO V-16-1-10305 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:32:31 VCS ERROR V-16-1-10205 Group zpoolgrp is faulted on system solaris11-2
2013/05/01 09:32:31 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-2
2013/05/01 09:32:31 VCS INFO V-16-1-10493 Evaluating solaris11-1 as potential target node for group zpoolgrp
2013/05/01 09:32:31 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-1
2013/05/01 09:32:31 VCS INFO V-16-1-10493 Evaluating solaris11-2 as potential target node for group zpoolgrp
2013/05/01 09:32:31 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-2
 

dariuszz:

2013/05/01 09:50:07 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_orahome:online:The AltRootPath attribute is not set for the resource zpool_orahome, setting the default value to /
2013/05/01 09:50:07 VCS WARNING V-16-10001-20001 (solaris11-2) Zpool:zpool_oradata:online:The AltRootPath attribute is not set for the resource zpool_oradata, setting the default value to /
2013/05/01 09:50:07 VCS WARNING V-16-10001-5574 (solaris11-2) Mount:root_mount:online:The MountPoint </zoneroot> is already mounted
2013/05/01 09:50:25 VCS INFO V-16-1-10298 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 09:50:26 VCS INFO V-16-1-10298 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 09:50:42 VCS NOTICE V-16-1-10022 Agent Mount stopped
2013/05/01 09:51:16 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (Not initiated by VCS)
2013/05/01 09:51:41 VCS NOTICE V-16-1-10166 Initiating manual online of group zpoolgrp on system solaris11-2
2013/05/01 09:51:41 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group zpoolgrp on all nodes
2013/05/01 09:51:41 VCS NOTICE V-16-1-10301 Initiating Online of Resource zone_res (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:56:43 VCS WARNING V-16-2-13012 (solaris11-2) Resource(zone_res): online procedure did not complete within the expected time.
2013/05/01 09:56:43 VCS ERROR V-16-2-13065 (solaris11-2) Agent is calling clean for resource(zone_res) because online did not complete within the expected time.
2013/05/01 09:56:50 VCS INFO V-16-2-13068 (solaris11-2) Resource(zone_res) - clean completed successfully.
2013/05/01 09:56:50 VCS INFO V-16-2-13071 (solaris11-2) Resource(zone_res): reached OnlineRetryLimit(0).
2013/05/01 09:56:53 VCS ERROR V-16-1-54031 Resource zone_res (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-2
2013/05/01 09:56:53 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:56:53 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:56:54 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 09:56:56 VCS INFO V-16-1-10305 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:56:56 VCS INFO V-16-1-10305 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:56:56 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 09:56:59 VCS INFO V-16-1-10305 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 09:56:59 VCS ERROR V-16-1-10205 Group zpoolgrp is faulted on system solaris11-2
2013/05/01 09:56:59 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-2
2013/05/01 09:56:59 VCS INFO V-16-1-10493 Evaluating solaris11-1 as potential target node for group zpoolgrp
2013/05/01 09:56:59 VCS INFO V-16-1-10493 Evaluating solaris11-2 as potential target node for group zpoolgrp
2013/05/01 09:56:59 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-2
2013/05/01 09:56:59 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-1
2013/05/01 09:56:59 VCS WARNING V-16-10001-20001 (solaris11-1) Zpool:zpool_zoneroot:online:The AltRootPath attribute is not set for the resource zpool_zoneroot, setting the default value to /
2013/05/01 09:57:03 VCS WARNING V-16-10001-20002 (solaris11-1) Zpool:zpool_zoneroot:online:zpool import zoneroot failed. Try again using the force import -f option
2013/05/01 09:57:04 VCS INFO V-16-2-13716 (solaris11-1) Resource(zpool_zoneroot): Output of the completed operation (online)
==============================================
cannot mount 'zoneroot' on '/zoneroot': directory is not empty
cannot mount 'zoneroot' on '/zoneroot': directory is not empty
cannot mount 'zoneroot/zoneroot' on '/zoneroot/zoneroot': failure mounting parent dataset
cannot import 'zoneroot': a pool with that name is already created/imported,
and no additional pools with that name were found
==============================================

2013/05/01 09:57:15 VCS WARNING V-16-10001-20004 (solaris11-1) Zpool:zpool_zoneroot:monitor:Warning: The filesystem zoneroot with mountpoint /zoneroot is not mounted. Administrative action may be required
2013/05/01 09:58:15 VCS WARNING V-16-10001-20004 (solaris11-1) Zpool:zpool_zoneroot:monitor:Warning: The filesystem zoneroot with mountpoint /zoneroot is not mounted. Administrative action may be required
2013/05/01 09:59:15 VCS WARNING V-16-10001-20004 (solaris11-1) Zpool:zpool_zoneroot:monitor:Warning: The filesystem zoneroot with mountpoint /zoneroot is not mounted. Administrative action may be required
2013/05/01 09:59:16 VCS ERROR V-16-2-13066 (solaris11-1) Agent is calling clean for resource(zpool_zoneroot) because the resource is not up even after online completed.
2013/05/01 09:59:18 VCS INFO V-16-2-13068 (solaris11-1) Resource(zpool_zoneroot) - clean completed successfully.
2013/05/01 09:59:18 VCS INFO V-16-2-13071 (solaris11-1) Resource(zpool_zoneroot): reached OnlineRetryLimit(0).
2013/05/01 09:59:19 VCS ERROR V-16-1-54031 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-1
2013/05/01 09:59:19 VCS ERROR V-16-1-10205 Group zpoolgrp is faulted on system solaris11-1
2013/05/01 09:59:19 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-1
2013/05/01 09:59:19 VCS INFO V-16-1-10493 Evaluating solaris11-1 as potential target node for group zpoolgrp
2013/05/01 09:59:19 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-1
2013/05/01 09:59:19 VCS INFO V-16-1-10493 Evaluating solaris11-2 as potential target node for group zpoolgrp
2013/05/01 09:59:19 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-2
2013/05/01 09:59:19 VCS INFO V-16-6-15015 (solaris11-1) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
 

mikebounds:

I would start this manually, without VCS, to check everything is set up right (a command sketch follows the steps below), so:

Import the 4 ZFS storage pools

Check if your mount is automounted and if not then mount it

Start the z1 zone.

If you are able to start all of these manually, then probe the resources in VCS (or wait 5 minutes) and they should all report as online. If they do, then try to offline the resources one at a time, and then re-online them one at a time.
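
A rough sketch of those manual steps, assuming the pool and zone names from your config (adjust to suit):

# zpool import zoneroot
# zpool import oradata
# zpool import orahome
# zfs mount -a                              (mounts anything that did not automount)
# zoneadm -z z1 boot
# zoneadm list -cv                          (confirm z1 is running)
# hares -probe zone_res -sys solaris11-1    (or wait for the next monitor cycle)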

Please post output of performing above.

Mike

 


g_lee:

Mike's points are valid - one other thing to check as well if you haven't already:

Ensure the zpool altroot is set to / (if not: stop the zone, export the zpool, then re-import it and set the altroot on import)

AND

explicitly set the AltRootPath attribute for the zpool resource to "/"

# hares -modify <zpool_resource> AltRootPath /

(repeat above for other zpool resources)

Although the bundled agents guide indicates "/" is the default value, it doesn't seem to probe the resource correctly until the attribute is explicitly set via hares as above (observed on 6.0.1 / 6.0.3 - even though you are using a slightly older version, it's still probably worth a try).
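
A hedged sketch of that sequence, using the pool and resource names from this thread:

# zoneadm -z z1 halt                        (only if the zone is running)
# zpool export zoneroot
# zpool import -R / zoneroot                (-R sets the altroot on import)
# hares -modify zpool_zoneroot AltRootPath /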

If this post has helped you, please vote or mark as solution

dariuszz:

I followed Mike's advice; well, first I set AltRootPath to / on all zpools, but the zone still did not come up.

Maybe I need to re-add the Mount resource.

 

Anyway,

 

As per Mike, I stopped VCS and was able to boot the zone on node 1 (where it was installed originally). I then restarted VCS; here's the output:

root@solaris11-2:/var/VRTSvcs/log# hastatus -sum

-- SYSTEM STATE
-- System               State                Frozen

A  solaris11-1          UNKNOWN              0
A  solaris11-2          RUNNING              0

-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State

B  ClusterService  solaris11-1          Y          N               OFFLINE
B  ClusterService  solaris11-2          Y          N               ONLINE
B  zpoolgrp        solaris11-1          Y          Y               OFFLINE
B  zpoolgrp        solaris11-2          Y          N               OFFLINE

-- RESOURCES NOT PROBED
-- Group           Type                 Resource             System

E  ClusterService  IP                   webip                solaris11-1
E  ClusterService  NIC                  csgnic               solaris11-1
E  zpoolgrp        Zone                 zone_res             solaris11-1
E  zpoolgrp        Zpool                zpool_oradata        solaris11-1
E  zpoolgrp        Zpool                zpool_orahome        solaris11-1
E  zpoolgrp        Zpool                zpool_zoneroot       solaris11-1
root@solaris11-2:/var/VRTSvcs/log#
 

g_lee:

Check / set the AltRootPath of the zpool resource per the previous comment.

If it still doesn't probe, provide output:

# hares -display <zpool_resource>

 


dariuszz:

Now I am getting:

root@solaris11-2:~# hastatus -sum
VCS ERROR V-16-1-10600 Cannot connect to VCS engine
VCS WARNING V-16-1-11046 Local system not available
 

g_lee:

Did you stop VCS? (if so, why??)

# ps -ef |grep had

# gabconfig -a

If it's stopped - start VCS, set the attributes as instructed, check the logs; if it still doesn't work, provide the output.


dariuszz:

I did not stop VCS, other than when, following Mike's advice, I tried to boot the zone outside of VCS...

root@solaris11-2:/var/tmp# May  1 12:05:02 solaris11-2 last message repeated 1 time
ps -ef |grep had
    root  1460     1   0 12:00:48 ?           0:03 /opt/VRTSvcs/bin/had
    root  3960  1470   0 13:13:00 pts/1       0:00 grep had
    root  1463     1   0 12:00:48 ?           0:00 /opt/VRTSvcs/bin/hashadow
root@solaris11-2:/var/tmp# gabconfig -a
GAB Port Memberships
===============================================================
root@solaris11-2:/var/tmp# hastart
root@solaris11-2:/var/tmp#
 

dariuszz:

Okay, I rebooted and GAB is back up now, so here's the main.cf attached again for your kind perusal.

Attachment: main.txt (1.91 KB)
dariuszz:

root@solaris11-2:/var/VRTSvcs/log# ls
CmdServer-log_A.log  engine_A.log         hashadow_A.log       hashadow-err_A.log   hastart.log          HostMonitor_A.log    Mount_A.log          tmp                  Zone_A.log           Zpool_A.log
root@solaris11-2:/var/VRTSvcs/log# tail -f engine_A.log
2013/05/01 15:26:55 VCS INFO V-16-1-10307 Resource zone_res (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (Not initiated by VCS)
2013/05/01 15:27:02 VCS INFO V-16-1-50135 User root fired command: hares -clear zpool_zoneroot  from localhost
2013/05/01 15:27:02 VCS INFO V-16-1-10307 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (Not initiated by VCS)
2013/05/01 15:27:24 VCS INFO V-16-1-50135 User root fired command: hagrp -online zpoolgrp  solaris11-2  from localhost
2013/05/01 15:27:24 VCS NOTICE V-16-1-10166 Initiating manual online of group zpoolgrp on system solaris11-2
2013/05/01 15:27:24 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group zpoolgrp on all nodes
2013/05/01 15:27:24 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:27:42 VCS INFO V-16-1-10298 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 15:27:42 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:27:42 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:27:59 VCS INFO V-16-1-10298 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 15:27:59 VCS INFO V-16-1-10298 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is online on solaris11-2 (VCS initiated)
2013/05/01 15:27:59 VCS NOTICE V-16-1-10301 Initiating Online of Resource zone_res (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:33:00 VCS WARNING V-16-2-13012 (solaris11-2) Resource(zone_res): online procedure did not complete within the expected time.
2013/05/01 15:33:00 VCS ERROR V-16-2-13065 (solaris11-2) Agent is calling clean for resource(zone_res) because online did not complete within the expected time.
2013/05/01 15:33:07 VCS INFO V-16-2-13068 (solaris11-2) Resource(zone_res) - clean completed successfully.
2013/05/01 15:33:07 VCS INFO V-16-2-13071 (solaris11-2) Resource(zone_res): reached OnlineRetryLimit(0).
2013/05/01 15:33:10 VCS ERROR V-16-1-54031 Resource zone_res (Owner: Unspecified, Group: zpoolgrp) is FAULTED on sys solaris11-2
2013/05/01 15:33:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:33:10 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:33:10 VCS INFO V-16-6-15015 (solaris11-2) hatrigger:/opt/VRTSvcs/bin/triggers/resfault is not a trigger scripts directory or can not be executed
2013/05/01 15:33:12 VCS INFO V-16-1-10305 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 15:33:12 VCS INFO V-16-1-10305 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 15:33:12 VCS NOTICE V-16-1-10300 Initiating Offline of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-2
2013/05/01 15:33:16 VCS INFO V-16-1-10305 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-2 (VCS initiated)
2013/05/01 15:33:16 VCS ERROR V-16-1-10205 Group zpoolgrp is faulted on system solaris11-2
2013/05/01 15:33:16 VCS NOTICE V-16-1-10446 Group zpoolgrp is offline on system solaris11-2
2013/05/01 15:33:16 VCS INFO V-16-1-10493 Evaluating solaris11-1 as potential target node for group zpoolgrp
2013/05/01 15:33:16 VCS INFO V-16-1-10493 Evaluating solaris11-2 as potential target node for group zpoolgrp
2013/05/01 15:33:16 VCS INFO V-16-1-50010 Group zpoolgrp is online or faulted on system solaris11-2
2013/05/01 15:33:16 VCS NOTICE V-16-1-10301 Initiating Online of Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) on System solaris11-1
2013/05/01 15:33:19 VCS WARNING V-16-10001-20002 (solaris11-1) Zpool:zpool_zoneroot:online:zpool import zoneroot failed. Try again using the force import -f option
2013/05/01 15:33:21 VCS INFO V-16-2-13716 (solaris11-1) Resource(zpool_zoneroot): Output of the completed operation (online)
==============================================
cannot mount 'zoneroot' on '/zoneroot': directory is not empty
cannot mount 'zoneroot' on '/zoneroot': directory is not empty
cannot mount 'zoneroot/zoneroot' on '/zoneroot/zoneroot': failure mounting parent dataset
cannot import 'zoneroot': a pool with that name is already created/imported,
and no additional pools with that name were found
==============================================

2013/05/01 15:33:32 VCS WARNING V-16-10001-20004 (solaris11-1) Zpool:zpool_zoneroot:monitor:Warning: The filesystem zoneroot with mountpoint /zoneroot is not mounted. Administrative action may be required
 

Venkata Reddy Chappavarapu:

There looks to be some issue with the way you are configuring the Zpool and using it for the zone's root.

 

Can you provide the following command's output?

#zfs get mountpoint

 

Depending on whether the mountpoint property is set to legacy or not for a pool, you may need to configure a Mount resource to manage the mounting of the pool.

=========== excerpt from agent's guide ===========

When the value of the mountpoint property is one of the following:

■ If the value of the mountpoint property is something other than legacy, the agent checks the mount status of the ZFS file systems.

■ If the value of the mountpoint property is legacy, then it does not check the file system mount status. The agent assumes that you plan to use Mount resources to manage and monitor the ZFS file systems.

==================================================

Please refer to Bundled Agents Reference Guide for Zpool behavior and configuration.
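
If a dataset is left at mountpoint=legacy, a minimal, hypothetical Mount resource sketch might look like the one below. It assumes you have first run "zfs set mountpoint=legacy zoneroot/zoneroot"; names and values are illustrative only, and the Bundled Agents Reference Guide lists any further required attributes:

        Mount root_mount (
                MountPoint = "/zoneroot"
                BlockDevice = "zoneroot/zoneroot"
                FSType = zfs
                )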

It also looks like the oradata and orahome pools are supposed to be mounted inside the zone, so you need to configure the ZoneResName attribute for the Zpool resources monitoring these pools.

 

If the issue persists you may open a support case with Symantec.

Regards,

Venkat

Venkata Reddy Chappavarapu, Sr. Manager, Information Availability Group (VCS), Symantec Corporation

dariuszz:

root@solaris11-2:/etc/VRTSvcs/conf/config# zfs get mountpoint zoneroot
NAME      PROPERTY    VALUE      SOURCE
zoneroot  mountpoint  /zoneroot  default
root@solaris11-2:/etc/VRTSvcs/conf/config# zfs get mountpoint oradata
NAME     PROPERTY    VALUE     SOURCE
oradata  mountpoint  /oradata  default
root@solaris11-2:/etc/VRTSvcs/conf/config# zfs get mountpoint orahome
NAME     PROPERTY    VALUE     SOURCE
orahome  mountpoint  /orahome  default
root@solaris11-2:/etc/VRTSvcs/conf/config#
 

sajith_cr:

I have the following comments:

As per the zoneadm list -cv output, the zone root is /zoneroot/zoneroot. This requires the zpool named zoneroot to be mounted at /zoneroot/zoneroot. For this you need to change the AltRootPath attribute of the zpool_zoneroot resource to /zoneroot.

Also, I assume you want oradata and orahome to be mounted inside the local zone.

If this is the case, you need to configure these datasets in the zone's configuration file.

Then set the ZoneResName attribute for the zpool_oradata and zpool_orahome resources only (a hypothetical sketch follows below).

Let me know if this works.
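
A hypothetical sketch of both changes (Zpool resource names are taken from your main.cf; the dataset names and zone_res Zone resource name are assumptions, so adjust to your actual config):

# hares -modify zpool_zoneroot AltRootPath /zoneroot
# zonecfg -z z1
zonecfg:z1> add fs
zonecfg:z1:fs> set dir=/oradata
zonecfg:z1:fs> set special=oradata/oradata
zonecfg:z1:fs> set type=zfs
zonecfg:z1:fs> end
zonecfg:z1> commit
zonecfg:z1> exit
# hares -modify zpool_oradata ZoneResName zone_res
# hares -modify zpool_orahome ZoneResName zone_res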

 

~Sajith



dariuszz:

Nope, still does not work; please see the attached main.cf.

 

2013/05/02 20:20:14 VCS INFO V-16-6-15002 (solaris11-1) hatrigger:hatrigger executed /opt/VRTSvcs/bin/internal_triggers/dump_tunables solaris11-1 1   successfully
2013/05/02 20:20:15 VCS INFO V-16-1-10304 Resource webip (Owner: Unspecified, Group: ClusterService) is offline on solaris11-1 (First probe)
2013/05/02 20:20:15 VCS INFO V-16-10001-20902 (solaris11-1) Zone:Zone:imf_init:pid is 1432
2013/05/02 20:20:15 VCS INFO V-16-10001-20903 (solaris11-1) Zone:Zone:imf_init:/opt/VRTSamf/bin/amfinit -i -rZone 1432
2013/05/02 20:20:17 VCS INFO V-16-1-10304 Resource zpool_zoneroot (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (First probe)
2013/05/02 20:20:17 VCS INFO V-16-1-10304 Resource zpool_orahome (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (First probe)
2013/05/02 20:20:17 VCS INFO V-16-1-10304 Resource zpool_oradata (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (First probe)
2013/05/02 20:20:19 VCS NOTICE V-16-1-10438 Group VCShmg has been probed on system solaris11-1
2013/05/02 20:20:19 VCS NOTICE V-16-1-10435 Group VCShmg will not start automatically on System solaris11-1 as the system is not a part of AutoStartList attribute of the group.
2013/05/02 20:20:19 VCS INFO V-16-1-10304 Resource z1 (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-1 (First probe)
2013/05/02 20:20:19 VCS NOTICE V-16-1-10438 Group zpoolgrp has been probed on system solaris11-1
2013/05/02 20:20:19 VCS NOTICE V-16-1-10442 Initiating auto-start online of group zpoolgrp on system solaris11-2
2013/05/02 20:20:19 VCS NOTICE V-16-1-10233 Clearing Restart attribute for group zpoolgrp on all nodes
2013/05/02 20:20:22 VCS NOTICE V-16-1-10438 Group ClusterService has been probed on system solaris11-1
2013/05/02 20:21:02 VCS WARNING V-16-2-13012 (solaris11-2) Resource(zpool_oradata): online procedure did not complete within the expected time.
2013/05/02 20:21:02 VCS WARNING V-16-2-13012 (solaris11-2) Resource(zpool_zoneroot): online procedure did not complete within the expected time.
2013/05/02 20:21:02 VCS WARNING V-16-2-13012 (solaris11-2) Resource(zpool_orahome): online procedure did not complete within the expected time.
2013/05/02 20:21:02 VCS ERROR V-16-2-13065 (solaris11-2) Agent is calling clean for resource(zpool_oradata) because online did not complete within the expected time.
2013/05/02 20:21:02 VCS ERROR V-16-2-13065 (solaris11-2) Agent is calling clean for resource(zpool_orahome) because online did not complete within the expected time.
2013/05/02 20:21:02 VCS ERROR V-16-2-13065 (solaris11-2) Agent is calling clean for resource(zpool_zoneroot) because online did not complete within the expected time.
2013/05/02 20:22:03 VCS ERROR V-16-2-13006 (solaris11-2) Resource(zpool_zoneroot): clean procedure did not complete within the expected time.
2013/05/02 20:22:03 VCS ERROR V-16-2-13006 (solaris11-2) Resource(zpool_oradata): clean procedure did not complete within the expected time.
2013/05/02 20:22:03 VCS ERROR V-16-2-13006 (solaris11-2) Resource(zpool_orahome): clean procedure did not complete within the expected time.
2013/05/02 20:24:04 VCS ERROR V-16-2-13027 (solaris11-2) Resource(zpool_orahome) - monitor procedure did not complete within the expected time.
2013/05/02 20:24:04 VCS ERROR V-16-2-13027 (solaris11-2) Resource(zpool_oradata) - monitor procedure did not complete within the expected time.
2013/05/02 20:24:04 VCS ERROR V-16-2-13027 (solaris11-2) Resource(zpool_zoneroot) - monitor procedure did not complete within the expected time.
 

 

root@solaris11-2:/etc/zones# cat index
# ident "%Z%%M% %I%     %E% SMI"
# Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
# Use is subject to license terms.
#
# DO NOT EDIT: this file is automatically generated by zoneadm(1M)
# and zonecfg(1M).  Any manual changes will be lost.
#
global:installed:/
z1:configured:/zoneroot/zoneroot:
root@solaris11-2:/etc/zones# cat z1.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE zone PUBLIC "-//Sun Microsystems Inc//DTD Zones//EN" "file:///usr/share/lib/xml/dtd/zonecfg.dtd.1">
<!--
    DO NOT EDIT THIS FILE.  Use zonecfg(1M) instead.
-->
<zone name="z1" zonepath="/zoneroot/zoneroot" autoboot="false" brand="solaris" ip-type="exclusive" bootargs="-m verbose">
  <filesystem special="oradata/oradata" directory="/oradata" type="zfs"/>
  <filesystem special="orahome/orahome" directory="/orahome" type="zfs"/>
</zone>
root@solaris11-2:/etc/zones#
 

Attachment: main.txt (1.67 KB)
g_lee:

dariuszz,

zpool zoneroot is/contains the zone's zonepath - DON'T set the ZoneResName to z1 for the zpool_zoneroot resource as it will try to mount/monitor the zpool inside the zone, which it can't/shouldn't do as it's the path for the zone itself. The existing dependency (z1 requires zpool_zoneroot) is sufficient.
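
In other words, a hypothetical shape for the Zpool resources (names as in your current main.cf, where z1 is the Zone resource; attribute values are illustrative):

        // holds the zonepath itself - no ZoneResName here
        Zpool zpool_zoneroot (
                PoolName = zoneroot
                AltRootPath = "/"
                )

        // mounted inside the zone - checked only once the zone resource is ONLINE
        Zpool zpool_oradata (
                PoolName = oradata
                AltRootPath = "/"
                ZoneResName = z1
                )

        z1 requires zpool_zoneroot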

Back to basics:

# zpool list
# zfs list
# df -h
# ls -la /zoneroot /zoneroot/zoneroot
 


dariuszz:

Okay, I reinstalled everything, but am having the same issue.

 

 

2013/05/09 23:06:40 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:06:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:07:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:07:40 VCS INFO V-16-1-10304 Resource zone_res (Owner: Unspecified, Group: zpoolgrp) is offline on solaris11-chi-2 (First probe)
2013/05/09 23:07:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:07:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:08:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:08:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:08:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:09:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:09:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:09:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:10:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:10:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:10:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:11:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:11:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:11:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:12:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:12:41 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:12:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:13:27 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:13:40 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:13:55 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:14:27 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:14:40 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
2013/05/09 23:14:56 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oradata_res:monitor:Warning: The filesystem oradata with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:15:28 VCS WARNING V-16-10001-20004 (solaris11-chi-1) Zpool:oraclehome_res:monitor:Warning: The filesystem orahome with mountpoint / is not mounted. Administrative action may be required
2013/05/09 23:15:40 VCS ERROR V-16-10001-20014 (solaris11-chi-1) Zpool:zoneroot_res:monitor:The value altroot for zpool z1root is not set. Resource state is UNKNOWN
 
root@solaris11-chi-1:~# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
oradata  33.5G   134K  33.5G   0%  1.00x  ONLINE  /
orahome    34G   134K  34.0G   0%  1.00x  ONLINE  /
rpool    67.5G  14.8G  52.7G  21%  1.00x  ONLINE  -
z1root     34G   452M  33.6G   1%  1.00x  ONLINE  -
 
root@solaris11-chi-1:~#  hastatus -sum
 
-- SYSTEM STATE
-- System               State                Frozen
 
A  solaris11-chi-1      RUNNING              0
A  solaris11-chi-2      RUNNING              0
 
-- GROUP STATE
-- Group           System               Probed     AutoDisabled    State
 
B  zpoolgrp        solaris11-chi-1      N          N               PARTIAL
B  zpoolgrp        solaris11-chi-2      Y          N               OFFLINE
 
-- RESOURCES NOT PROBED
-- Group           Type                 Resource             System
 
E  zpoolgrp        Zpool                oraclehome_res       solaris11-chi-1
E  zpoolgrp        Zpool                oradata_res          solaris11-chi-1
E  zpoolgrp        Zpool                zoneroot_res         solaris11-chi-1
 
 
root@solaris11-chi-1:~# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
oradata                                134K  33.0G    31K  /
oradata/data                            31K  33.0G    31K  /z1root/z1root/root/data
orahome                                134K  33.5G    31K  /
orahome/orahome                         31K  33.5G    31K  /z1root/z1root/root/orahome
rpool                                 15.1G  51.3G  73.5K  /rpool
rpool/ROOT                            4.87G  51.3G    31K  legacy
rpool/ROOT/solaris                    4.87G  51.3G  2.67G  /
rpool/ROOT/solaris/var                2.11G  51.3G  2.07G  /var
rpool/VARSHARE                          43K  51.3G    43K  /var/share
rpool/dump                            8.19G  51.6G  7.94G  -
rpool/export                            63K  51.3G    32K  /export
rpool/export/home                       31K  51.3G    31K  /export/home
rpool/swap                            2.06G  51.4G  2.00G  -
z1root                                 452M  33.0G    32K  /z1root
z1root/z1root                          451M  33.0G    33K  /z1root/z1root
z1root/z1root/rpool                    451M  33.0G    31K  /z1root/z1root/root/rpool
z1root/z1root/rpool/ROOT               451M  33.0G    31K  legacy
z1root/z1root/rpool/ROOT/solaris       451M  33.0G   423M  /z1root/z1root/root
z1root/z1root/rpool/ROOT/solaris/var  28.0M  33.0G  27.3M  /z1root/z1root/root/var
z1root/z1root/rpool/VARSHARE            39K  33.0G    39K  /z1root/z1root/root/var/share
z1root/z1root/rpool/export              63K  33.0G    32K  /z1root/z1root/root/export
z1root/z1root/rpool/export/home         31K  33.0G    31K  /z1root/z1root/root/export/home
root@solaris11-chi-1:~#
 

 

Attachment: main.txt (1.01 KB)
dariuszz:

 

root@solaris11-chi-1:/etc/VRTSvcs/conf/config# ls -al /z1root
total 9
drwxr-xr-x   3 root     root           3 May  9 22:23 .
drwxr-xr-x  26 root     root          30 May  9 23:40 ..
drwx------   4 root     root           5 May  9 23:39 z1root
 
 
root@solaris11-chi-1:/etc/VRTSvcs/conf/config# ls -al /z1root/z1root
total 15
drwx------   4 root     root           5 May  9 23:39 .
drwxr-xr-x   3 root     root           3 May  9 22:23 ..
drwxr-xr-x   2 root     sys            2 May  9 22:41 dev
drwxr-xr-x   2 root     root           2 May  9 22:40 root
-rw-r--r--   1 root     root         840 May  9 23:39 SUNWdetached.xml
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#
 
dariuszz:

Alright, I stripped down my zone XML files and removed the oradata and orahome mounts from VCS and from the zone XML files. Now I am able to boot the zone via cluster control on the node on which it was created.

I also increased the debug level of messages; then, when the zone was running on node 1 (where it was first installed), I did a:

hagrp -switch zpoolgrp -to solaris11-chi-2

and I saw in the engine logs:

 

2013/05/10 00:41:40 VCS NOTICE V-16-1-10301 Initiating Online of Resource zone_res (Owner: Unspecified, Group: zpoolgrp) on System solaris11-chi-2
2013/05/10 00:41:40 VCS DBG_1 V-16-10001-0 (solaris11-chi-2) Zone:zone_res:online:None of the keys have a valid value. Configuration is not enabled for DR
2013/05/10 00:41:40 VCS DBG_1 V-16-10001-0 (solaris11-chi-2) Zone:zone_res:online:Attribute WorkLoad not found in the argument list.
2013/05/10 00:41:42 VCS DBG_3 V-16-10001-0 (solaris11-chi-2) Zone:zone_res:online:Zone [z1] is in [configured] state. Performing zoneadm attach operation..
2013/05/10 00:41:42 VCS DBG_5 V-16-10001-0 (solaris11-chi-2) Zone:zone_res:online:Attaching zone [z1] with -F option
2013/05/10 00:41:51 VCS DBG_3 V-16-10001-0 (solaris11-chi-2) Zone:zone_res:online:Command [/usr/sbin/zoneadm -z "z1" boot 2>&1] exited with output [zone 'z1': ERROR: no active dataset.
zone 'z1': ERROR: Unable to mount zone root dataset.
zoneadm: zone 'z1': call to zoneadmd failed
] and exitcode [1]
May  9 23:40:23 solaris11-chi-1 last message repeated 1 time
 
Please see updated main.cf
 
Attachment: main.txt (940 bytes)
dariuszz:

 

I have concluded that there's something incorrect in my zpool configuration and that VCS is working fine. I get this when I online the zone via VCS on node 1 (where it was installed and booted first). I then simply copied over the index file and the zone XML file to node 2. Please scroll down to see how the zpool gets mounted when I switch the resource to node 2.
 
root@solaris11-chi-1:/etc/VRTSvcs/conf/config# hagrp -online zpoolgrp -sys solaris11-chi-1
root@solaris11-chi-1:/etc/VRTSvcs/conf/config# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
oradata                                134K  33.0G    31K  /
oradata/data                            31K  33.0G    31K  /data
orahome                                134K  33.5G    31K  /
orahome/orahome                         31K  33.5G    31K  /orahome
rpool                                 15.1G  51.3G  73.5K  /rpool
rpool/ROOT                            4.87G  51.3G    31K  legacy
rpool/ROOT/solaris                    4.87G  51.3G  2.67G  /
rpool/ROOT/solaris/var                2.11G  51.3G  2.07G  /var
rpool/VARSHARE                          57K  51.3G    57K  /var/share
rpool/dump                            8.19G  51.6G  7.94G  -
rpool/export                            63K  51.3G    32K  /export
rpool/export/home                       31K  51.3G    31K  /export/home
rpool/swap                            2.06G  51.4G  2.00G  -
z1root                                 473M  33.0G    32K  /z1root
z1root/z1root                          473M  33.0G  34.5K  /z1root/z1root
z1root/z1root/rpool                    473M  33.0G    31K  /rpool
z1root/z1root/rpool/ROOT               472M  33.0G    31K  legacy
z1root/z1root/rpool/ROOT/solaris       472M  33.0G   439M  /z1root/z1root/root
z1root/z1root/rpool/ROOT/solaris/var  28.0M  33.0G  27.4M  /z1root/z1root/root/var
z1root/z1root/rpool/VARSHARE            39K  33.0G    39K  /var/share
z1root/z1root/rpool/export              63K  33.0G    32K  /export
z1root/z1root/rpool/export/home         31K  33.0G    31K  /export/home
root@solaris11-chi-1:/etc/VRTSvcs/conf/config#
 
root@solaris11-chi-1:/etc/VRTSvcs/conf/config# hagrp -switch zpoolgrp -to solaris11-chi-2
 
root@solaris11-chi-2:/etc/zones# zpool list
NAME     SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool   67.5G  9.44G  58.1G  13%  1.00x  ONLINE  -
z1root    34G   474M  33.5G   1%  1.00x  ONLINE  /
root@solaris11-chi-2:/etc/zones# zfs list
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
rpool                                 9.62G  56.8G  73.5K  /rpool
rpool/ROOT                            3.50G  56.8G    31K  legacy
rpool/ROOT/solaris                    3.50G  56.8G  2.67G  /
rpool/ROOT/solaris/var                 767M  56.8G   728M  /var
rpool/VARSHARE                        82.5K  56.8G  82.5K  /var/share
rpool/dump                            4.06G  56.9G  3.94G  -
rpool/export                            63K  56.8G    32K  /export
rpool/export/home                       31K  56.8G    31K  /export/home
rpool/swap                            2.06G  56.9G  2.00G  -
z1root                                 473M  33.0G    32K  /z1root
z1root/z1root                          473M  33.0G    35K  /z1root/z1root
z1root/z1root/rpool                    473M  33.0G    31K  /rpool
z1root/z1root/rpool/ROOT               472M  33.0G    31K  legacy
z1root/z1root/rpool/ROOT/solaris       472M  33.0G   439M  /
z1root/z1root/rpool/ROOT/solaris/var  28.0M  33.0G  27.4M  /var
z1root/z1root/rpool/VARSHARE            39K  33.0G    39K  /var/share
z1root/z1root/rpool/export              63K  33.0G    32K  /export
z1root/z1root/rpool/export/home         31K  33.0G    31K  /export/home
root@solaris11-chi-2:/etc/zones#
 
 
g_lee:

I then simply copied over the index file and the zone.xml file to node  2, please scroll down to see how the zpool gets mounted, when I switch the resource to node 2

It is generally NOT good practice to copy the index file from one node onto the other node, as this contains state info for the zone(s). If you want to configure the zones with the same details on the second/other nodes, use zonecfg to configure them on the remaining nodes (ie: with the same details you used on node1) so the zone is in the configured state (a hypothetical sketch follows below).
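
For example, a hypothetical way to replicate the zone configuration without copying the index file is zonecfg's export/import (the file path is illustrative):

On node 1:
# zonecfg -z z1 export -f /var/tmp/z1.cfg

Copy /var/tmp/z1.cfg to node 2, then on node 2:
# zonecfg -z z1 -f /var/tmp/z1.cfg
# zoneadm list -cv                          (z1 should now show as configured)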

Is there a particular reason you are nesting the zoneroot mount point? I think this is contributing to the confusion.

ie: if you are only running 1 zone per (zoneroot) zpool (zone z1 is the only zone in zpool z1root), why is the zonepath /z1root/z1root instead of just /z1root?

Can you try creating a zpool z1root (altroot /) and building the zone with zonepath as /z1root? DON'T set ZoneResName for zoneroot_zpool_res; just add the resource dependency (a hypothetical rebuild sketch follows the commands below), and then provide:

# hares -display <zoneroot_zpool_res>
# zfs get all <zpool>
# zonecfg -z <zone> info
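
A hedged sketch of the suggested rebuild; the disk name c0t1d0 is a placeholder only, and the zone settings are assumed from your earlier z1.xml:

# zpool create -R / z1root c0t1d0           (-R / sets the altroot at creation)
# zonecfg -z z1
zonecfg:z1> create
zonecfg:z1> set zonepath=/z1root
zonecfg:z1> set ip-type=exclusive
zonecfg:z1> commit
zonecfg:z1> exit
# zoneadm -z z1 install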
