
Local zone went to "configured" state when offlined

Created: 26 Mar 2014 | 8 comments

For some reason one of the local zones configured in VCS went down. We noticed the zone moved to the "configured" state, and we were unable to boot it from outside VCS using the zoneadm attach & boot commands. However, it came online using the VCS command hagrp -online SG ...

We would like to know why the zone turned to the "configured" state while offlined (the local zone root path was detached). Is this done so that the zone resource can fail over to an alternate node?

If so, in our environment we have our local zone resources running on both active & passive nodes.


Comments (8)

mikebounds:

See these extracts from the 6.0 Bundled Agents guide for the Zone agent:

Note: Solaris 10 Update 3 or later enables attach and detach functionality for
zones. Since the Zone agent supports this feature, you can patch a node where
the service group that contains the zone resource is offline.

If you don't want the zone detached, then set the DetachZonePath attribute to 0:


If disabled, the Zone agent skips detaching the Zone root during
zone resource offline and clean. DetachZonePath is enabled (1)
by default.
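As a sketch of how that attribute would be changed with the standard VCS commands (the resource name zone_res is a placeholder, not from the thread):

```
# Open the cluster configuration for writing
haconf -makerw

# Disable zone root detach on the zone resource (zone_res is hypothetical)
hares -modify zone_res DetachZonePath 0

# Save the configuration and make it read-only again
haconf -dump -makero
```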


UK Symantec Consultant in VCS, GCO, SF, VVR, VxAT on Solaris, AIX, HP-ux, Linux & Windows

If this post has answered your question then please click on "Mark as solution" link below

sureshpeters:

Is the detach & attach functionality for zones enabled so that zone resources can fail over to the other node and attach there?

Or is it that the respective service groups of a zone will fail over only if the zone is detached?

See the output below; the Zone resource is online on both nodes:

B  123X-SG     123a             Y          N               ONLINE
B  123Z-SG     123b             Y          N               ONLINE

mikebounds:

You have the zone resource in a parallel group, but the Zone agent will still detach the zone when the resource is offlined, regardless of whether the resource is in a failover or a parallel group. So if you offline 123Z-SG on system 123b, then the zone will be detached and, as per the Bundled Agents guide, this would allow you to patch the zone on system 123b if you wanted to.
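For example, using the group and system names from the status output above, that would look like this (a sketch; the zone name reported by zoneadm is whatever the resource's ZoneName attribute holds):

```
# Offline the parallel service group on system 123b only
hagrp -offline 123Z-SG -sys 123b

# On 123b the Zone agent has detached the zone root, so zoneadm
# now reports the zone in the "configured" state
zoneadm list -cv
```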



sureshpeters:

We are using ZFS Live Upgrade for patching: create a new BE, apply the patches, then boot into the new BE.

We are running a parallel zone resource on nodes a & b. Do we still need the option to detach the zone root enabled? We don't require the zone resource itself to fail over to the other node if it is faulted, but the data service groups on top of the zone group do have to fail over to the other node.

Does the zone root have to be detached for the data service groups to fail over to the other node?

sureshpeters:

I'd appreciate any update on the above request.

Venkata Reddy Chappavarapu:

Where is the file system you are using for the zone root? Is it ZFS?

Which operating system do you have, Solaris 10 or Solaris 11?

With Solaris 11, only ZFS is supported as the zone root file system. If you are doing a patch upgrade, you need to make sure that the package repository is accessible inside the zone as well.

You do not need to detach the zone for the parent group to fail over across nodes when your zone is configured in a parallel service group.
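In main.cf terms this is typically modelled as an application group depending on the parallel zone group with an online local dependency; a minimal sketch (group and system names are hypothetical):

```
group zone_grp (
    SystemList = { nodeA = 0, nodeB = 1 }
    Parallel = 1
    )

group app_grp (
    SystemList = { nodeA = 0, nodeB = 1 }
    )

    // app_grp fails over between nodes while zone_grp
    // stays online on both
    requires group zone_grp online local firm
```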

Can you share your

Setting DetachZonePath to 1 will detach the zone during offline or clean operations, which is the default behavior. If you do not want the Zone agent to detach the zone root, set the attribute to 0 as suggested above.



Venkata Reddy Chappavarapu,

Sr. Manager,

Information Availability Group (VCS),

Symantec Corporation


PS: If you are happy with the answer provided, please mark the post as solution.

sajith_cr:

From the description, your major concern is why the manual zoneadm attach & boot commands are not working, rather than why the zone root is detached. Is that correct?

For the first issue, one possibility is that the ZFS pool was not imported when you tried zoneadm attach. VCS may be doing it correctly because of the dependency set between the zone resource and the storage for its zone root file system.
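If that is the case, the manual sequence would need the pool imported first; a sketch with hypothetical pool and zone names:

```
# Import the pool that holds the zone root (name is a placeholder)
zpool import zonepool

# Attach the zone root and boot the zone
zoneadm -z myzone attach
zoneadm -z myzone boot
```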

The output of zoneadm attach would help to figure out the actual issue.




If this post has helped you, please vote or mark as solution.

AlexeyL:

Hi sureshpeters,

If you fail over just the resources, you don't need to detach the zones. We keep the zones on each cluster node up and running during failover of service groups. As Mike mentioned, your zone resources are in parallel groups. The detach step is only useful when you want the same zone mounted on another cluster node. For example, if you have one zone located on a shared LUN, you can offline all resources and detach the zone using the cluster commands, then mount that shared LUN on another node and attach the zone there. Given the amount of time it takes to detach/attach, and a single zone being a single point of failure, I would prefer, for failover purposes, to leave the zones up and running on each host.

For patching you can still have the zones running if you use ZFS. You create an alternate boot environment, which will include your zones as well. When you finish patching, you activate the ABE and reboot the host (using init 6). The zones will then be running the latest patches. Then you can fail over to that host and patch the standby node using the same method.
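The Live Upgrade flow described above can be sketched as follows (the BE name, patch directory, and patch ID are placeholders; check luupgrade(1M) for exact options):

```
# Create the alternate boot environment; zones on ZFS are cloned into it
lucreate -n patchedBE

# Apply a patch to the inactive BE (patch ID is hypothetical)
luupgrade -t -n patchedBE -s /var/tmp/patches 123456-01

# Activate the new BE and reboot with init 6 (not reboot),
# so the BE switch completes correctly
luactivate patchedBE
init 6
```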