Low Latency Transport / Global Atomic Broadcast frequently asked questions

Article:TECH18949  |  Created: 2009-01-14  |  Updated: 2009-01-14  |  Article URL http://www.symantec.com/docs/TECH18949
Article Type
Technical Solution




Q:  What is a cluster interconnect?

A:  A cluster interconnect is a data path between the nodes in a cluster, used to exchange information about managed resources and to maintain cluster membership. In the past the cluster interconnect was incorrectly termed a “heartbeat link”; that term is inaccurate because heartbeating is only a small portion of the actual traffic on an interconnect. In a VCS cluster the interconnect carries the cluster state snapshot exchanged at node startup, notification of any change in resource status on any node in the cluster, and node-to-node heartbeats. When CFS or RAC components are added, they also use the cluster interconnect for data traffic such as CFS metadata and RAC Cache Fusion.

Q:  What is the recommended interconnect configuration?

A:  Symantec recommends a minimum of two dedicated (private) 100 Megabit links (Gigabit links for RAC/CFS) between cluster nodes. In this configuration the network links are for the sole use of VCS interconnect traffic. In addition to the two dedicated interconnect links, a low-priority link is also recommended to provide further redundancy.
At a minimum, Symantec recommends one dedicated/private interconnect link plus one low-priority link sharing infrastructure with other data center functions. In all configurations, each interconnect link should run on infrastructure completely independent of all other links, so that the failure of one component cannot affect multiple interconnects.

Configuring multiple interconnects to share any infrastructure is not recommended. Configurations such as running two interconnects to the same hub or switch, or using a single virtual local area network (VLAN) to trunk between two switches, introduce a single point of failure into the architecture. The simplest guideline: no single failure, whether of power, network equipment, or cabling, can be allowed to disable two interconnects.

As a best practice, it is also recommended that interconnect interfaces be kept on separate hardware components. For example, given a system with an on-board interface ce0 and a quad fast-Ethernet card (interfaces qfe0-3), it is recommended that one link be implemented on the ce0 interface and one on a qfe interface.

Q:  How do you handle multiple clusters?

A:  In any environment with more than one VCS cluster, the operator must explicitly set the cluster ID to a unique value in /etc/llttab. Multiple clusters with the same ID can cause significant problems if either cluster can effectively “see” the other on any interconnect. This is especially important when using low-priority links on shared networks.
The best practice is to rigorously manage cluster IDs within the data center to ensure a unique ID is assigned to each cluster.
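On Solaris, for example, the cluster ID is set with the set-cluster directive in /etc/llttab. The node name, ID value, and interface names below are purely illustrative, a sketch rather than a template for any specific site:

```
# /etc/llttab -- illustrative sketch; names and values will differ per site
set-node node01
set-cluster 101    # must be unique among all clusters that can see each other
link qfe0 /dev/qfe:0 - ether - -
link qfe1 /dev/qfe:1 - ether - -
```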

Q:  Can multiple clusters share the same infrastructure?

A:  Assuming the user fully manages cluster IDs as described above, sharing interconnect infrastructure is perfectly acceptable. For example, several clusters may share one switch for interconnect 1 and a separate switch for interconnect 2.

Q:  Can Low Latency Transport (LLT) be run over a VLAN?

A:  Yes, as long as the following rules are met:
* Individual VLANs must be fully independent, just like the dedicated switches they replace. This means no single point of failure or single point of commonality, such as shared inter-switch links or even VLAN information servers.
* The VLAN connects the machines at layer 2 (see LLT over UDP for other configurations).

Q:  Can you place LLT links on a switch?

A:  Yes. By default LLT operates at network layer 2 and functions perfectly on a switch. When using VLANs, each LLT link should be placed in its own VLAN. Symantec recommends at least two completely independent interconnects for all cluster configurations, so if switches are used, the interconnects must run on completely independent switch infrastructures.

Q:  Can LLT be routed?

A:  LLT is a layer 2 (data link) protocol and carries no layer 3 (network layer/IP address) information, so it cannot be routed. As of VCS 4.x, Symantec also supports LLT over UDP for specific configurations where a layer 2 connection is not possible. The recommended configuration remains native LLT at layer 2.
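As a rough sketch only, an LLT-over-UDP configuration in /etc/llttab resembles the following. The addresses, ports, and field layout here are illustrative assumptions; take the exact syntax for your platform and VCS version from the installation guide:

```
# /etc/llttab -- illustrative LLT-over-UDP sketch
set-node node01
set-cluster 101
# link <tag> <device> - udp <UDP port> <MTU> <IP address> <broadcast address>
link link1 /dev/udp - udp 50000 - 192.168.10.1 192.168.10.255
link link2 /dev/udp - udp 50001 - 192.168.11.1 192.168.11.255
```

Note that, unlike native LLT, each link line carries an IP address, which is why LLT over UDP requires addresses on the interconnect interfaces.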

Q:  How far apart can nodes in a cluster be?

A:  Cluster distance is governed by a number of factors, the primary one being storage latency. Storage performance considerations typically limit "campus clusters" to approximately 80-100 km in cable distance. LLT has a much higher tolerance for latency, but it makes no sense to build a cluster at distances greater than the underlying storage can support for synchronous mirroring or synchronous replication. For greater distances, or for asynchronous replication methods, Symantec requires shifting to a “global cluster” configuration.
Maximum supported latency in any configuration must be less than 500 milliseconds round trip (with expected latency of 1 millisecond or less).

Q:  Do interconnect links require additional IP addresses?

A:  No.  In default configurations, LLT operates at layer 2 and does not need any IP addresses. In the specific cases where LLT over UDP is used, LLT requires an IP address on each interconnect interface.

Q:  How many nodes should be set in the GAB configuration (/etc/gabtab)?

A:  Symantec recommends setting gabconfig parameters to the total number of systems in the cluster. If you have a five-node cluster, GAB should not automatically seed until five nodes are present.  Based on this configuration, the following is the proper entry in /etc/gabtab:
/sbin/gabconfig -c -n 5

Q:  I have blade servers, with just two shared NICs for each chassis.  What's the best way to configure my interconnect?

A:  Symantec recommends dedicating one of the shared NICs to a private LLT connection, and using the second shared NIC as the public interface, with a lowpri LLT link connection configured.
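In /etc/llttab this pairing would be expressed with one link and one link-lowpri directive. The node name, cluster ID, and interface names below are illustrative:

```
# /etc/llttab -- illustrative blade-server sketch
set-node blade01
set-cluster 101
link qfe0 /dev/qfe:0 - ether - -          # dedicated private interconnect
link-lowpri ce0 /dev/ce:0 - ether - -     # public NIC carrying a low-priority link
```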

Q:  How does I/O fencing figure into cluster interconnects?

A:  I/O fencing provides added prevention against data corruption in the event that all cluster interconnects are lost.  Symantec recommends always configuring I/O fencing, especially where network integrity or reliability may be in question.  I/O fencing requires the use of disk hardware capable of SCSI-3 persistent reservations.

Q:  What are specific recommendations when I have a large number of clusters sharing a common interconnect infrastructure?

A:  Some method of rigid cluster ID assignment control is essential.  This method of cluster assignment control must be exercised each time a new cluster is deployed on the interconnect infrastructure.

Q:  I'm implementing Oracle RAC with VCS.  Are there any recommendations specific to that product?

A:  Yes.  Symantec requires the use of Gigabit Ethernet for RAC configurations, as well as any other CFS implementations.  All other recommendations still apply.
