In the article Commoditizing High Availability and Storage using Flexible Storage Sharing I described my first attempt to create a two-node cluster based on Flexible Storage Sharing within Symantec Cluster File System, and the nice results I got. My next step was to increase the node count, as I wanted to move to an architecture where a database runs on each node. The first step was to add a new node to the cluster.
I am often asked how easy or difficult it is to add a node to a cluster, so this was a good opportunity to document the process. Our engineers in the Common Product Installer (CPI) group have done a great job over the years, and adding a node to a cluster can now be done in a few easy steps.
The first thing to do is deploy the packages on the new server. There are several ways to do this: you can use Yum as described in this article, you can use the Installer, or, as I did, you can use a new 6.1 feature called the Deployment Server. This is a server (I used the first node in my cluster) where the packages are stored, and it acts as a central location from which to install and distribute the software to any supported UNIX or Linux operating system.
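For reference, the Deployment Server in 6.1 is driven by a menu-based script shipped with the release; as a sketch (the path below is where the 6.1 install scripts live on my node, so check your own layout before relying on it):

```shell
# Launch the Deployment Server menu from the node that stores the packages
cd /opt/VRTS/install
./deploy_sfha
```

From the menu you can download release images, set up repositories, and push the packages to the new server.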
In my configuration I already have two nodes (named down and up), and I want to add a new node named strange. From any existing node, I just need to invoke the installer with the -addnode flag and the name of the new node:
down:/opt/VRTS/install> ./installsfcfsha61 -addnode strange
The installer then verifies that all the prerequisites are met:
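If you want to validate the new system before starting the add-node procedure, the same checks can also be run on their own; a minimal sketch, assuming the standard CPI -precheck option and using my node name:

```shell
# Run the pre-installation checks against the node being added
./installsfcfsha61 -precheck strange
```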
We enter the name of any node of the cluster (either down or up in our case). The installer uses this node to collect the configuration of the existing cluster.
The installer checks that communication with the new node is working and, if needed, synchronizes the system clocks.
We then confirm that we want to add the node to the cluster.
We enter the private networks that will be used (the installer should detect them automatically) and verify that they are correct:
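To double-check the private interconnect from an existing node, the standard VCS link and membership utilities can be used; the output will of course vary with your hardware and cluster:

```shell
# Show the LLT node map and link state as seen by this cluster member
lltstat -nvv

# Show GAB port membership, which lists the nodes currently in the cluster
gabconfig -a
```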
The configuration script detects that shared volumes are already mounted in the cluster and offers to mount them on the new node. This is a shared-nothing configuration using Flexible Storage Sharing: although the new node has no direct connectivity to that storage, it can mount it and use it as if it were local. From this point on, the new node has access to the global namespace provided by the cluster.
Once we answer yes, the file systems are mounted on the new node and the add-node operation completes successfully.
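A quick way to confirm the result from the new node is to look at the cluster summary and at the VxFS mounts (these are standard commands; the mount points shown depend on your configuration):

```shell
# Cluster-wide summary, which should now include the new node
hastatus -sum

# Cluster file systems now mounted locally on the new node
df -t vxfs
```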
The next figure shows a graphical representation of the new configuration. Before this change I had a two-node cluster sharing four mount points, backed by local storage contributed by nodes down and up. Node strange has been added to the cluster and can now mount the four available mount points, which are exposed as a global namespace across all cluster members.
In this configuration I can use that third node simply as a compute resource for analytics or backup operations. If that server has internal storage (as mine does), I can use it to add a third mirror to my volumes for extra resiliency (a unique FSS capability compared to other vendors), or I can redistribute my workload for an even distribution across all the nodes. That is what I will describe in my next article, where a database instance will be deployed on each of these servers, keeping two copies of the data across the cluster for resiliency.
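As a sketch, adding a third mirror on the new node's internal storage with FSS could look like the following; the disk, disk group, and volume names are placeholders for my setup, not literal commands to copy unchanged:

```shell
# Export the internal disk on strange so FSS can share it with the cluster
vxdisk export disk_1

# Add the exported disk to the existing shared disk group
vxdg -g datadg adddisk disk_1

# Add a third mirror to the volume, allocated on storage from node strange
vxassist -g datadg mirror datavol host:strange
```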