Storage & Clustering Community Blog
TonyGriffiths | 25 Jul 2014 | 0 comments

Symantec Storage Foundation and High Availability Solutions (SFHA) 6.1.1 is now available for AIX, Solaris(SPARC), Red Hat Enterprise Linux, and SUSE Linux.

6.1.1 is a Maintenance Release (MR); refer to the Release Notes for more information.

SORT links are below.

Use SORT notifications to receive updates on new patches and documentation.



Rank  Product                    Release type         Patch name                Release date
1     Storage Foundation HA 6.1  Maintenance Release  sfha-sol11_sparc-6.1.1    2014-07-23
2     Storage Foundation HA 6.1  Maintenance Release  sfha-sol10_sparc-6.1.1...
S_D | 14 Jul 2014 | 0 comments

Business Challenges

Businesses and organizations of all sizes have adopted virtualization as a core technology within their data centers. Both new and legacy applications are moving to virtualization because it reduces cost, improves hardware utilization, and simplifies management. Originally, proprietary virtualization platforms were the dominant forces in the industry. Lately, customers have realized that open virtualization platforms can provide higher performance and better functionality, and are more cost effective.

The rapid adoption of virtualization has driven demand for support of High Availability (HA) and Disaster Recovery (DR) functionality. Businesses rely on an "always on" IT infrastructure in order to provide competitive advantage and create unique customer values. Applications built on virtualized environments must deliver appropriate levels of availability with minimal or no downtime.

Executive Overview

Red Hat and Symantec...

ccarrero | 26 Jun 2014 | 0 comments

Flexible Storage Sharing provides great capabilities to reduce capital and operational expenditures. In a previous blog entry I described how to commoditize high availability and storage using Flexible Storage Sharing, and later we saw in this article how to add an extra node to the cluster. In this blog entry I am going to describe my next step, which was to have a database instance running in each cluster node. My idea here was to provide resiliency by having a mirror of my data and redo logs on at least two servers. With this approach, each node will have a local copy of the database that is running on it, plus a mirror for another instance. This will be the architecture I will be using:
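As a rough sketch, a mirrored layout like this can be built with standard VxVM commands on top of FSS-exported local disks. The disk names, disk group name, and volume sizes below are placeholders for illustration, and the exact FSS allocation attributes may vary by release:

```sh
# Export each node's local (DAS) disks so FSS can share them across the cluster
vxdisk export disk_node1_0
vxdisk export disk_node2_0

# Create a shared disk group from the exported disks
vxdg -s init fssdg disk_node1_0 disk_node2_0

# Create mirrored volumes for data and redo logs; with two mirrors,
# each node holds one complete copy of the data
vxassist -g fssdg make datavol 100g nmirror=2
vxassist -g fssdg make redovol 10g nmirror=2
```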


Kimberley | 24 Jun 2014 | 0 comments

Here is the list of top Storage and Clustering community Support Technotes for the past quarter. Please browse the list before you post a question here on Connect, as you may find an answer to the product issue you are seeing. Hope you find your answer here.

Examples of using Iperf to diagnose network issues

How to Check Interface Duplex and Speed Using ndd/kstat on Solaris

How to connect to the console of the server (Session ID=0) on Windows 2003 by using Remote Desktop during Storage Foundation installations

How to run an AppCritical Network Analysis test
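The Iperf technote above covers the topic in detail; as a quick reminder, a basic throughput check between two cluster nodes looks like this (the hostname is a placeholder):

```sh
# On the receiving node: start iperf in server mode
iperf -s

# On the sending node: run a 10-second TCP throughput test against it
iperf -c server-node -t 10

# UDP mode with a target bandwidth can help expose packet loss
iperf -c server-node -u -b 100M
```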


ccarrero | 09 Jun 2014 | 0 comments

In the article Commoditizing High Availability and Storage using Flexible Storage Sharing I described my first attempt to create a two node cluster based on Flexible Storage Sharing within Symantec Cluster File System, and the nice results that I got. My next step was to increase the node count, as I wanted to move to an architecture where a database would be running on each of the nodes. The first step here was to add a new node to the cluster.

Many times I get the question of how easy or difficult it is to add a node to the cluster. This was a good opportunity to document what I did here. Our engineers in the Common Product Installer (CPI) group have done a great job over the years, and adding a node to the cluster can now be done in a few easy steps.
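The CPI-driven flow is roughly as sketched below. The installer script name varies by product and release (the version-suffixed name here is an example for SFCFSHA 6.1), and the installer prompts interactively for the new node's name:

```sh
# From an existing cluster node, run the product installer with -addnode
/opt/VRTS/install/installsfcfsha61 -addnode

# The installer then prompts for the new node's hostname, verifies
# ssh/rsh connectivity, copies the cluster configuration over, and
# starts the cluster processes on the new node.
```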

The first thing to do is deploy the packages on the new server. There are several ways to do...

john_klann | 09 Jun 2014 | 0 comments

Intel has three models, starting at $500, depending on your workload plans, and Fusion-io is delivering PCIe cards with up to 2.6 GB/s of throughput. Symantec Smart I/O delivers fast storage performance with no single point of failure and can be used to deliver highly available applications. Here are the top benefits:

  1. Increased Application Performance – bring data closer to the application and avoid the SAN bottleneck
  2. Reduced Storage Costs – by decoupling IOPS and capacity, you can now use tier 2 and 3 storage for your most demanding applications
  3. Improved Storage Utilization – in shared storage deployments with Cluster File System; you can even implement DAS and use the Symantec Flexible Storage Sharing feature
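On recent Storage Foundation releases, Smart I/O caching is administered with the sfcache command. A minimal sketch, assuming a local SSD device and a VxFS mount point (the device name and mount point below are placeholders, and option details may differ by release):

```sh
# Create a cache area on a local SSD
sfcache create ssd0_0

# List cache areas and their state
sfcache list

# Enable caching for a mounted VxFS file system
sfcache enable /oradata
```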

Intel Debuts NVMe Enterprise PCIe SSDs

The computer chipmaking giant targets the data center with three new...

ccarrero | 27 May 2014 | 2 comments

It was during this year's VISION conference in Las Vegas that I got a very interesting question from Mike. He was running my Flexible Storage Sharing (FSS) lab when he asked me: “So Carlos, with FSS I can use internal HDDs and provide a highly available service, commoditizing the HW and avoiding any SAN need”. Then I started talking about the work I have been doing in the lab over the last months, and that was when I realized I had to start writing and publishing about that work. So here we go.

During the FSS deployment we were all very excited about the capability to bring any application's data closer to the CPU, especially when using internal SSDs. We worked very closely with Intel and published the white paper Remove the Rust: Unlock DAS and go SAN-Free. This white paper described how I could quadruple the performance of a database. But what happens if I do not...

starflyfly | 21 May 2014 | 0 comments

Sometimes we need to stop or start VCS/SFCFS/SFRAC. Here is a good article to help you do that.

How to stop and start VCS, SFCFS and SFRAC (Cluster Server, Storage Foundation Cluster File System and Storage Foundation for Oracle RAC) for UNIX & Linux

Table of Contents
Using the install script
Manually starting and stopping...
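As a quick reference for the manual steps, the core VCS commands look like the sketch below; the linked article covers the additional steps needed for SFCFS and SFRAC stacks:

```sh
# Stop VCS on all nodes, taking service groups offline
hastop -all

# Or stop VCS on the local node only, leaving applications running
hastop -local -force

# Start VCS on the local node
hastart

# Check cluster and service group status
hastatus -sum
```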