Storage & Clustering Community Blog
TonyGriffiths | 07 Nov 2013 | 0 comments

Storage Foundation and High Availability Solutions (SFHA) 6.0.4 is now available for the operating systems below:

 

SUSE Linux Enterprise 11 SP2 & SP3

The Veritas Cluster Server (VCS) component additionally supports Oracle Linux 6 Update 4 (No Storage Foundation support)

 

The SFHA 6.0.4 documentation is available on SORT:

https://sort.symantec.com/documents/doc_details/sf...

 

The SFHA 6.0.4 software is available on FileConnect (serial key required).

Use SORT notifications to receive updates on new patches and documentation.

 

cheers

tony

TonyGriffiths | 02 Sep 2013 | 0 comments

 

 

SFHA 5.1SP1RP4 is now available on SORT:

https://sort.symantec.com/patch/finder

 

 

#   Product                                    Type           Patch                         Released
11  Veritas Storage Foundation HA 5.1SP1PR3    Rolling Patch  sfha-sol10_x64-5.1SP1PR3RP4   2013-08-21
12  Veritas Storage Foundation HA 5.1SP1       Rolling Patch  sfha-sol10_x64-5.1SP1RP4      2013-08-21
13  Veritas Storage Foundation HA 5.1SP1PR3    Rolling Patch  sfha-sol_sparc-5.1SP1PR3RP4   2013-08-21
14  Veritas Storage Foundation HA 5.1SP1       Rolling Patch  sfha-...
Setu Gupta | 25 Jul 2013 | 0 comments

DCIG (Data Center Infrastructure Group Inc.) released its High Availability and Clustering Software Buyer's Guide (attached to this post), which weights, scores, and ranks over 60 features across 13 software solutions from 10 software providers. Symantec's Veritas Cluster Server (VCS) achieved the "Best-in-Class" ranking and earned the top spot in this inaugural DCIG High Availability and Clustering Software Buyer's Guide.

VCS earned the only “Best-in-Class” ranking, and for good reason. VCS ranked “Best-in-Class” and/or “Excellent” in every single category that DCIG evaluated. Categories ranged from operating environment to management capabilities. DCIG mentioned, “Making Symantec’s achievement so impressive was that it’s in a highly competitive space where most high availability and clustering software packages only focus on a few or even only one operating...

sai_mukundan | 03 Jul 2013 | 0 comments

Executive Summary

Symantec conducted a survey on IT disaster recovery and high availability; customer responses revealed the following key findings:

   - A business continuity market opportunity exists for a distinct product that focuses on Disaster Recovery (DR) across multiple data center sites

   - Many customers have multivendor environments, and there is a market need for standardized, virtualization-agnostic solutions (20% of respondents)

   - Per-virtual-machine and per-CPU licensing models account for 90% of current DR product licensing schemes

 

Introduction

When a disaster, natural or otherwise, impacts the normal operations of a business, Disaster Recovery (DR) processes kick in to ensure that services remain available. The effectiveness of a DR plan is measured by how long it takes the business to recover its critical services (Recovery Time Objective, or RTO) and how much data is lost in the...

Kimberley | 04 Jun 2013 | 0 comments

More and more information is becoming readily available to companies, and they are leveraging it to advance the business and give consumers what they want. This practice, known as “big data,” has the potential to yield big opportunities, as well as its share of management headaches if not approached strategically. In the latest CIO Digest article, “Big Data Without Big Headaches,” Symantec lays out three basic strategies that IT leaders should consider when leveraging big data. Getting the strategies right will enable companies to make effective decisions and realize the business value of big data. But get them wrong, and you can end up with incomplete information and introduce a New Coke. With these data-driven strategies in place, companies like Asia Pacific Telecom, Telefonica de Espana, and Teradata have seen results in both financial and operational performance. Indeed, embracing big data doesn’t have to come with big headaches. To get more in-depth...

bpascua | 21 May 2013 | 0 comments

Having worked with clustering for nearly fifteen years, I believe this still qualifies me as a total novice. It’s a bit like saying I work with cars: if I drive a Formula One car, I won’t have much knowledge about stock car racing. When we talk about cluster computing, it normally refers to a number of computers working in some coordinated fashion. These typically fall into two types: shared disk and shared nothing.

Shared disk is the most recognized architecture and, as the name suggests, simply means that all storage is available to all nodes in the cluster. Examples include Oracle RAC and Storage Foundation Cluster File System. In both cases a lock manager is required to coordinate access to the data. Shared disk architecture offers the highest levels of availability, but depending on the application it can scale very badly in true parallel shared-disk clusters. I often see this in Oracle RAC environments...
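To make the distinction concrete, here is a small, purely illustrative Python sketch (not tied to any Veritas or Oracle API): in the shared-disk model every node can reach the same block and must first be granted a cluster-wide lock, while in the shared-nothing model each node only ever writes to the partition it owns.

import threading

# Shared disk: all nodes see the same storage, so a lock manager must
# serialize access to each block (here, a dict guarded by a single lock).
shared_disk = {}
lock_manager = threading.Lock()   # stand-in for a distributed lock manager

def shared_disk_write(node, block, data):
    with lock_manager:            # every node must be granted the lock first
        shared_disk[block] = (node, data)

# Shared nothing: each node owns a partition; no cross-node locking is
# needed, but each request must be routed to the node that owns the key.
partitions = {0: {}, 1: {}}

def shared_nothing_write(key, data):
    owner = hash(key) % len(partitions)   # simple static partitioning
    partitions[owner][key] = data

if __name__ == "__main__":
    threads = [threading.Thread(target=shared_disk_write,
                                args=(n, "blk0", f"from node {n}"))
               for n in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()
    shared_nothing_write("customer:42", "row data")
    print(shared_disk, partitions)

The lock round-trips in the first model are exactly what limits scaling when many nodes keep updating the same hot blocks, which is the behaviour described above for parallel Oracle RAC environments.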

sai_mukundan | 25 Apr 2013 | 1 comment

 

Veritas Cluster Server (VCS) has added support for Fusion-io directCache through a VCS agent. Fusion-io’s directCache software transforms their ioMemory storage into a block-based storage cache by intelligently placing the most frequently accessed data in flash memory for increased performance. The VCS agent can check the status of directCache and enable or disable directCache for use by any application. The Fusion-io VCS agent supports the monitor, online, and offline entry points, and supports VCS 5.1 and 6.0 on RHEL 5 and RHEL 6. For more information on the agent, please visit SORT @ https://sort.symantec.com/agents (select 'Partner agent' under Agent type) or http://support.fusionio.com.
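For context on how such an agent plugs into the cluster: script-based VCS agents report resource state from their monitor entry point via exit codes, conventionally 100 for offline and 110 for online. The sketch below is a hypothetical, heavily simplified monitor written in Python; the dc-status command, its flag, and its output format are illustrative assumptions, not the actual Fusion-io tooling or the shipped agent.

#!/usr/bin/env python
"""Hypothetical, simplified VCS-style monitor entry point for a cache resource.

VCS script agents signal resource state via exit code: 100 = offline, 110 = online.
The status command and its output parsing below are illustrative assumptions only.
"""
import subprocess
import sys

VCS_OFFLINE = 100
VCS_ONLINE = 110

# Assumed status command; the real directCache CLI may differ.
dc_status_cmd = ["/usr/bin/dc-status", "--brief"]

def monitor():
    try:
        out = subprocess.check_output(dc_status_cmd).decode()
    except (OSError, subprocess.CalledProcessError):
        return VCS_OFFLINE   # command missing or failed: report offline
    # Assume the tool prints a line containing "enabled" when caching is active.
    return VCS_ONLINE if "enabled" in out.lower() else VCS_OFFLINE

if __name__ == "__main__":
    sys.exit(monitor())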

Theresa LaVeck | 18 Apr 2013 | 0 comments

Check out the recording of our super session at Vision. Jeff Hausman discusses the importance of business continuity and how to keep your business up and running.

Kimberley | 03 Apr 2013 | 1 comment

 

SUNY’s Upstate Medical University is both a medical center and an education institution. With a goal of improving its community through education, patient care, and research, the medical center’s 8,900 employees rely on an electronic medical record (EMR) system that helps manage and streamline patient records. With an EMR system in place, a solution that helps maintain an extremely high level of availability is imperative. To facilitate this, SUNY Upstate Medical University turned to Symantec. In the 16 months since deploying the high availability solution from Symantec, SUNY reports 99.9994% availability. Additional results include performing automated failover in less than one minute, reducing the time to detect failure of a host system by up to 95 percent, and reducing the time to apply operating system patches by up to 92 percent. To read more about how Symantec’s high availability solutions are helping SUNY Upstate Medical University, check out this customer...
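To put the 99.9994% figure in perspective, a quick back-of-the-envelope calculation (assuming roughly 30.4 days per month and continuous 24x7 operation; these assumptions are mine, not from the case study) shows what it implies in absolute downtime over 16 months:

# Rough downtime implied by 99.9994% availability over 16 months.
# Assumes ~30.44 days per month and round-the-clock operation.
availability = 0.999994
months = 16
total_minutes = months * 30.44 * 24 * 60              # ~701,000 minutes
downtime_minutes = (1 - availability) * total_minutes
print(f"total minutes in the period: {total_minutes:,.0f}")
print(f"implied downtime: {downtime_minutes:.1f} minutes")   # roughly 4 minutes

In other words, the reported figure corresponds to only a few minutes of unplanned downtime across the entire 16-month period.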

Mike Reynolds PMM | 02 Apr 2013 | 0 comments

 

Disaster recovery testing is not always a favorite topic of IT professionals. For administrators, it often requires late nights or weekend work, and for managers it can be costly and disruptive to business operations. Ironically, organizations put HA/DR solutions and plans in place to keep applications and business services up and available, while testing those plans can mean downtime. Another issue is simply keeping the secondary systems or the DR site in compliance and configured properly relative to the primary systems or location. Simple configuration changes on the primary systems or site, if not mirrored on the secondary systems, can have major consequences for the organization's ability to fail over and recover properly when an incident occurs, whether it is a major catastrophe or a simple error. And don't forget the speed of recovery. Some applications require very...