Storage and Availability Management
Mike Reynolds PMM | 18 Jul 2012 | 1 comment

Disasters happen. In fact, according to the Münchener Rückversicherungs-Gesellschaft (Munich Re) Geo Risks Research unit, 355 events, from floods and tornadoes to tsunamis and earthquakes, occurred in 2011. IT organizations need protection from such horrific events, but also from those that are more commonplace, for instance power outages and equipment failures.

Here are a few interesting statistics from the latest Symantec State of the Data Center report (available later this year):

  • 70% of organizations experience downtime from power failure (11.3 hours per outage)
  • 63% experience cyber attacks (52.7 hours per outage)
  • 26% conducted a power outage and failure impact assessment
  • Median downtime per outage in the last 12 months was 5 hours
  • Organizations experienced an average of 4 downtime incidents in the past 12 months
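Taken together, the last two figures imply a rough annual availability number. A quick back-of-the-envelope calculation, assuming the median 5-hour outage applies to each of the 4 incidents:

```python
# Back-of-the-envelope availability estimate from the survey figures above.
# Assumes each of the 4 annual incidents lasts the 5-hour median duration.
incidents_per_year = 4
hours_per_incident = 5          # median downtime per outage
hours_per_year = 365 * 24       # 8760

downtime_hours = incidents_per_year * hours_per_incident   # 20 hours
availability = (hours_per_year - downtime_hours) / hours_per_year

print(f"Estimated downtime: {downtime_hours} hours/year")
print(f"Estimated availability: {availability:.4%}")
```

Twenty hours of downtime a year works out to roughly "two nines" of availability, which is exactly the kind of number HA and DR planning is meant to improve.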

When most people hear the term "Disaster Recovery", they immediately think of data...

Eric.Hennessey | 18 Jun 2012 | 0 comments

Last week, Symantec and Microsoft announced a joint effort to deliver Disaster Recovery as a Service (DRaaS). This hybrid approach will involve using Symantec Storage Foundation HA for Windows on the customer premises to provide data replication and failover management to Microsoft's Azure cloud services. This is a pretty big deal.

While our largest customers have the luxury of multiple data centers spread across a large area, most companies don't. With multiple data centers, a company can provision additional capacity in each one to host another data center's critical applications in the event of a site failure. But in the absence of additional real estate, a company's options for disaster recovery are more limited. And this is where DRaaS comes into play.

This service will allow smaller organizations to acquire virtual real estate in the form of Microsoft's...

Raissa_T | 13 Jun 2012 | 2 comments

Are you setting up Veritas Cluster Server in VMware vSphere?  Below is a technical guide that walks you through the process, covering topics such as configuring Veritas Cluster Server clusters across ESX hosts and the compatibility of VCS with VMware HA. Check out the new Application Note, available now.

Raissa_T | 30 May 2012 | 0 comments

VMworld 2012 is around the corner: August 26-30, 2012 (US) and October 9-11 (EMEA).  What do you want to see, hear, and learn at #VMworld 2012? Call for Papers voting is open! Vote for your favourite topics. bit.ly/JnkmdG

Eric.Hennessey | 03 May 2012 | 1 comment

A funny thing happened the other day when I went to one of my favorite sites to look up a word...I saw the above message, but the redirect URL read "failover-namechangedtoprotecttheinnocent.com". Now, far be it from me to correct the vocabulary of a site whose stock in trade is word definitions, but in the high availability biz, that is NOT what we mean by "failover". A failover is when a service under HA cluster control is relocated, in its entirety, to another server in the cluster in the event of a failure.
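As a concrete illustration, a failover service group in Veritas Cluster Server is defined in the cluster configuration (main.cf) with a priority-ordered list of systems it can run on. The sketch below is hypothetical — the group name, addresses, paths, and attribute values are invented, and the exact resource types vary by platform and VCS version:

```
group websg (
    SystemList = { node1 = 0, node2 = 1 }   // priority-ordered failover targets
    AutoStartList = { node1 }               // where the group comes online first
    )

    IP webip (
        Device = eth0
        Address = "192.168.10.50"           // virtual IP that moves with the group
        NetMask = "255.255.255.0"
        )

    Mount webmnt (
        MountPoint = "/web/data"
        BlockDevice = "/dev/vx/dsk/webdg/webvol"
        FSType = vxfs
        )

    Application webapp (
        StartProgram = "/opt/web/bin/start"
        StopProgram = "/opt/web/bin/stop"
        MonitorProcesses = { "/opt/web/bin/httpd" }
        )

    // the application needs its address and storage online first
    webapp requires webip
    webapp requires webmnt
```

When node1 fails, VCS brings webip, webmnt, and webapp online on node2 as a unit — that is the "in its entirety" part.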

Clearly there was some attempt made at protecting the service, otherwise my browser would have just thrown an error saying the site wasn't available. But it's just as clear that protection wasn't extended to the entire business service. We introduced Virtual Business Services to Veritas Cluster...

Eric.Hennessey | 18 Apr 2012 | 0 comments

I've been blogging over the past couple of weeks under the theme "Everything you think you know about clustering is wrong". It's sort of a tongue-in-cheek theme, but the misconceptions I was trying to dispel are real and held by enough people that I felt they were worth addressing. But now I want to shift gears a little bit.

In my last post, I mentioned how a lot of people can't seem to break out of the late-1990s mindset of 2-node active/passive failover HA:

While large clusters meant we no longer needed two nodes for every critical application, many people's mindsets were still stuck in 1997 and they continued to view HA clustering in a 2-node, active/passive context.

To be sure, that's definitely not the case with all of our customers, especially...

Eric.Hennessey | 13 Apr 2012 | 4 comments

 

A couple days ago, I blogged about the related myths of complexity and unreliability regarding high availability (HA) clustering. Today I want to spend a little time on the myth that clustering is expensive.
 
Early high availability clustering was as simple as it was primitive. Shared storage was generally limited to two nodes via dual-attached SCSI disks, and communication between nodes typically consisted of each node just pinging the other periodically to check its state. If the standby node decided the active node was dead, it would respond to that failure by firing local copies of the failed node's startup scripts to restart the applications that had been running there.
 
But SAN and NAS technologies which allowed many more nodes to share...

Eric.Hennessey | 10 Apr 2012 | 1 comment

 

In my last post I mentioned a few common misconceptions about HA clustering that I'd be debunking; namely that it's unreliable, complex, and expensive. There are others that we'll get to in later posts, but for this one I want to tackle the myths of unreliability and complexity, since they kind of go hand-in-hand.
 
The vast majority of our customers using Veritas Cluster Server (VCS) for high availability have been using it for quite some time and are completely happy with it. But we do hear from time to time from customers who say they've used HA clustering in the past - either VCS or Brand X - and stopped using it because it "broke". Frankly, this reaction baffles me. As an IT guy who's been in the business for - well, let's just say a long time, OK? - I learned early on that if something...

Eric.Hennessey | 05 Apr 2012 | 0 comments

We're making some pretty big changes in how we deliver high availability and disaster recovery, and to do that, we have to change how we look at HA & DR. But in order to do that, we first need to debunk a few myths about clustering that seem to have crept into a lot of people's heads over the years.

 

This is the first in a series of posts I'll be putting up here at Symantec Connect over the coming week or so, in which I'll lay out some common misconceptions about HA clustering and explain why they're wrong. Here are a few things some people believe about clustering, each of which I'll refute over the coming days:
  • Clustering is unreliable
  • Clustering is too complex
  • Clustering is expensive
Sure, like any other myth, these have their origins in someone's actual experience, but just like the mythology...

Theresa LaVeck | 13 Mar 2012 | 0 comments

Discover the power of the Storage and Availability Track at Symantec Vision 2012. It’s a unique opportunity for storage and server IT professionals to spend time with other users and with our top technical experts through one-on-one meetings, technical sessions, product deep dives, and hands-on labs covering relevant topics like:

  • Multi-tier application recovery
  • Disaster Recovery automation
  • Deduplication and compression for primary storage
  • How your peers are using Storage Foundation High Availability 6.0

In addition, you’ll have the opportunity to meet with our product managers and engineers to learn more about and influence the future direction of our storage and availability offerings. Don't forget to take advantage of the Early Bird discount by March 30, 2012.