Storage and Availability Management
Mike Reynolds PMM | 15 Aug 2012 | 3 comments

Virtualization is everywhere, but IT organizations are still looking for ways to cut costs. One way to do that is to virtualize even more applications! The easy stuff, however, has already been done: file and print servers, cafeteria menus, and the company picnic blog have all been virtualized. The next step is to move up the chain and virtualize business- and mission-critical applications. But the risks of virtualization are still there. How do I monitor and protect my applications and business services in the event of a failure, or even a site-wide disaster? What happens to the performance or availability of my data when a path connecting a virtual machine to its storage fails? These are the roadblocks preventing IT organizations from virtualizing more of their business-critical services and applications. The good news is that Symantec can help!

Symantec is announcing a new version of...

Mike Reynolds PMM | 20 Jul 2012 | 0 comments

The Storage and Availability Management Group has created a few new videos on SymantecTV! These videos show our customers how Symantec is providing solutions that help them holistically recover from a disaster, improve management of storage through visibility and detailed reporting, and make their applications and data in VMware environments highly available through dynamic multi-pathing. Here are brief descriptions of, and links to, these videos:

Disaster Recovery

This Disaster Recovery video on SymantecTV explains how you can detect a fault, take action and recover applications automatically when your site goes down.

Data Insight

This Data Insight video on SymantecTV...

Mike Reynolds PMM | 18 Jul 2012 | 1 comment

Disasters happen.  In fact, according to Münchener Rückversicherungs-Gesellschaft (Munich Re) Geo Risks Research, 355 events, from floods to tornadoes to tsunamis and earthquakes, occurred in 2011.  IT organizations need protection from such horrific events, but also from those that are more commonplace, for instance power outages and equipment failures.

Here are a few interesting statistics from the latest Symantec State of the Data Center report (available later this year):

  • 70% of organizations experience downtime from power failure (11.3 hours per outage)
  • 63% experience cyber attacks (52.7 hours per outage)
  • 26% conducted a power outage and failure impact assessment
  • Median downtime per outage in the last 12 months was 5 hours
  • Organizations experienced an average of 4 downtime incidents in the past 12 months

When most people hear the term "Disaster Recovery", they immediately think of data...

Eric.Hennessey | 18 Jun 2012 | 0 comments

Last week, Symantec and Microsoft announced a joint effort to deliver Disaster Recovery as a Service (DRaaS). This hybrid approach will involve using Symantec Storage Foundation HA for Windows on the customer premises to provide data replication and failover management to Microsoft's Azure cloud services. This is a pretty big deal.

While our largest customers have the luxury of multiple data centers spread across a large area, most companies don't. With multiple data centers, a company can provision additional capacity in each one to host another data center's critical applications in the event of a site failure. But in the absence of additional real estate, a company's options for disaster recovery are more limited. And this is where DRaaS comes into play.

This service will allow smaller organizations to acquire virtual real estate in the form of Microsoft's...

Raissa_T | 13 Jun 2012 | 2 comments

Are you setting up Veritas Cluster Server in VMware vSphere?  Below is a technical guide that walks you through the process, including configuring Veritas Cluster Server clusters across ESX hosts and the compatibility of VCS with VMware HA. Check out the new Application Note, available now.

Raissa_T | 30 May 2012 | 0 comments

VMworld 2012 is around the corner: August 26-30, 2012 (US) and October 9-11 (EMEA).  What do you want to see, hear, and learn at #VMworld 2012? Call for Papers voting is open! Vote for your favourite topics. bit.ly/JnkmdG

Eric.Hennessey | 03 May 2012 | 1 comment

A funny thing happened the other day when I went to one of my favorite sites to look up a word... I saw the above message, but the redirect URL read "failover-namechangedtoprotecttheinnocent.com". Now, far be it from me to correct the vocabulary of a site whose stock in trade is word definitions, but in the high availability biz, that is NOT what we mean by "failover". A failover is when you have a service under HA cluster control and, in the event of a failure, that service is relocated to another server in the cluster. In its entirety.
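To make "in its entirety" concrete, here is a minimal sketch of relocating a whole service group. The resource names and the `fail_over` function are hypothetical illustrations, not actual VCS commands: the idea is that resources go offline on the failed node in reverse dependency order, then come online on the standby in forward order.

```python
# Hypothetical sketch of whole-service-group failover; a real VCS
# service group is defined in main.cf and managed by the cluster engine.

def fail_over(group, from_node, to_node):
    """Relocate every resource in the group, in its entirety:
    offline in reverse dependency order, online in forward order."""
    actions = []
    for resource in reversed(group):   # application first, storage last
        actions.append(f"offline {resource} on {from_node}")
    for resource in group:             # storage first, application last
        actions.append(f"online {resource} on {to_node}")
    return actions

# Example: a web service depending on a virtual IP and a disk group.
plan = fail_over(["disk_group", "virtual_ip", "web_app"], "node1", "node2")
```

The point of the ordering is the same one a real cluster enforces: you cannot start the application before its storage and network are up, and you should not yank the storage out from under a still-running application.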

Clearly there was some attempt made at protecting the service, otherwise my browser would have just thrown an error saying the site wasn't available. But it's just as clear that protection wasn't extended to the entire business service. We introduced Virtual Business Services to Veritas Cluster...

Eric.Hennessey | 18 Apr 2012 | 0 comments

I've been blogging over the past couple of weeks under the theme "Everything you think you know about clustering is wrong". It's sort of a tongue-in-cheek theme, but the misconceptions I was trying to dispel are real and held by enough people that I felt they were worth addressing. But now I want to shift gears a little bit.

In my last post, I mentioned how a lot of people can't seem to break out of the late-1990s mindset of 2-node active/passive failover HA:

While large clusters meant we no longer needed two nodes for every critical application, many people's mindsets were still stuck in 1997 and they continued to view HA clustering in a 2-node, active/passive context.

To be sure, that's definitely not the case with all of our customers, especially...

Eric.Hennessey | 13 Apr 2012 | 4 comments
A couple of days ago, I blogged about the related myths of complexity and unreliability regarding high availability (HA) clustering. Today I want to spend a little time on the myth that clustering is expensive.
 
Early high availability clustering was as simple as it was primitive. Shared storage was generally limited to two nodes via dual-attached SCSI disks, and communication between nodes typically consisted of each node pinging the other periodically to check its state. If the standby node decided the active node was dead, it would respond by firing local copies of the failed node's startup scripts to restart the applications that had been running there.
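The ping-and-restart scheme described above can be sketched in a few lines. This is a hedged illustration of the general technique, not how any particular product implemented it; the helper names are invented. The standby counts consecutive missed heartbeats and, past a threshold, runs the failed node's startup scripts locally.

```python
import subprocess
import time

HEARTBEAT_INTERVAL = 5   # seconds between peer checks
MISSED_LIMIT = 3         # consecutive misses before declaring the peer dead

def misses_exceeded(heartbeats, limit):
    """True once a trailing run of missed (False) heartbeats reaches limit."""
    run = 0
    for ok in heartbeats:
        run = 0 if ok else run + 1
        if run >= limit:
            return True
    return False

def peer_alive(host):
    """One ICMP ping; any non-zero exit counts as a missed heartbeat."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            stdout=subprocess.DEVNULL,
                            stderr=subprocess.DEVNULL)
    return result.returncode == 0

def standby_loop(peer, startup_scripts):
    """Primitive active/passive takeover: watch the peer, then restart
    its applications locally once it appears dead."""
    history = []
    while not misses_exceeded(history, MISSED_LIMIT):
        history.append(peer_alive(peer))
        time.sleep(HEARTBEAT_INTERVAL)
    for script in startup_scripts:
        subprocess.run([script, "start"])
```

The weaknesses this post goes on to discuss fall straight out of the sketch: a lost network link is indistinguishable from a dead node, so both nodes can end up running the same application against the same dual-attached disks.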
 
But SAN and NAS technologies which allowed many more nodes to share a common...
Eric.Hennessey | 10 Apr 2012 | 1 comment
In my last post I mentioned a few common misconceptions about HA clustering that I'd be debunking; namely that it's unreliable, complex, and expensive. There are others that we'll get to in later posts, but for this one I want to tackle the myths of unreliability and complexity, since they kind of go hand-in-hand.
 
The vast majority of our customers using Veritas Cluster Server (VCS) for high availability have been using it for quite some time and are completely happy with it. But we do hear from time to time from customers who say they've used HA clustering in the past - either VCS or Brand X - and stopped using it because it "broke". Frankly, this reaction baffles me. As an IT guy who's been in the business for - well, let's just say a long time, OK? - I learned early on that if something worked yesterday...