Storage and Availability Management
phil samg | 20 Mar 2009 | 0 comments

Creating highly available Oracle databases with immediate failover is expensive, though sometimes justifiable. However, organizations whose SLAs allow for near-minute failover can consider a Veritas Cluster File System (CFS) solution. CFS is an option within Storage Foundation (SF); SF users need only a license key to turn it on. Application clustering without Cluster File System can ensure high availability of databases, but failover times grow increasingly longer as more disks, volumes, and file systems are added to the configuration. Furthermore, if a file system corruption occurs, the failover time will be dramatically impacted while the file system recovers.

Cluster File System enables the Veritas Oracle Disk Manager (ODM) interface, providing near-raw-disk performance with all the benefits of a file system. This not only improves the speed of your database when running on a file system, it improves failover times by reducing the time it takes for...
phil samg | 12 Mar 2009 | 1 comment

 Even as IT organizations face significant budget pressure, business carries on and continues to drive storage growth. So, how can IT organizations meet organizational needs without buying more storage? Let’s do a little math.

First, two baseline numbers are important. Industry analysts peg average storage utilization at 35% and the compound annual growth rate (CAGR) for storage at 50%. We can apply these two numbers to whatever assumptions we wish. To make the calculation easy, assume that we have a total capacity of 100 TB with 35% utilization (35 TB). With a 50% CAGR, we would utilize 52.5 TB (52.5%) at the end of 12 months and 78.75 TB (78.75%) after 24 months. Thus, the “average” organization can survive up to two years without buying another byte of storage, if only it can find a way to utilize what it already has. If you know your utilization and CAGR, you can easily apply this calculation yourself to see how long you can survive without...
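As a rough illustration, here is a minimal sketch in Python of the same calculation, assuming the figures above (100 TB total, 35% utilized, 50% CAGR); plug in your own numbers to estimate your headroom.

```python
# Minimal sketch of the headroom calculation above. The inputs are the
# illustrative figures from this post (100 TB total, 35% utilized, 50% CAGR).
import math

def months_of_headroom(total_tb, used_tb, annual_growth):
    """Months until used capacity reaches total capacity at the given CAGR."""
    monthly_factor = (1 + annual_growth) ** (1 / 12)   # annual rate -> monthly factor
    return math.log(total_tb / used_tb, monthly_factor)

print(round(months_of_headroom(100, 35, 0.50), 1))     # roughly 31 months
```

Note that the result depends only on the utilization percentage and the growth rate, not on the absolute capacity.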
Rishi Manocha | 10 Mar 2009 | 1 comment

A group of DBAs managing large databases at a large federal government agency faced the challenge of migrating a mission-critical 35 TB database from a Fujitsu 2500 running Solaris to an IBM P595 running AIX within a 4-hour maintenance window. Using the tools the database vendor offered, the estimated time to move that much data was in the neighborhood of 3 weeks, which was unacceptable. Database migration tools from another company were evaluated, but they were cost-prohibitive.
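To see why a straight copy was a non-starter, here is a quick back-of-the-envelope sketch (plain Python; the only inputs are the 35 TB size, the 4-hour window, and the roughly 3-week estimate from the story):

```python
# Back-of-the-envelope check: sustained throughput needed to copy 35 TB in a
# 4-hour window versus what a ~3-week copy implies. Inputs come from the post.
size_bytes = 35 * 10**12         # 35 TB

window_s = 4 * 3600              # the 4-hour maintenance window
three_weeks_s = 21 * 24 * 3600   # the vendor tools' ~3-week estimate

print(f"needed for 4-hour window: {size_bytes / window_s / 10**9:.1f} GB/s sustained")
print(f"implied by 3-week copy:   {size_bytes / three_weeks_s / 10**6:.0f} MB/s sustained")
```

Sustaining multiple gigabytes per second for four hours is unrealistic in most environments, which is what makes the approach described next attractive.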

A Symantec Sales Engineer suggested that these DBAs use the Portable Data Container (PDC) feature within Storage Foundation, which was already deployed in their infrastructure. He explained that with PDC, instead of moving the data from one storage location to another, the data can be unmounted from the Solaris system and mounted on the AIX system. The entire process would take no more than 30 minutes.

The PDC feature was tested in the customer’s lab environment and was put in...

davidnoy | 07 Jan 2008 | 0 comments
Please find the scalability white paper which was referred to in the previous post here:
 
 
davidnoy | 12 Dec 2007 | 1 comment
 
The question: Is CFS scalable? What performance hit is there from running VxFS in a clustered configuration?
 
Oftentimes in sales situations, we are asked what the performance implications are of running CFS.
 
Customers are eager to know what the performance hit would be from operating in a clustered environment. This is particularly interesting to customers who are considering deploying our CFS HA solution as an upgrade to the regular Storage Foundation HA solution. They want to know what CFS is going to cost in performance, and so did we.
 
The test: Run a workload on 1, 2, 4, 8, and 16 nodes and measure throughput.
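As a hypothetical illustration of how such results can be summarized, the sketch below (Python, with made-up throughput numbers, not the measured data) reduces per-node-count throughput to speedup and scaling efficiency relative to a single node.

```python
# Hypothetical sketch: summarize clustered throughput measurements as speedup
# and scaling efficiency relative to a single node. The numbers below are
# placeholders, not results from the actual CFS benchmark.
throughput_mb_s = {1: 100.0, 2: 195.0, 4: 380.0, 8: 730.0, 16: 1400.0}

baseline = throughput_mb_s[1]
for nodes, tput in sorted(throughput_mb_s.items()):
    speedup = tput / baseline
    efficiency = speedup / nodes          # 1.0 means perfectly linear scaling
    print(f"{nodes:2d} nodes: {tput:7.1f} MB/s, "
          f"speedup {speedup:4.1f}x, efficiency {efficiency:.0%}")
```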
 
With the outstanding efforts of the Performance Engineering Group, we were able to measure...
davidnoy | 12 Dec 2007 | 0 comments
 
Hello all! Welcome to the Cluster File System blog.
 
 
This blog will serve as a sounding board for engineering and product management to discuss their views on cluster file systems:
 
  • What are they good for?
  • Where do we feel they can provide the most benefit?
  • What are some of the interesting use cases we have seen?
  • What notable improvements have we made in our product?
  • Where do we see the technology going? How would we like to shape the future of CFS?
 
The first entry will discuss some of our recent scalability findings, which will be published shortly as both a white paper and a press release. We are very excited about the results. So please, read on...
 


charmer | 06 Aug 2007 | 3 comments

Last week Symantec published some benchmark results comparing Storage Foundation and ZFS that suggest VxFS is around 3 times faster than ZFS for workloads typical of many commercial applications. These results contrast sharply with some benchmark results published by Sun which suggest that VxFS is about 1/3 the speed of ZFS.

I'm sure this is going to leave a lot of people scratching their heads and asking "how can the results be so different?".   The complete answer to that question is quite long, but I can try to offer a summary.  Unfortunately, that will leave out many important details.  I hope to address those in another article.

The short answer is that Symantec's...

Ameya | 03 Aug 2007 | 1 comment
Array Policy Module (APM)

=====================

The APM framework was introduced in the Volume Manager 4.0 release, which brought about a major change in the DMP architecture: the introduction of the APM. As the name suggests, an Array Policy Module (APM) is specific to an array type and defines the policies for that array type. Analogous to its Array Support Library (ASL) counterpart in user space, which enables the DDL to identify the array completely, the APM enables the DMP kernel to perform array-specific operations such as failover, NDU (Non-Disruptive Upgrade), and STPG (Set Target Port Groups), and it can even supply an I/O policy.
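To make the idea concrete, here is a purely conceptual sketch in Python of a per-array-type policy registry. The real APM is a dynamically loadable kernel module, and every name below is illustrative rather than the actual DMP interface.

```python
# Conceptual sketch only: the "one policy module per array type" idea behind APM.
# The real APM is a loadable kernel module; all names here are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ArrayPolicy:
    array_type: str
    failover: Callable[[str], bool]           # array-specific path failover action
    select_path: Callable[[List[str]], str]   # array-specific I/O (path selection) policy

_registry: Dict[str, ArrayPolicy] = {}

def load_policy(policy: ArrayPolicy) -> None:
    """Roughly analogous to DMP loading an APM for a detected array type."""
    _registry[policy.array_type] = policy

def route_io(array_type: str, paths: List[str]) -> str:
    """Dispatch to the array-specific path-selection policy, else a generic default."""
    policy = _registry.get(array_type)
    return policy.select_path(paths) if policy else paths[0]

# Illustrative active/passive-style policy that always prefers the primary path.
load_policy(ArrayPolicy("EXAMPLE-A/P",
                        failover=lambda path: True,
                        select_path=lambda paths: paths[0]))
print(route_io("EXAMPLE-A/P", ["primary", "secondary"]))
```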

The APM makes it possible for DMP to dynamically add kernel support for an array. The support for enabling an APM is completely online and does not require a reboot. An APM is essentially a dynamically loadable kernel module that is validated and loaded by DMP whenever DMP detects the array type support exported by that APM. In other words, the DDL...

charmer | 17 Jul 2007 | 7 comments
Some engineers at Sun promoting ZFS have been publishing comparisons between VxFS and ZFS that are rather unflattering to VxFS. You can read the most recent white papers they've published comparing ZFS with VxFS, ext3, and Windows NTFS, as well as some blog entries comparing the performance of VxFS and ZFS.

The comparisons with VxFS appear to be objective, but in fact the performance comparisons are chosen quite selectively. In addition, the most recent white paper contains a few significant errors.

Going through the most recent white paper from beginning to end, the first things to strike me were some significant errors in the discussion of file...

Mandar Bhide | 18 Jun 2007 | 0 comments
Storage capacity requirements are growing at an explosive rate, complicating data and storage management in mission-critical and compliance-driven environments. Enterprises need to securely store more information and more information types. Data must be safely secured and available for rapid recovery in the near term, while also meeting long-term archival and compliance requirements. These complex issues have created a variety of manageability, storage availability, and price-performance challenges, ranging from missed service levels to operational risks.
 
Recent industry trend reports by analysts show that IT budgets are growing at six percent a year, but data under management is growing at 50 to 70 percent or more. Keeping up with data growth while reducing the cost of data management requires deep analysis and an understanding of the underlying storage delivery infrastructure.
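As a back-of-the-envelope sketch of how quickly that gap compounds (the growth rates come from the figures above; the three-year horizon is my own assumption):

```python
# Back-of-the-envelope sketch: compound a 6% annual budget increase against
# 50-70% annual data growth over a three-year horizon (horizon is an assumption).
years = 3
budget_growth = 1.06 ** years                  # ~1.19x
for data_cagr in (0.50, 0.70):
    data_growth = (1 + data_cagr) ** years     # ~3.4x to ~4.9x
    cost_per_tb = budget_growth / data_growth  # budget available per managed TB
    print(f"data CAGR {data_cagr:.0%}: data x{data_growth:.1f}, "
          f"budget x{budget_growth:.2f}, cost per TB must fall to {cost_per_tb:.0%} of today")
```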
 
To ensure the financial benefits...