Storage and Availability Management
phil samg | 14 Apr 2009 | 0 comments

To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to navigate the storage environment and identify old or non-critical data that can be moved to less expensive storage. These tools can also be used to predict future capacity requirements.

Managing storage without an SRM tool is like going on a journey without a map. Having a clear plan and objective before taking action is the best assurance of rapid progress and success. Storage managers should ask some questions before cost-cutting:

  • What is the average utilization rate?
  • What is the utilization rate by application?
  • Which applications are growing fastest? Slowest?

SRM technology can help companies make this assessment by providing an enterprise-wide view of the storage environment, which helps identify problem areas and consolidation opportunities and create a priority...
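The utilization questions above boil down to simple arithmetic over capacity data. A minimal sketch in Python, with entirely hypothetical capacity figures (not output from any real SRM tool):

```python
# Hypothetical allocated and used capacity (TB) per application --
# illustrative numbers only, not from any real SRM report.
capacity = {"ERP": 20.0, "Email": 10.0, "Data warehouse": 50.0}
used = {"ERP": 9.0, "Email": 2.0, "Data warehouse": 24.0}

# Utilization rate by application
by_app = {app: used[app] / capacity[app] for app in capacity}

# Average (capacity-weighted) utilization across the environment
average = sum(used.values()) / sum(capacity.values())

for app, rate in by_app.items():
    print(f"{app}: {rate:.0%}")
print(f"Average utilization: {average:.0%}")
```

Sorting `by_app` by growth rate over successive reports would answer the "fastest/slowest growing" question the same way.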

Rishi Manocha | 02 Apr 2009 | 0 comments

The following services have just been updated. The new functionality now available to you is described below:

Veritas Installation Assessment Service [VIAS]

  • Combined Notification Center – Users can create ad-hoc or environment-specific notifications for new patches, ASL/APM releases, new versions of the HCL, and updates to Veritas Operations Services (VOS), all from one easy-to-use web application. Both the notification center and VIAS reports now connect directly with Patch Central, allowing the applications to share customer data
  • Windows Support (Beta) – Support for the SFWHA “ConfigChecker” application to pre-qualify Windows environments

Find Out More
Link to VIAS

...

phil samg | 20 Mar 2009 | 0 comments

Creating highly available Oracle databases with immediate failover is expensive, though sometimes justifiable. However, organizations whose SLAs permit failover times measured in minutes can consider a Veritas Cluster File System (CFS) solution. CFS is an option within Storage Foundation (SF); SF users need only a license key to turn it on. Application clustering without Cluster File System ensures high availability of databases, but failover times grow increasingly longer as more disks, volumes, and file systems are added to the configuration. Furthermore, if a file system corruption occurs, failover time is dramatically impacted while the file system recovers.

Cluster File System enables the Veritas Oracle Disk Manager (ODM) interface, providing near-raw-disk performance with all the benefits of a file system. This not only improves the speed of your database when running on a file system, it also improves failover times by reducing the time it takes for...
phil samg | 12 Mar 2009 | 1 comment

Even as IT organizations face significant budget pressure, business carries on and continues to drive storage growth. So, how can IT organizations meet organizational needs without buying more storage? Let’s do a little math.

First, two baseline numbers are important. Industry analysts peg average storage utilization at 35% and the compound annual growth rate (CAGR) for storage at 50%. We can now apply these two numbers to whatever assumptions we wish. To make the calculation easy, assume that we have a total capacity of 100 TB with 35% utilization (35 TB). With a 50% CAGR, we would utilize 52.5 TB (52.5%) at the end of 12 months and 78.75 TB (78.75%) after 24 months. Thus, the “average” organization can survive up to two years without buying another byte of storage, if only it can find a way to utilize what it has. If you know your utilization and CAGR, you can easily apply this calculation yourself to see how long you can survive without...
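The arithmetic above is just compound growth, and it is easy to rerun with your own numbers. A small sketch using the article's figures (100 TB total, 35% utilized, 50% CAGR):

```python
import math

def utilized_tb(initial_tb: float, cagr: float, months: int) -> float:
    """Storage consumed after `months`, growing at annual rate `cagr`."""
    return initial_tb * (1 + cagr) ** (months / 12)

total_tb, used_tb, cagr = 100.0, 35.0, 0.50

for m in (0, 12, 24):
    print(f"{m:2d} months: {utilized_tb(used_tb, cagr, m):.2f} TB used")

# Months until existing capacity is exhausted at this growth rate
runway = 12 * math.log(total_tb / used_tb) / math.log(1 + cagr)
print(f"Capacity exhausted after roughly {runway:.0f} months")
```

With these inputs the runway comes out to roughly 31 months, matching the post's "up to two years" claim with some margin.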
Rishi Manocha | 10 Mar 2009 | 1 comment

A group of DBAs managing very large databases at a federal government agency faced the challenge of migrating a mission-critical 35 TB database from a Fujitsu 2500 running Solaris to an IBM P595 running AIX within a 4-hour maintenance window. Using the tools the database vendor offered, the estimated time to move that much data was in the neighborhood of 3 weeks, which was unacceptable. Database migration tools from another company were evaluated, but they were cost-prohibitive.

A Symantec Sales Engineer suggested that these DBAs use the Portable Data Container (PDC) feature within Storage Foundation, which was already deployed in their infrastructure. He explained that with PDC, instead of moving the data from one storage location to another, the data can be unmounted from the Solaris system and mounted on the AIX system. The entire process would take no more than 30 minutes.

The PDC feature was tested in the customer’s lab environment and was put in...

davidnoy | 07 Jan 2008 | 0 comments
Please find the scalability white paper which was referred to in the previous post here:
 
 
davidnoy | 12 Dec 2007 | 1 comment
 
The question: Is CFS scalable? What performance hit is there from running VxFS in a clustered configuration?
 
In sales situations, we are often asked about the performance implications of running CFS.
 
Customers are eager to know what the performance hit would be from operating in a clustered environment. This is particularly interesting to customers considering our CFS HA solution as an upgrade to the regular Storage Foundation HA solution. They want to know what CFS is going to cost in performance, and so did we.
 
The test: Run a workload on 1, 2, 4, 8, and 16 nodes and measure throughput.
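Results from a test like this are usually summarized as parallel efficiency: measured throughput divided by what perfect linear scaling from one node would predict. A minimal sketch, with hypothetical throughput numbers (illustrative only, not Symantec's published results):

```python
# Hypothetical aggregate throughput (MB/s) by node count --
# placeholder figures, not the actual benchmark data.
throughput = {1: 100.0, 2: 195.0, 4: 380.0, 8: 720.0, 16: 1350.0}

base = throughput[1]
for nodes, mb_s in sorted(throughput.items()):
    # 1.0 (100%) means perfect linear scaling from the single-node case
    efficiency = mb_s / (nodes * base)
    print(f"{nodes:2d} nodes: {mb_s:7.1f} MB/s, efficiency {efficiency:.0%}")
```

An efficiency curve that stays close to 100% as nodes are added is what a "scalable" answer to the question above looks like.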
 
With the outstanding efforts of the Performance Engineering Group, we were able to measure...
davidnoy | 12 Dec 2007 | 0 comments
 
Hello all! Welcome to the Cluster File System blog.
 
 
This blog will serve as a sounding board for engineering and product management to discuss their views on cluster file systems:
 
  • What are they good for?
  • Where do we feel they can provide the most benefit?
  • What are some of the interesting use cases we have seen?
  • What notable improvements have we made in our product?
  • Where do we see the technology going? How would we like to shape the future of CFS?
 
The first entry will discuss some of our recent scalability findings which will be published shortly in both the form of a white paper and a press release. We are very excited about the results. So please, read on...
 


charmer | 06 Aug 2007 | 3 comments

Last week Symantec published some benchmark results comparing Storage Foundation and ZFS that suggest VxFS is around three times faster than ZFS for workloads typical of many commercial applications. These results contrast sharply with some benchmark results published by Sun, which suggest that VxFS is about one-third the speed of ZFS.

I'm sure this is going to leave a lot of people scratching their heads and asking, "How can the results be so different?" The complete answer to that question is quite long, but I can try to offer a summary. Unfortunately, that will leave out many important details, which I hope to address in another article.

The short answer is that Symantec's...