
Storage & Clustering Community Blog

phil samg | 20 Apr 2009 | 0 comments

Data deduplication is another technology that has gained wide acceptance as a tool to streamline the backup process. Deduplication eliminates duplicate copies of data even when those copies are unrelated, greatly reducing the multiplier effect that repeated backups have on stored capacity.

For example, if a Microsoft PowerPoint presentation is stored on different file servers multiple times, deduplication ensures that only one copy is stored no matter how many full or incremental backups occur. Organizations may consider specialized appliances to provide backup-to-disk and deduplication functions. However, these appliances add complexity to the data center with more devices to manage and actually add capacity to the environment rather than using what already exists more efficiently.
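
The same effect can be seen at file level by locating duplicate copies through content hashing. Below is a minimal sketch, assuming GNU coreutils (md5sum, sort, uniq) and a hypothetical /fileservers path; it illustrates the concept only, not how a deduplication appliance works internally:

# list groups of files that share the same content hash (the first 32 characters of each line)
find /fileservers -type f -exec md5sum {} + | sort | uniq -w32 --all-repeated=separate

A block-level deduplication engine applies the same principle to fixed- or variable-length chunks rather than whole files, which is how it also collapses duplicates across full and incremental backups.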

To be continued...

Read Part 1: A State of Neglect
...

phil samg | 14 Apr 2009 | 0 comments

To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to navigate the storage environment and identify old or non-critical data that can be moved to less expensive storage. These tools can also be used to predict future capacity requirements.

Managing storage without an SRM tool is like going on a journey without a map. Having a clear plan and objective before taking action is the best assurance of rapid progress and success. Storage managers should ask some questions before cost-cutting:

  • What is the average utilization rate?
  • What is the utilization rate by application?
  • Which applications are growing fastest? Slowest?

SRM technology can help companies make this assessment by providing an enterprise-wide view of the storage environment, which helps identify problem areas and consolidation opportunities, and create a priority...
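
Short of deploying an SRM product, a first rough answer to the utilization question can be scripted. Here is a minimal sketch, assuming Solaris df syntax for VxFS mounts; the column positions are an assumption, so check them against your df output:

# average utilization across mounted VxFS file systems (df -k: kbytes=$2, used=$3)
df -k -F vxfs | awk 'NR > 1 { used += $3; total += $2 } END { printf("average utilization: %.1f%%\n", 100 * used / total) }'

The per-application and growth-rate questions are where a one-liner stops and an SRM tool earns its keep.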

M.Pozzi | 08 Apr 2009 | 2 comments

Introduction:

This test covers the new SmartMove feature introduced with Storage Foundation 5.0 MP3:

Environment:

OS: AIX 5300-07-02-0806
Storage Foundation: 5.0MP3RP1
Storage: 2 LUNs XP24K 100 GB

Test:
Mirror an empty filesystem

Initial situation:

disk group with a volume of 97.615 GB

root@/ #vxprint -Qqthg smartdg
dg smartdg      default      default  3000     1239178875.70.tx088sd1
dm disk0        xp12k0_0     auto     65536    204713728 -
dm disk1        xp12k0_1     auto     65536    204713728 -
v  smartvol0    -            ENABLED  ACTIVE...
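
For reference, the mirroring step itself is an ordinary vxassist operation; a minimal sketch follows. The usefssmartmove tunable in /etc/default/vxsf, and the requirement that the VxFS file system be mounted while the mirror is attached, are assumptions based on the 5.0MP3 documentation, so verify them on your release:

# confirm SmartMove is enabled (tunable name per the 5.0MP3 admin guide)
grep usefssmartmove /etc/default/vxsf

# attach the mirror; with SmartMove, only blocks in use by the mounted VxFS are copied
vxassist -g smartdg mirror smartvol0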

Rishi Manocha | 02 Apr 2009 | 0 comments

The following services have just been updated. Please find below the new functionality now available to you:

Veritas Installation Assessment Service [VIAS]

  • Combined Notification Center – Users can create ad-hoc or environment-specific notifications for new patches, ASL/APM releases, new versions of the HCL, and updates to Veritas Operations Services (VOS) from one easy-to-use web application. Both the notification center and VIAS reports now connect directly with Patch Central, allowing the applications to share customer data
  • Windows Support (Beta) – Support for the SFWHA “ConfigChecker” application to pre-qualify Windows environments


...

Kimberley | 01 Apr 2009 | 0 comments

For those of you who like to get your information delivered, here is a quick list of RSS feeds available for the Storage Management community:

All Storage Management Content (Forums, Blogs, Articles, Videos, Events)
http://www.symantec.com/connect/item-feeds/all/701...

Storage Management Forums
http://www.symantec.com/connect/storage-management...

Products Forums
CommandCentral: http://www.symantec.com/connect/commandcentral/for......

M.Pozzi | 31 Mar 2009 | 0 comments

This is a practical example of how to create and manage a linked snapshot in Storage Foundation.
Initial situation:

Source dg: dg_src (with dco)

root@ / #vxprint -Qqthg dg_src
dg dg_src       default      default  6000     1238408559.174.tx088sd1
dm lun_src      xp12k0_11    auto     65536    204713728 -
v  src_vol      -            ENABLED  ACTIVE   20971520 SELECT    -        fsgen
pl src_vol-01   src_vol      ENABLED  ACTIVE   20971520 CONCAT    -        RW
sd lun_src-01   src_vol-01 ...
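
For context, the basic linked break-off sequence looks roughly like the sketch below. The target disk group and snapshot volume names (dg_tgt, snap_vol) are hypothetical, and the option syntax should be checked against vxsnap(1M) for your 5.0 release:

# create a same-sized volume in the target disk group (20971520 sectors, per the vxprint above)
vxassist -g dg_tgt make snap_vol 20971520

# link it to the source volume and let it synchronize in the background
vxsnap -g dg_src -b addmir src_vol mirvol=snap_vol mirdg=dg_tgt

# once synchronized, break it off as a usable linked snapshot
vxsnap -g dg_src make source=src_vol/snapvol=snap_vol/snapdg=dg_tgt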

phil samg | 20 Mar 2009 | 0 comments

Creating highly available Oracle databases with immediate failover is expensive, though sometimes justifiable. However, organizations whose SLAs allow failover on the order of a minute or more can consider a Veritas Cluster File System (CFS) solution. CFS is an option within Storage Foundation (SF); SF users need only a license key to turn it on. Application clustering without Cluster File System ensures high availability of databases, but failover times grow increasingly long as more disks, volumes, and file systems are added to the configuration. Furthermore, if file system corruption occurs, failover time suffers dramatically while the file system is recovered.

Cluster File System enables the Veritas Oracle Disk Manager (ODM) interface, providing near raw-disk performance with all the benefits of a file system. This not only improves the speed of your database when running on a file system, it improves failover times by reducing the time it takes for...
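
A cluster mount is what separates CFS from plain failover: the file system is mounted on every node at once, so a surviving node can take over without a lengthy fsck and remount. A minimal sketch, assuming Solaris syntax and hypothetical names (oradg, oravol, /oradata):

# mount the VxFS file system in shared (cluster) mode on each node
mount -F vxfs -o cluster /dev/vx/dsk/oradg/oravol /oradata
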
phil samg | 12 Mar 2009 | 1 comment

Even as IT organizations face significant budget pressure, business carries on and continues to drive storage growth. So, how can IT meet the organization’s needs without buying more storage? Let’s do a little math.

First, two baseline numbers are important. Industry analysts peg average storage utilization at 35% and the compound annual growth rate (CAGR) for storage at 50%. We can now apply these two numbers to whatever assumptions we wish. To make the calculation easy, assume that we have a total capacity of 100 TB with 35% utilization (35 TB). With a 50% CAGR, we would utilize 52.5 TB (52.5%) at the end of 12 months and 78.75 TB (78.75%) after 24 months. Thus, the “average” organization can survive up to two years without buying another byte of storage, if only it can find a way to utilize what it already has. If you know your utilization and CAGR, you can easily apply this calculation yourself to see how long you can survive without...
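
The arithmetic generalizes to a one-line formula; restated with the numbers above (35% starting utilization, 50% CAGR):

utilization(t) = 0.35 x 1.5^t            (t in years)
utilization(1) = 52.5%    utilization(2) = 78.75%
time to 100%:   0.35 x 1.5^t = 1   =>   t = ln(1/0.35) / ln(1.5) ≈ 2.6 years
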
Rishi Manocha | 10 Mar 2009 | 1 comment

A group of DBAs at a large federal government agency faced the challenge of migrating a mission-critical 35 TB database from a Fujitsu 2500 running Solaris to an IBM P595 running AIX within a 4-hour maintenance window. With the tools the database vendor offered, the estimated time to move that much data was in the neighborhood of 3 weeks, which was unacceptable. Database migration tools from another company were evaluated but proved cost-prohibitive.

A Symantec Sales Engineer suggested that these DBAs use the Portable Data Container (PDC) feature within Storage Foundation, which was already deployed in their infrastructure. He explained that with PDC, instead of moving the data from one storage location to another, the data can be unmounted from the Solaris system and mounted on the AIX system. The entire process would take no more than 30 minutes.

The PDC feature was tested in the customer’s lab environment and was put in...
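
For readers unfamiliar with PDC (also documented as Cross-platform Data Sharing), the cutover is roughly the sequence sketched below, using hypothetical disk group, volume, and mount names (datadg, datavol, /data01). Where source and target byte order differ, the file system must also be converted with fscdsconv; SPARC and POWER are both big-endian, so that step may not apply to this particular migration:

# on the Solaris (source) host: quiesce the application, unmount, deport
umount /data01
vxdg deport datadg

# on the AIX (target) host: import the CDS disk group, start volumes, mount
vxdg import datadg
vxvol -g datadg startall
mount -v vxfs /dev/vx/dsk/datadg/datavol /data01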