Storage and Availability Management
phil samg | 28 Apr 2009 | 2 comments

Thin provisioning and data deduplication are strategies for reducing the growth rate and space consumption of new data, or for storing it more efficiently. To fully utilize existing assets, these strategies must be combined with addressing the data that is stored unnecessarily. The largest container of unnecessary and obsolete data is unstructured data.

Email is the biggest unstructured information pain point today and a top target for data reduction via archiving. The Radicati Group estimates that the volume of email will increase by 30 percent from 2006 to 2010. Although storage costs continue to fall on a per-unit basis, a single email is often stored many times over: on the email server, on the user’s PC, in a Microsoft Exchange or IBM Lotus Notes file, on file servers, in SharePoint, and in backups. Because of the excess storage consumed, the cost of power and cooling is commensurately higher.

Across all business industries and public sector...

Eric.Hennessey | 27 Apr 2009 | 0 comments

A very useful - yet often overlooked - feature of Veritas Cluster Server is Limits and Prerequisites. This feature is often used in conjunction with Service Group Workload Management (SGWM), but can also be implemented on its own. In this post, I'll describe what this feature does and how you can put it to use.

Limits and Prerequisites are attributes in VCS. Limits are system attributes applied to the cluster member nodes, while Prerequisites are attributes applied to service groups. Both are key/value type attributes defined by the user. To better understand how these two attributes work together, it's best to use a common scenario as an example.

Let's say I have a VCS cluster consisting of four nodes and four Oracle database service groups. Let's assume that (1) the nodes are provisioned identically in processor and memory capacity and (2) each service group places about the same load on whichever system it runs on. Implementing SGWM will automatically keep...
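To make the interaction concrete, here is a minimal Python sketch of the selection logic described above. It illustrates the semantics only; it is not VCS code or main.cf syntax, and the key name GroupWeight and the capacity values are assumptions made for the example.

    # Illustrative model of VCS Limits/Prerequisites semantics (not actual VCS code).
    # Each system advertises Limits (capacity per named key); each service group
    # declares Prerequisites (how much of each key it consumes while online).
    # A group may only be brought online on a system whose remaining limits
    # cover all of its prerequisites.

    systems = {
        "node1": {"GroupWeight": 1},
        "node2": {"GroupWeight": 1},
        "node3": {"GroupWeight": 1},
        "node4": {"GroupWeight": 1},
    }

    def remaining_limits(system_limits, online_group_prereqs):
        """Subtract the Prerequisites of groups already online on this system."""
        remaining = dict(system_limits)
        for prereqs in online_group_prereqs:
            for key, value in prereqs.items():
                remaining[key] = remaining.get(key, 0) - value
        return remaining

    def can_host(system_limits, online_group_prereqs, group_prereqs):
        remaining = remaining_limits(system_limits, online_group_prereqs)
        return all(remaining.get(key, 0) >= value
                   for key, value in group_prereqs.items())

    # Example: each Oracle service group needs one "slot" (GroupWeight = 1), so a
    # node that is already hosting one database cannot accept a second on failover.
    oracle_prereqs = {"GroupWeight": 1}
    print(can_host(systems["node1"], [], oracle_prereqs))                # True
    print(can_host(systems["node1"], [oracle_prereqs], oracle_prereqs))  # False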

phil samg | 20 Apr 2009 | 0 comments

Data deduplication is another technology that has gained wide acceptance as a tool to streamline the backup process. Deduplication eliminates duplicate data even when that data is unrelated, greatly reducing the multiplier effect of storing the same content many times over.

For example, if a Microsoft PowerPoint presentation is stored on different file servers multiple times, deduplication ensures that only one copy is stored no matter how many full or incremental backups occur. Organizations may consider specialized appliances to provide backup-to-disk and deduplication functions. However, these appliances add complexity to the data center with more devices to manage and actually add capacity to the environment rather than using what already exists more efficiently.
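As an illustration of the underlying idea (not of any particular appliance or Symantec product), the following Python sketch stores each unique piece of data exactly once by indexing it under a content hash; repeated backups of the same presentation add references rather than copies.

    import hashlib

    class DedupStore:
        """Toy content-addressed store: each unique chunk of data is kept once."""

        def __init__(self):
            self.chunks = {}      # hash -> chunk bytes (stored once)
            self.references = {}  # hash -> reference count

        def put(self, data: bytes) -> str:
            digest = hashlib.sha256(data).hexdigest()
            if digest not in self.chunks:
                self.chunks[digest] = data   # first copy is actually stored
            self.references[digest] = self.references.get(digest, 0) + 1
            return digest                    # callers keep only the hash

    store = DedupStore()
    presentation = b"...bytes of a PowerPoint file..."
    for _ in range(5):                       # five backups of the same file
        store.put(presentation)
    print(len(store.chunks))   # 1 physical copy
    print(store.references)    # 5 logical references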

To be continued...

Read Part 1: A State of Neglect
...

phil samg | 14 Apr 2009 | 0 comments

To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to navigate the storage environment and identify old or non-critical data that can be moved to less expensive storage. These tools can also be used to predict future capacity requirements.

Managing storage without an SRM tool is like going on a journey without a map. Having a clear plan and objective before taking action is the best assurance of rapid progress and success. Storage managers should ask some questions before cost-cutting:

  • What is the average utilization rate?
  • What is the utilization rate by application?
  • Which applications are growing fastest? Slowest?

SRM technology can help companies make this assessment and provide an enterprise-wide view of the storage environment, which helps identify problem areas and consolidation opportunities and create a priority...
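To make those questions concrete, here is a small Python sketch that computes the same figures from raw allocation and usage numbers; the application names and capacity values are made-up examples, not data from an SRM tool.

    # Compute average utilization and utilization by application.
    # The figures are illustrative; an SRM tool would collect them automatically.
    applications = [
        {"name": "ERP",         "allocated_tb": 40, "used_tb": 14},
        {"name": "Email",       "allocated_tb": 25, "used_tb": 11},
        {"name": "File shares", "allocated_tb": 35, "used_tb": 10},
    ]

    total_allocated = sum(a["allocated_tb"] for a in applications)
    total_used = sum(a["used_tb"] for a in applications)
    print(f"Average utilization: {total_used / total_allocated:.0%}")   # 35%

    for a in applications:
        print(f"{a['name']}: {a['used_tb'] / a['allocated_tb']:.0%} utilized")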

phil samg | 06 Apr 2009 | 0 comments

During periods of economic growth, organizations may be tempted to take the “quick fix” to storage management problems. The incremental cost of adding storage is relatively small and can be absorbed by the budget. Such a shortcut may facilitate faster project rollout, but it also leads to underutilized storage. Many organizations operate at only 30 to 40 percent utilization; according to InfoPro, the average is 35 percent.

Accurate storage allocation is difficult because data growth rate information is incomplete or unavailable. Consequently, storage allocation does not correlate to consumption. New applications, with no historical trend data, receive storage allocation on a “best estimate” basis. If the allocated capacity is too high, then the excess capacity may languish unused for the life of the array.

Needless spending is the primary consequence of benign neglect. Having an array only 50 percent utilized is like paying...

Rishi Manocha | 02 Apr 2009 | 0 comments

The following services have just been updated. The new functionality now available to you is described below:

Veritas Installation Assessment Service [VIAS]

  • Combined Notification Center – Users can create user-defined, ad-hoc or environment-specific notifications for new patches, ASL/APM releases, new versions of the HCL, and updates to Veritas Operations Services (VOS) from one easy-to-use web application. Both the notification center and VIAS reports now connect directly with Patch Central, allowing the applications to share and build on each other's customer data.
  • Windows Support (Beta) – Support for the SFWHA “ConfigChecker” application to pre-qualify Windows environments

Find Out More
Link to VIAS

...

phil samg | 20 Mar 2009 | 0 comments

Creating highly available Oracle databases with immediate failover is expensive, though sometimes justifiable. However, organizations whose SLAs can tolerate failover times on the order of a minute can consider a Veritas Cluster File System (CFS) solution. CFS is an option within Storage Foundation (SF); SF users need only a license key to turn it on. Application clustering without Cluster File System can still provide high availability for databases, but failover times grow increasingly long as more disks, volumes, and file systems are added to the configuration. Furthermore, if a file system corruption occurs, the failover time will be dramatically impacted while the file system recovers.

Cluster File System enables the Veritas Oracle Disk Manager (ODM) interface, providing near-raw-disk performance with all the benefits of a file system. This not only improves the speed of your database when running on a file system, it improves failover times by reducing the time it takes for...
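The scaling argument can be illustrated with a back-of-the-envelope model in Python; the per-step timings below are assumptions chosen for illustration, not measured failover figures.

    # Rough model: in a traditional failover, every disk group must be imported and
    # every file system mounted (with log replay) on the standby node before the
    # database can restart, so failover time grows with the configuration.
    # With CFS the file systems are already mounted on every node, so that portion
    # of the failover drops out. All timing constants are illustrative assumptions.

    SECONDS_PER_DISKGROUP_IMPORT = 5
    SECONDS_PER_FS_MOUNT = 10      # a full fsck after corruption would take far longer

    def traditional_failover_seconds(num_diskgroups, num_filesystems):
        return (num_diskgroups * SECONDS_PER_DISKGROUP_IMPORT
                + num_filesystems * SECONDS_PER_FS_MOUNT)

    # Assume one disk group per file system for simplicity.
    for count in (5, 20, 50):
        print(f"{count} file systems: {traditional_failover_seconds(count, count)} s "
              "of import/mount work before the database can restart")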

phil samg | 12 Mar 2009 | 1 comment

Even as IT organizations face significant budget pressure, business carries on and continues to drive storage growth. So how can IT organizations meet those needs without buying more storage? Let’s do a little math.

First, two baseline numbers are important. Industry analysts peg average storage utilization at 35% and the compound annual growth rate (CAGR) for storage at 50%. We can now apply these two numbers to whatever assumptions we wish. To make the calculation easy, assume that we have a total capacity of 100 TB at 35% utilization (35 TB). With a 50% CAGR, we would be using 52.5 TB (52.5%) at the end of 12 months and 78.75 TB (78.75%) after 24 months. Thus, the “average” organization can survive up to two years without buying another byte of storage, if only it can find a way to utilize what it already has. If you know your own utilization and CAGR, you can easily apply this calculation yourself to see how long you can survive without...
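The arithmetic can be checked in a few lines of Python using the same figures:

    # 100 TB of raw capacity, 35% utilized today, growing at a 50% CAGR.
    total_capacity_tb = 100
    used_tb = 35.0
    cagr = 0.50

    for year in (1, 2):
        future_used_tb = used_tb * (1 + cagr) ** year
        print(f"After {year * 12} months: {future_used_tb:.2f} TB "
              f"({future_used_tb / total_capacity_tb:.2%} of capacity)")
    # After 12 months: 52.50 TB (52.50% of capacity)
    # After 24 months: 78.75 TB (78.75% of capacity)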

Rishi Manocha | 10 Mar 2009 | 1 comment

A group of DBAs who manage large databases at a large federal government agency faced the challenge of migrating a mission-critical 35 TB database from a Fujitsu 2500 running Solaris to an IBM P595 running AIX within a 4-hour maintenance window. Using the tools that the database vendor offered, the estimated time to move that much data was in the neighborhood of 3 weeks, which was unacceptable. Database migration tools from another company were evaluated, but they were cost-prohibitive.

A Symantec Sales Engineer suggested that these DBAs use the Portable Data Container (PDC) feature within Storage Foundation, which was already deployed in their infrastructure. He explained that with PDC, instead of moving the data from one storage location to another, the data can be unmounted from the Solaris system and mounted on the AIX system. The entire process would take no more than 30 minutes.

The PDC feature was tested in the customer’s lab environment and was put in...