Storage and Availability Management
Showing posts tagged with Best Practice
D Thomson | 08 Jan 2010 | 0 comments

Finding ways to substantially cut costs and complexity in an active, vibrant Data Centre is not easy. CIOs have spent years trimming here and there and, to many, further demands from the business to reduce CAPEX/OPEX can seem like asking the impossible.

Nonetheless, significant additional reductions and efficiencies become possible when companies take a broader look at the way IT serves the business. Key to this approach is bringing “IT” and “Business” people together in a combined effort to rationalise applications, processes, core infrastructure and operations. I have recently facilitated “IT/Business Alignment” workshops, forcing both sides of the house to work together productively, and have been amazed at the opportunities that drop out of this exercise. Well managed and facilitated workshops are a great way for stakeholders to express ideas, test theory and find “1+1=3...

nicole_kim | 15 Oct 2009 | 0 comments

“Best Practices for Storage Management and High Availability in your Microsoft Data Center” is now available online in the Things You Need to Know section of the Windows IT Pro website and is also in the September 2009 issue of Windows IT Pro.

Published by the Veritas Storage Foundation for Windows product team, the article outlines six best practices for storage management and high availability for IT managers and system administrators to follow. Enjoy!

mobilegleed | 23 Sep 2009 | 1 comment

A few weeks back a large number of Product Managers, Technical Product Managers, and Sales Engineers (SEs) gathered for our annual SE Symposium in Las Vegas.  VCS One was definitely on everyone's mind, especially with the upcoming 5.0 release.  Here's a summary of the Q & A from the VCS One sessions I attended.

Question:   How many Policy Masters (PMs) are needed for a VCS One Cluster?
Answer:      One PM per cluster.  Larger environments will implement multiple VCS One Clusters per data center.

Question:  Does the PM have to run in an active/passive cluster?
Answer:     Due to the critical role of the PM, dedicated hardware is best to ensure performance, reliability, and availability.

Question:  Can we run other apps on the PM cluster?
Answer:  ...

phil samg | 22 Jul 2009 | 0 comments

Major changes in the health care industry are almost certain as organizations drive toward greater electronic medical records (EMR) efficiency and regulatory mandates loom on the horizon. Two countervailing pressures exist: the need to invest in infrastructure to meet these changes and the need to reduce long-term costs in the face of price pressure.

Symantec recently engaged Greg Schulz, Founder and Senior Analyst of StorageIO, to examine the file serving requirements of healthcare organizations and the suitability of Storage Foundation Scalable File Server to meet those needs. In his findings, Schulz emphasized flexibility to meet unforeseen needs, performance to serve the large files that medical images create, and “pay as you grow” affordability. To read the full report, go to Veritas Storage Foundation Scalable File Server (SFS)...

nicole_kim | 29 Apr 2009 | 0 comments

Check out Going from ‘Fat’ to ‘Thin’ Isn’t an Automatic in the Virtual World Either, the latest blog post on thin provisioning by DCIG lead analyst Jerome Wendt. He breaks down the scenarios in which the benefits of thin provisioning are easily realized and those in which they are not, since deployment and utilization issues can arise in certain circumstances. He also discusses how Veritas Storage Foundation’s SmartMove feature can help larger enterprises maximize their thin provisioning investment.

Jerome Wendt is the President and Lead Analyst of DCIG Inc., an independent storage analyst and consulting firm. Since founding the company in 2006, Mr. Wendt has published extensively in data storage publications and journals covering all facets of storage.

phil samg | 28 Apr 2009 | 2 comments

Thin provisioning and data deduplication are strategies for reducing the growth rate and space consumption of new data, or for storing it more efficiently. To fully utilize existing assets, however, these strategies must be combined with addressing data that does not need to be stored at all. The largest container of unnecessary and obsolete data is unstructured data.

Email is the biggest unstructured information pain point today and a top target for data reduction via archiving. The Radicati Group estimates that the volume of email will increase by 30 percent from 2006 to 2010. Although storage costs continue to fall on a per-unit basis, a single message is often stored many times: on the email server, on the user’s PC, in a Microsoft Exchange or IBM Lotus Notes file, on file servers, in SharePoint, and in backups. Because of the excessive storage consumed, the cost of power and cooling is commensurately higher.
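
A back-of-the-envelope sketch makes the multiplier effect concrete; all of the figures below are hypothetical illustrations, not measurements:

```python
# Back-of-the-envelope sketch of the email storage multiplier.
# Every figure here is a hypothetical illustration, not a measurement.
attachment_mb = 5           # one attachment sent to a distribution list
recipients = 20
copies_per_recipient = 3    # e.g. mail server store, local PST/NSF file, file server
backup_generations = 4      # retained full backups of those stores

primary_mb = attachment_mb * recipients * copies_per_recipient
total_mb = primary_mb * (1 + backup_generations)

print(f"{attachment_mb} MB of unique content -> roughly {total_mb} MB on disk")
# 5 MB of unique content -> roughly 1500 MB on disk
```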

Across all business industries and public sector...

Eric.Hennessey | 27 Apr 2009 | 0 comments

A very useful - yet often overlooked - feature of Veritas Cluster Server is Limits and Prerequisites. This feature is often used in conjunction with Service Group Workload Management (SGWM), but can also be implemented on its own. In this post, I'll describe what this feature does and how you can put it to use.

Limits and Prerequisites are attributes in VCS. Limits are system attributes applied to the cluster member nodes, while Prerequisites are attributes applied to service groups. Both are key/value type attributes defined by the user. To better understand how these two attributes work together, it's best to use a common scenario as an example.

Let's say I have a VCS cluster consisting of four nodes and four Oracle database service groups. Let's assume that (1) each node is provisioned identically with the same processor and memory capacity and (2) each service group places about the same load on whichever system hosts it. Implementing SGWM will automatically keep...
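
To make the interplay concrete, here is a minimal Python sketch of the capacity check that Limits and Prerequisites express. This is a model of the idea, not VCS code, and the node names, attribute key, and values are hypothetical:

```python
# Minimal model (not VCS code) of how Limits and Prerequisites interact.
# Limits say what each system can carry; Prerequisites say what each
# service group needs. All names and values below are hypothetical.

limits = {                       # per-system capacities (system attribute)
    "node1": {"DBUnits": 2}, "node2": {"DBUnits": 2},
    "node3": {"DBUnits": 2}, "node4": {"DBUnits": 2},
}
prerequisites = {                # per-group requirements (group attribute)
    "oradb_sg1": {"DBUnits": 1}, "oradb_sg2": {"DBUnits": 1},
    "oradb_sg3": {"DBUnits": 1}, "oradb_sg4": {"DBUnits": 1},
}
online = {                       # where each group is currently online
    "node1": ["oradb_sg1"], "node2": ["oradb_sg2"],
    "node3": ["oradb_sg3"], "node4": ["oradb_sg4"],
}

def can_host(node, group):
    """A node qualifies as a failover target only if, for every key, its
    Limit minus what its online groups consume covers the group's Prerequisites."""
    used = {}
    for g in online.get(node, []):
        for key, val in prerequisites[g].items():
            used[key] = used.get(key, 0) + val
    return all(limits[node].get(key, 0) - used.get(key, 0) >= need
               for key, need in prerequisites[group].items())

# If node1 fails, every surviving node still has one spare DBUnit,
# so any of them could take oradb_sg1.
print([n for n in ("node2", "node3", "node4") if can_host(n, "oradb_sg1")])
```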

phil samg | 20 Apr 2009 | 0 comments

Data deduplication is another technology that has gained wide acceptance as a tool to streamline the backup process. Deduplication eliminates duplicate data even when that data is unrelated, greatly reducing the multiplier effect on stored data.

For example, if a Microsoft PowerPoint presentation is stored on different file servers multiple times, deduplication ensures that only one copy is stored no matter how many full or incremental backups occur. Organizations may consider specialized appliances to provide backup-to-disk and deduplication functions. However, these appliances add complexity to the data center with more devices to manage and actually add capacity to the environment rather than using what already exists more efficiently.
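
As a rough illustration of the principle (not of how any particular appliance or product implements it), the Python sketch below stores each unique chunk exactly once, keyed by its content hash; the chunk size and sample data are arbitrary:

```python
# Minimal sketch of content-based deduplication: identical chunks are kept
# once and referenced by hash. Real products use far smarter chunking,
# indexing and compression; this only illustrates the principle.
import hashlib
import os

CHUNK_SIZE = 4096       # fixed-size chunking, for simplicity
chunk_store = {}        # content hash -> chunk bytes, stored exactly once

def backup(data):
    """Store any unseen chunks and return the list of hashes ("recipe")
    needed to reconstruct the original data."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        chunk_store.setdefault(digest, chunk)   # duplicate chunks cost nothing extra
        recipe.append(digest)
    return recipe

# Backing up the same 4 MB presentation from three file servers stores
# the chunks once; only the small per-backup recipes differ.
deck = os.urandom(4 * 1024 * 1024)   # stand-in for a presentation file
recipes = [backup(deck) for _ in range(3)]
stored_mb = sum(len(c) for c in chunk_store.values()) / 1024 / 1024
print(f"{len(recipes)} backups, {stored_mb:.0f} MB actually stored")
```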

To be continued...

Read Part 1: A State of Neglect
...

phil samg | 14 Apr 2009 | 0 comments

To make better use of storage resources, organizations can leverage storage management technologies. Storage resource management (SRM), for example, enables IT to navigate the storage environment and identify old or non-critical data that can be moved to less expensive storage. These tools can also be used to predict future capacity requirements.

Managing storage without an SRM tool is like going on a journey without a map. Having a clear plan and objective before taking action is the best assurance of rapid progress and success. Storage managers should ask some questions before cost-cutting:

  • What is the average utilization rate?
  • What is the utilization rate by application?
  • Which applications are growing fastest? Slowest?

SRM technology can help companies make this assessment by providing an enterprise-wide view of the storage environment, which helps identify problem areas and consolidation opportunities and create a priority...
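
As a simple illustration of the kind of answers an SRM tool surfaces, the Python sketch below computes average utilization, utilization by application, and a growth ranking from hypothetical capacity data:

```python
# Toy version of an SRM-style utilization report. All data is hypothetical.
apps = {
    # application: (allocated TB, consumed TB, growth in %/month)
    "email_archive": (40, 31, 4.0),
    "file_shares":   (60, 22, 2.5),
    "oracle_erp":    (25,  9, 1.5),
}

allocated = sum(a for a, _, _ in apps.values())
consumed = sum(u for _, u, _ in apps.values())
print(f"Average utilization: {consumed / allocated:.0%}")

# Utilization by application, fastest-growing first.
for name, (alloc, used, growth) in sorted(apps.items(), key=lambda kv: -kv[1][2]):
    print(f"{name:14s} {used / alloc:.0%} utilized, growing {growth}%/month")
```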

phil samg | 06 Apr 2009 | 0 comments

During periods of economic growth, organizations may be tempted to take the “quick fix” to storage management problems. The incremental cost of adding storage is relatively small and can be absorbed by the budget. Such a short-cut may facilitate faster project roll-out, but it also leads to underutilized storage. Many organizations operate at only 30 to 40 percent utilization. According to InfoPro, the average is 35 percent.

Accurate storage allocation is difficult because data growth rate information is incomplete or unavailable. Consequently, storage allocation does not correlate to consumption. New applications, with no historical trend data, receive storage allocation on a “best estimate” basis. If the allocated capacity is too high, then the excess capacity may languish unused for the life of the array.
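
A back-of-the-envelope sketch puts a number on that idle capacity; the purchase size and cost per terabyte below are purely hypothetical:

```python
# Rough illustration of what low utilization costs; figures are hypothetical.
purchased_tb = 100
utilization = 0.35       # the InfoPro average cited above
cost_per_tb = 2_000      # assumed fully loaded cost per TB (hardware, power, cooling)

idle_tb = purchased_tb * (1 - utilization)
print(f"{idle_tb:.0f} TB idle -> roughly ${idle_tb * cost_per_tb:,.0f} "
      "of capacity purchased, powered and cooled but never consumed")
```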

Needless spending is the primary consequence of benign neglect. Having an array only 50 percent utilized is like paying...