Storage & Clustering Community Blog
jmartin | 20 Dec 2012 | 0 comments

 

On December 18, 2012, Symantec completed another release of Symantec Operations Readiness Tools (SORT)! With SORT’s focus on improving the total customer experience for Storage Foundation and NetBackup customers, we’ve added the following features and improvements to the website:

Storage Foundation High Availability Solutions

Pat Coggins | 20 Dec 2012 | 0 comments

Few would argue against the importance of robust disaster recovery plans for your applications and services. Events over the last year or so have reminded us how easily service outages can impact an organization's revenue and undermine consumer confidence. At a basic level, the efficient recovery of services is dependent on the availability of the data, and one of the most common ways to achieve this is through data replication to a remote site.

Veritas Volume Replicator (VVR) from Symantec is a replication solution over IP that maintains a consistent copy of your application data at remote sites. Synchronous and asynchronous modes of replication are supported, and a bunker option provides a zero recovery point objective for asynchronous configurations. As a host-based replication solution, VVR is independent of both application and storage, and it supports up to 32 remote locations at any distance, making it a truly flexible and scalable solution.
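The trade-off between the two modes comes down to the recovery point objective (RPO). A minimal sketch (illustrative only, not VVR code; the function name and figures are mine):

```python
# Synchronous replication acknowledges a write only after the remote site
# confirms it, so no acknowledged write can be lost. Asynchronous replication
# acknowledges locally and ships writes later, so writes still queued when
# the primary fails are lost.

def data_lost_on_failure(mode: str, pending_writes: int) -> int:
    """Writes lost if the primary fails with `pending_writes` not yet
    confirmed at the remote site."""
    if mode == "synchronous":
        return 0               # every acknowledged write is already remote
    elif mode == "asynchronous":
        return pending_writes  # queued writes never reached the remote site
    raise ValueError(f"unknown mode: {mode}")

print(data_lost_on_failure("synchronous", 250))   # 0
print(data_lost_on_failure("asynchronous", 250))  # 250
```

This is why the bunker option matters: it gives asynchronous configurations a nearby synchronous copy of the pending writes, recovering the zero-RPO guarantee without paying synchronous latency to the far site.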

VVR is an extension...

dennis_wenk | 18 Dec 2012 | 0 comments

Operational risk is everywhere in the business environment; every decision has its share of uncertainty.  Nothing is a sure thing, yet when we make important decisions we certainly want to “keep the odds in our favor”.  I have often heard terms like ‘risk appetite’, ‘risk tolerance’, or ‘risk aversion’ used in reference to making forward-looking choices about operational risk, as if we can rationally and effectively manage risk based on our subjective feelings.  These terms, however, provide little guidance and position risk management in the domain of oracles and soothsayers.  Business is not a game of chance based on our subjective ‘feelings’ regarding operational risk.

The stakes are too high relative to operational risk to leave it to subjective guesses or ‘gut’...

dennis_wenk | 13 Dec 2012 | 0 comments

Data is the foundation of information; information leads to knowledge, and knowledge is power.  There can be little disagreement, then, that data has value.  Digital data has become the new world currency, and protecting these valuable assets is a central concern of business continuity management.  Data loss, data unavailability and data corruption will all have an adverse economic impact on the organization.  Not only do we need to ensure that data is usable and available, but we also need to ensure that it is protected from unauthorized use.  Protecting digital data doesn’t sound particularly challenging; it typically begins with a simple task: make an extra copy.  Making and managing these extra copies, however, remains one of the most common pain points for any organization.

There are three fundamental aspects of data protection that lead to increasing complexities.

  1. ...
D White | 11 Dec 2012 | 1 comment

I could point readers at the myriad articles on a certain lack of certification for x86 hypervisors, or open up the proverbial can of worms by digging into the complex details of licensing or support policies... but I'll resist that temptation for now, as there are actually some technical concerns that would prevent me from running production databases on an x86 hypervisor platform.

Thinking about it logically, databases have long been designed to manage memory allocations, and features such as AMM (Automatic Memory Management) were designed around the principle of having dedicated RAM managed by database technologies. In a virtualised environment, hypervisors have their own memory management tools, which are largely designed around the principle that the sum total of the virtual machine memory will exceed that of the physical memory in the server, so features such as memory reservations, shares and "ballooning" are designed to enable over-...
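The tension is easy to see with some toy numbers. This is a deliberately simplified model (the figures and function are illustrative assumptions, not any hypervisor's actual API):

```python
# Why memory overcommit clashes with databases that manage their own RAM:
# the hypervisor hands out more guest memory than physically exists, then
# "balloons" guests to reclaim pages under pressure - but a database tuned
# to its full allocation still believes that RAM is dedicated to it.

physical_ram_gb = 64
vms = {"db1": 32, "db2": 32, "app1": 16}   # configured guest RAM, GB

overcommit_gb = sum(vms.values()) - physical_ram_gb
print(f"Overcommitted by {overcommit_gb} GB")   # Overcommitted by 16 GB

def ram_backing_guest(configured_gb: int, ballooned_gb: int) -> int:
    """Physical RAM actually backing a guest after the balloon driver
    inflates inside it to return pages to the host."""
    return configured_gb - ballooned_gb

# A database sized (e.g. via AMM) for the full 32 GB now runs on 24 GB of
# real backing memory and starts paging where it expected dedicated RAM.
print(ram_backing_guest(vms["db1"], 8))         # 24
```

Reserving full memory for database VMs avoids this, but then the main economic argument for overcommitting the host largely disappears.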

nick.kenny | 11 Dec 2012 | 0 comments

 

IDC expects data generated by enterprises to grow by 48% this year, with 90% of that being unstructured data.  At this rate, demand is far outpacing the falling cost of storage.  With an ever-increasing squeeze on IT department budgets, how is this problem being tackled?

The issue draws interesting parallels with the energy crisis governments around the world are currently facing, with demand for more energy set against rising fuel costs and environmental targets.  The solutions being adopted in each case are similar as well, and perhaps offer a few lessons to be learned.
 

Saying it with a smile

Power boards in the US have been using some well-documented psychological principles to reduce consumption rates, with impressive results, and best of all they're incredibly simple to implement. It likely started with an experiment in 2004 conducted by Roberto...

bpascua | 11 Dec 2012 | 0 comments

Over the past six months I have seen a resurgence of interest in the Veritas Cluster Server brand, and this is due to a number of factors. Firstly, I strongly believe that data centres are finally beginning to virtualise their critical applications. Secondly, the latest version of Veritas Cluster Server now includes VMware support for seamless failover using VMDK disks. I have recently been involved in three projects where customers have been testing Veritas Cluster Server against Oracle and WebSphere to make sure it can actually do what it says on the tin. I am pleased to announce that yes, it actually works, and all three projects have been successful. As anticipated, the ability to use the native VMDK as a unit of failover is making customers considerably less nervous about virtualising their applications. This is great news for both Symantec and our loyal customers.

I was also delighted to see that Symantec has released a support statement for our Cluster File System in VMware...

sai_mukundan | 10 Dec 2012 | 1 comment

 

Are you part of an IT organization that needs to manage business continuity for its users? If so, we would greatly appreciate 10 minutes of your time to help us understand your organization’s business continuity needs and critical purchase factors. Five lucky participants will each receive a $50 Amazon gift card. This survey will go a long way in helping us build solutions that fit and grow with your business.
 
 
 
dennis_wenk | 10 Dec 2012 | 0 comments

Difficult economic conditions lead to fiscal belt-tightening; however, the ever-increasing demand for data continues, accelerating the requirement for hardware to manage that data.  Big data and its appetite for hardware become prominent line items that look like ripe, low-hanging fruit to many cost-cutters.  Buying low-priced, ‘good enough’ or mediocre equipment starts to emerge as an opportunity to reduce a burgeoning budget item.  The price of the hardware, however, is only one part of the total cost equation.

Low-cost gear costs less not just because of limited functionality; it is cheaper because a number of engineering shortcuts are taken during manufacturing.  For example, using lower-tolerance components that have higher failure rates, or removing redundant components, are common ways to reduce production cost.  These shortcuts, however, negatively impact overall reliability and increase the failure rate....
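The total cost equation can be sketched with some simple expected-value arithmetic. All the figures below are illustrative assumptions, not vendor data:

```python
# A hedged total-cost sketch: cheaper gear wins on purchase price but can
# lose once its higher annual failure rate is multiplied by the business
# cost of an outage.

def expected_annual_cost(price: float, annual_failure_rate: float,
                         cost_per_failure: float, years: int = 3) -> float:
    """Purchase price amortized over its lifetime, plus the expected
    failure cost per year (rate x cost of each outage)."""
    return price / years + annual_failure_rate * cost_per_failure

cheap = expected_annual_cost(price=30_000, annual_failure_rate=0.15,
                             cost_per_failure=200_000)
premium = expected_annual_cost(price=60_000, annual_failure_rate=0.03,
                               cost_per_failure=200_000)

print(cheap)    # 40000.0 : 10,000 amortized + 30,000 expected outage cost
print(premium)  # 26000.0 : 20,000 amortized +  6,000 expected outage cost
```

Under these assumptions the equipment costing twice as much up front is the cheaper choice per year, because the expected outage cost dominates the purchase price.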

dennis_wenk | 10 Dec 2012 | 0 comments

The benefits of data center consolidation are apparent: it can save millions of dollars and improve the overall quality of service.  It is easy to see that running too many data centers adds unnecessary cost, chips away at manageability, increases complexity and contributes to a number of operating inefficiencies.  Yet realizing the economic benefits of data center consolidation can be elusive; the challenge is to circumvent the potential pitfalls that complicate the transformation process.

Data center consolidations involve much more than just moving servers or data from one location to another.  Data centers have become a conglomeration of disparate technologies running on a combination of virtual, physical, and clustered platforms that operate an assortment of systems and access a range of data tiers stored on multiple arrays from a whole host of hardware vendors.

In addition to the...