Storage & Clustering Community Blog
Kimberley | 29 Jan 2013 | 0 comments

 

As Fujian Mobile in China grew rapidly from 2 million customers in 2003 to nearly 30 million today, the need for a secure, highly available, and efficient billing system became ever more important. The previous solution had become increasingly slow, which was simply unacceptable. After evaluating its options, Fujian selected Veritas Storage Foundation High Availability for Windows from Symantec, and the results have been impressive: up to eightfold faster processing, 15 IT staff freed up for more valuable tasks, and a 50 percent reduction in the storage needed for billing, reclaiming more than 10 terabytes. To learn more about how Symantec’s cluster file storage system improved Fujian Mobile’s billing system, follow this link: http://bit.ly/V29aY8

bpascua | 23 Jan 2013 | 0 comments

When we talk about TRIM we aren’t referring to losing a few pounds after Christmas, although I could certainly use that. TRIM relates to Solid State Drive and Flash technologies, which are now becoming more prevalent in Data Centres as well as in the consumer world. If you look to buy a laptop these days there is a strong case for opting for a solid state drive to give you faster boot-up times and better performance. Similarly, there are many options in the Enterprise market, from true flash arrays like Violin to PCI accelerator cards like Fusion IO. In order to understand TRIM you need to have an idea of how Flash storage works. SSDs use NAND memory to store and transfer information in pages. A collection of pages makes up a block. You cannot erase an individual page; you can only erase a whole block of pages. So when you delete a file it actually just gets marked for deletion, and at a later time, when enough pages are available, they are erased. This practice slows...
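To make the page/block behaviour concrete, here is a minimal, hypothetical Python sketch (the page counts, names and helper functions are all invented for illustration, not taken from any drive firmware) of why a delete only marks pages and why erasing a block means relocating whatever is still live in it:

# Hypothetical sketch of SSD page/block bookkeeping (illustrative only).
PAGES_PER_BLOCK = 4

# A "block" is a list of pages; each page is empty (None), holds data,
# or has been marked "stale" by a delete.
flash = [[None] * PAGES_PER_BLOCK for _ in range(2)]

def write(data, avoid=None):
    """Place data on the first empty page of any block except `avoid`."""
    for block in flash:
        if block is avoid:
            continue
        for i, page in enumerate(block):
            if page is None:
                block[i] = data
                return block, i
    raise RuntimeError("no free pages; a block must be erased first")

def delete(block, i):
    """Deleting a file only marks its page stale -- nothing is erased yet."""
    block[i] = "stale"

def erase(block):
    """Erase works on the whole block: still-live pages are relocated first.
    That relocation is the extra work that slows writes down, and it is
    what TRIM helps avoid by telling the drive which pages are truly dead."""
    live = [p for p in block if p not in (None, "stale")]
    for data in live:
        write(data, avoid=block)
    block[:] = [None] * PAGES_PER_BLOCK

blk, idx = write("file-A")
write("file-B")
delete(blk, idx)   # file-A marked stale, but its page is still occupied
erase(blk)         # whole block wiped; file-B copied to another block first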

dennis_wenk | 17 Jan 2013 | 0 comments

A principal challenge many enterprises face is identifying the exposures in their complex IT infrastructures. There are considerable business dependencies on this strategic resource, and weaknesses within the IT infrastructure may lead to serious business interruptions. It is not enough, however, merely to identify weaknesses; the impact of those weaknesses must also be clearly understood and quantified. This is an important point, because it is difficult to know how much to invest in strengthening the infrastructure without a sense of the size of the risk to the organization.

IT infrastructures can fail due to a wide range of events. These can be as simple as a process failure or as catastrophic as a full system crash. It is unrealistic and costly to eliminate each and every harmful event; a priority ranking based on the consequence of each event is therefore very useful. Ranking the potentially damaging events based on...
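As a minimal, hypothetical illustration of such a ranking (the event names, probabilities and impact figures below are invented, not taken from the post), events can be ordered by expected annual loss, i.e. likelihood multiplied by consequence:

# Invented events with assumed annual probabilities and impact costs.
events = {
    "process failure":   {"probability": 0.50, "impact": 20_000},
    "disk array outage": {"probability": 0.10, "impact": 250_000},
    "full system crash": {"probability": 0.02, "impact": 5_000_000},
}

# Expected annual loss = probability of the event * cost if it happens.
ranked = sorted(
    events.items(),
    key=lambda kv: kv[1]["probability"] * kv[1]["impact"],
    reverse=True,
)

for name, e in ranked:
    print(f"{name:18s}  expected annual loss  ${e['probability'] * e['impact']:>10,.0f}")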

dennis_wenk | 17 Jan 2013 | 0 comments

Oh yes you DO need to know Probability! Many professionals and thought leaders have said that ‘there is no reason to know probabilities to know that a big risk exists’ and that it should be intuitively obvious that losing a datacenter would be very bad. If IT-infrastructure risk really is that self-intuitive, then the value does not lie in identifying the most serious risks; those risks are self-evident. The value lies in determining the optimal ‘investment’ to mitigate the most serious risks. In this context, optimal means allocating the organization’s resources to those actions that will yield the best overall performance.

So even if these intuitive, gut-feel judgments about the risks are right, they are not the most effective way to justify the appropriate level of investment. In fact, this is the reason that many in IT find it difficult to provide a valid ROI for HA/DR solutions: they fail to understand the value that...
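As a rough sketch of that ROI argument (all figures below are invented for illustration, not drawn from the post), the value of an HA/DR investment can be framed as the expected loss it avoids compared with what it costs:

# Invented figures for illustration only.
outage_probability = 0.05          # assumed chance per year of losing the datacenter
outage_cost        = 10_000_000    # assumed business loss if it happens
residual_prob      = 0.005         # assumed probability remaining with HA/DR in place
solution_cost      = 300_000       # assumed annual cost of the HA/DR solution

expected_loss_before = outage_probability * outage_cost   # $500,000 per year
expected_loss_after  = residual_prob * outage_cost        # $50,000 per year
annual_benefit       = expected_loss_before - expected_loss_after

roi = (annual_benefit - solution_cost) / solution_cost
print(f"Expected annual benefit: ${annual_benefit:,.0f}")
print(f"ROI on the HA/DR spend:  {roi:.0%}")   # (450,000 - 300,000) / 300,000 = 50%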

dennis_wenk | 09 Jan 2013 | 0 comments

A real crisis is happening now, and if we really want to reduce losses for our organizations, we need to adjust our focus. We don’t have to wait for a pandemic or catastrophe to strike; organizations are already experiencing losses that range between $35 billion and $500 billion per month. If these losses are the result of best practices that are intended to protect our organizations from crisis, then some might even consider these regulations and best practices to be gravely dysfunctional.

Compliance with federal, state, and international privacy and security laws and regulations often is more an interpretive art than an empirical science—and it is frequently a matter for negotiation.  When business metrics are applied to compliance, many companies decide to deploy as little technology or process as possible—or to ignore the governing laws and regulations completely. Every company weighs the cost of...

dennis_wenk | 08 Jan 2013 | 0 comments

Stakeholders are becoming increasingly concerned about accountability and the management of operational risk. Regulations such as HIPAA, Sarbanes-Oxley, and Basel II are placing more stringent requirements on corporate governance. More and more high technology is embedded in the operating fabric of the organization and, in many respects, technology is the organization. Amazon and eBay are outstanding examples of businesses created by and totally dependent on technology. It is this reliance on technology and the escalating dependency on interconnected infrastructures that has elevated the exposure to business interruptions. These interdependencies ripple through an organization, as well as outward to major stakeholders: customers, suppliers, lenders, and partners.

Simultaneously, non-conventional threats such as denial of service, hacking, and the events of September 11, 2001 changed the very nature of operational risk instantaneously and on a...

Mike Reynolds PMM | 07 Jan 2013 | 0 comments

From application recovery to data protection, Symantec provides solutions to help customers keep their business online. One recent event that has everyone thinking and talking about disaster recovery planning is Hurricane Sandy. We asked a panel of disaster recovery experts from Symantec to discuss best practices and real-life examples of disaster recovery strategies that they have seen firsthand. The recording of that webcast, entitled "Surviving the Wrath of Sandy: Is your IT organization really ready in the event of a disaster?", is now available on Virtualization Review. Please register and learn how your peers are approaching disaster recovery planning.

dennis_wenk | 07 Jan 2013 | 0 comments

There are many business benefits to the efficiencies that IT provides, and the vast majority of business functions have now been automated. Today, businesses do more transactions, of greater value, faster than ever before. This intense dependence on technology has also introduced new risks and vulnerabilities with large consequences. One of the primary missions for any modern organization, therefore, is to manage the inherent risk within this complex infrastructure. The only rational reason for spending money to reduce operational risk is the expectation that the benefits outweigh the costs.

Subjective measures such as risk-tolerance or risk appetite can lead to serious errors of fact, in the form of excessive fear of small risks and neglect of large ones.  The stakes are too great for organizations to rely on intuitive judgments that are error-prone.  Creating infrastructures that increase resiliency requires methods that provide better...

c3lsius | 04 Jan 2013 | 0 comments

It has been a busy end to 2012 indeed. Last November and December, a few of my colleagues and I represented the Storage and Availability Management team at the back-to-back NetApp Insight conferences in Las Vegas, Dublin and Macau. At this technically oriented conference, we promoted our latest product partnership with NetApp, specifically for:

(1) Dynamic Multi-Pathing for VMware (vDMP): provides performance, availability and visibility for block-attached storage for VMware ESX.

(2) Symantec Data Insight: helps customers improve data governance through data owner identification and visibility into data usage and access permissions.

Through conversations with NetApp's partners and product management employees from around the world, it is evident that there is a big opportunity for vDMP and Data Insight integration with NetApp storage systems.

Besides discussions about the recent product launches at NetApp and the key partnerships formed, some keynote...