Storage & Clustering Community Blog
RyanJancaitis | 19 Dec 2014 | 0 comments

Deploying All Flash Arrays is today's gold standard in the enterprise data center for driving maximum performance from applications that are hitting the limits of traditional hard drive-based architectures. The low latencies and high throughput of these arrays take workloads with varying read/write characteristics and push them to new highs in virtually any benchmark. Use cases around transactional databases, data warehouses, virtual desktops, and the like are abundant on the internet and in practice. This tier-0 performance, however, comes at a huge cost.

[Image: SDS_Arch.jpg]

Symantec and Intel have taken the best in new technologies, Software Defined Storage and NVMe Flash, and provided these same performance benefits at a fraction of the cost of an All Flash Array. By utilizing commodity x86 components with certified software solutions, applications can bring the...

sdighe | 09 Dec 2014 | 0 comments

SmartAssist: Workload analysis tool

Many applications use expensive memory to copy in (cache) important data items to accelerate application performance. This is a well-known technique for optimizing throughput from rotating media. With the introduction of server-based flash/SSD, caching based on such devices is becoming more cost effective than expensive memory (RAM)-based caching.

Since flash is more expensive than rotating media, a proper mechanism is required to balance the cost of flash storage against application performance needs. A lot of focus has been placed on optimizing the caching algorithms that decide which data to bring into the cache, when to evict it from the cache, and how to write it back to the slower storage. But the real problem for administrators and system engineers is determining how much flash is required for a given workload on a prospective system. This can be called ...
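To make that sizing question concrete, here is a minimal sketch of how a workload's working set could be estimated from a trace of accessed block addresses and turned into a rough flash-cache size. This is a generic illustration, not SmartAssist's actual algorithm; the 4 KB block size and 20% headroom are assumed values.

# Hypothetical illustration (not SmartAssist's actual algorithm): estimate the
# working-set size of a workload from a trace of accessed block addresses and
# translate it into a rough flash-cache sizing recommendation.

def estimate_cache_size(block_trace, block_size_kb=4, headroom=1.2):
    """Approximate flash needed to hold the unique blocks touched by a workload.

    block_trace   -- iterable of block numbers observed during the sampling window
    block_size_kb -- size of each block in KB (4 KB assumed here)
    headroom      -- safety multiplier so the cache is not sized exactly at the
                     working set (20% extra assumed)
    """
    unique_blocks = set(block_trace)                 # distinct blocks = working set
    working_set_gb = len(unique_blocks) * block_size_kb / (1024 * 1024)
    return working_set_gb * headroom

# Example: a synthetic trace that repeatedly touches 250,000 distinct 4 KB blocks.
trace = [i % 250_000 for i in range(2_000_000)]
print(f"Suggested flash cache: {estimate_cache_size(trace):.2f} GB")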

dennis_wenk | 04 Dec 2014 | 0 comments

An IT Service Strategy based on a service provision model, or Service Classes, supports an organization's transformation to cloud computing. The concept revolves around allocating IT infrastructure components in a manner that meets current and future business requirements.  Not all business requirements warrant the same level of functionality, and appropriately matching IT resources to business requirements will not only increase user satisfaction but also ensure the optimal use of valuable, scarce IT assets by reducing the surplus caused by over-provisioning.

Any utility supply is based upon defined service-level promises, sometimes referred to as Service Level Agreements (SLAs). Any service class provided by IT must be defined by an agreed service level that ensures business obligations are met and will continue to be met as the environment grows.  A service catalog can be designed by utilizing the following five service-class criteria to define...
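As a rough illustration of what such a catalog might look like in practice: the five criteria themselves are not listed in this excerpt, so the attributes below (availability, RTO, RPO, performance tier, and unit cost) are assumed examples rather than the author's actual criteria.

# Illustrative sketch only: the attributes used here are assumed examples of how
# a service class might be encoded in a catalog and matched to a requirement.
from dataclasses import dataclass

@dataclass
class ServiceClass:
    name: str
    availability_pct: float   # promised uptime, e.g. 99.99
    rto_minutes: int          # recovery time objective
    rpo_minutes: int          # recovery point objective
    performance_tier: str     # e.g. "flash", "hybrid", "capacity"
    cost_per_gb_month: float  # chargeback rate

CATALOG = [
    ServiceClass("Gold",   99.99, 15,  0,   "flash",    0.60),
    ServiceClass("Silver", 99.9,  60,  15,  "hybrid",   0.25),
    ServiceClass("Bronze", 99.0,  240, 240, "capacity", 0.08),
]

def match_class(required_availability, max_rto, max_rpo):
    """Return the cheapest class that still meets the business requirement."""
    candidates = [c for c in CATALOG
                  if c.availability_pct >= required_availability
                  and c.rto_minutes <= max_rto
                  and c.rpo_minutes <= max_rpo]
    return min(candidates, key=lambda c: c.cost_per_gb_month) if candidates else None

print(match_class(99.9, 60, 30))  # Silver meets the need without paying for Gold

Matching each requirement to the least expensive class that still satisfies it is what prevents the over-provisioning the post describes.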

dennis_wenk | 04 Dec 2014 | 0 comments

IT organizations have undergone rapid, organic growth, and they continually scramble to meet the ever-increasing demands of the business. New applications, emerging technologies, and alternative solutions have mushroomed. Mergers and acquisitions have added to the proliferation of these resources. The result is a landscape of multiple data centers, large and small, scattered across the enterprise, each with a significant population of grossly underutilized technology assets. Some of those assets may even be located outside the data centers — in branch offices, storage closets, or employees’ homes. This rampant decentralization has inevitably resulted in complexity and fragmentation, which has led to increased interest in data center consolidation.

CONSOLIDATION/MIGRATION BACKGROUND

  • Risk and uncertainty shadow every project but loom large over complex projects such as data center consolidation/migrations because...

dennis_wenk | 04 Dec 2014 | 0 comments

Deciding what to do and where to invest to achieve higher levels of availability and more resilient recovery is a lot harder than it might first appear.  When it comes to resiliency there is not a single variable to control; everything comes in gradations: gradations of risk, gradations of solutions, and gradations of cost.  Making rational choices about all these trade-offs is not simple. There are too many things that could cause a service interruption, there are a large number of solution choices, and there is a wide variety of price points.

Even though resiliency is complex, it is extremely important for every organization to get it right.  Customers demand uninterrupted access to information and immediate availability of core business applications. Regulations demand operational resilience and high availability, placing tremendous pressure on organizations to ensure the availability of data and business processes 24x7, 365 days a year.  The...
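One way to make these gradations concrete is to translate availability targets into expected downtime and business exposure. The sketch below does exactly that; the availability tiers and the $50,000-per-hour downtime cost are assumed figures for illustration only.

# A minimal sketch of comparing availability gradations quantitatively.
# The tiers and the hourly cost of downtime below are assumed examples,
# not figures from the post.

MINUTES_PER_YEAR = 365 * 24 * 60

def downtime_minutes_per_year(availability_pct):
    """Expected unavailable minutes per year for a given availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

cost_per_downtime_hour = 50_000          # assumed business impact ($/hour)
tiers = {"99.9%": 99.9, "99.99%": 99.99, "99.999%": 99.999}

for label, pct in tiers.items():
    minutes = downtime_minutes_per_year(pct)
    exposure = minutes / 60 * cost_per_downtime_hour
    print(f"{label}: ~{minutes:7.1f} min/year of downtime, ~${exposure:,.0f} exposure")

Putting a price on each gradation is what turns "more resilient" from a slogan into a trade-off that can actually be decided.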

dennis_wenk | 04 Dec 2014 | 0 comments

Theoretically, a risk matrix would seem to be a reasonable way of prioritizing risk-management actions. Unfortunately, there are three major problems with these matrices as they are currently being used:

  1. The first is that neither the cost nor the effectiveness of proposed mitigation actions can be evaluated against the identified risks.
  2. The second is that the evaluation scale is inadequate to represent the real world and provides no rational decision-making intelligence: what does a 'High Risk' rating or a 'Severe' impact actually measure? These ratings are simply left to one's subjective judgment.
  3. Finally, the risk matrix does not take into account the fact that the same risk may have different impacts on different business systems. It is quite possible that a risk has no impact on one system and a severe impact on another (see the sketch below).

...
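The sketch below illustrates the third point with a quantified alternative: an annualized expected loss computed per risk, per system. The probabilities and impact figures are assumed for illustration and are not from the post.

# A hedged illustration of the third point: the same risk can carry very
# different expected losses on different systems. Probabilities and impact
# figures are assumed for illustration.

risks = {
    # risk name: (annual probability, {system: impact in $})
    "SAN controller failure": (0.05, {"order entry": 2_000_000, "test lab": 20_000}),
    "Site power outage":      (0.02, {"order entry": 5_000_000, "test lab": 50_000}),
}

for name, (prob, impacts) in risks.items():
    for system, impact in impacts.items():
        expected_loss = prob * impact          # annualized loss expectancy
        print(f"{name:24s} on {system:11s}: ${expected_loss:>10,.0f}/year expected loss")

A single 'High/Severe' cell on a matrix would treat all four combinations identically, even though the quantified exposures differ by two orders of magnitude.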

dennis_wenk | 03 Dec 2014 | 0 comments

Few markets have grown as quickly or caused as much disruption as cloud computing. According to IDC, the cloud computing market surged by 25% in 2014.  The market will continue to expand rapidly as enterprise organizations realize the significant impact that embracing the cloud can bring in terms of productivity, agility, and competitiveness.

Security has often been cited as a reason NOT to adopt cloud services. In response to this concern, Cloud Service Providers (CSPs) are investing more in security; as a result, security is now becoming baked into the cloud service.  This fundamental shift caused by cloud computing continues to disrupt traditional security markets, and the disruptive force is different from the usual competitive rivalry.  In an effort to better understand why cloud computing is such a market disruptor, a quick “Five Forces” analysis is in order.

...

dennis_wenk | 03 Dec 2014 | 0 comments

A deduplicated backup copy of data will certainly save on storage costs, but does it provide actual protection or just a false sense of security?

Most enterprise IT organizations are managing at least 100 TB of data, and some might even have a little bit more these days.  So 100 TB seems like a reasonable amount of data for scratching out a few calculations with regard to data transfer time and cost.

It takes 6 minutes to transfer 1 TB of data over Fibre Channel and over 2 hours (~133 minutes) to move 1 TB over GigE[1].  That means it takes 10 hours to transfer 100 TB over Fibre Channel and ~9 days (~222 hours) over GigE[2].

Deduplication of the data can reduce bandwidth and storage during the backup, and that is a good thing.  However, it takes an hour to rehydrate 1.2 TB of deduplicated data back to a usable format.  At that rehydration rate, it will take...
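For anyone who wants to rework the arithmetic, the short sketch below reproduces the transfer-time figures above and applies the quoted 1.2 TB/hour rehydration rate to the full 100 TB; treating the whole 100 TB as needing rehydration is an assumption about how the post's math continues.

# Reworking the post's throughput figures: transfer time for 100 TB over Fibre
# Channel and GigE, plus rehydration time at the quoted 1.2 TB/hour. The 100 TB
# data set and the per-TB rates come from the post; applying the rehydration
# rate to the full 100 TB is an assumption.

DATA_TB = 100

fc_minutes_per_tb   = 6        # Fibre Channel, per the post
gige_minutes_per_tb = 133      # GigE, per the post
rehydrate_tb_per_hr = 1.2      # deduplicated data back to usable form

fc_hours     = DATA_TB * fc_minutes_per_tb / 60
gige_hours   = DATA_TB * gige_minutes_per_tb / 60
rehydrate_hr = DATA_TB / rehydrate_tb_per_hr

print(f"Fibre Channel transfer: {fc_hours:.0f} hours")
print(f"GigE transfer:          {gige_hours:.0f} hours (~{gige_hours/24:.0f} days)")
print(f"Rehydration @1.2 TB/hr: {rehydrate_hr:.0f} hours (~{rehydrate_hr/24:.1f} days)")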

jdangelo_symc | 02 Dec 2014 | 0 comments

I am pleased to announce the publication of a joint white paper between Symantec Information Availability and GE Healthcare.  This document represents a commitment by GEHC to leverage Symantec technology (specifically ApplicationHA) to bring high availability to their virtualized Picture Archiving and Communication System (PACS).  Representing more than a $3 billion global market segment, GEHC is recognized as the leader in PACS installations nationwide.

This white paper provides insight into how GEHC took full advantage of the many facets of Symantec HA solutions for VMware as well as the benefits a highly available Centricity PACS and Centricity PACS 3-D solution can bring to healthcare professionals. 

For more information on how Symantec provides data center security and information management solutions to the healthcare industry, please visit www.symantec.com/...