Storage & Clustering Community Blog
Hari Krishna Vemuri | 29 Dec 2014 | 1 comment

There are many I/O caching solutions that employ Solid State Drives (SSDs) to bridge the performance gap between the CPUs and the virtual infrastructure’s I/O sub-system. However, Symantec Storage Foundation’s SmartIO, together with Dynamic Multi-pathing’s SmartPool, employs a ‘SMART’ way of achieving this.

In virtual environments, the virtual infrastructure administrator gets requests to provision virtual machines that specify the physical resources (CPU power, memory size, and storage space) required for the virtual machine to operate. The storage space requirement is typically met by assigning one or more virtual storage devices, depending on characteristics such as performance and flexibility.

Once the resources are provisioned, it is left to the system administrator to decide how to assign them to the applications planned for the virtual machine, and the virtual infrastructure administrator has no visibility into that decision....

RyanJancaitis | 19 Dec 2014 | 0 comments

Deploying All-Flash Arrays is today's gold standard for any enterprise data center that needs maximum performance from applications hitting the limits of traditional hard-drive-based architectures. The low latencies and high throughput of these arrays take workloads with varying read/write characteristics and push them to new highs in any type of benchmark. Use cases around transactional databases, data warehouses, virtual desktops, and the like are abundant on the internet and in practice. This tier-0 performance, however, comes at a huge cost.


Symantec and Intel have taken the best in new technologies, Software Defined Storage and NVMe Flash, and provided these same performance benefits at a fraction of the cost of an All-Flash Array. By utilizing commodity x86 components with certified software solutions, applications can bring the...

sdighe | 09 Dec 2014 | 0 comments

SmartAssist: Workload analysis tool

  Many applications use expensive memory to copy in (cache) important data items to accelerate application performance. This is a well-known technique for optimizing throughput from rotating media. With the introduction of server-based flash/SSD, caching based on such devices is becoming more cost-effective than expensive memory (RAM) based caching.

  Since flash is more expensive than rotating media, a proper mechanism is required to balance the cost of flash storage against application performance needs. Much of the focus has been on optimizing the caching algorithms that decide which data to bring into the cache, when to evict it from the cache, and how to write it back to low-speed storage. But the real problem for administrators and system engineers is determining how much flash is required for a given workload on a prospective system. This can be called ...
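SmartAssist performs this analysis on real workloads; the underlying idea can be illustrated with a toy simulation that replays a block-access trace against candidate cache sizes and reports the hit ratio each size would achieve. The synthetic trace, block counts, and plain-LRU policy below are illustrative assumptions for the sketch, not SmartAssist's actual method:

```python
from collections import OrderedDict
import random

def hit_ratio(trace, cache_blocks):
    """Simulate an LRU cache holding `cache_blocks` blocks over an access trace."""
    cache = OrderedDict()
    hits = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(trace)

# Synthetic workload: 80% of accesses hit a hot set of 100 blocks,
# the rest scan a 10,000-block cold region.
random.seed(0)
trace = [random.randrange(100) if random.random() < 0.8
         else random.randrange(100, 10_100)
         for _ in range(50_000)]

# Sweep candidate cache sizes to see where the hit ratio stops improving.
for size in (50, 100, 1_000, 5_000):
    print(f"{size:>5} blocks -> hit ratio {hit_ratio(trace, size):.2f}")
```

The point where the curve flattens (here, roughly the hot-set size) is the flash capacity worth paying for; provisioning beyond it buys little extra performance.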

dennis_wenk | 04 Dec 2014 | 0 comments

An IT Service Strategy based on a service-provision model, or Service Classes, supports an organization’s transformation to cloud computing. The concept revolves around allocating IT infrastructure components in a manner that meets current and future business requirements. Not all business requirements warrant the same level of functionality, and appropriately matching IT resources to business requirements will not only increase user satisfaction but will also ensure the optimal use of valuable, scarce IT assets by reducing the surplus caused by over-provisioning.

Any utility supply is based upon defined service-level promises, sometimes referred to as Service-Level Agreements (SLAs). Any service class provided by IT must be defined by an agreed service level that ensures business obligations are met and will continue to be met as the environment grows. A service catalog can be designed by utilizing the following five service-class criteria to define...

dennis_wenk | 04 Dec 2014 | 0 comments

IT organizations have undergone rapid, organic growth and continually scramble to meet the ever-increasing demands of the business. New applications, emerging technologies and alternative solutions have mushroomed. Mergers and acquisitions have added to the proliferation of these resources. The result is a landscape of multiple data centers, large and small, scattered across the enterprise, each with a significant population of grossly underutilized technology assets. Some of those assets are even located outside of the data centers — in branch offices, storage closets, or employees’ homes. This rampant decentralization has inevitably resulted in a complex, fragmented landscape, which has led to increased interest in data center consolidation.


  • Risk and uncertainty shadow every project but loom large over complex projects such as data center consolidation/migrations because...

dennis_wenk | 04 Dec 2014 | 0 comments

Deciding what to do and where to invest to achieve higher levels of availability and more resilient recovery is a lot harder than it might first appear.  When it comes to resiliency there is not a single variable to control; everything comes in gradations: gradations of risk, gradations of solutions, and gradations of cost.  Making rational choices about all these trade-offs is not simple. Too many things could cause a service interruption, there are a large number of solution choices, and there is a wide variety of price points.

Even though resiliency is complex, it is extremely important for every organization to get it right.  Customers demand uninterrupted access to information and immediate availability of core business applications. Regulations demand operational resilience and high availability, placing tremendous pressure on organizations to ensure the availability of data and business processes 24x7, 365 days a year.  The...

dennis_wenk | 04 Dec 2014 | 0 comments

Theoretically, a risk matrix would seem to be a reasonable way of prioritizing risk management actions. Unfortunately, there are three major problems with risk matrices as they are currently being used:

  1. The first is that neither the cost nor the effectiveness of proposed mitigation actions can be evaluated against the identified risks.
  2. The second problem is that the evaluation scale is inadequate to represent the real world and provides no rational decision-making intelligence: what does a 'High' risk or a 'Severe' impact actually mean? These measurements are simply left to one's subjective judgment.
  3. Finally, the risk matrix does not take into account the fact that each risk may have different impacts on various business systems.  It is quite possible that a risk could have no impact on one system and a severe impact on another.
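A quantitative alternative addresses all three gaps: mitigation cost can be weighed against the risk it removes, 'High' gets an actual dollar value, and impact is computed per system. The sketch below uses the standard annualized-loss-expectancy formulation with made-up figures; the threat rate and per-system losses are illustrative assumptions, not data from any assessment:

```python
def expected_annual_loss(aro, sle):
    """Annualized Loss Expectancy: occurrences per year times loss per occurrence."""
    return aro * sle

def mitigation_worthwhile(ale_before, ale_after, annual_cost):
    """A control is rational only if the risk it removes exceeds what it costs."""
    return (ale_before - ale_after) > annual_cost

# Hypothetical figures: the same threat hits two systems very differently,
# something a single risk-matrix cell cannot express.
threat_aro = 0.2                              # one serious outage every five years
impact_per_outage = {"order-entry": 500_000,  # loss ($) per outage, per system
                     "reporting": 20_000}

for system, sle in impact_per_outage.items():
    print(f"{system}: ALE = ${expected_annual_loss(threat_aro, sle):,.0f}")
```

With numbers in hand, a $50,000-per-year control that cuts the order-entry ALE from $100,000 to $10,000 is clearly justified, while the same spend on the reporting system is not, which is exactly the comparison a color-coded matrix cannot support.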


dennis_wenk | 03 Dec 2014 | 0 comments

Few markets have grown as quickly or have caused as much disruption as cloud computing. According to IDC, the cloud computing market surged by 25% in 2014.  The market will continue to expand rapidly as enterprise organizations realize the significant impact that embracing the cloud can bring in terms of productivity, agility, and competitiveness.

Security has often been cited as a reason NOT to adopt cloud services. In response to this concern, Cloud Service Providers (CSPs) are investing more in security; as a result, security is now becoming baked into the cloud service.  This fundamental shift caused by cloud computing continues to disrupt traditional security markets. This disruptive force is different from the usual competitive rivalry.  In an effort to better understand ‘why’ cloud computing is such a market disruptor, a quick “Five Forces” analysis is in order.