Security Response
Jeremy Ward | 06 Sep 2007 07:00:00 GMT | 0 comments

At the Open Group meeting in Austin a couple of weeks ago, I attended the workshops on IT risk assessment. Pretty dull, eh? In fact, this topic produced some of the liveliest debate I’ve ever had at a conference.

Unless you specialize in this area, you may think that risk assessment is pretty well sewn up. You couldn’t be more wrong. Put 50 practitioners in a room and you will get 50 different methodologies for assessing IT risk. The trouble is that nearly all of them are subjective – the outcome of any risk assessment exercise is most likely to be ‘high’, ‘medium’ or ‘low’. Even when it’s an apparently objective number (54,821, for example) you don’t learn all that much. Try going to your board and telling them that their IT risk is 54,821 and their eyes are likely to glaze over very quickly! Any attempt to calculate ‘annual loss expectancy’, although valiant, only causes trouble when the degree of variability is larger than the estimate itself!
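The variability problem can be made concrete with a small sketch. The numbers and the interval-estimate helper below are purely illustrative, not taken from any report:

```python
# Sketch: why a point-estimate Annual Loss Expectancy (ALE) misleads
# when the variability dwarfs the estimate itself.
# Classic formula: ALE = ARO (annual rate of occurrence) x SLE (single
# loss expectancy). All figures here are invented for illustration.

def annual_loss_expectancy(aro, sle):
    """Point estimate: expected yearly loss."""
    return aro * sle

def ale_range(aro_low, aro_high, sle_low, sle_high):
    """Interval estimate: the spread of plausible outcomes."""
    return aro_low * sle_low, aro_high * sle_high

ale = annual_loss_expectancy(0.5, 110_000)        # about the "54,821" sort of number
low, high = ale_range(0.1, 2.0, 20_000, 500_000)  # plausible parameter ranges
spread = high - low
# The spread of plausible outcomes is many times the point estimate,
# so reporting the single number conveys false precision.
```

When the interval is an order of magnitude wider than the point estimate, the single figure tells the board almost nothing, which is exactly the objection above.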

So we urgently...

Ken Gonzalez | 05 Sep 2007 07:00:00 GMT | 0 comments

As I mentioned in my last blog entry, the version that most people know today as ITIL® (often referred to as ITIL v2) is defined within the two Office of Government Commerce (OGC, U.K.) publications – Service Delivery (the “Red book”) and Service Support (the “Blue book”). In these publications, the 10 core ITIL processes and the Service Desk function are described in (more or less) self-contained blocks. In this world, things were relatively simple. I’ll start off our examination of ITIL v3 from the (more familiar) process-centric perspective.

As of now, there is no official count or authoritative list from the OGC of which processes should be considered the ITIL v3 core. Unfortunately, this pushes that responsibility onto the readers’ shoulders, and I assure you that this is not an easy task...

Jeremy Ward | 04 Sep 2007 07:00:00 GMT | 0 comments

Is the public sector bothered about IT risk? Although it’s a hot topic, as we saw at RSA in February, surely the public sector is more worried about saving money and meeting government targets? Well, yes – but one of the best ways of doing this is to ensure your IT systems operate efficiently and can deliver the services the public want, when they want them, not just when your offices are open. Shared services save money too – but they mean sharing the security pain as well as the productivity gain. All this means more IT risk.

Symantec recently released the latest in-depth study taken from its IT Risk Management Report. This is a mini-report on findings from the public sector. The report looks at how IT professionals in the public sector view sources of IT risk and the effectiveness of the controls used to manage it. The report is based on feedback from 77 IT professionals in...

Ken Gonzalez | 23 Aug 2007 07:00:00 GMT | 0 comments

Over the past twenty-some years, ITIL® (IT Infrastructure Library) has gone from just another good idea to a major movement within the IT universe. The version that most people know today as ITIL (often referred to as ITIL v2) is defined within the two Office of Government Commerce (OGC, U.K.) publications – Service Delivery (the “Red book”) and Service Support (the “Blue book”). In these publications, the 10 core ITIL processes and the Service Desk function are described in (more or less) self-contained blocks. In this world, things were relatively simple. Process areas mapped roughly onto how many organizations structured their job roles, so parts of the framework could be made operational relatively quickly. As a result, many organizations adopted ITIL as their framework of choice and in a very real...

Michael Smith | 09 Aug 2007 07:00:00 GMT | 0 comments

Firewalls, intrusion detection and prevention systems, antivirus – they’re all old tricks of the trade that IT has traditionally deployed to maintain the security of large and complex networks.

But are they enough? Threat volume is rising, propagation speed is increasing, and attacks are becoming more advanced and elusive. Luckily, there are innovative new ways to complement the traditional approach. And security’s bright side may be on the ‘dark’ side.

A growing number of organizations are leveraging darknets to increase their security intelligence and, in turn, enhance their security posture. A darknet is an area of routed IP address space in which no active services reside.

IT is increasingly using this ‘dark’ network as a powerful security tool. Because no legitimate packets should be sent to or from a darknet, the majority are likely sent by malware that scans for vulnerable devices with open ports in order to download, launch, and propagate malicious code...
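The triage logic behind a darknet is simple enough to sketch. The 10.66.0.0/16 block and the flow records below are purely illustrative assumptions, not real allocations:

```python
# Sketch: flagging traffic destined for darknet address space.
# Assumes 10.66.0.0/16 is an unused but routed block in this network; any
# packet addressed to it is suspect (scanning, misconfiguration, malware).
import ipaddress

DARKNET = ipaddress.ip_network("10.66.0.0/16")

def is_darknet_hit(dst_ip: str) -> bool:
    """True if the destination falls inside the unused (dark) block."""
    return ipaddress.ip_address(dst_ip) in DARKNET

def triage(flows):
    """Split observed flows into darknet hits and normal traffic."""
    hits = [f for f in flows if is_darknet_hit(f["dst"])]
    normal = [f for f in flows if not is_darknet_hit(f["dst"])]
    return hits, normal

flows = [
    {"src": "192.0.2.10", "dst": "10.66.4.2", "dport": 445},     # looks like an SMB scan
    {"src": "192.0.2.11", "dst": "198.51.100.5", "dport": 443},  # ordinary HTTPS
]
hits, normal = triage(flows)
# Anything in `hits` warrants investigation: no legitimate service lives
# in the dark block, so the sender is probing blindly.
```

Because the decision rule is just address membership, there are essentially no false positives to tune away, which is what makes darknets such cheap security intelligence.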

Stuart Smith | 24 May 2007 07:00:00 GMT | 0 comments

As with my last blog, the topic this time is behavioral detection, and the various trade-offs involved. We already covered some of the issues in the use of virtual environments for the detection of threats, and this time we’ll cover some of the issues involved in classifying behavior and mitigating damage.

Whatever your approach is to generating and tracking behavior, you need the ability to classify it. There are challenges to tracking behavior, but once you have a profile of behavior, determining what is malicious is a harder problem. Some security products solve this by handing off the problem to the user. Most don’t. The real problem in profiling is that the definition of what is malicious has changed over time. Is tracking your activity as you surf a web page malicious? If you say yes, what about the wonderful “suggest” features that use historical data? Is any program that downloads silently with no GUI malicious? What about Windows Update or Live Update? Something...
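The classification dilemma above can be made concrete with a toy scorer. The behavior names, weights, threshold, and allowlist are invented for illustration and are not any product’s real rules:

```python
# Sketch: rule-based scoring of recorded behaviors, with an allowlist for
# trusted updaters. Note that the same behavior (a silent download) scores
# as suspicious or clean depending purely on context.
SUSPICIOUS_WEIGHTS = {
    "silent_download": 2,        # no GUI, no prompt -- but updaters do this too
    "tracks_browsing": 1,        # so do legitimate "suggest" features
    "modifies_autorun": 3,
    "injects_into_process": 4,
}
TRUSTED = {"windows_update", "live_update"}  # known-good silent downloaders

def classify(process_name, behaviors, threshold=4):
    """Return 'malicious', 'suspicious', or 'clean' for a behavior profile."""
    if process_name in TRUSTED:
        return "clean"
    score = sum(SUSPICIOUS_WEIGHTS.get(b, 0) for b in behaviors)
    if score >= threshold:
        return "malicious"
    return "suspicious" if score > 0 else "clean"

classify("windows_update", ["silent_download"])                       # 'clean'
classify("unknown.exe", ["silent_download", "injects_into_process"])  # 'malicious'
```

The allowlist is doing the real work here, which is the point: the behaviors alone do not separate Windows Update from malware, so any profiler ends up encoding context and exceptions, and those definitions drift over time.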

Stuart Smith | 23 May 2007 07:00:00 GMT | 0 comments

The amount of new malware in the wild is growing quickly. While this is not a new observation, I have seen some claims that behavioral detection may be the answer to this ever-increasing amount of malware. Unlike more traditional types of detection that look at static attributes inherent in a piece of software, such as unique data, code, etc., behavioral detection involves running a possible threat, tracking its behavior with various monitors, and then using the information gathered to determine if it is malicious. As more behavioral detection products emerge, one article asked “Is Desktop Antivirus Dead?” [1]. Hardly, but it is worth a look at why the question even comes up.
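The run-and-observe pipeline described above might be sketched as follows; the monitor names, event shapes, and the crude verdict rule are all hypothetical:

```python
# Sketch of the monitoring half of behavioral detection: run the sample
# under instrumentation, collect events from per-subsystem monitors, then
# hand the resulting profile to a verdict function.
from dataclasses import dataclass, field

@dataclass
class BehaviorProfile:
    events: list = field(default_factory=list)

    def record(self, monitor, action, target):
        """Called by a monitor (file, registry, network...) as the sample runs."""
        self.events.append((monitor, action, target))

def verdict(profile, bad_actions=("delete", "inject", "autostart")):
    """Crude rule: malicious if any monitored action is on the bad list."""
    return any(action in bad_actions for _, action, _ in profile.events)

profile = BehaviorProfile()
# Events a file monitor and a registry monitor might emit while the
# sample executes in a controlled environment:
profile.record("file", "create", r"C:\temp\payload.exe")
profile.record("registry", "autostart", r"HKCU\...\Run\payload")
verdict(profile)  # True: the sample registered itself to run at startup
```

Note that `bad_actions` is exactly the updatable part: as the definition of malicious behavior shifts, that list (not the monitoring machinery) is what a vendor ships updates for.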

Behavioral detection holds out the promise of more zero-day detections, and it reduces the number of updates you need to make to your antivirus software. Note that you cannot safely eliminate updates, since the definition of malicious behavior changes over time. The history of malware, from viruses and...

Luis Navarro | 25 Apr 2007 07:00:00 GMT | 0 comments

In a recent blog entry, I talked about creating a strong password. But what are passwords used for? They are, among other things, a mechanism for ensuring that sensitive data is accessed only by authorized persons. Some of that sensitive data may be personal data that can be used to uniquely identify a person, such as their Social Security Number or driver’s license number. If a person obtains sufficient personal data on an individual, they can perform identity theft, impersonating that individual in order to fraudulently open accounts, obtain credit cards, etc. It can take the individual whose identity was stolen a long time to get things straightened out, and during that time their credit history is tarnished.

Personal data is collected during normal business transactions. Even organizations that may not collect personal data from customers will still have personal data for their employees. This data must be protected from unauthorized disclosure. Depending on where you...

Luis Navarro | 26 Feb 2007 08:00:00 GMT | 0 comments

I recently received a call from a friend who had set up an online payment reception service with a well-known provider so he could receive payments through his Web site. "I’ve got a question – there is a charge for $300 for some computer equipment that I did not order, what’s happening?" After going through the more obvious questions, I asked him: "What is your password?" It turns out his password was, literally, “password.” Someone just entered his account name, guessed the password, and now could use his account for online shopping. This is a rather extreme example, but it illustrates very well the need for strong passwords.

Adherence to stated password policies is something I get asked about quite a bit by clients looking to implement a Security Awareness Program. A weak password can undermine a reasonable security infrastructure, effectively bypassing other security measures that have been implemented. Although other methods for user authentication...
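A minimal policy check of the kind such a program might mandate could look like this; the length threshold, character-class rule, and deny-list are illustrative, not any particular organization’s policy:

```python
# Sketch: a password-policy check with a minimum length, a character-variety
# requirement, and a deny-list of trivially guessable values like "password".
import string

DENYLIST = {"password", "123456", "qwerty", "letmein"}

def password_issues(pw: str, min_length: int = 10):
    """Return a list of policy violations (an empty list means it passes)."""
    issues = []
    if pw.lower() in DENYLIST:
        issues.append("on common-password deny-list")
    if len(pw) < min_length:
        issues.append(f"shorter than {min_length} characters")
    classes = [string.ascii_lowercase, string.ascii_uppercase,
               string.digits, string.punctuation]
    if sum(any(c in cls for c in pw) for cls in classes) < 3:
        issues.append("needs at least 3 character classes")
    return issues

password_issues("password")      # fails all three checks
password_issues("Tr0ub4dor&Xy")  # passes: []
```

A deny-list check alone would have caught my friend’s literal “password”; the other two rules guard against the slightly-less-obvious choices that online guessing still reaches.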

Jeremy Ward | 22 Feb 2007 08:00:00 GMT | 0 comments

If 2006 was the year of NAC, then 2007 is already shaping up to be the year of Risk Management. Perhaps you missed many of the analyst and expert New Year’s predictions of information security evolving into IT Risk Management this year, but a brief walk through RSA’s show floor and a perusal of the product news coverage would have only confirmed 2007’s focus on IT risk.

Similar to NAC’s challenges, there seems to be a good deal of confusion regarding the definition of IT Risk Management and how it is practiced. Fortunately—nearly one year later and after 500+ in-depth interviews with IT executives and business professionals worldwide—Symantec released the results of a new study, the IT Risk Management Report. The report is designed to cut through some of the industry noise and help organizations understand the fundamental elements of IT...