Behavioral Vulnerabilities and Trust Abuse in Phishing 

Oct 03, 2006 03:00 AM

Markus Jakobsson is a computer science professor at Indiana University who has done some excellent work on understanding phishing attacks. I’ve blogged about Markus’ research in the past, and I thought I’d share some of his recent work, which focuses on the question: What causes people to fall for phishing attacks?

Markus and his group ran a study at Indiana University in which subjects were shown various stimuli, such as Web pages and emails. Some were legitimate and others were taken from phishing attacks. Subjects were asked to rate, on a scale of one to five, how authentic each stimulus appeared: a score of one meant they believed it came from a phishing attack, and a score of five meant it appeared legitimate.

To make things interesting, the subjects were asked to explain their thought process out loud. Keep in mind that the subjects obviously knew they were being tested, which is different from a real phishing attack, where the victims are unaware. You have to factor in this difference when interpreting the results (i.e., subjects who know they are part of a test might be more careful and conservative than people who are blindsided). Another important point is that there were no computer science majors in the subject group; the purpose was to get a sense of how a typical, usually non-technical, user would view these sites.

The study provided the following conclusions:
• Subjects looked at URLs and knew how to do “mouseovers”, which suggests they were wary of what was going on. A URL consisting of a raw IP address also caught their attention, which is promising behavior because many phishing URLs use raw IP addresses. On the flip side, there is the danger that users put too much trust in otherwise “normal looking” URLs. For example, many subjects thought that "www.citibank-login.com" was associated with Citibank, and thought it reasonable that "www.bankofamerica.pin-update.com" was legitimately associated with PIN updates at Bank of America (see the sketch after this list for why that trust is misplaced).
• Trust indicators do not matter. If people see a seal from an independent party associated with “trustworthiness” (as with the Better Business Bureau), they do not let that influence their decision about whether a site or email is actually legitimate.
• People judge relevance before authenticity. Any mention of money or financial transactions tends to be a major red flag, as are unusual password requests and threats; for example, threats like “if you don’t do XYZ, your account will be suspended” didn’t work very well.
• Subjects were inclined to trust emails that simply seemed informative. Of course, the danger is that some users might follow links in such messages.
• People might not understand the difference between a meaningful authentication cue and a more-or-less useless one. For example, if your credit card company mentions the last four digits of your card number in an email or Web site, you should have more confidence that you are dealing with a legitimate party, since few parties know this information. If you instead see the first four digits, that is pretty meaningless: the first digits simply identify the card issuer, so every cardholder with the same provider shares them, which makes them a poor authentication cue. Unfortunately, the subjects Markus tested were not able to make this distinction.
• Spelling matters a lot! Subjects were less inclined to trust a site with spelling errors.
• A really good way to gain a user’s trust was to use a favicon (one of those icons that appears in the address bar next to the URL) in the shape of a lock, along with a legitimate-sounding URL. Of course, this demonstrates that a phisher can just as easily use a lock favicon and a reasonable-sounding URL.
• People care about design. The subjects gave a legitimate site a relatively low score because the login and password boxes were different sizes!
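To make the URL point from the first bullet concrete, here is a minimal sketch (my own illustration in Python, not something from Markus’ study; the trusted-domain list and helper names are hypothetical). The idea is that what matters is the registrable domain at the end of the hostname, not the brand names that appear earlier in it. A real check would consult the Public Suffix List rather than the naive “last two labels” guess used here.

```python
from urllib.parse import urlparse

# Hypothetical allow-list, for illustration only; a real check would use the
# bank's actual registered domains.
TRUSTED_DOMAINS = {"citibank.com", "bankofamerica.com"}

def registrable_domain(url: str) -> str:
    """Naive guess at the registrable domain: the last two labels of the host.
    Real code should consult the Public Suffix List; this is just a sketch."""
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

def looks_trusted(url: str) -> bool:
    # Trust hinges on who owns the registrable domain, not on brand names
    # appearing somewhere in the hostname.
    return registrable_domain(url) in TRUSTED_DOMAINS

# The look-alike URLs from the study next to a genuine one.
for url in ("http://www.bankofamerica.com/",
            "http://www.bankofamerica.pin-update.com/",
            "http://www.citibank-login.com/"):
    print(url, "->", registrable_domain(url), "| trusted:", looks_trusted(url))
```

Run against the examples from the study, only "www.bankofamerica.com" maps to a domain the bank owns; "www.bankofamerica.pin-update.com" resolves to "pin-update.com" and "www.citibank-login.com" to "citibank-login.com", neither of which belongs to the brand being imitated.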

Here are some of the insights Markus shared when he presented this work:
• Education does not work. People make poor judgments when it comes to determining the legitimacy of sites. (Markus admitted that this is a bold statement to be making, especially considering that he is a professor.) He did mention that some forms of education could work, but said he felt it was better to focus on technical countermeasures to phishing.
• Independent channels create trust. For example, a site that says “Call us if you are worried…” seems more trustworthy. (That said, we are starting to see phishing attacks exploit this lure as well, asking you to call a rogue telephone number that the attackers own.)
• Personalization creates trust. If a site seems personalized, even if that personalization data is otherwise publicly available, users trust it more. This means that phishers who include social context in their attacks are more likely to be successful. (I wrote a blog about this topic in the past.)

I believe that Markus’ work provides solid evidence that we can’t expect typical users to know what to look for. On one hand, there is always room to educate users further, and some users are exhibiting smarter behaviors (like doing mouseovers and checking for URLs that contain IP addresses). On the other hand, it is too much to expect typical users to know, without the assistance of relevant technologies, what the real risks and threats are. For example, many users don’t know that there is nothing secret about the first four digits of their credit card number.

To understand many of these risks, you have to understand what is and isn’t easy for phishers to do. That requires knowledge of technology, operating procedures, and the tools phishers use, among other things. Keeping a grip on all these considerations is a full-time job (I should know!). If users had to understand all of that, they wouldn’t have time for anything else.

Further reading:

The Anti-Phishing group at Indiana University: http://www.stop-phishing.com
