Decision strategies and susceptibility to phishing
The second Symposium on Usable Privacy and Security (SOUPS 2006) was held July 12-14, 2006 at Carnegie Mellon. The symposium focuses on bringing usability back into the equation when designing security technologies: ultimately, any system providing security is only as secure as its weakest link, and unfortunately, that weakest link often turns out to be the human being using the system.
One particular paper from the conference proceedings that (naturally) caught my attention was “Decision Strategies and Susceptibility to Phishing” by Julie Downs, Mandy Holbrook, and Lorrie Cranor (all of Carnegie Mellon). The paper describes the results of a mental-model interview study with 20 non-expert computer users, conducted to better understand how users make decisions when they encounter suspicious emails and Web sites.
The study found that while the participants were aware of traditional risks such as malicious code, they were less aware of social engineering risks such as what one might see in a phishing attack. Also, while the participants picked up on the existence of various cues that might indicate whether a given email or Web site was malicious, they did not always interpret these cues correctly. For example, few users knew that there is a tangible difference between a lock icon in the browser chrome and one in the body of the Web page. The former indicates that the page was sent encrypted over an SSL connection; the latter has no real security implications—it just means the page designer felt like including an image of a lock. Also, few users paid attention to the URLs associated with Web pages, or, more specifically, to whether those URLs seemed legitimate. Granted, the study only involved non-expert users, but I would say that a large percentage of users fall into this category.
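To make the URL cue concrete, here is a small hedged sketch (the function name and domains are my own, purely for illustration) of why "does this URL belong to the site it claims?" is harder than it looks: phishers often embed the expected brand name in the hostname while the actual registered domain is something else entirely.

```python
from urllib.parse import urlparse

def looks_like_domain(url: str, expected_domain: str) -> bool:
    """Naive illustration: True only if the URL's host IS the expected
    domain, or a genuine subdomain of it. Not a real anti-phishing check."""
    host = urlparse(url).hostname or ""
    return host == expected_domain or host.endswith("." + expected_domain)

# Legitimate: the registered domain really is paypal.com
looks_like_domain("https://www.paypal.com/login", "paypal.com")            # True

# Phishing-style: "paypal.com" appears in the host, but the actual
# registered domain is example.net
looks_like_domain("http://www.paypal.com.example.net/login", "paypal.com") # False
```

The second URL shows the trap the study participants fell into: the familiar brand name is right there at the front of the hostname, but the part that matters is at the end.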
For anyone who has witnessed usability studies for security-oriented technologies, these results are not too surprising. Nonetheless, the paper makes some meaningful contributions by including some of the statistical data collected during the study. It suggests that while user education is an important part of defeating social engineering attacks, the user’s mental model of “what is happening” is too different from what is actually going on for us to rely on education alone.
As a result, I believe we need more automated solutions to help guide users toward safer behavior online. Furthermore, such solutions need to work with extremely high accuracy, or users will start to ignore them. This is part of the philosophy behind Norton Confidential, currently out in beta release, in which Symantec employs a number of techniques to thoroughly analyze Web sites and determine whether or not users can safely transact with them.
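As a toy sketch of what automated guidance might look like (these heuristics are my own illustrations, not Symantec's actual techniques or anyone's real detection logic), a tool could score a URL against a few well-known phishing tells before the user ever sees the page:

```python
import re
from urllib.parse import urlparse

def suspicion_score(url: str) -> int:
    """Toy heuristic score: higher means more phishing-like.
    The cues below are illustrative only."""
    score = 0
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        score += 2  # raw IP address instead of a domain name
    authority = url.split("//", 1)[-1].split("/", 1)[0]
    if "@" in authority:
        score += 2  # '@' pushes the real host past a decoy prefix
    if host.count(".") >= 4:
        score += 1  # deeply nested subdomains
    if parsed.scheme != "https":
        score += 1  # no SSL
    return score

suspicion_score("https://www.example.com/")   # 0
suspicion_score("http://192.168.0.1/login")   # 3
```

The hard part, and the reason real products invest so heavily here, is the accuracy requirement mentioned above: a scorer this crude would generate enough false alarms that users would quickly learn to click through the warnings.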
Generally speaking, the problem of usable security has become increasingly important, especially with the prevalence of social engineering threats. Overall, the SOUPS technical program looked great. There was a user studies workshop, as well as technical sessions on access control, passwords, risk transparency, and of course, phishing. I encourage you to check out the SOUPS conference Web page, especially since all of the papers can be downloaded.