As we stand here in the middle of 2006, it’s already become a little tired to mention the shift in the threat landscape from the digital graffiti of the past to the outright criminal pursuits that dominate the industry today. The dramatic impact of this shift has left a dense fog in its wake—hanging over the industry—obscuring other important changes that have taken place during the same timeframe. Some of the more interesting trends have been specifically related to the concept of “Web 2.0”: the new genre of Web technologies and models that have emerged, like a phoenix, from the ashes of the dotcom meltdown. Let’s take a look at a few Web 2.0 trends and see what impact they have on security.
Blogs are first to leap to mind here, but there are certainly other notable areas where content creation responsibilities have shifted from the traditional publisher into the hands of the people. Check out the spate of new online video sites following the success of YouTube (e.g., Bix, Guba, and Stickam). The security implications of the “user as publisher” model have already been realized: blogs and other user-driven content areas can host browser exploits, become distribution points for malware and spyware, serve as spam blogs (“splogs”) that exist mainly to carry ads and links, or even host links to fraudulent sites. Attackers seek to exploit the implicit trust granted to “grassroots” content, allowing them to sneak under a potential victim’s defenses. A case in point: a banner ad for “deckoutyourdeck.com” on a series of MySpace profiles recently exploited the WMF vulnerability to foist adware on unsuspecting visitors. At this point, we have solid knowledge of the “bad neighborhoods” of the Internet (that is, Web sites hosting cracks, warez, porn, and the like), but you wouldn’t expect unsavory characters to hang out in your neighbor’s tree house.
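One way a site could act on that “bad neighborhoods” knowledge is to audit user-submitted content for links into known bad hosts before publishing it. Here is a minimal sketch of the idea; the blocklist, function names, and sample profile snippet are all hypothetical, not any real site’s filter:

```python
# Hypothetical sketch: scan user-submitted HTML for links pointing at
# known "bad neighborhood" hosts before the content goes live.
from html.parser import HTMLParser
from urllib.parse import urlparse

BAD_HOSTS = {"deckoutyourdeck.com", "warez-example.net"}  # invented blocklist

class LinkAuditor(HTMLParser):
    """Collects every href/src URL whose host is on the blocklist."""
    def __init__(self):
        super().__init__()
        self.flagged = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                host = (urlparse(value).hostname or "").lower()
                if host in BAD_HOSTS:
                    self.flagged.append(value)

def audit(fragment):
    auditor = LinkAuditor()
    auditor.feed(fragment)
    return auditor.flagged

profile = '<p>Check my deck! <a href="http://deckoutyourdeck.com/ad">banner</a></p>'
print(audit(profile))  # the blocklisted link is flagged
```

A real deployment would need a maintained blocklist and would still miss redirectors and freshly registered domains, which is exactly why the implicit trust granted to grassroots content is so useful to attackers.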
Screen scraping is for the birds. I remember creating a database application in Delphi that pulled together search results from HotBot, AltaVista, Yahoo, and about five other search engines. Every time I thought I had the parsing figured out, at least one of the search services would change something in the presentation of its results, leaving me with a jumbled mess of useless characters in my tables. Today, I would only have to make calls to the search engines’ published Web services and safely parse well-structured XML, efficiently tucking away the results I wanted to analyze. Moreover, my job would be made even easier by the Web Services Description Language (WSDL), which provides a lingua franca for describing Web services and how to use them.
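The contrast can be sketched in a few lines. The XML below stands in for a hypothetical search-service response; the element names are invented for illustration and are not any real engine’s schema. The point is that the service’s XML is the contract, so cosmetic changes to the HTML front end no longer break the parser:

```python
# Sketch: consuming a (hypothetical) structured Web service response
# instead of screen-scraping an HTML results page.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """
<searchResults>
  <result rank="1"><title>Web 2.0 security</title><url>http://example.com/a</url></result>
  <result rank="2"><title>AJAX pitfalls</title><url>http://example.com/b</url></result>
</searchResults>
"""

def parse_results(xml_text):
    """Return (rank, title, url) tuples from the structured response."""
    root = ET.fromstring(xml_text)
    return [(int(r.get("rank")), r.findtext("title"), r.findtext("url"))
            for r in root.findall("result")]

for rank, title, url in parse_results(SAMPLE_RESPONSE):
    print(rank, title, url)
```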
As Web services, RSS, and syndicated content models have swept over the Internet, we’ve seen the beginnings of the resultant security issues alongside the considerable benefits. Perl.Santy was one of the first instances of malware to effectively leverage a Web service (Google Search) to locate vulnerable hosts, which it would then infect. Other samples of malware have followed in its wake, demonstrating that powerful online services have not escaped the bad guys’ notice as tools for inventorying the Web and identifying vulnerable hosts (something security assessment pros have known and leveraged for some time). Standards such as WSDL, which make developers’ lives easier, also make Web services-oriented malware development and automation a snap. There has also been discussion of Web service man-in-the-middle attacks and of fraudulent RSS feeds that would lure victims to bogus sites, but this type of attack has not seen broad adoption and may never reach scale.
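To make the fraudulent-feed concern concrete, a reader could apply one crude heuristic: flag feed items whose links point away from the feed’s own site, a possible tell for a tampered or look-alike feed. This is a hedged sketch, not a real defense; the feed content and the look-alike host name below are invented:

```python
# Sketch of one heuristic against fraudulent RSS feeds: flag items whose
# links leave the host the channel itself claims. All data here is invented.
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

SAMPLE_FEED = """
<rss version="2.0"><channel>
  <title>Example News</title>
  <link>http://news.example.com/</link>
  <item><title>Real story</title><link>http://news.example.com/story1</link></item>
  <item><title>Bogus login</title><link>http://news.examp1e.com/login</link></item>
</channel></rss>
"""

def offsite_links(rss_text):
    """Return item links whose host differs from the channel's home host."""
    channel = ET.fromstring(rss_text).find("channel")
    home = urlparse(channel.findtext("link")).hostname
    return [item.findtext("link")
            for item in channel.findall("item")
            if urlparse(item.findtext("link")).hostname != home]

print(offsite_links(SAMPLE_FEED))  # the look-alike host stands out
```

Legitimate feeds routinely link off-site, of course, so in practice this would only be a signal for closer inspection, not a verdict.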
Rich user experiences
Finally, after years of dealing with clunky Java applets and clumsy interfaces, we now have fully functional Web-based applications built on AJAX. There are already a number of online resources devoted to AJAX security, as well as at least one upcoming talk on the topic at the Black Hat Briefings in Las Vegas, so I won’t attempt to tackle it in depth here. To start with, the amount of data AJAX-based applications can store client-side leaves your average cookie in the dust in terms of potential privacy issues. Of equal or greater concern is the amount of business logic that sits client-side in the AJAX model, allowing greater scrutiny of your code by the bad guys. Vulnerable AJAX code can allow for injection attacks of many flavors, including cross-site scripting. While its risk can certainly be overblown, cross-site scripting gets nastier in an AJAX world, where attackers may be able to use it to siphon data more silently than before. And while AJAX gives attackers more visibility into your code, it also allows them to build their own malicious sites to be more invasive and effective than was previously possible. There’s more, but you get the picture. In a nutshell, robust Web applications via AJAX are great, but they up the ante for secure coding in a discipline where programming safety has typically taken a back seat to getting a cool new application out as quickly as possible.
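The cross-site scripting point boils down to one habit: anything echoed back to an AJAX client must be escaped before it is rendered as HTML. The following is a minimal server-side sketch; the function names and payload are illustrative, not from any particular framework:

```python
# Sketch: why server-side escaping matters for data an AJAX client will
# render. The payload and helper names are invented for illustration.
import html

def render_comment_unsafe(user_text):
    # Vulnerable: attacker-supplied markup reaches the page intact.
    return "<div class='comment'>%s</div>" % user_text

def render_comment_safe(user_text):
    # html.escape turns <script> into inert &lt;script&gt; text.
    return "<div class='comment'>%s</div>" % html.escape(user_text)

payload = "<script>steal(document.cookie)</script>"
print(render_comment_unsafe(payload))  # script tag survives: XSS
print(render_comment_safe(payload))    # escaped: harmless text
```

The same discipline applies to every other sink an AJAX response can feed, which is part of why the model ups the ante for secure coding.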
I’ll leave the other Web 2.0 topics of tagging, user trust models, and decentralization for another day. There’s a lot more going on than phishing, zero-day vulnerabilities, and rootkits: the Web is emerging as a comprehensive platform and is predictably bringing classic platform security issues with it. The solutions to Web 2.0 security and privacy issues will not require mystical new products or “gee-whiz” techniques; instead, they will need the seemingly elusive qualities of discipline, foresight, and preparedness, which are at odds with the excitement that accompanies any rapidly emerging trend.
(Special thanks to Mark C. for providing me with his take on these topics.)