Apps, it seems, are everywhere now, and they continue to spread like wildfire. It’s a ‘technology on the go’ world we inhabit, where we use apps for everything from social media and gaming to banking and payments, at any hour of any day of the week. Smartphones and other mobile devices, with the convenience they afford, have fast become the platform of choice for serious business and consumer alike.
Global mobile app store downloads are forecast to surpass 45.6 billion in 2012, with free downloads accounting for 40.1 billion (89%) and paid-for downloads totalling 5 billion, according to research analysts Gartner.
So it’s alarming to hear that some Android developers are failing abysmally when it comes to securing their apps. The problem has become so bad that one group of researchers – acting as ‘undercover cops’ – recently claimed they were able to steal bank details and email and social media credentials during a sweep of 13,500 free apps. Not good news for app users, or for anyone seeking to keep their systems secure.
How worried should you be by their findings? Very, is the answer. The researchers – from the Leibniz University of Hannover and the Philipp University of Marburg, Germany – found that, of the 13,500 apps analysed, 1,074 contained code that was potentially vulnerable to a man-in-the-middle (MITM) attack. This is an attack that intercepts regular traffic, then inspects and/or modifies the data before passing it on to its intended destination, without either of the original parties being aware.
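The vulnerable apps in question were Android apps written in Java, but the misconfiguration at the heart of the findings is easy to sketch with Python’s standard `ssl` module: the difference between a client that validates certificates and one that will accept anything an attacker presents is only a couple of lines.

```python
import ssl

# A correctly configured client context: it validates the server's
# certificate chain against trusted roots AND checks that the hostname
# matches the certificate.
secure_ctx = ssl.create_default_context()
assert secure_ctx.verify_mode == ssl.CERT_REQUIRED
assert secure_ctx.check_hostname is True

# The kind of configuration the researchers found in vulnerable apps:
# the TLS handshake still completes, but ANY certificate is accepted,
# so a man in the middle can present their own and silently read or
# modify the traffic in transit.
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False        # no hostname verification
insecure_ctx.verify_mode = ssl.CERT_NONE   # no chain validation
```

The danger is that both contexts "work" against a well-behaved server, so the insecure variant sails through casual testing – it only fails the user when an attacker is actually on the path.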
What will concern you even more is the level of credentials they were able to capture – many from services most of us will be using on a daily basis, such as bank accounts, Facebook, Twitter, Google, Yahoo, American Express, Diners Club, PayPal, Microsoft Live ID, Box (an online file storage service), remote control servers, arbitrary email accounts and IBM Sametime, amongst others.
Of the 41 apps from which credentials could be captured, one was an antivirus app which, although it requested a certificate when downloading virus signatures, would accept any certificate presented to it. And, on the assumption that the connection was secure, the app made no attempt to validate the signature file it was given.
The researchers were therefore able to feed their own signature file to the antivirus engine. First, they sent an empty signature database, which was accepted – effectively turning off the antivirus protection without informing the user. In a second attack, they created a virus signature matching the antivirus app itself and sent it to the phone. The app accepted the signature, recognised itself as a virus and recommended its own deletion – which it promptly carried out.
What these errors clearly highlight is the poor coding practice of some app developers. But it doesn’t stop there. Just as concerning, you might feel, are inherent failures within the Android operating system itself. The researchers also discovered that, unlike the padlock symbol displayed by most modern browsers, Android provides no visual feedback to indicate whether a secure SSL channel has been established within an application.
"Apps are also not required to signal this themselves and there is nothing stopping an app from displaying wrong, misguided or simply no information," state the researchers. An example of this, you might want to be aware, includes Google's own Play store. Using an invalid SSL certificate (which can be easily tested by changing the system clock) provided no notification to the user that there was a security issue, instead stating that there was no connection.
The researchers surveyed 754 users, with an average age of 24 years, 62% of whom identified themselves as non-IT experts. When shown a session without an SSL connection, 47.5% of the non-IT experts incorrectly thought that they were secure. However, the figure was not much lower among the self-identified IT experts: 34.7% believed they were safe when they were not.
You will be relieved to hear that a notable exception to all of this bad news is the default Android browser itself, which the researchers hailed as being "exemplary in its SSL use". However, even though it clearly displays meaningful warnings in the event of a potential security issue, and provides visual cues as to whether and when an SSL connection has been established, users still had difficulty distinguishing secure from insecure connections.
Of those who wrongly assumed they had a secure connection, 47.7% thought it was because their provider was trustworthy; 22.7% simply trusted their phone; and 21.6% thought the URL started with ‘https://’, even though it didn't during the survey.
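That last group trusted an ‘https://’ prefix that wasn’t even there. The only reliable way to make that judgement is to inspect the URL’s actual scheme rather than glance at it – a trivial sketch, using a hypothetical helper name:

```python
from urllib.parse import urlsplit

def looks_secure(url: str) -> bool:
    """True only if the URL's scheme really is https."""
    return urlsplit(url).scheme == "https"

print(looks_secure("https://bank.example.com/login"))  # True
print(looks_secure("http://bank.example.com/login"))   # False
print(looks_secure("bank.example.com/login"))          # False: no scheme at all
```

Of course, as the study above shows, an https scheme only guarantees privacy if the client actually validates the certificate at the other end.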
All a bit unfortunate, you might think – that the survey happened to home in on the worst aspects of apps, putting them in a worse light than they deserve. Maybe not. Another study, by ViaForensics, examined apps for Google’s Android system and Apple’s iOS, and found that 76% of them store usernames with no encryption and 10% do not even encrypt passwords.
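Storing a plaintext password is never necessary, even when an app has to check it later. A common alternative – sketched here in Python with the standard library, as one illustration rather than a prescription – is to store only a salted, slow hash and compare against that:

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative PBKDF2 work factor

def store_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2 hash; persist (salt, digest), never the plaintext."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = store_password("correct horse")
print(verify_password("correct horse", salt, digest))  # True
print(verify_password("wrong horse", salt, digest))    # False
```

If the phone is lost or the storage is read by another app, the attacker recovers a salted hash rather than the credential itself.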
The message is all too clear: if you play with apps, you may well be playing with fire. What, then, are the lessons to be learned from this and how can things be improved? If companies are really intent on eradicating security breaches in applications – both mobile and non-mobile – they will have to start thinking about ways to make good security effortless and intuitive for both developers and customers.
The designers of applications need to make it much easier to implement SSL and TLS correctly, and far more difficult to get it wrong. Development teams have to make SSL/TLS certificate validation a requirement, and QA teams need to test for it. Only then will the apps you choose to use be worth the name.
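Such a QA gate can be mechanical. As one sketch of the idea in Python (the helper name is hypothetical), a test suite can refuse to pass any client configuration that has validation switched off:

```python
import ssl

def assert_validation_enabled(ctx: ssl.SSLContext) -> None:
    """QA gate: raise if this context would accept an invalid certificate."""
    if ctx.verify_mode != ssl.CERT_REQUIRED:
        raise AssertionError("certificate chain validation is disabled")
    if not ctx.check_hostname:
        raise AssertionError("hostname verification is disabled")

# A correctly built context passes the gate...
assert_validation_enabled(ssl.create_default_context())

# ...while one with validation stripped out - the pattern found in the
# vulnerable apps - fails it before the code ever ships.
broken = ssl.create_default_context()
broken.check_hostname = False
broken.verify_mode = ssl.CERT_NONE
```

Running such a check in continuous integration turns "remember not to disable validation" from a code-review hope into a failed build.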
For more information on how to keep your apps safe, visit this blog post: http://www.symantec.com/connect/blogs/ssl-apps