The release of the Apple iPhone immediately raised eyebrows in the security community. The iPhone's operating system is based on OS X, and some observers therefore assumed malicious code would be possible and potentially rampant.
However, these concerns were a bit premature. Steve Jobs has confirmed that consumers will not be allowed to install just any third-party applications. “These are devices that need to work, and you can’t do that if you load any software on them,” he said. “That doesn’t mean there’s not going to be software to buy that you can load on them coming from us. It doesn’t mean we have to write it all, but it means it has to be more of a controlled environment.” [New York Times]
The inability to install arbitrary software will greatly mitigate the risk of malicious code on the Apple iPhone. Can malicious software exist? Will malicious software exist? Probably, but it will certainly not appear on the scale we see today on Windows, and it will likely not even reach the levels of existing malware for other mobile devices.
The most likely infection vector will be vulnerabilities on the device that allow arbitrary code execution. Unfortunately, a single malware writer exploiting a single such vulnerability could cause havoc, but for the most part these attacks will be limited.
The other likely security exposure will be home-brew hackers. While the iPhone is a closed device, a community of home-brew hackers will likely find methods to run their own code on it. Once users install and execute unknown code on their devices, there is always a chance that code is malicious. This scenario has already played out with the Sony PSP and Trojan.PSPBrick.
Nevertheless, if the iPhone remains a closed device, with not even Java applications or widgets, let alone native code, the risk of infection becomes orders of magnitude lower. Of course, once the Apple iPhone does come out, I'll need to get Symantec to buy me one in the name of research.