Two recent security incidents, WireLurker and Masque Attack, highlight both how easy and how hard it is to slip malware onto iOS. But they also show the way in which Apple may have infantilized its audience, leaving users unprepared to make the right choice when presented with a genuine security flaw.
WireLurker used malware inserted into Mac OS X programs, made available at a Chinese-focused third-party app store, to install apps on iOS devices over USB. It installed apps on both jailbroken and stock devices. Masque Attack, in its most dangerous form, pushes apps (from websites, email, and elsewhere) that, if installed, overwrite popular legitimate apps and extract that app's cached data, such as Gmail messages. WireLurker has been effectively defanged; Apple has more work to do to remove any threat from Masque Attack.
But beyond their specific flaws, they show the different paths taken with iOS and Mac OS X for app security. Apple has a split personality, partly rooted in history.
Apple didn't initially plan to allow third-party apps in iOS, and when it opened the marketplace, it wanted to avoid a flood of malware, as well as keep out poorly made software that would crash the OS or burn cellular data. It also wanted a platform that let it control the flow of money changing hands for apps and digital goods. It retains that control despite years of complaints, and both Apple and its defenders often cite security as a major reason for keeping such a tight grip.
By contrast, the Mac had its origins 30 years ago and, through processor, architecture, and kernel changes, matured with the notion that anyone could write software that would run without outside permission. When the Mac App Store first appeared, there was a reasonable fear that Apple would transition it from an option into the only distribution method. That hasn't happened so far, in part because, by dollar value, the vast majority of software used on the Mac comes from outside the App Store.
Yet despite the open nature of OS X and the wide availability of non-App Store software, there has been no virally distributed or widely exploited method of hijacking a Mac or any of its software, despite many serious flaws and some malware found in the wild. System design isn't the sole reason: Apple has made good, but not always great, decisions. Modest market share and the lack of mechanisms for triggering mass emails or automatically executing email attachments may have had more to do with it.
The nonintuitive Gatekeeper
Apple seemingly trusts its Mac users, who may be no more sophisticated than the average iOS user, to launch any software, using a tiered approach it added in Mountain Lion with Gatekeeper (part of the Security & Privacy preference pane's General tab). Gatekeeper lets a user, or someone setting up the system for a user, select among three options that control which sorts of apps can launch in OS X: only those from the Mac App Store; those from the App Store plus identified developers (who sign their apps with Apple Developer credentials); or apps from anywhere. iOS lacks a direct analogue to this.
The default choice allows App Store and signed apps, plus a well-documented but user-unfriendly way of opening unsigned ones. Double-clicking an app from an "unidentified developer" produces a dialog telling the user it can't be opened, but Control-clicking the app and selecting Open bypasses that restriction. This subtlety is certainly beyond the ken of most users. (A signed app that has been tampered with, or whose developer certificate has been revoked, can't be launched through this method.)
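For readers comfortable with Terminal, Gatekeeper's verdict on an app can be previewed before double-clicking it. The following is a sketch for macOS; the app path is a hypothetical stand-in, and `spctl` is the command-line front end to the same assessment Gatekeeper performs.

```shell
# Report whether Gatekeeper assessment is currently enabled on this Mac.
spctl --status

# Ask whether Gatekeeper would allow this app to launch; prints
# "accepted" (with the signing origin) or "rejected".
# The path is hypothetical -- substitute a real app bundle.
spctl --assess --verbose /Applications/Example.app

# Show who, if anyone, signed the app.
codesign --display --verbose=2 /Applications/Example.app
```

A "rejected" verdict here corresponds to the "unidentified developer" dialog the user would otherwise see.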
A friend, Kerri Hicks (spouse of Bare Bones' founder Rich Siegel), explained to me recently that as the web manager of a university's library system, she is regularly consulted by other members of her team when they encounter the Gatekeeper dialog. Baffled, they come to her as the expert; the bypass is both nonintuitive and hard to train an average user to perform. While I rarely open an unsigned app, Kerri says it's a frequent occurrence in her field, which relies on free, open-source, and other software whose developers may not want to take the time, pay $99 per year, or jump through Apple's hoops to get a digital signature attached.
An ad-hoc entrance to the walled garden
iOS also has a workaround, although it's extremely limited. Apple allows regular developers to distribute test versions of apps as "ad hoc" releases to up to 100 devices registered in the developer's account. Enterprise users, who pay $299 per year, can distribute apps completely outside Apple's processes, but such distribution is supposed to be limited to employees of the firm with the account. (TestFlight is another option for software testing, now owned by Apple, but Apple handles distribution of releases.)
An ad hoc app is unlikely to be found in the wild except in very particular attacks, because of the device limit: without the UDID of a specific iOS device registered in advance, an app signed with an ad hoc certificate can't be installed. Enterprise-signed apps were used in WireLurker and account for most of the threat of Masque Attack, although it's possible "spearphishing" (highly targeted attacks) could make use of ad hoc provisioning as well.
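The device list an ad hoc build is tied to travels inside the app itself, in its embedded provisioning profile. As a sketch (the .app path here is hypothetical), the profile can be decoded on a Mac to see whether it names specific devices or provisions an entire fleet:

```shell
# Decode the signed provisioning profile into readable XML.
# The app bundle path is a hypothetical example.
security cms -D -i Example.app/embedded.mobileprovision

# An ad hoc profile lists device UDIDs under the ProvisionedDevices key;
# an enterprise profile instead sets ProvisionsAllDevices to true.
```

That distinction is why ad hoc apps make poor malware carriers while enterprise certificates are the more attractive target.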
In both cases, though, whether downloaded or installed, such apps require a user's approval in one or two steps: tapping Install or Trust at a prompt that provides little information, none of it validated--a malicious developer can claim to be installing "New Flappy Bird" and instead overwrite any non-Apple app. (Apple can centrally revoke any enterprise certificate, which shut down WireLurker and makes Masque Attack much harder to use: once discovered, an improper or hijacked enterprise developer account can be disabled along with its certificates.)
Users need help to make smart choices
Which brings us to the crux. Mac OS X allows any arbitrary app to be installed, has resisted widespread malware, and provides paths for selective security at varying degrees of compliance with Apple's mandates. But most users need guidance to install unsigned apps unless they fully disable Gatekeeper, which is inadvisable. iOS resists installation of apps from outside the App Store, but gives naive users little information for making a good choice when faced with unexpected prompts.
It seems Apple could tighten and loosen app security at once without compromising its intent or its users. Relatively few users need to install ad hoc or enterprise apps, so Apple could provide clearer guidance in iOS--and perhaps a setting to turn off such installations entirely, removing the need to jump through any further security hoops. Likewise, OS X could make it easier, or at least clearer, how to get past Gatekeeper when one needs to.
But the experience with OS X and these two malware attempts should also provide guidance in loosening the reins of iOS. The two exploits were largely thwarted by the difficulty of obtaining enterprise certificates and the ease with which Apple can revoke them. Cracking iOS open with a Gatekeeper-style option--allowing only signed apps that present verified information, each of which the user must approve at installation and launch--would still give Apple a way to shut malware down quickly.
Apple's unlikely to give up its rigid iOS control, but it's ironic that malware revealed how well OS X manages integrity, and how easily iOS could be extended to benefit users and developers alike.