The FBI recently put out a mobile malware alert, providing us with a sobering reminder of this “evil software” for phones and tablets. In this particular case, the FBI was warning against the FinFisher and Loofzon malware, which spies on our data and leaks GPS positions to track our movements. While these threats appear to have been developed for government surveillance purposes, they can of course be used by any organization.
And therein lies the problem. Mobile malware affects all of us.
Unfortunately, the advice the FBI alert shared was vague and maddeningly difficult to follow. For example: “Users should look at the reviews of the developer/company who published the application” and “Turn off features of the device not needed to minimize the attack surface of the device.” Heck, I’m a security researcher, and I’m fuzzy about what all that means.
A security researcher, Dr. Markus Jakobsson is one of the main contributors to the understanding of phishing and crimeware. He holds more than 50 patents, with over 100 pending; has published several books; and is a co-founder and CTO of FatSkunk, a Silicon Valley-based mobile security startup.
Mobile Malware Advice Doesn’t Help Users
One piece of the FBI advice that doesn’t work is that we must review and understand the permissions we’re granting to applications (apps) before installing them. Studies have revealed this to be too difficult for users: Most people just have no idea what permissions are reasonable … and which ones are risky.
The FBI also advises users not to click on links or download apps from “unknown” sources, but we know that typical users have a very hard time determining whether a source is trusted or not. This is especially true on handsets, where the user interface is very constrained. We can’t always see where we are, especially if the site scrolls away the URL address bar and displays a fake but perfectly realistic URL as part of the content (trivial for hackers to do). And anyway, research shows that habit trumps attention every time.
Perhaps most ironic of all, however, is the advice to “download protective applications” (assuming the FBI means apps here). Apps aren’t allowed to peek beyond their sandbox and scan other apps, let alone probe the operating system to monitor modifications. Apps make a very inadequate anti-virus system.
Users Don’t Get It – But Hackers Do
But the fact remains that users are unaware of the mobile malware problem, complacent about it, or simply reluctant to take action. Mobile malware is a bit like a traffic accident. Until it happens to us – or we hear a vivid story out there of “it happened to…” – the threat feels very abstract and remote.
It’s only afterward that we wish we had done things differently. Maybe that’s why a whopping 96 percent of all mobile devices have no security software installed: It just hasn’t happened to enough people yet. For several years, the most common comment I heard when warning of the mobile malware danger was: “It can’t happen.” Today, the response is only slightly different: “It can’t happen if I have an iPhone.” Wrong. All phones can be infected, no matter what operating system they run.
iPhones may be more secure, but at the end of the day, malware criminals are just like other businesspeople: Market size dictates where they focus their efforts. (Think about how much more malware there is on Windows than Mac machines.) Hence the current focus on Android devices: 52 percent of all smartphones run Android, and only 34 percent run iOS.
Developers also prefer the ease of open platforms. Software developers – even criminal ones – like to reuse code and competence. But iPhones aren’t invulnerable. Just because Android remains the most targeted operating system doesn’t mean that iOS malware is harder to write.
Anybody can also upload an app to the Android marketplace, which explains the prevalence of trojans on Android. Trojans are effective because they don’t use technical vulnerabilities to install themselves on our systems: They use us to install them (e.g., by posing as a game). We react much faster to things on our phones than on our computers, because our mobile phones are always with us. This makes a world of difference for malware that requires a user action to install and spread. And mobile malware propagates much faster than traditional malware, because its targets are always connected to a network.
But hackers especially love mobile devices because the payout is built right into them. I don’t mean in the NFC/mobile payments sense, but in the basic sense of sending a text message to a premium SMS number or calling a toll number — thus paying the vendor behind that number. This is how malware writers monetize and profit from the devices they’ve taken over. It’s how the notorious “FakeInst” family of malware works.
Finally, hackers love mobile malware because it represents a huge opportunity: The number of smartphones overtook the number of PCs some time ago … and it’s accelerating every day.
When It Comes to Malware, ‘Mobile’ Changes Everything
We can’t think of a smartphone as just a computer that fits in one’s pocket, because then we assume that approaches for addressing traditional malware can simply be applied to mobile malware. This is a common misconception: Even major anti-virus companies suffer from it, as evidenced by their product offerings.
Because mobile phones aren’t just small computers when it comes to defending against malware: They’re small computers with small batteries, and important updates on them can take weeks. These seemingly minor differences are exactly what makes mobile malware more difficult to address than malware on computers.
On traditional computers, anti-virus software can be automatically updated once new malware strains get noticed. The most common type of update is to add new “signatures,” a series of ones and zeroes unique to a particular piece of software or malware. The anti-virus system compares each piece of software on a device with the list of signatures to identify unwanted software.
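In essence, signature matching is a byte-pattern lookup. Here’s a minimal sketch in Python; the signature names and hex patterns are invented for illustration, and real engines use far larger databases and smarter matching:

```python
# Hypothetical signature database: byte patterns assumed unique to
# known malware families. (These patterns are made up for this sketch.)
SIGNATURES = {
    "FakeInst.A": bytes.fromhex("deadbeef4f70656e"),
    "Loofzon.B": bytes.fromhex("cafebabe4c6f6f66"),
}

def scan(app_bytes: bytes) -> list[str]:
    """Return the names of any known signatures found in the app binary."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern in app_bytes]

# A benign app triggers no matches; one containing a known pattern does.
clean = b"just an ordinary application"
infected = b"header" + bytes.fromhex("deadbeef4f70656e") + b"payload"
print(scan(clean))     # []
print(scan(infected))  # ['FakeInst.A']
```

The weakness is exactly what the next paragraph describes: change even one byte of the pattern and the lookup fails.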
Mobile phones aren’t just small computers when it comes to defending against malware.
Unfortunately, malware writers check if their code matches any such signatures by running popular anti-virus software, continually making modifications until their code is no longer detected, and only then releasing it. And since it’s not as rapid or straightforward to perform updates on mobile phones, we’re left vulnerable. Carriers are unwilling to perform Firmware Over The Air (FOTA) patches because of the costs as well as risk of updates bricking their customers’ phones.
The other common anti-virus approach isn’t very effective for mobile devices, either. In this approach, anti-virus software monitors computer software as it executes, looking for signs of bad behavior. Because it’s robust to minor changes of code, the “behavioral detection” approach makes it more difficult for malware writers to make simple recompilations that allow malware to slip below the radar. But on smartphones, this approach doesn’t work well.
Smartphones can’t monitor everything going on as computers can, because that requires a lot of computational resources … which devours battery life.
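To see why the monitoring itself is the expensive part, consider what a behavioral rule looks like once the events have been collected. This toy sketch flags a process whose observed actions cross simple thresholds; the event names and limits are invented, and the hard (and battery-hungry) work on a real phone is intercepting those events at all:

```python
# Invented behavioral rules: thresholds on per-process event counts.
# Real products use far richer models; this only shows the rule side.
THRESHOLDS = {"sms_sent": 5, "contacts_read": 100, "outbound_connections": 50}

def is_suspicious(events: dict[str, int]) -> bool:
    """Flag a process whose event counts exceed any threshold."""
    return any(events.get(action, 0) > limit
               for action, limit in THRESHOLDS.items())

print(is_suspicious({"sms_sent": 2}))                       # False
print(is_suspicious({"sms_sent": 40, "contacts_read": 3}))  # True
```

The check itself is cheap; continuously observing every app to produce those event counts is what drains the battery.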
We Need New Models for Dealing with Mobile Malware
So what does work? These are some of the approaches security researchers have come up with.
Monitor traffic on the network.
Carriers and ISPs can detect when smartphones make connections to “known bad” locations, as Lookout does. This works when a phone is infected by malware that connects to a command-and-control location from which the malware writer coordinates the attack. Similarly, any infected device that starts making an abnormal number of connections to quickly spread the infection can be detected simply based on its anomalous behavior. Network-based traffic analysis doesn’t require updating and doesn’t consume battery resources, and malware writers can’t easily test their code against it before release. (But beware: malware writers can avoid detection by dynamically changing the command-and-control locations, obfuscating attachments, and spreading over Wi-Fi and Bluetooth connections.)
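Both checks described above can be sketched in a few lines. In this illustration, the blocklist hosts and the connection-rate threshold are invented, and a real carrier-side system would work over live flow records rather than a list of tuples:

```python
from collections import Counter

# Hypothetical blocklist of known command-and-control hosts.
KNOWN_BAD = {"c2.example-malware.net", "update.fakeinst.example"}
# Invented anomaly threshold: distinct connections per device per minute.
MAX_CONNECTIONS_PER_MINUTE = 30

def flag_devices(connections: list[tuple[str, str]]) -> set[str]:
    """connections: (device_id, destination_host) pairs from one
    minute of network traffic. Returns the devices to flag."""
    flagged = set()
    # Check 1: any connection to a known command-and-control host.
    for device, host in connections:
        if host in KNOWN_BAD:
            flagged.add(device)
    # Check 2: an abnormal number of connections (worm-like spreading).
    per_device = Counter(device for device, _ in connections)
    for device, count in per_device.items():
        if count > MAX_CONNECTIONS_PER_MINUTE:
            flagged.add(device)
    return flagged
```

For example, a device that contacts `c2.example-malware.net` once gets flagged by the first check, while a device that opens 31 connections in a minute gets flagged by the second, even if every destination looks benign.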
Bake control into the handsets.
Another alternative is to increase control on handsets over what code can be run. This can be done using special-purpose hardware, such as Intel’s TXT initiative or ARM’s TrustZone technology. While this approach doesn’t prevent infection per se, it can be used to isolate sensitive routines so that malware cannot modify them. Since each such routine must be certified (though it’s not bulletproof), the attack surface is shrunk considerably.
Detect malware through device physics.
Still another alternative is to use “software-based attestation techniques.” These techniques determine whether a given device is infected or not by running very short (but very computationally intensive) tasks on the target device and determining how long the computation takes. This approach relies on understanding the physical limitations of target devices: How fast are their processors? How much RAM do they have? How many cores? And therefore: How long should a given process take to execute if there’s no other process running? Knowing this and knowing what the slowdown would be if there were any active malware is how these techniques detect infections. It doesn’t matter what kind of malware it is, which is wonderful news to anybody worrying about zero-day attacks.
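The core idea can be sketched as a timed checksum. This is only an illustration of the principle: real attestation functions are carefully designed so malware cannot shortcut them or hide without causing a measurable slowdown, and the verifier would run remotely with a calibrated time budget for the exact device model:

```python
import time

def memory_checksum(memory: bytearray, rounds: int = 3) -> int:
    """A deliberately compute-heavy pass over memory whose runtime on a
    clean device is predictable. (Real attestation functions are built
    so they can't be shortcut; this toy version only shows the shape.)"""
    h = 0
    for _ in range(rounds):
        for b in memory:
            h = (h * 31 + b) & 0xFFFFFFFF
    return h

def attest(memory: bytearray, expected_hash: int,
           time_budget_s: float) -> bool:
    """Pass only if the checksum is correct AND finished within budget.
    Active malware would either corrupt the result or, by hiding and
    interfering with the computation, slow it past the budget."""
    start = time.perf_counter()
    result = memory_checksum(memory)
    elapsed = time.perf_counter() - start
    return result == expected_hash and elapsed <= time_budget_s
```

A verifier that knows the device’s hardware computes `expected_hash` and the time budget itself; a wrong checksum, or a correct one that arrives too slowly, both count as failure.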
If the infected-or-not determination is made by approved external entities – such as one’s bank or employer – they can verify devices are safe before letting users log in.
This approach is ideal because control becomes aligned with liability … and the end user can relax. Which is just as it should be.