It’s one step forward, two steps back in the fight against Malware. Every time it seems like we are making progress, it becomes apparent that we keep attacking the tip of the iceberg while below the surface Malware is thriving. So let’s take a look at four key reasons I think we are losing the fight.
The first one may surprise people, but it is our over-reliance on (completely inadequate) automation for discovering new Malware. Let’s take the recently uncovered Flame (aka Flamer) malware. Once Flame was uncovered, evidence slowly emerged that it had been around for about five years before being discovered. How do we know this? Well, once Anti-Malware (AM) vendors (and other security organizations, which I’ll lump under AM in this article) knew what to look for, they went back and searched through previously submitted samples (of potential Malware) and found they had samples of Flame in their collections dating back five years! The automated tools that AM vendors used to evaluate the samples when they were submitted had failed to flag them as potentially being Malware. Only when someone with a support contract forced their AM vendor to investigate suspicious behavior was Flame (and Stuxnet before it) discovered.
To clarify, let’s walk through how the process of finding new Malware works. A user is going about their business and finds some executable file or URL they think is suspicious. They flag it as suspicious using their AM software, or a link in their browser, or by going to an AM vendor’s website. So what happens with this report? It is fed into an automated analysis system that looks to see if it can find any hints that the software is Malware (or the website malicious). If it finds something, the report is assigned to a Security Analyst to manually figure out what is going on. If the analyst decides it is Malware, they write a signature to detect and block it, and that signature is pushed out to the AM software. If the automated tools find nothing? Then the sample is saved away and nothing else happens. Customers reported Flame, but the automated tools never suspected it was Malware and no further action was taken.
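The triage flow above can be sketched as follows. This is a minimal illustration, not any vendor’s actual pipeline, and every name in it is hypothetical; the point is the fatal branch at the end, where an automation miss means the sample is simply shelved.

```python
from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    flagged_by_automation: bool = False
    verdict: str = "unknown"

def automated_scan(sample: Sample) -> bool:
    # Stand-in for the vendor's automated heuristics/sandbox run.
    # Assumption: returns True only when something suspicious is found.
    return sample.flagged_by_automation

def triage(sample: Sample, archive: list) -> str:
    """Sketch of the submission pipeline described above."""
    if automated_scan(sample):
        # Hit: hand off to a human analyst, who may write a signature.
        sample.verdict = "escalated-to-analyst"
    else:
        # No hit: the sample is saved away and nothing else happens --
        # which is exactly what happened to the early Flame submissions.
        archive.append(sample)
        sample.verdict = "archived"
    return sample.verdict
```

Note that nothing in the "no hit" branch ever revisits the archive; a sample that fools the automation once is effectively invisible until a human forces the issue.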
One could attribute the failure of automated tools to uncover Flame and Stuxnet to the expertise of the Malware authors. Quite simply, these (apparently state-sponsored) Malware authors have so much knowledge of how the automated detection tools work that they can craft Malware that won’t be detected. But what about the average Malware author? They might not be able to craft Malware that can go five years undetected, but they can craft things that will go days or weeks. They may not have as much knowledge of how the automated analysis tools work, but they can run their Malware through every AM engine and publicly available analysis tool out there (in a very automated way, btw) and make sure those tools won’t immediately detect the new Malware.
I’ve gone through and found samples that one or more AM products said were Malware and submitted them for analysis to vendors who didn’t detect them. Those vendors’ automated systems then analyzed the samples and reported them back as safe. False positives or false negatives? Based on the “smell” of the samples I’d say they were false negatives. So these pieces of Malware live on until a customer calls for technical support, and technical support decides to bring in a security analyst to really look at the problem.
Which brings me to the second reason I think we are losing the fight: inadequate information sharing amongst AM vendors, despite numerous information-sharing initiatives between AM vendors, community organizations, academic researchers, and others. I already mentioned the situation where one AM product detects a piece of Malware but another doesn’t, and how that situation doesn’t change when I submit the sample. Well, I believe the process is exactly the same with information-sharing initiatives. If Company A detects a piece of Malware, it submits the sample to its partners. They run it through their automated systems and things go exactly as I described above. Company B gives no more priority (e.g., manual investigation) to Company A’s report than if Jane Soccer-Mom had submitted it. Only if a security analyst at Company A contacts Company B and says “we’ve got a live one here” does Company B really pay attention.
I see this even more clearly with attempts to block websites hosting Malware. The simple explanation is this: it doesn’t matter if Company A tells Company B that URL X is bad; Company B won’t block it unless it determines the URL really is bad. And it can’t manually check every possibly bad URL, so it relies on automated systems. Catch-22.
The only way information sharing amongst AM vendors will ever work is if true trust relationships are established. I’ll give an example. WOT (Web of Trust) automatically treats the appearance of a URL (technically URI) on a SURBL as an indication the site is potentially dangerous. It’s not definitive, and it may even yield false positives, but it does mean that WOT users gain very rapid protection against links in SPAM. WOT’s community-based system can later refine the rating. Unfortunately, trust relationships like this are currently rare.
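For the curious, a SURBL check is just a DNS lookup: you prepend the host to the list zone and resolve it. A listed name resolves to an address in 127.0.0.0/8, while an unlisted name returns NXDOMAIN. The sketch below shows the convention in its simplest form; it ignores SURBL’s per-sub-list return-code bitmask and assumes live DNS access, so treat it as illustrative rather than production-ready.

```python
import socket

SURBL_ZONE = "multi.surbl.org"  # SURBL's combined-list DNS zone

def surbl_query_name(host: str) -> str:
    """Build the DNSBL query name: the host prepended to the list zone."""
    return f"{host.rstrip('.')}.{SURBL_ZONE}"

def is_listed(host: str) -> bool:
    """True if the host appears on SURBL (any sub-list).

    Per the DNSBL convention, a listed name resolves to an address in
    127.0.0.0/8; an unlisted name is NXDOMAIN, which gethostbyname
    surfaces as a resolution error.
    """
    try:
        addr = socket.gethostbyname(surbl_query_name(host))
    except socket.gaierror:
        return False
    return addr.startswith("127.")
```

A consumer like WOT can run this check in milliseconds, which is exactly why treating a SURBL hit as a provisional “potentially dangerous” signal buys such rapid protection.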
Which brings me to reason three: the AM and other security vendors are wimps. Really. Let’s start with the search guys. Both Google and Bing detect malware-hosting websites while they are indexing the web. They will notify the webmaster, if they can, that their site needs to be cleaned. They will give the user a warning if they attempt to access the site. But they won’t actually block access to the website. Why? Well, quite often it is a legitimate website that has been compromised by hackers in order to distribute their malware. “Joe’s Lighting Fixtures and Home Furnishings” might go out of business if you truly blocked access to their site, and so Malware authors can continue to use it to distribute their wares with only minimal interference from the search vendors (and their corresponding URL filters). If we were really serious about stopping malware distribution we’d apologize to Joe but implement “Ghost Protocol” on his website until all Malware was removed. That means we’d basically make it disappear from the Internet. It would not appear in search results, and all URL filters would block access to it. Gone until clean. Overreaction? Ask the people who suffered identity theft or similar harm because they could still reach Joe’s site after it was known to be distributing Malware whether blocking access would have been an overreaction.
Wimpiness is also why information sharing is so ineffective. Being on a SURBL doesn’t say a URL is malicious, just that it appears as a link in a lot of SPAM. Being rated poorly on WOT just means that a bunch of users think the site has issues. ClamAV has high false positive rates compared to other AM products, so it’s hard for vendors trying to avoid false positives to trust it when it claims something is Malware. So the industry wimps out. Each vendor has its own processes, mostly the failed automated processes, for deciding if something is truly bad and action should be taken. These processes favor malware authors and distributors. That’s not the intent of the security industry, but it is the result of their practices.
The fourth reason is a direct follow-on to the third (and maybe even should be labeled 3b): the failure to punish domains for not enforcing good behavior on subdomains. Let us take my current favorite whipping boy, Tumblr. Tumblr is a microblogging site. In looking through my SPAM the last few days, I’ve noticed that most of it contains links to Tumblr subdomains (e.g., joe12343.tumblr.com) and to a few foreign sites that also host user-generated content. If you check URL filters, they will tell you the site is safe. Why? They rate the domain, such as tumblr.com, and not the specific subdomain that is malicious. Tumblr is a legitimate and apparently great service with one problem: they aren’t sufficiently policing the content that users can make available there. So the bad guys have figured out that they can use Tumblr to host malicious content without fear that URL filters will block access to that content.
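The gap is easy to see in a toy URL filter. The ratings below are hypothetical and the registered-domain helper is deliberately naive (real filters use the Public Suffix List), but the contrast between domain-level and host-level rating is the whole problem:

```python
# Hypothetical filter databases; the ratings are illustrative only.
DOMAIN_RATINGS = {"tumblr.com": "safe"}
HOST_RATINGS = {"joe12343.tumblr.com": "malicious"}

def registered_domain(host: str) -> str:
    # Naive last-two-labels heuristic; real filters consult the
    # Public Suffix List to find the registrable domain.
    return ".".join(host.split(".")[-2:])

def rate_by_domain(host: str) -> str:
    """How many URL filters behave: every subdomain inherits one rating."""
    return DOMAIN_RATINGS.get(registered_domain(host), "unknown")

def rate_by_host(host: str) -> str:
    """Per-host rating: check the exact hostname before falling back."""
    return HOST_RATINGS.get(host, rate_by_domain(host))
```

With domain-level rating, joe12343.tumblr.com comes back “safe” because tumblr.com is safe; with host-level rating, the malicious blog can be blocked while the rest of Tumblr stays reachable.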
Sites that host user-generated content have to take responsibility for blocking users from using them to host malicious content. And the security industry has to get over its wimpiness and hold these domains accountable. If a major URL filter started blocking access to tumblr.com there would be outrage, but Tumblr would address the problem rather quickly. If I were in charge, I’d seriously consider giving Tumblr 30 days to get its act together or face the implementation of “Ghost Protocol”.
There are more reasons we are losing the fight against Malware, but those are the ones that have been bugging me the last few days. I’m looking forward to comments telling me I’m wrong, and that it doesn’t work the way I describe it above. I’d love for Tumblr to tell me how they really are working hard to block malicious content. I wish it were two steps forward, one step back, because that would mean we’ll eventually win. But right now it appears that for every step forward we make, we discover that we’ve lost more ground elsewhere. The Internet can’t go on that way for much longer.