Four reasons we are losing the fight against Malware

It’s one step forward, two steps back in the fight against Malware. Every time it seems like we are making progress, it becomes apparent that we keep attacking the tip of the iceberg while below the surface Malware is thriving. So let’s take a look at four key reasons I think we are losing the fight.

The first one may surprise people, but it is our over-reliance on (completely inadequate) automation for discovering new Malware. Let’s take the recently uncovered Flame (aka Flamer) malware. Once Flame was uncovered, evidence slowly emerged that it had been around for about five years before being discovered. How do we know this? Well, once Anti-Malware (AM) vendors (and other security organizations, which I’ll lump under AM in this article) knew what to look for, they went back and searched through previously submitted samples (of potential Malware) and found they had samples of Flame in their collections dating back five years! The automated tools that AM vendors used to evaluate the samples when they were submitted had failed to flag them as potentially being Malware. Only when someone with a support contract forced their AM vendor to investigate suspicious behavior was Flame (and Stuxnet before it) discovered.

To clarify, let’s explore how the process of finding new Malware works. A user is going about their business and finds some executable file or URL they think is suspicious. They flag it as suspicious using their AM software, or a link in their browser, or by going to an AM vendor’s website. So what happens with this report? It is fed into an automated analysis system that looks to see if it can find any hints that the software is Malware (or the website malicious). If it finds something, the report is assigned to a Security Analyst who manually figures out what is going on. If they decide it is Malware, they write a signature to detect and block it, and that signature is pushed out to the AM software. If the automated tools find nothing? Then the sample is saved away and nothing else happens. Customers reported Flame, but the automated tools never suspected it was Malware and no further action was taken.
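
To make that flow concrete, here is a minimal sketch of the decision logic as I understand it. The names, verdicts, and queues are hypothetical stand-ins, not any vendor’s actual pipeline.

```python
from enum import Enum

class Verdict(Enum):
    MALWARE = "malware"
    SUSPICIOUS = "suspicious"
    CLEAN = "clean"

def triage_submission(sample, automated_scan, analyst_queue, archive):
    """Hypothetical sketch of the submission workflow described above."""
    verdict = automated_scan(sample)        # automated static/dynamic analysis
    if verdict in (Verdict.MALWARE, Verdict.SUSPICIOUS):
        analyst_queue.append(sample)        # a human investigates and, if warranted,
                                            # writes a signature that gets pushed out
    else:
        archive.append(sample)              # "nothing found": saved away, no further
                                            # action (this is where Flame sat for years)
    return verdict
```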

One could attribute the failure of automated tools to uncover Flame and Stuxnet to the expertise of the Malware authors. Quite simply, these (apparently state-sponsored) Malware authors have so much knowledge of how the automated detection tools work that they can craft Malware that won’t be detected. But what about the average Malware author? They might not be able to craft Malware that can go five years undetected, but they can craft things that will go days or weeks. They may not have as much knowledge of how the automated analysis tools work, but they can run their Malware through every AM engine and publicly available analysis tool out there (in a very automated way, btw) and make sure those tools won’t immediately detect the new Malware.
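
To illustrate that pre-release testing loop (a purely hypothetical sketch; the scanner callables stand in for whatever engines and public analysis tools the author has access to), the attacker’s check amounts to little more than:

```python
def ready_to_release(sample_path, scanners):
    """Hypothetical sketch of the attacker's pre-release check described above.

    `scanners` is a list of (name, scan_fn) pairs, each scan_fn standing in
    for an AM engine or public analysis tool that returns True when it flags
    the file at sample_path.
    """
    detections = [name for name, scan_fn in scanners if scan_fn(sample_path)]
    if detections:
        return False, detections   # something flagged it: repack/obfuscate, try again
    return True, []                # nothing complains today, so ship it
```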

I’ve gone through and found samples that one or more AM products said were Malware and submitted them for analysis to vendors who didn’t detect them. Those vendors’ automated systems then analyzed the samples and reported back that they were safe. False positives or false negatives? Based on the “smell” of the samples I’d say they were false negatives. So these pieces of Malware live on until a customer calls for technical support, and technical support decides to bring in a security analyst to really look at the problem.

Which brings me to the second reason I think we are losing the fight: inadequate information sharing amongst AM vendors. I already mentioned the situation where one AM product detects a piece of Malware but another doesn’t, and how that situation doesn’t change when I submit the sample. Well, despite numerous information-sharing initiatives between AM vendors, community organizations, academic researchers, etc., I believe the process is exactly the same. If Company A detects a piece of Malware, it submits the sample to its partners. They run it through their automated systems and things go exactly as I described above. Company B gives no more apparent priority (e.g., manual investigation) to Company A’s report than it would if Jane Soccer-Mom had submitted it. Only if a security analyst at Company A contacts Company B and says “we’ve got a live one here” does Company B really pay attention.
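
A sketch of the intake behavior I’m describing (hypothetical code, not any vendor’s system): the partner-shared sample lands in the same automated queue, at the same priority, as a consumer submission, so nothing guarantees a human ever looks at it.

```python
import heapq

def enqueue_submission(intake_queue, sample_id, source):
    """Hypothetical sketch: partner submissions get no priority boost."""
    priority = 5   # same default whether source is "partner_vendor" or "end_user"
    heapq.heappush(intake_queue, (priority, sample_id, source))

queue = []
enqueue_submission(queue, "sample-001", "end_user")
enqueue_submission(queue, "sample-002", "partner_vendor")
# Both sit at the same priority; only the automated analyzers will see them
# unless an analyst at the submitting company escalates by other means.
```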

I see this even more clearly with attempts to block websites hosting Malware. The simple explanation is this: it doesn’t matter if Company A tells Company B that URL X is bad, Company B won’t block it unless they determine it really is bad. And they can’t manually check every possibly bad URL, so they rely on automated systems. Catch-22.

The only way information sharing amongst AM vendors will ever work is if true trust relationships are established. I’ll give an example. WOT (Web of Trust) automatically treats the appearance of a URL (technically a URI) on a SURBL as an indication the site is potentially dangerous. It’s not definitive, and it may even yield false positives, but it does mean that WOT users gain very rapid protection against links in SPAM. WOT’s community-based system can later refine the rating. Unfortunately, trust relationships like this are currently rare.
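
For readers unfamiliar with how a SURBL check actually works, here is a minimal sketch (my own illustration, not WOT’s code) of looking a domain up against the public multi.surbl.org list. A SURBL is queried by prepending the domain to the list’s zone and doing an ordinary DNS A-record lookup; an answer in 127.0.0.0/8 means “listed”, NXDOMAIN means “not listed”.

```python
import socket

def surbl_listed(domain, zone="multi.surbl.org"):
    """Return (listed, answer) for a registered domain checked against a SURBL."""
    try:
        answer = socket.gethostbyname(f"{domain}.{zone}")
    except socket.gaierror:
        return False, None                    # NXDOMAIN or lookup failure: not listed
    return answer.startswith("127."), answer  # 127.0.0.x encodes which sub-lists matched

# A consumer like WOT can treat a positive answer as "potentially dangerous"
# immediately and let its community ratings refine the verdict later.
print(surbl_listed("example.com"))
```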

Which brings me to reason three: the AM and other security vendors are wimps. Really. Let’s start with the search guys. Both Google and Bing detect malware-hosting websites while they are indexing the web. They will notify the webmaster, if they can, that their site needs to be cleaned. They will give the user a warning if they attempt to access the site. But they won’t actually block access to the website. Why? Well, quite often it is a legitimate website that has been compromised by hackers in order to distribute their malware. “Joe’s Lighting Fixtures and Home Furnishings” might go out of business if you truly blocked access to their site, and so Malware authors can continue to use it to distribute their wares with only minimal interference from the search vendors (and their corresponding URL filters). If we were really serious about stopping malware distribution we’d apologize to Joe but implement “Ghost Protocol” on his website until all Malware was removed. That means we’d basically make it disappear from the Internet. It would not appear in search results, and all URL filters would block access to it. Gone until clean. Overreaction? Ask the people who suffer identity theft or similar harm because they were permitted to keep accessing Joe’s site after it was discovered to be distributing Malware whether blocking access would have been an overreaction.

Wimpiness is also why information sharing is so ineffective. Being on a SURBL doesn’t say a URL is malicious, just that it appears as a link in a lot of SPAM. Being rated poorly on WOT just means that a bunch of users think the site has issues. ClamAV has high false positive rates compared to other AM products, so it’s hard for vendors trying to avoid false positives to trust it when it claims something is Malware. So the industry wimps out. Each vendor has its own processes, mostly the failed automated processes, for deciding if something is truly bad and action should be taken. These processes favor malware authors and distributors. That’s not the intent of the security industry, but it is the result of their practices.

The fourth reason is a direct follow-on to the third (and maybe should even be labeled 3b): the failure to punish domains for not enforcing good behavior on subdomains. Let us take my current favorite whipping boy, Tumblr. Tumblr is a microblogging site. In looking through my SPAM the last few days I’ve noticed that most of it contains links to subdomains of Tumblr (e.g., joe12343.tumblr.com) and a few foreign sites that also host user-generated content. If you check URL filters they will tell you the site is safe. Why? They rate the domain, such as tumblr.com, and not the specific subdomain that is malicious. Tumblr is a legitimate and apparently great service with one problem: they aren’t sufficiently policing the content that users can make available there. So the bad guys have figured out that they can use Tumblr to host malicious content without fear that URL filters will block access to that content.
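
A tiny sketch of why this happens (the reputation data and lookup logic here are made up for illustration): if the filter keys its rating on the registered domain rather than the full hostname, every subdomain inherits the parent’s rating.

```python
from urllib.parse import urlparse

# Hypothetical reputation store, keyed the way many URL filters rate sites:
# by registered domain only.
REPUTATION = {"tumblr.com": "safe"}

def rate_url(url):
    host = urlparse(url).hostname or ""
    registered = ".".join(host.split(".")[-2:])  # naive; real filters use the Public Suffix List
    return REPUTATION.get(registered, "unknown")

# The malicious subdomain gets a free ride on the parent domain's rating:
print(rate_url("http://joe12343.tumblr.com/some-spam-page"))  # -> "safe"
```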

Sites that host user-generated content have to take responsibility for blocking users from using them to host malicious content. And the security industry has to get over its wimpiness and hold these domains accountable. If a major URL filter started blocking access to tumblr.com there would be outrage, but Tumblr would address the problem rather quickly. If I were in charge I’d seriously consider giving Tumblr 30 days to get its act together or face the implementation of “Ghost Protocol”.

There are more reasons we are losing the fight against Malware, but those are the ones that have been bugging me the last few days. I’m looking forward to comments telling me I’m wrong, and that it doesn’t work the way I describe it above. I’d love for Tumblr to tell me how they really are working hard to block malicious content. I wish it were two steps forward, one step back, because that would mean we’ll eventually win. But right now it appears that for every step forward we make, we discover that we’ve lost more ground elsewhere. The Internet can’t go on that way for much longer.


4 Responses to Four reasons we are losing the fight against Malware

  1. I disagree. First of all, it isn’t clear that we’re “losing the fight against Malware”. Every ecosystem will have parasites/defectors; striving for zero defectors is neither viable nor desirable. See Bruce Schneier’s Liars and Outliers for more on this.

    Second, Flame is a bad example. Defending against something like Flame is pretty hopeless given its level of sophistication (e.g. spreading via the Windows Update mechanism).

    Third, users are idiots. A walled garden may be a better solution than anti-malware.

    • halberenson says:

      So I guess we should get rid of the Internet and bring back that AOL thingy? 🙂

      I never suggested we strive for a zero-defect world, as I agree it isn’t possible, nor are the side effects of really trying desirable. I’m calling for taking a more aggressive stance in addressing suspected cases of malware or malware distribution. You can’t just dismiss a malware report because your automated tools don’t find anything; you have to look at the circumstances surrounding the report (e.g., that other products are detecting it as malware) and have a human look. You have to invest in more human analysts, and in more tooling. Even a walled garden only works if you maintain the walls! My Tumblr example is a perfect case of that. By being aggressive about how we treat malware distribution sites I’m trying to make the Internet look a little more like a walled garden. An invisible fence rather than a 12-foot concrete barrier.

      Flame is unusual in many respects, but not really that unusual in propagation pattern. It first has to get into your local network through some means other than Windows Update, such as on a USB key. Once installed on a computer inside your firewall it can then set itself up as a man-in-the-middle and respond (using the forged certificate) to Windows Update requests. Current incarnations of Conficker follow much the same pattern: once inside the firewall it looks for poorly protected shares and uses those, rather than Windows Update, as a propagation mechanism. Conficker has been with us for 3.5 years and by some measures is actually growing! And as I pointed out in the post, Flame might be unusual in living on for five years before detection, but more common malware often lives for weeks or months before discovery. And that is plenty of time for it to do its damage.

      As for users being idiots, that is nonsense, and to the extent security people think that way it is precisely the reason we don’t make enough progress. People are people. Give Albert Einstein the same freaking warning over and over and over again when things are perfectly safe, and the one time he should read the text in the dialog box carefully he won’t; he’ll press OK and install malware on his machine. Let an email from a fake MY BANK into the inbox when it looks identical to real mail I get from MY BANK and I might just click on the link inside it. Especially if I’m in a hurry. More so if I haven’t had my morning coffee. And it doesn’t matter if I realize it’s a Phish before typing in any PII; the drive-by download invisibly compromised my machine just by the act of visiting the site. Create flexible and thus complex ACL schemes and people (even top experts) will make mistakes that leave file shares exposed. Create bizarre conflicting password requirements, painful lockout schemes, etc. instead of one solid identity solution and people will fall back to a lowest common denominator password like the name of their dog. Fail to reject passwords that are easily discovered by dictionary and other simple attacks and it is the Software Engineering and IT communities that are the idiots. Systems have to be engineered to protect the broad base of common users, and to a large extent they are not.

      As for whether we are winning or losing, I can find nothing in the data to suggest we are winning. For a while SPAM got knocked down by botnet takedowns, but it is growing again as spammers move away from the use of botnets. We think we’ve taken down a botnet and then it turns out to still be alive, or another just replaces it. We make a good dent against traditional malware and bootkits/rootkits take over the landscape. Significant data losses continue to be reported each week. Etc. After a decline in infected client systems in the U.S., the rate (as Microsoft measures it by CCM) now seems to be flat, with occasional spikes when a major new piece of malware comes along. In the Netherlands the infection rate has been on a steady rise. In other words, we make lots of technical progress but the data doesn’t show that it is having the desired outcome. And I called out some of the reasons why.

      • To be clear, by walled garden I was referring to iPhone/iPad/Windows Phone, not AOL 😉

        I agree with your values. I’m also very frustrated by the current state of affairs and I’d love to see a much safer ecosystem, but my point was that it is an economic trade-off. Arguably mostly made by idiots (or non-domain experts, if you prefer). Some research even suggests that users make the right trade-off (especially since they very often don’t bear the costs).

        The reason anti-malware sucks is because it can. That’s why I was happy to see Microsoft get into it, but unfortunately even they appear to not be very good (although they at least are better at not disrupting non-malware).

        I certainly wouldn’t suggest we’re winning, but instead that we’re in some sort of equilibrium. Everybody just accepts the risks (or is blissfully unaware of them).

        • halberenson says:

          The app store model certainly has worked the last few years and I expect that will continue, despite a potential swing back to just using HTML5 websites for much of what we do with apps today.
