I’m in the process of setting up a PC that I’m going to use exclusively for playing around with malware. In fact, I’m thinking of starting a pool to capture people’s guesses as to how long it takes someone who intentionally goes seeking malware (i.e., turns off most security features and then starts browsing questionable sites, clicking on links in spam, etc.) to get infected. But this posting isn’t really about that; it’s about a fundamental of the malware world: the impact of letting time go by.
I tried an interesting experiment the other day: I grabbed all the mail in my Junk folder and started checking out the links in the emails that were truly spam. One crucial observation stood out: the links in emails I’d received more than 24 hours earlier were no longer valid. This is one of the more interesting aspects of the malware problem: the shelf life of a rogue website is very short; in general I’d guess it is between a few hours and a day. After that, phishing and malware URL filters have been updated to block the site, the operator’s ISP has shut it down, and anti-malware signatures have often been updated to catch any previously unknown malware the site may be distributing. It makes you wonder how phishing and malware can be such big problems when “the system” seems to spring into action so quickly.
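The experiment above could be approximated with a short script: pull the URLs out of a folder of saved junk messages and probe whether each one still answers at all. The folder name and the single-request probe are my assumptions for illustration, not what I actually ran, and anything like this belongs on an isolated machine.

```python
# Sketch of the "spam link shelf life" experiment: extract URLs from saved
# junk messages and check whether each one still resolves at all.
import re
import urllib.request
import urllib.error
from pathlib import Path

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(text):
    """Return the unique URLs found in a message body, in order of appearance."""
    seen = []
    for url in URL_RE.findall(text):
        if url not in seen:
            seen.append(url)
    return seen

def still_alive(url, timeout=5):
    """True if the URL still answers with any HTTP status at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error status
    except Exception:
        return False  # DNS failure, timeout, refused: likely taken down

if __name__ == "__main__":
    junk = Path("junk_folder")  # hypothetical folder of saved spam messages
    if junk.is_dir():
        for msg in junk.glob("*.eml"):
            for url in extract_urls(msg.read_text(errors="ignore")):
                print(f"{'ALIVE' if still_alive(url) else 'DEAD ':5} {url}")
```

Run against mail batches of different ages, a script like this would make the shelf-life curve visible: links from the last few hours mostly alive, links older than a day mostly dead.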
Unfortunately, the rogue sites move around just as quickly as they are blocked or shut down. Even in its brief life, a site can steal identities via phishing techniques or distribute malware to its visitors’ machines; so even with a lifetime of only a few hours, the rogue site has done its job. Then off goes a new set of nearly identical emails containing a link to the next rogue site they’ve set up, and we are off again.
Another time-related curiosity is how ancient malware continues to roam about and remain a threat. Sometimes it turns out that the malware has been tweaked to avoid detection by existing anti-malware signatures. But since that effectively gives it a new identity, and malware with the older identities is still roaming about, the better conclusion is simply that there are so many machines out in the world that are unpatched, lack proper anti-malware protection, etc. that even ten-plus-year-old malware is still active. We rid the world of smallpox, but Melissa is still infecting Office documents after 12 years and SQL Slammer is still winding its way about the Internet after 7 years. I have a personal connection to SQL Slammer, and I’d really like to see it become a purely historical artifact!
One might think that a major way to fight malware would be to introduce delays into the system, so that rogue websites disappear before anyone can access them. Imagine, for example, that rather than just putting apparent junk mail into a junk folder, mail providers actually delayed its delivery for 24 hours. On the surface that seems like a great fix (ignoring that false positives would get delayed as well, though I think you could scope down which mails were delayed to address that problem), except for one little problem: the way we detect dangerous websites is largely by people reporting them. Delaying emails would therefore delay reporting of the websites they link to. The same is true of other measures, like simply blocking access to brand-new websites. Unless someone goes to the site and says “this looks like a phishing site to me, I think I’ll report it,” or submits a sample of a program they suspect is malware to one of the anti-malware vendors, the site or malware will most likely go undetected. We are caught in a catch-22.
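The delayed-delivery idea can be sketched as a toy simulation: suspect mail sits in a quarantine for a hold period and is re-checked against the (growing) URL blocklist just before release. The 24-hour hold, the class names, and the blocklist itself are my assumptions for illustration, not any provider’s actual mechanism.

```python
# Toy model of delayed junk-mail delivery: messages are held for HOLD_HOURS,
# and links are re-checked against the blocklist at release time, so sites
# reported during the hold never reach the reader.
HOLD_HOURS = 24

class Quarantine:
    def __init__(self):
        self.pending = []          # (arrival_hour, message, urls)
        self.blocklist = set()     # URLs reported as rogue so far

    def receive(self, hour, message, urls):
        self.pending.append((hour, message, urls))

    def report_rogue(self, url):
        self.blocklist.add(url)    # e.g. a user or filter vendor reports the site

    def release(self, now):
        """Deliver messages whose hold has expired and whose links are still clean."""
        delivered, held = [], []
        for arrival, message, urls in self.pending:
            if now - arrival < HOLD_HOURS:
                held.append((arrival, message, urls))
            elif any(u in self.blocklist for u in urls):
                pass                       # dropped: a link was reported during the hold
            else:
                delivered.append(message)
        self.pending = held
        return delivered
```

The catch-22 from the text shows up directly in this model: `report_rogue` only ever gets called because somebody saw the mail and visited the site, so a universal hold would starve the blocklist of the very reports it depends on.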
This time factor may help explain why my parents’ PC, which was largely unprotected until I put free anti-malware on it (my father kept refusing to pay to renew the subscription), was never infected. Not only was my father very limited in his web surfing, but he used Hotmail’s “exclusive” level of spam filtering to keep all but mail from his contacts out of his Inbox. He did check the email in his Junk folder, but infrequently enough that any dangerous link he clicked on was likely already out of commission.
Even though it would make detecting rogue websites more difficult, I do think that additional research into using time delays to defeat them is in order. I guess I need to go off and see if there are any published papers on just that topic.