How long does it take to get infected with Malware? About 5 minutes

With a freshly set up PC and a virtual machine, I went looking for trouble.  I did a search for “medical forum” and up popped a link for medhelp.org.  I went to the site, created an account, and started navigating around.  BOOM, I was redirected to a site faked up to look like an anti-malware scan.  It told me I had malware and asked if I wanted to install anti-malware software.  Of course I said yes.  IE’s SmartScreen, using its new reputation check, warned me that it was not a commonly downloaded file.  But I persisted and went for the Run Anyway option.  Windows 7 asked if I really wanted to give this download administrator privileges, and I said yes.  So off went the installation.

The Rogue AV installation succeeded and took over my PC.  All was lost.  Of course I had multiple opportunities to block this attack, but I let the social engineering work.  Total time from start to finish, less than 5 minutes.

The really good news is that IE9’s SmartScreen worked perfectly; I really had to go out of my way to figure out how to run the Rogue AV’s setup.  Another great reason to run IE9.

Posted in Computer and Internet, Security | Comments Off on How long does it take to get infected with Malware? About 5 minutes

SenderID Followup

I wanted to do a bit of a followup to my earlier SenderID post both to better capture the comments and make another observation.  First I wanted to bring John Scarrow’s comment to the front of the discussion.  John is the Microsoft General Manager responsible for the various Windows/Windows Live safety features so this is an authoritative position from Hotmail on SenderID, SPF, and DKIM:

Microsoft continues to be committed to SenderID as well as DKIM.  We find them to be complementary in many ways and currently support both of them on inbound mail.  SenderID and SPF currently have the largest adoption with our senders and thus we check them first.  SenderID records have the added benefit of improved Phishing protection as we verify that the address shown to the customer matches the sending domain.  If this fails we then check against DKIM.  Checking against DKIM helps us reduce false positives that may arise from email forwarding.  Although adoption of DKIM has been slower due to the complexity of deployment for mailers, we continue to urge senders to adopt both SenderID and DKIM to ensure the highest quality delivery as well as Phishing protection.

Thanks for the clarification John!

As I noted in my previous post, one of the issues I have with current SPF/SenderID implementations is that few senders specify a hard FAIL (“-all”) when a message is found to have come from an IP address other than those specified in the DNS record for the alleged sender.  Many sites seem to specify either SOFTFAIL or NEUTRAL to be returned if the mail didn’t come from their servers.  A SPAM filter would tend to treat SOFTFAIL as a big strike against the message, and if it found anything else suspicious it would then flag the message as SPAM.  NEUTRAL is supposed to mean the same thing as if there were no SPF record at all, but I imagine different SPAM filters treat it quite differently.  Some treat it the same as SOFTFAIL, some treat it as if there were no SPF record, and others give it special significance.  Each has its own built-in experience of how often each kind of response really indicates SPAM.  On the other hand, an SPF PASS is a big vote of confidence.  So big, in fact, that I find most SPAM that makes it through to my Inbox has passed SPF/SenderID checking.  This makes a lot of sense.  You don’t want “your prescription has shipped” email from MedCo to end up in your Junk folder, yet the same mail wording sent from a sender who can’t be verified likely triggers filters to flag it as SPAM.
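To make those qualifier semantics concrete, here is a minimal sketch (the records below are made up, not real published policies) of how a SPAM filter might map the trailing “all” mechanism of an SPF record to a verdict for mail from an IP that matched nothing else in the record:

```python
# Minimal sketch: map the trailing "all" mechanism of an SPF record to the
# verdict a filter would return for mail from an otherwise-unmatched IP.
# The example records are fabricated, not real published policies.

def spf_all_qualifier(record: str) -> str:
    for term in record.split():
        if term.endswith("all"):
            qualifier = term[:-3] or "+"   # a bare "all" means "+all"
            return {
                "-": "FAIL",      # hard fail: unlisted senders are forgeries
                "~": "SOFTFAIL",  # suspicious: a strike against the message
                "?": "NEUTRAL",   # treat as if no SPF record existed
                "+": "PASS",      # explicitly allows anyone (rarely sensible)
            }[qualifier]
    return "NEUTRAL"  # no "all" term at all behaves like NEUTRAL

print(spf_all_qualifier("v=spf1 ip4:192.0.2.0/24 -all"))         # FAIL
print(spf_all_qualifier("v=spf1 include:esp.example.com ~all"))  # SOFTFAIL
```

Only the hard “-all” lets a receiver reject forged mail outright; “~all” merely advises.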

Interestingly both CVS and Walgreens specify SOFTFAIL in their SPF policies.  In a world where the majority of SPAM is either about sex or drugs, why wouldn’t a pharmacy protect its customers by being extremely clear about making it easy to verify if mail sent under their name really came from them?  Well, Walgreens uses DKIM so if a receiver checks DKIM they are in good shape.  But CVS doesn’t.  CVS in fact confuses me, and I’ll get to why in a moment.

As the recent data loss by Epsilon demonstrates, most large companies use Email Service Providers (ESPs) to send bulk emails on their behalf.  So that loyalty program newsletter you get from Marriott Rewards, or customer newsletter from Tivo, doesn’t come from their servers; it comes from Epsilon’s.  Of course they don’t say Epsilon on them; they appear to come from Marriott or Tivo.  And that is where the SOFTFAIL comes in.  Marriott (or Tivo or Walgreens or CVS) has to either include the IP addresses of the servers of anyone allowed to send email on their behalf as part of their SPF records, or specify SOFTFAIL so that mail from one of their ESPs isn’t rejected out of hand as a forgery.  I imagine they all find keeping their DNS records in sync with all the potential senders of email on their behalf unmanageable, and so simply choose not to try to control it.  What I find interesting about CVS is that they DO include the IP addresses of their ESPs’ servers in their SPF record, and yet they still choose SOFTFAIL when there is no match.  Perhaps they have discovered from experience that they can’t keep the SPF record up to date, but they can at least give SPAM filters a hint that mail from certain ESPs is legitimate.
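The mechanics of that IP-list check are simple; the operational side is the hard part.  A sketch of the matching half, with a fabricated record (a real checker would also resolve include:, a:, and mx: mechanisms through DNS, and keeping those lists current is exactly what gets out of sync):

```python
import ipaddress

# Sketch of the IP-matching half of an SPF check. The record is fabricated.
# A real checker also resolves include:, a:, and mx: mechanisms via DNS,
# which this sketch deliberately skips.

def ip_listed_in_spf(record: str, sending_ip: str) -> bool:
    """True if sending_ip falls inside any ip4:/ip6: mechanism in the record."""
    ip = ipaddress.ip_address(sending_ip)
    for term in record.split():
        if term.startswith(("ip4:", "ip6:")):
            network = ipaddress.ip_network(term.split(":", 1)[1], strict=False)
            if ip.version == network.version and ip in network:
                return True
    return False

record = "v=spf1 ip4:192.0.2.0/24 include:esp.example.com ~all"
print(ip_listed_in_spf(record, "192.0.2.55"))   # True
print(ip_listed_in_spf(record, "203.0.113.9"))  # False (falls through to ~all)
```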

It seems to me that big Enterprises should get their act together and either go to including all potential legitimate sending servers of email on their behalf in their SPF record with a FAIL for anyone who isn’t on the list, or use DKIM when they can’t validate the server itself.  But what about small businesses?  That is more difficult and a personal experience demonstrates why.  I recently examined a piece of email from the car service I use for airport transportation that had ended up in my Junk folder.  This car service has a Hotmail email address, but they use a cloud service for handling reservations.  The cloud service sends out a reservation confirmation, which of course is made to appear to come from the car service’s Hotmail address.  Hotmail’s SOFTFAIL policy gave this email a legitimate chance of ending up in my Inbox, while a FAIL policy would have certainly sent it to the Junk folder.  (As it turns out the mail went to my Junk folder anyway because the mail is nothing more than an image of a reservation form, and a piece of mail containing nothing more than an image that is from an unverifiable email address is a red flag for SPAM).  Obviously because the car service doesn’t use its own domain for email none of the existing mail authentication tools (SPF, SenderID, DKIM) can be set up to work properly.  This demonstrates the complexity of the problem, particularly as companies outsource more of their operations to cloud application services.

I know that was a bit of rambling on this topic, but it really leads to the point that the industry still seems to be at the stage of attacking the SPAM problem with lots of fly swatters rather than the thermonuclear device that is called for.  There is no comprehensive solution, and even the solutions that do exist are poorly utilized.  I have to wonder if we’ll have this problem solved by the time we get to the next decade.

Posted in Computer and Internet, Phishing, Privacy, Security | Comments Off on SenderID Followup

Is Skype really worth $8.5B to Microsoft?

$8.5B seems rather expensive to me, but I can understand why Microsoft would want Skype.  And I don’t think it is primarily for the reasons most people are saying.

First of all, recognize that Microsoft already has two competing products.  The first, Lync (formerly Office Communicator/Office Communications Server), is aimed at the enterprise unified communications business.  This is not something either Skype or Google competes in; however, it is an area of significant competition between Microsoft and Cisco.  I don’t see how Skype helps Microsoft in this particular competition.  On the other hand, Microsoft has failed in, and withdrawn, its offering for small businesses: Microsoft Response Point.  And while Office 365 Lync may be suitable for larger small businesses, it isn’t clear it will address the very small (<=5 user) market.  Skype could be a gap filler here, particularly since I see so many small business people using it.

Microsoft also has Windows Live Messenger (WLM), which besides being an instant messenger client allows for both voice and video calling to other WLM users.  Indeed when analysts say something like Microsoft bought Skype to have something that can compete with Apple’s Facetime I really scratch my head and wonder if they have a clue about Microsoft’s assets.  Microsoft already has everything it needs to offer a Facetime competitor, and I’m sure one was well along as part of the “Mango” update independent of the Skype purchase.

So what is Microsoft missing?  Well for one thing it has no way to connect Windows Live Messenger with the telephone (either VOIP or POTS) network.  So Windows Live Messenger only allows calling between WLM clients.  Skype provides the interconnect with the phone network that Microsoft has been missing.  This is one of the two reasons that I believe Microsoft purchased Skype, but I’m not sure it is the primary reason!

The primary reason I believe Microsoft purchased Skype was to increase its share of Users & Identities on the Internet.  Until recently Microsoft held the lead on Internet Identities via the Live IDs used for its Hotmail, Messenger, XBox Live and other properties.  But Facebook, not to mention Google and ongoing competition from Yahoo, has erased Microsoft’s lead.  While most now consider Facebook the leader in Internet Identities, the numbers seem to indicate that Skype is the real leader with about 660 million users.  Add that to Microsoft’s existing Live IDs and we are talking about Microsoft properties having somewhere in the neighborhood of 1 billion users.  That’s twice Facebook and leaves both Google and Yahoo in the dust.  Of course there are duplicates across all these properties, so the number of unique users/IDs is somewhat of a guess.  But if we assume similar duplication rates across the various user bases then it’s all a wash.  And given that Windows Live Messenger and Skype are direct competitors, it is likely that Microsoft gains between 250 and 500 million new unique users/IDs.  That is huge.

Microsoft considers owning a user’s Internet identity crucial for all kinds of reasons.  There is the obvious: an identity represents a user (or at least a user in a specific context), and someone using one of your services is someone not using (or using less of) a competitive service.  It is an opportunity to get them to use more of your services.  And, in this age of rejecting third-party tracking, it is another way to aggregate first-party information and use it for personalization.  That should improve the aggregate user experience, and from a business perspective it makes it more attractive for an advertiser (e.g., Macy’s, Ford, Sony, etc.) to place ads with Microsoft because Microsoft can deliver them to a larger number of users in a more targeted way.

How Microsoft wraps this all up with its existing product lines is yet to be seen, but the opportunities for doing so are huge.  Skype is obviously the best known consumer VOIP/Video calling brand and one that will have great appeal for Windows Phone 7 as well as on future Windows tablets.  The Skype distribution that goes to 660 million users, mostly on PCs, becomes another tool for distributing Bing toolbars and search defaults.   And how will Microsoft integrate and/or rationalize its Skype, Windows Live Messenger, and Lync offerings?  I would expect Skype and WLM to be merged, while Lync is perhaps rebranded but otherwise remains a separate (and hopefully integrated) offering.  As they say, the devil is in the details and we’ll have to wait and see if Microsoft can maximize the opportunity while minimizing the defection of Skype (and Messenger) users to other services.

There is of course one additional way to view this acquisition.  Microsoft is determined to be the leader in Natural User Interface, an area it pioneered in Microsoft Research but was slow to adopt in its products.  After letting Apple get an early lead with capacitive multi-touch (since Microsoft Surface addresses such a narrow segment) and Nintendo with motion-sensing controllers, Microsoft took the lead on motion-based control with Kinect and has been working to achieve competitive parity on the multi-touch UI front.  Microsoft has always seen itself as a leader in all aspects of Voice, although that has yet to really pay off for them.  The addition of Skype may be a further signal that they really intend to own voice-based control and communications user interfaces.

Is Skype worth the $8.5B Microsoft is paying?  Potentially.  Unfortunately Microsoft hasn’t yet shown the ability to monetize its Internet properties the way that Google has.  The Skype acquisition could turn out to be extremely successful from a strategic standpoint, but unless Microsoft figures out the “monetization thing” shareholders are going to be disappointed.  And that applies to Bing, MSN, and Windows Live every bit as much as it applies to Skype.

Posted in Computer and Internet, Microsoft | Comments Off on Is Skype really worth $8.5B to Microsoft?

Bing going Mobile

Yesterday we had the announcement that Bing would become the default Search and Mapping solution for the next generation of RIM’s Blackberry.  Over the previous year we’ve seen Microsoft introduce excellent Bing apps for the iPhone and more recently the iPad, work with Apple to add Bing as a search option for Safari on iOS devices, ship Windows Phone 7 with Bing search well-integrated, and even ship a Bing app for Android.  In fact Microsoft has been so aggressive in taking Bing to mobile platforms that otherwise compete with Microsoft offerings that it has Windows Phone enthusiasts scratching their heads over why they aren’t getting the new Bing features first.  I think Microsoft’s strategy is pretty obvious, and I will try to explain it.

Historically Microsoft has tried to have each of its businesses focus primarily on supporting one another.  So, for example, Microsoft SQL Server only runs on Microsoft Windows, and most of Microsoft’s marketing and sales activities focus on a “better together” theme.  If Microsoft wanted purely to go for SQL Server market share they would port it to Linux and thus dramatically improve their chances of being declared the corporate standard database solution in large enterprises.  Instead Oracle is usually the corporate standard, with SQL Server permitted for use should a project choose Windows Server as its OS.  Not exactly a ringing corporate endorsement.  On the client side most products are Windows-only, although occasionally something makes its way to the Mac (e.g., Office).  But have you ever seen an end-user product from Microsoft released on Linux?  No, the Bing app for Android (which is a Linux variant) is the first.

Every now and then Microsoft decides that winning in a particular business is more important than reinforcing the Windows franchise.  Gaming with the XBox was the last big one.  Search with Bing is the current one.  While most teams at Microsoft would be under severe pressure to support the Windows franchise, the Bing team pretty much has a single marching order:  grow search share.  And you can’t grow search share by focusing purely on users of Internet Explorer on Windows and Windows Phone.

Finding ways to compete with an entrenched competitor like Google is not easy.  Charging directly at them, that is playing their own game, is at best a strategy that slowly garners you small improvements in market share and at worst is suicidal (meaning you lose so much money you have to abandon the attempt).  The preferred way of competing against such a competitor is to change the game.  Putting the shoe on the other foot, the main reason that Linux has had so little success against Windows in the PC business is that all the desktop Linux attempts did was try to copy Windows.  The only people who wanted a Windows copy-cat were the technologists who could appreciate the differences under the covers.  Typical end-users just knew that their apps didn’t run there, none of their friends and relatives could help when they had a problem, documents sent to them from users of Microsoft Word (and other apps) wouldn’t print properly in Open Office, etc.  Ignoring the PC market Microsoft dominates and changing the game, as Apple did with iOS and Google is doing with Android, has made it possible for both of these companies to sidestep Microsoft’s PC OS dominance.  Indeed, if Microsoft doesn’t react quickly enough to what is going on with iOS and Android they will find Windows marginalized and turned into a legacy business.  So how should Bing compete with Google Search?

Bing is apparently pursuing a three-pronged approach to competing with Google Search.  Prong one is the head-on charge with PC browser-based searching: improving search results, doing deals for search defaults and toolbar installation, etc.  This is just the price-of-entry stuff.  Prong two is trying to change the browser-based searching experience.  This one is important to Microsoft’s overall success, but doesn’t seem to me to be enough of a game changer to ensure Microsoft’s success.  Still, they have to be more than just a clone of Google Search if they want to chip away at Google’s market share on the PC.  The third and most important prong is to aggressively stake out Bing’s position in a rapidly growing, and so far unclaimed, part of the search market: Mobile.

Search on mobile devices is still an open area with no real “owner”.  Google was so worried about this that they purchased a company making a mobile operating system, Android, on the theory that mobile phones running a Google OS would also use Google Search.  Given Android’s growth that isn’t a bad theory, however Google’s open source licensing approach doesn’t lock Android phones into having Google as their default Search.  Depending on market and carrier some Android phones ship with Bing as the default search.   Still, Google’s strategy is mostly working.  More importantly, Microsoft sees the opportunity as so important that they are clearly out to capture the majority of search share on all non-Android smartphones (and other mobile devices)!

If we could see into Bing’s finances I bet we would find that investment in the PC browser search part of the battle is flat or even declining.  Most likely engineering investment is flat while marketing spend is declining.  For example, would Microsoft prefer to pay $500M to a PC manufacturer to make Bing the search default and install the Bing Toolbar, or $500M to a major mobile carrier or device maker to make Bing the default search on its smartphones?  Microsoft can’t afford every deal, and I’m betting that most of the spend going forward is on the mobile side of the house.  On the engineering side, doing a great job with Nokia and RIM, writing great search apps for iOS and Android, and of course doing a fantastic job with Windows Phone 7, takes a lot of money, and that is where the spending growth must be going.  But the payoff could be huge.  Imagine a world in which Microsoft and Google were neck-and-neck in the Search market.  Because Microsoft has diversified revenue streams and Google is almost completely dependent on Search Advertising, that would throw the game to Microsoft.  That’s exactly what I think Microsoft is trying to do with its Mobile activities.

The big wildcard I see right now is what is Apple’s long-term play?  Right now they are more aligned with Google than Microsoft on the search front, something that started prior to Android hitting the market.  If Android continues to gain market share I can’t imagine Apple will want to continue to add to Google’s success in the mobile space.  So does Apple more fully jump on board with Bing, or do they introduce their own Search Engine?  Apple is certainly arrogant enough to think they could successfully jump into the search engine fray.  On the other hand, given their surprisingly low R&D spend rate one has to wonder if they really are up to absorbing the expense of the search wars.  Recognizing that Microsoft and Apple have learned how to work well together (e.g., with Microsoft Office) despite being mortal enemies it wouldn’t be at all unreasonable for Apple to do a “deep” deal around Bing as their primary search engine.

Should Apple eventually jump into Bing with both feet, I also wonder if that says something about Bing’s future as part of Microsoft.  Right now Bing is benefiting mightily from Microsoft’s deep pockets and wouldn’t be able to succeed as a standalone entity.  But at some point it has to be able to sink or swim on its own as a business (even inside Microsoft).  With Nokia, RIM, Yahoo, Microsoft, and (speculatively) Apple involved, one wonders if it wouldn’t make sense to create a Bing Inc. owned by all five firms.  If that is what Apple demanded as its price for betting on Bing, would Microsoft go along?  Or, like the other companies, would Apple be happier knowing it was reaping the benefits while Microsoft spent all the money?  It’s fun to speculate on these things, but I don’t really expect anything about Bing’s organizational structure to change over the next couple of years.

Posted in Computer and Internet, Google, Microsoft, Mobile, Search | 3 Comments

Is Hotmail’s “Trusted Sender” feature more than just show?

One year ago Microsoft introduced a feature in Hotmail that marked mail from a set of “trusted senders”, primarily banks, with a Trusted Sender icon.  The idea was that if mail from your bank was marked with the icon you could trust it, and if mail claiming to be from your bank wasn’t marked then you should be suspicious that it was a phishing attempt.  Sadly, this Hotmail feature does not seem to be working.  For the financial institutions that I use that are considered trusted senders by Microsoft, less than 25% of the legitimate emails I receive are marked with the Trusted Sender icon.  As a result Trusted Sender has absolutely no meaning.  For this feature to really work would require that all email from a trusted sender was marked appropriately so that any mail that wasn’t would obviously be a phishing attempt.

I’ve seen nothing new from Microsoft about the Hotmail Trusted Sender program, and it clearly isn’t working after a year of existence.  So I have to conclude this feature is all about show and not about actually helping users distinguish between legitimate and phishing email.

Posted in Computer and Internet, Microsoft, Phishing, Privacy | 2 Comments

Has Microsoft put the nail in the coffin on SenderID?

A funny thing happened on my way to some research on Anti-SPAM technologies, I found that Microsoft is wiping SenderID information from its website.  When you try going to the web page for the SenderID Framework you are now redirected to a page for its Forefront anti-SPAM products.  In fact, most links to SenderID information at microsoft.com now redirect to the Forefront page.  The only exception seems to be some support articles and blog entries.  And if you want some final proof, it looks to me like Microsoft is no longer publishing SenderID (i.e., SPF2.0) records in the DNS entry for microsoft.com!  SenderID is well and truly dead.

SenderID was Microsoft’s proposal for eliminating spoofing in email, a major factor in phishing attempts.  It was intended to make sure that mail claiming to be from someone at example.com really came from example.com.  Unfortunately for Microsoft, the rest of the industry pretty much rejected the SenderID proposal.  A couple of alternate proposals have wider non-Microsoft support: Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM).  But neither is being adopted aggressively enough to make a dent in the phishing problem.

SenderID seems to have died because the SPF and SenderID proponents couldn’t get together on merging the proposals.  I can’t really tell if this was because of some Microsoft intransigence or the prevailing “anybody but Microsoft” (aka ABM) winds of the time, and I don’t want to revisit the battle.  The result was essentially a move by most of the Internet intelligentsia to support SPF while Microsoft futilely evangelized SenderID (supporting it both in Hotmail and in Microsoft Exchange).  While SenderID adoption outside of Microsoft itself was very limited, SPF has achieved some measure of success.  It appears that about 45% of larger domains, and 10% of all domains, publish SPF records.  But sadly less than half of those do so in a way that will truly block forged email.  Most users of SPF are merely using it to advise SPAM filters that an email might be suspicious rather than being clear that the mail did or did not originate from them.

Fidelity and Schwab?  They specify the all-important “-all” qualifier in their SPF records, meaning that unless the mail comes from one of their specifically identified servers it should be rejected as a forgery.  Gmail, Hotmail, or Yahoo Mail?  Nope, sorry.  They have SPF records, but instead of specifying that mail coming from a server not under their control should be rejected they throw up their hands and basically say “we don’t know if that email really came from us”.  Wimps.  What about Ebay and Paypal, two of the most frequently spoofed domains?  Nope, no -all.  That doesn’t seem to make sense, does it?

Yahoo, Google, Ebay, and Paypal are putting more weight on the use of DKIM to address the forgery problem.  Data on DKIM adoption is rather hard to come by, but if I boil down the bits and pieces I can find around the web then DKIM adoption amongst large ecommerce sites is high, on large sites in general it is about half that of SPF, and amongst smaller sites it may be non-existent.  If we look outside the U.S., at Japan, then SPF adoption is approaching 42% while DKIM adoption is at 1/2%.   The data makes it hard to tell how much success DKIM is really having.
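For the curious, a DKIM-Signature header is just a list of tag=value pairs; the signing domain (d=) and selector (s=) tell a receiver where in DNS to find the public key (at &lt;selector&gt;._domainkey.&lt;domain&gt;).  A sketch with a fabricated header (the real signature and body-hash values would be produced by the sender’s private key):

```python
# Sketch: pull apart a DKIM-Signature header into its tag=value pairs.
# The header below is fabricated; real b= and bh= values come from the
# sending server's private key and the hashed message body.

def parse_dkim_tags(header_value: str) -> dict:
    tags = {}
    for part in header_value.split(";"):
        if "=" in part:
            name, _, value = part.partition("=")  # split on the FIRST "="
            tags[name.strip()] = value.strip()
    return tags

example = ("v=1; a=rsa-sha256; d=example.com; s=mail2011; "
           "h=from:to:subject; bh=ZmFrZWhhc2g=; b=ZmFrZXNpZw==")
tags = parse_dkim_tags(example)
print(tags["d"], tags["s"])  # example.com mail2011
```

A verifier would fetch the key from mail2011._domainkey.example.com and check the b= signature over the headers listed in h=.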

The biggest problem with both SPF and DKIM isn’t their technical merits or flaws; it is that there isn’t clear industry acceptance and agreement to deploy one or both.  In the case of DKIM I certainly consider Google and Yahoo’s support, along with major sites like Ebay and Paypal, a big plus.  But while Gmail may reject forged mail claiming to be from Paypal, Microsoft’s Hotmail may not, because Microsoft has just started experimenting with the use of DKIM for mail that fails an SPF check.  And since Paypal doesn’t specify -all (which would cause the SPF check for forged mail to fail), Hotmail users will have to hope that Microsoft’s filters find enough else wrong with the mail to classify it as a phishing attempt.  Microsoft Exchange and Microsoft Forefront products also don’t have built-in DKIM support, so most corporate email servers will accept the forged Paypal email.  I also doubt most ISPs support DKIM; for example, from what I can find I don’t believe that Comcast or Verizon support it in their email systems.

With SenderID dead it certainly opens up the possibility that Microsoft will join the DKIM camp, or at least evangelize SPF more aggressively.  When Microsoft fully supports DKIM in Hotmail, and adds native support in Exchange Server, Exchange Online (née Office 365), and its Forefront anti-SPAM products, that should push the effort over the hump towards universality.  In fact, wouldn’t it be grand if Microsoft, Yahoo, and Google declared a date on which they would all drop any non-DKIM-signed email on the floor (meaning they wouldn’t even put it in your junk folder; they wouldn’t deliver it at all)?  Failing that, an agreement that all email must be authenticated with either DKIM or SPF (including -all), with at least all the major email providers supporting both, would do.  Now that would put a real dent in phishing.

Posted in Computer and Internet, Microsoft, Security | 9 Comments

Old Operating Systems never die, and it seems they don’t fully fade away either

In Time and Malware I mention that very old malware is still with us.  That made me curious about just how many PCs are still running pre-XP versions of Microsoft Windows (e.g., Windows 98).  Data from a number of sources that track web browser usage suggest that pre-XP versions of Windows still represent 5.5% of the operating system market!  And because these surveys use web browser user agent data, they probably understate the actual market share for two reasons.  The first is that older PCs seem less likely to be connected to the Internet than newer PCs.  The second is that there are a lot of people running old versions of operating systems on servers, which of course would never be used to browse the web (even if connected to the Internet).
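As a rough illustration of how such surveys work: every browser hit carries a user agent string, and the tracker maps the OS tokens in it to a version name.  A minimal sketch (real trackers handle far more cases, but note how pre-XP systems announce themselves with entirely different tokens):

```python
# Rough sketch of the user-agent classification behind OS-share surveys.
# Only the common token conventions are covered; real trackers do much more.

WINDOWS_NT_VERSIONS = {
    "Windows NT 6.1": "Windows 7",
    "Windows NT 6.0": "Windows Vista",
    "Windows NT 5.1": "Windows XP",
    "Windows NT 5.0": "Windows 2000",
}

def classify_windows(user_agent: str) -> str:
    for token, name in WINDOWS_NT_VERSIONS.items():
        if token in user_agent:
            return name
    # Pre-XP systems use non-NT tokens, which is how they show up in surveys.
    for legacy in ("Windows 98", "Windows 95", "Win 9x", "Windows CE"):
        if legacy in user_agent:
            return legacy
    return "unknown"

print(classify_windows("Mozilla/4.0 (compatible; MSIE 6.0; Windows 98)"))  # Windows 98
```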

The server case is one I’m fairly familiar with.  Years ago there was a very large SQL Server customer I talked to who was still running SQL Server 4.2 on IBM OS/2 and couldn’t figure out how to move to something new.  The servers were spread over several hundred nationally dispersed branch locations, and upgrading them would take two years of having IT people physically go out to each branch.  It was such a large task that they couldn’t even contemplate doing it.  For all I know they are still out there running the OS/2 servers.  And they weren’t alone.  Many IT shops had the attitude that when you have a mature system that requires little attention you just leave it alone until the application it runs reaches obsolescence.  And so there are likely quite a number of Windows NT and Windows 2000 servers still kept running, just waiting until someone says “hey, we replaced the FOO application and no longer need that server”.

On the PC side there are people who quite literally still find operating systems like Windows 98 adequate for their needs, because their needs haven’t changed.  The simple spreadsheets they wrote back in the 90s still work, and that old version of Quicken still knows how to maintain a checkbook.  There are also all those people who don’t have access to broadband, which removes much of the incentive to use a modern OS.  Some people might have peripherals that aren’t supported by the NT-based operating systems, and so they keep a PC around running Win 9x for the occasions when they need to use that old scanner, for example.  There are also special purpose systems, for example a restaurant order management system, that are still in use.  In fact, I still occasionally see systems like this that are running character cell (i.e., MS-DOS) based applications!  You don’t upgrade the OS until you replace the entire system.  And then I know people who took an old PC and turned it into a print server or other special purpose system.  Why upgrade, especially since it would cost you money and bring no apparent benefit?  Security, after all, is something most people ignore until it bites them on the backside.

And so, with millions of machines out there running operating systems that are still susceptible to attack by ancient malware, ancient malware lives on.

Posted in Computer and Internet, Microsoft, Security, Windows | 1 Comment

This one doesn’t count

Right after posting “Time and Malware” I was looking at my blog site on Google’s Webmaster Tools and decided to check out a site that was linking to mine. BLAM! That web site tried to install the Adclicker Trojan on my computer. Microsoft Security Essentials caught it, so my machine is fine. And I reported the site to both Google and Microsoft.

Posted in Computer and Internet | Tagged , | Comments Off on This one doesn’t count

Time and Malware

I’m in the process of setting up a PC that I’m going to use exclusively for playing around with malware.  In fact, I’m thinking of starting a pool to capture people’s guesses as to how long it takes someone who intentionally goes seeking malware (i.e., turns off most security features and then starts browsing questionable sites, clicking on links in spam, etc.) to get infected.  But this posting is not really about that; it is about a fundamental of the malware world: the impact of letting time go by.

I tried an interesting experiment the other day: I grabbed all the mail in my Junk folder and started checking out the links in emails that were truly spam.  One observation was crucial: the links in emails that I’d received over 24 hours earlier were no longer valid.  This is one of the more interesting aspects of the malware problem: the shelf life of a rogue website is very short.  In general I’d guess that it is between a few hours and a day.  After that, phishing and malware URL filters have been updated to block the site, their ISP has shut them down, and anti-malware signatures have often been updated to catch any previously unknown malware the site may be distributing.  It kind of makes you wonder how phishing and malware can be such big problems when “the system” seems to spring into action so quickly.
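To make the shelf-life arithmetic concrete, here is a minimal sketch (in Python, using invented timestamps and an assumed 24-hour window, since the real lifetime varies) of how you might flag which junk-folder links are even worth testing:

```python
# A toy version of the Junk-folder experiment: given when each mail
# arrived, estimate which of its links might still be live, using the
# rough 24-hour shelf life observed above.  The timestamps and the
# window are illustrative assumptions, not measured data.

from datetime import datetime, timedelta

SHELF_LIFE = timedelta(hours=24)   # assumed lifetime of a rogue site

def likely_still_live(received: datetime, now: datetime,
                      shelf_life: timedelta = SHELF_LIFE) -> bool:
    """True if the mail is recent enough that its links may still work."""
    return now - received < shelf_life

now = datetime(2011, 1, 10, 12, 0)
junk = {
    "pharmacy spam": datetime(2011, 1, 10, 9, 0),   # 3 hours old
    "bank phish":    datetime(2011, 1, 8, 12, 0),   # 2 days old
}
for subject, received in junk.items():
    state = "may still be live" if likely_still_live(received, now) else "probably dead"
    print(f"{subject}: {state}")
```

Under these assumptions only the three-hour-old mail is worth checking; the two-day-old link has almost certainly been blocked or taken down.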

Unfortunately the rogue sites move around just as quickly as they are blocked or shut down.  Even in its brief life a site can steal identities via phishing techniques or distribute malware to a visitor’s machine.  So even with a lifetime of only a few hours the rogue site has done its job.  Then off goes a new set of nearly identical emails containing the link to the next rogue site they’ve set up, and we are off again.

Another time curiosity is how ancient malware continues to roam about and remain a threat.  Sometimes it turns out that the malware has been tweaked to avoid detection by existing anti-malware signatures.  But since tweaking gives it a new identity, and malware with the older identities is still roaming about, a better conclusion is simply that there are so many machines out in the world that are unpatched, lacking proper anti-malware, etc. that even ten-plus-year-old malware is still active.  We rid the world of smallpox, but Melissa is still infecting Office documents after 12 years and SQL Slammer is still winding its way about the Internet after 7 years.  I have a personal connection to SQL Slammer and I’d really like to see it become a purely historical artifact!

One might think that a major way we could fight malware would be to introduce delays into the system so that rogue websites disappear before anyone can access them.  Imagine, for example, that rather than just putting apparent junk mail into a junk folder, mail providers actually delayed delivery for 24 hours.  On the surface that would seem like a great fix (ignoring that false positives would get delayed as well, though I think you could scope down which mails were delayed to address that problem), except for one little problem: the way we detect dangerous websites is largely by people reporting them.  So delaying emails would delay reporting of the website.  The same is true of other measures, like simply blocking access to brand new websites.  Unless someone goes there and says “this looks like a phishing site to me, I think I’ll report it”, or submits a sample of a program they think is malware to one of the anti-malware vendors, the site or malware will most likely go undetected.  So we are caught in a catch-22.
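The catch-22 can be sketched with a toy timeline.  Assuming, purely for illustration, that the first recipient opens a mail a couple of hours after delivery and that filters block a reported site a few hours after the first report, delaying delivery just shifts the whole timeline; the window during which the site is both live and being seen does not shrink:

```python
# A toy timeline for the delayed-delivery idea.  If user reports are the
# only detection signal, the clock on reporting doesn't start until the
# mail is actually delivered, so a quarantine delay moves the exposure
# window later without making it smaller.  All lags are made-up values.

READ_LAG_H = 2.0          # hours from delivery until someone opens the mail
REPORT_TO_BLOCK_H = 4.0   # hours from first report until filters block the site

def first_exposure(delivery_delay_h: float) -> float:
    """Hours after send when the first recipient can see the link."""
    return delivery_delay_h + READ_LAG_H

def exposure_window(delivery_delay_h: float) -> float:
    """Hours between first exposure and the site being blocked."""
    blocked_at = first_exposure(delivery_delay_h) + REPORT_TO_BLOCK_H
    return blocked_at - first_exposure(delivery_delay_h)

print(exposure_window(0))    # no delay          -> 4.0
print(exposure_window(24))   # 24-hour quarantine -> 4.0
```

A real system has detection signals beyond user reports, of course; the sketch only captures the case described above, where reporting is the primary signal, which is exactly why the delay alone doesn’t solve the problem.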

This time factor may help explain why my parents’ PC was never infected, even though it was largely unprotected until I put free anti-malware on it (my father kept refusing to pay to renew the subscription).  Not only was my father very limited in his web surfing, but he used Hotmail’s “exclusive” level of spam filtering to keep all but mail from his contacts out of his Inbox.  He did check email in his Junk folder, but infrequently enough that any dangerous link he clicked on was likely already out of commission.

Even though it would make detecting rogue websites more difficult, I do think that additional research into using time delays to defeat them is in order.  I guess I need to go off and see if there are any published papers on just that topic.

Posted in Computer and Internet, Security | Tagged , , , | 1 Comment

Why haven’t we seen a Microsoft Windows Application Store?

If you talk to Microsoft veterans you can find many who either proposed, or actually worked on, Windows application stores dating back to the 1990s.  None of these actually shipped.  There are now plenty of leaks out on the web about a “Windows 8” App Store, and while some are saying this is better late than never, I think the timing says more about the evolution of the technology world over the last 15 years than about App Stores specifically.  This post is going to explore four reasons (well, three plus a bucket of other points) why Microsoft hasn’t/wouldn’t/couldn’t/didn’t want to/etc. introduce a Windows App Store to date.  And by the end perhaps we’ll see why past impediments no longer factor into the discussion and now is the time.

Channel Conflict

In this age of instant software gratification via download over high-speed networks it can be difficult to recall that in the 1990s nearly all software was purchased through bricks-and-mortar retailers, resellers, and distributors.  Later in the 1990s and into the early 2000s we saw a shift towards purchasing from on-line retailers (e.g., Buy.com, Amazon.com, etc.), something that is still with us today.  In fact, a great deal of software is still sold through these channels.  For example, businesses purchasing software through Microsoft’s Volume Licensing programs are still sent to resellers just as they were in the 90s.  So every time Microsoft has looked at the idea of an app store it has had to face the question of what an app store would do to its existing channel partners.  Back in the 90s this classic channel conflict was at the forefront of the discussion.  Microsoft’s channel was considered one of its great strengths compared to traditional competitors, and at least a temporary bulwark against emerging competitors.  As a result, although many at Microsoft were aware that someday it would be easier to purchase and download software over the Internet, premature disintermediation of its channel partners was something to be avoided at all costs.

Of course the channel has played an ever shrinking role in delivering software to consumers and in the case of the smartphone and media device (e.g., iPad) markets consumers expect to purchase and install software directly through the device manufacturer.  Further disintermediation of the channel for Windows applications targeted at consumers would have little impact on overall Microsoft sales.   Microsoft could actually bring its channel into the new world by allowing them to have “private marketplaces” in a Windows 8 App Store.  For example, Microsoft prefers to use resellers and distributors for the Volume Licensing programs because it reduces the number of organizations that it must have a direct business (e.g., Accounts Receivable) relationship with.  I imagine Microsoft would want to continue this business model.

The bottom line here is that channel conflict issues almost certainly dominated Microsoft’s internal thinking about a Windows App Store from the mid-1990s through the early 2000s.  And no matter what the technical people wanted to do, the bottom line impact dictated the answer.

Internet Maturity

The odds are that everyone reading this is doing so on a broadband Internet connection with a speed of 1 Mb/s or greater.  An unlucky few might be on slower versions of broadband (e.g., 640 Kb/s), many of you have home connection speeds of 6-12 Mb/s, and a rare few are probably in the 25-50 Mb/s range.  Software downloads are reasonable on any of these connections.  But state of the art in 1994 was dial-up at 28.8 Kb/s.  At the end of the 90s most people were still on dial-up, which maxed out at 56 Kb/s.  U.S. broadband penetration at the end of 1999 was only about 5% (more people were still using 14.4 Kb/s modems than broadband).  These numbers are really important when you start thinking about when Microsoft could have delivered a Windows App Store.  Windows 95/98/98SE shipped into environments where the bulk of users either had no Internet connection or had a dial-up connection of 14.4 – 28.8 Kb/s.  Windows ME and Windows 2000 were designed when speeds were at 28.8 – 56 Kb/s.  Windows XP was designed when typical speeds were 56 Kb/s or less, and by the time it shipped in 2001 only about 15% of U.S. users had broadband connections.  So from a practical standpoint an OS-supported Windows App Store made no sense through the launch of Windows XP, an operating system that dominated the market until just the last few months.
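Some back-of-the-envelope arithmetic shows why.  Assuming a 50 MB application (a made-up but plausible size for a full desktop application), download times at the connection speeds above work out like this:

```python
# Rough download times for a 50 MB application at the link speeds
# discussed above.  Protocol overhead, line quality, and dropped
# connections are ignored, so real dial-up times would be even worse.

def download_hours(size_mb: float, speed_kbps: float) -> float:
    """Hours to transfer size_mb (binary) megabytes over a speed_kbps link."""
    bits = size_mb * 8 * 1024 * 1024       # MB -> bits
    seconds = bits / (speed_kbps * 1000)   # Kb/s -> bits per second
    return seconds / 3600

for label, kbps in [("28.8 Kb/s dial-up", 28.8),
                    ("56 Kb/s dial-up", 56),
                    ("1 Mb/s broadband", 1000),
                    ("6 Mb/s broadband", 6000)]:
    print(f"{label}: {download_hours(50, kbps):.2f} hours")
```

At roughly four hours per download on 28.8 Kb/s dial-up (and over two hours even at 56 Kb/s), a store selling full-sized applications simply wasn’t viable; at about a minute on 6 Mb/s broadband, it is.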

The first version of Windows where one could make a real argument that an App Store made sense from a network infrastructure perspective is Windows Vista.  Its initial design took place when broadband penetration was low but on a steady growth path.  By the time Vista shipped broadband penetration had crossed the 50% mark in the U.S. and no one was making decisions that catered to dial-up users.  The Longhorn project that led to Vista was also supposed to produce a major leap forward for Windows, and an App Store certainly would have been appropriate to that goal.  Unfortunately the Longhorn project was a disaster that distracted Microsoft for 5 years and required Windows 7 to focus more on righting Vista’s wrongs than on exciting new things like an App Store.  I don’t recall if anyone talked about Longhorn including an App Store, but in any case we didn’t get one.  At least we can say that network infrastructure, while a key reason for not doing an App Store in earlier versions of Windows, should not have played a role in keeping an App Store out of Windows Vista or Windows 7.

By the way, although I used U.S. numbers in making this case I don’t believe worldwide penetration rates would change the story much.  There are some countries that were ahead of the U.S. in broadband penetration and many that were behind it.  So relative to decision making at Microsoft I think using U.S. numbers as a proxy in this case is just fine.

App Model

If the previous two items seem like ancient history that no longer applies, this item is the real killer even for today’s Windows 7 world.  Take a look at App Stores as popularized by Apple and by Windows Phone and you see they offer far more than simply a way to purchase and download applications for your device.  They enforce application models that let you put applications on your device that you can largely trust to be free of malware, that won’t reduce your device’s reliability, that won’t interfere with other applications that you load on your device, and that can be cleanly and completely removed from your device.  That is not something that is possible on Microsoft Windows today.

Let’s take just one area and drill in slightly: installation/setup.  Windows installations “spray” settings all over the Registry, put files in multiple directories, execute code during the installation process, overwrite shared files (DLLs) that other applications may rely on, etc.  Even software that does a clean initial installation will often do these nasty things (and potentially require administrator privileges) on the first run of the application.  Not only can a poorly coded setup or application make your system unstable, but removing it is very difficult.  In fact, many Windows applications leave various “artifacts” sprinkled around your system when you remove the app (meaning you can never completely remove them).  Compare this to something like Windows Phone 7, where all setup is done declaratively (no code execution permitted), the location of an application’s artifacts is well known, the application setup doesn’t touch shared files, etc.  This makes it very difficult (if not impossible) for a Windows Phone application to mess up your device or any other application.  It makes application removal easy and fast.  Most importantly, it makes it possible for the Windows Phone Marketplace to verify that an application meets minimum quality requirements before accepting it, and thus allows the Marketplace to offer a reasonable level of assurance to the end user that they can go ahead and install the application on their device.  This is not possible with any existing version of Windows.
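The verification point is worth illustrating.  The sketch below is a toy invented for this post (it is not any real Microsoft manifest format): because a declarative manifest is just data, a store can check policy, such as no install-time code execution and no writes to shared locations, without ever running the application:

```python
# A toy illustration of why declarative setup is checkable: the
# "manifest" is plain data, so a store can reject policy violations
# statically.  The manifest shape, the install root, and the shared
# directory names are all invented for this example.

ALLOWED_ROOT = "Apps"                       # hypothetical per-app install root
SHARED_DIRS = {"System32", "CommonFiles"}   # shared locations a store would forbid

def validate_manifest(manifest: dict) -> list:
    """Return a list of policy violations; an empty list means installable."""
    problems = []
    if manifest.get("custom_actions"):      # executable install steps
        problems.append("manifest requests code execution during setup")
    for path in manifest.get("files", []):
        top = path.split("/")[0]
        if top in SHARED_DIRS:
            problems.append(f"touches shared location: {path}")
        elif top != ALLOWED_ROOT:
            problems.append(f"writes outside the app directory: {path}")
    return problems

print(validate_manifest({"files": ["Apps/MyApp/app.exe"]}))          # clean: []
print(validate_manifest({"files": ["System32/msvcrt.dll"],
                         "custom_actions": ["run setup.exe"]}))      # two violations
```

A store reviewing traditional executable installers has no comparable option; it would have to run arbitrary code just to find out what the installer does, which is exactly why verification of classic Windows setups is so hard.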

Keep in mind that Microsoft has been struggling to correct this problem for some time now.  “DLL Hell”, for example, is a long-understood problem that Microsoft has partially addressed over time.  Microsoft made further progress when it introduced .NET.  However, there hasn’t been anything as prescriptive as the application models that Apple’s iOS or Microsoft’s Windows Phone 7 have.  The application world on Windows effectively remains the wild, wild west.

The bottom line here is that for Microsoft to have a viable Windows App Store it first needs to create a new Windows Application Model!  Fortunately the rumor mill is starting to fill up with news of just such an application model, called AppX.  Is this new model real?  Is it part of Windows 8?  Is it really called AppX?  These and many other questions will most likely be answered over the next few months.

One thing I’m going to speculate on is that the only applications available in a Windows App Store will be ones that conform to AppX (or to Windows Phone 7’s app model, if that isn’t a strict subset of AppX).  Windows 8 itself will continue to support the full range of applications that work on Windows 7; however, only apps that conform to a new app model will be allowed in the store.  Another option would be to have a way to purchase legacy apps in the Windows App Store, but for the App Store not to manage their installation or removal.  Personally I’d rather Microsoft focus on the new app model and let the legacy world take care of itself.

Other

There are actually many other factors that undoubtedly have added to the delay in having a Windows App Store.  For example:

  • Anti-Trust Concerns.  Had Microsoft been the company to pioneer an App Store back during the period of anti-trust hysteria, how would software competitors, as well as the channel, have reacted?  No doubt software competitors would have attacked the idea as a way to keep their software off the Windows platform.  For example, a cross-platform software company might have considered it too difficult to meet the requirements to be in the app store and then felt that users would favor things in the store (such as Microsoft’s own apps) over the competitive ones.  Or they might just have wanted to use the Windows App Store as another way to beat Microsoft when it was down.  And almost certainly parts of the distribution channel would have screamed to the U.S. DoJ or EU about Microsoft cutting them out of the picture.  At this point all the precedents for an OS Application Store have been set by others, particularly Apple.  Neither software competitors nor the channel are likely to get hysterical (though they might raise concerns about specific things that Microsoft will then address to keep the peace).
  • Liability Concerns.  What liabilities might you be subject to if software in your app store causes problems?  Again I think that Apple has paved the way, and while lawyers still might raise concerns they won’t be enough to prevent there from being a Windows App Store.  I will point out that something like AppX would help appease the lawyers, since the ability for an app to cause problems would be greatly reduced.

Conclusion

The only real remaining inhibitor to Microsoft introducing a Windows App Store is having a good App Model, and it seems that we might be about to get one in Windows 8.  Yes, Microsoft is a bit late to the party, but largely because the Longhorn/Vista effort probably delayed work on a clean Windows App Model by several years.  Not only does Windows 8 mark the perfect time to finally introduce a new App Model and App Store, it also likely marks a turning point for Microsoft.  Windows 8 will be the point where Microsoft finally makes up for the lost 5 years.

Posted in Computer and Internet, Microsoft, Windows | Tagged , , , | 1 Comment