Microsoft, Linux, and Patents

One of my readers asked if I’d speculate on why Microsoft won’t disclose the 237 of its patents it thinks Linux violates.  Given news this week of Amdocs agreeing to license Microsoft’s Linux related patents (via a patent cross-licensing deal) I figured the timing was good.  I apologize in advance for how long this is going to be!

Microsoft invests billions of dollars a year in R&D and then discovers that others have stolen their work and are profiting from it.  Historically, for any company, this is a pretty straightforward thing: you go to the "thief" (who may in fact not even realize they've stolen your work) and ask them to stop using it and/or pay you for the right to use it.  And if they refuse, you take them to court.  As companies got bigger and more complex they realized that it's actually rather hard not to unintentionally infringe on patents held by others, and so they started doing patent cross-licensing deals.  In those deals the value of the two patent portfolios is assessed and the company with the weaker portfolio trades licenses plus cash for the licenses to the stronger portfolio.  This drives companies to build larger patent portfolios so that less cash is involved in any transaction.  The larger portfolios also serve as a defensive measure, since nearly anyone who has a product offering of their own, and comes after you for patent infringement, will find that they are probably infringing on one of your patents too.  So one either ignores the possible mutual infringements or takes the cross-licensing route.  In general this reduced patent infringement fights between product companies to an acceptable level.

To put this in perspective let me present a straightforward example.  With all the aggressive evolution in database systems over the last thirty years, and the sometimes vicious competition amongst the database players, why is it that IBM, Oracle, and Microsoft aren't constantly in court battling over patents?  It is inconceivable to me that each of the three's database products doesn't infringe on at least one patent of the other two.  I have no idea what the state of cross-licensing between these three is, but I can tell you that in earlier times, when cross-licensing was uncommon, no one focused on going after competitors for patent infringement.  Oh, there could have been cases.  For example, at both DEC and then Microsoft we patented inventions that gave us a significant boost in the TPC benchmarks.  If a competitor had turned around and implemented that same technique in order to beat us in the benchmarks then we would have cried foul.  But otherwise the reality was that patent infringements rarely had a material effect on the business.  And I think that's where today's rather litigious environment differs.

Over the last 15 years we’ve seen two forces emerge that upset the delicate “mutually assured destruction” patent peace between companies.  One is the rise in the “Patent Troll” industry.  The other is the emergence of the anti-Intellectual Property community.  Let’s deal with patent trolls first.

While Patent Troll is a pejorative thrown around a lot these days, usually by a thief trying to make the actual owner of the intellectual property look like the bad guy, the proper definition is someone who acquires patents solely for the purpose of collecting licensing revenues.  They have no product of their own, so the “mutually assured destruction” cross-licensing model doesn’t apply.  They can be ethical or unethical.  The ones that are ethical acquire stronger patents and seek reasonable (relative to the actual value of the patent to the infringer) royalties.  The unethical ones acquire and assert (often) BS patents in the hope that potential infringers will find it cheaper to pay royalties than the legal fees (and bad PR, etc.) of fighting.  Either way, Patent Trolls are widely despised.  Having personally fought one of the unethical ones, I tend to despise them as well.  Yet as an individual inventor if I patented something how would I make money off my invention?  The easiest way would probably be to give Nathan Myhrvold at Intellectual Ventures (an attempt at an ethical approach to patent trolling) a call and see if they wanted to acquire the patent.  So Patent Trolls can serve a role in encouraging innovation, even though it often seems like they are just getting in the way.

The other force at work is the anti-Intellectual Property community led by the Free Software Foundation (FSF) and its GNU General Public License (GPL).  The problem here is actually two-fold.  There are those who really disdain patents and other forms of IP protection, and those who support the idea of Intellectual Property for their own work but are willing to look the other way on theft of others' property as long as it provides them sufficient economic gain.  The question here may be: who is worse?  I would argue that it is the latter crowd.  I may consider the FSF and its followers wrong (and I'm being kind), but at least they are intellectually honest.  Those who turn a blind eye to the theft of another's Intellectual Property while benefiting from it economically are pond scum.

A brief editorial break here.  Many of you are going to realize you are in the category I have just labeled “pond scum”.  Yes, I know that is harsh.  And that (most of) you aren’t really pond scum.  And you certainly don’t consider yourself pond scum.  But I am trying to make a point here, and it’s a point best made with a large hammer.

So we get to what I believe is the crux of Microsoft's patent strategy with regard to Linux and its Android variant: go after the pond scum.  Now this is certainly more than an emotional reaction to what is happening; it is actually rather hard to figure out how to effectively go after the original authors of the infringing code.  Let's say you take Linus Torvalds and the Linux Foundation to court for an infringement and you win.  And then what?  The code is already out in the wild.  Winning against Linus wouldn't stop Red Hat from continuing to ship the existing code.  Let's say you went after Red Hat and they removed the offending code.  CentOS is already out in the wild and it wouldn't change the availability of that.  In fact you'd have to go after all the Linux distributions, and then any site hosting a distribution, and even if you won it wouldn't stop Amdocs et al from shipping servers with an offending Linux kernel.  So you'd have to go after the Amdocs of the world.  Or the end-users, who are "receiving stolen property" and will continue to use the infringing product for their own economic gain.  In other words, you'd have to go after the pond scum even after you'd spent years fighting those who stole the property in the first place.  And since the pond scum actually have more to lose economically, why not simply go after them in the first place?  So that's what Microsoft has been doing, with both Android and (more quietly until now) Linux itself.

Why go after anyone?  Well, for a true Patent Troll the reason would be to make money off licensing the IP.  For Microsoft that is just a secondary (and largely immaterial) benefit.  The primary reason is that infringing on Microsoft's IP, and then giving it away for free, is no different from engaging directly in software piracy by stealing copies of Windows.  There is nothing wrong with free software itself, but if you steal Microsoft's intellectual property and give it away for free that is unfair competition.  And so Microsoft (and Apple, just to make it clear this isn't purely a Microsoft viewpoint) is fighting back against that unfair competition by demanding compensation for use of its intellectual property.

So now we get to the crux of the question: why won't Microsoft simply publicly disclose the 237 patents it believes Linux violates?  Well, a better question might be "why should it?"  What is the economic or strategic benefit for Microsoft to do so?  OK, people don't like FUD (Fear, Uncertainty, and Doubt), and the lack of public disclosure seems like a FUD-based strategy.  But Microsoft getting into a very public battle over which of those 237 patents Linux really infringes on won't help Microsoft one little bit.  It will keep the press, bloggers, industry analysts, other pundits, lawyers, developers, and a gaggle of others busy for months or years as they argue over which of the 237 claimed infringements are real, which can be removed without causing any serious problems, etc.  But it won't do a damn bit of good for Microsoft.  In fact, I can only see downside for them.  Not releasing the list gets them labeled for using FUD.  Releasing the list would get them labeled even more strongly for FUD.

Let's think about those alleged 237 patents.  How much due diligence has Microsoft done in identifying the infringements?  Obviously it did enough work to believe that Linux does infringe on those patents, but has it done all the work necessary to prove (in a court of law, for example) that Linux infringes?  Almost certainly not.  What it probably knows is that if there are 237 apparent infringements, there must be anywhere from 25 to 100 very likely infringements.  And it probably has done the work on less than a dozen patents to have definitive infringements that it would likely win on in a court challenge.  Why?  Because that's all it needs.  Releasing the entire list would simply cause most people to focus on the perhaps 200 that either aren't actual infringements, or that have already been removed from the code, or that could easily be removed from the code.  The press (et al) would just bury the fact that there are still dozens of real infringements.  Why not do the work to validate all 237 and just identify the real patents at issue?  Cost, primarily.  I would guess it costs them in the neighborhood of $500K on average to create a definitive case for each patent, so validating all 237 runs to roughly $118 million.  Where is the cost/benefit analysis that shows it is worth spending $118 million so they can be comfortable disclosing the actual list of infringing patents?  I can't see one.

Second, how does releasing the list of 237 patents (or any list at all) help Microsoft’s primary cause?  What Microsoft wants is for a user of Linux to make the Linux vs Windows decision on some basis other than “Linux is Free”.  Releasing the list makes Linux no less free.

Third, think back to any negotiating class you may have taken (or book you may have read).  Information is power, and in any negotiation the party with more information has the advantage.  Releasing the list of patents Microsoft thinks Linux is infringing on reduces Microsoft’s negotiating power.  Why would it want to do that?

The only way to stop the FUD label is to actually seek to enforce the patent rights.  The only way to rationally do that is to go after those who are getting the most economic benefit from infringing on the patents.  With Android that was the phone manufacturers.  With Linux that is large IT shops and those who incorporate Linux into their own offerings such as Amdocs.

A secondary benefit of Microsoft's approach is that the companies it targets are more likely to settle than fight because Android/Linux is not their primary business.  Samsung makes money selling phones and tablets; just how much money does it want to spend defending Android itself?  And Microsoft (unlike Apple) isn't trying to stop them from shipping Android-based phones, it just wants to be compensated for its IP.  Amdocs makes CRM systems for the telecommunications industry.  It uses Linux, but Linux is not its business.  Fighting over Linux makes no sense for them (and as a long-term Amdocs shareholder I support management for avoiding a major legal distraction).  And a large IT shop?  Does it really want to spend a dime on lawyers defending its use of Linux?  Will its CEO, or particularly its Board of Directors, really want the company distracted from its actual business (like making and selling shoes or cars, or operating restaurants) over an IP issue?  No.

These guys don't all fold because Microsoft calls up one day and demands money.  I'm sure Microsoft has to prove to them that there is actual infringement going on.  That's where the small number of patents that Microsoft has done the full due diligence on come in.  A targeted company's lawyers don't have to believe Microsoft's claim that Linux infringes on 237 patents, they just have to believe it truly infringes on ONE.  If so, they'll advise their client to settle.  I'm sure Microsoft takes the fact that all 237 probably aren't provable claims into account in pricing discussions, so that it ends up feeling it is being fairly compensated and the infringing party doesn't feel it makes more economic sense to fight.

So that's it: Microsoft is simply following a strategy that makes large users of Android and Linux stop thinking of them as free.  And by focusing on large users it leaves completely alone the small users, enthusiasts, researchers, etc. who would be most hurt by a direct attack on Linux itself.  I think it's the right thing to do, though I don't think it will change the market dynamics much at this point.

And in pursuing its strategy I can see no benefit to Microsoft of disclosing the 237 patents it believes Linux infringes on.  So that, quite simply, is the reason they haven’t done so.


And now a word from our sponsor

With summer well upon us I'm finding it hard to summon up the motivation to make blog postings.  For example, the Office 2013 preview announcement coincided with a two-week driving trip.  And now I'm suffering from new-puppy-induced sleep deprivation.  So frankly, you can expect my blog to be rather quiet for a while.

I’m sure one morning I’ll wake up, read some news, and suddenly be motivated to skip the sunshine and sit in front of a computer for two to three hours writing about it.  It almost happened this morning as I briefly felt like adding to Ed Bott’s comments on Office 365.  But sorry, I have other things to do.  Like chasing the puppy.  Because you definitely don’t want this to happen.

So keep checking back.  Eventually I’ll try to write something worth reading.


Skydrive vs. NAS vs. Windows Home Server vs. Windows’ Shares

No, this isn't a blog entry comparing these options, though someone would do the world a service by writing a good one.  This is a blog entry about the strategic realities of these four mechanisms for sharing and backing up data, with both backward and forward looks.

When Microsoft's David Vaskevitch was running around in 1993-94 making his pitch for why the company should get serious about the Enterprise business, he'd use an argument that started with data centers and worked down to the number of small businesses and branch offices.  There were many millions of places where at least one employee went to work every day, and each of them justified a Server.  And then, more quietly since this was nominally an Enterprise discussion, he'd add on the opportunity for servers in the home.   It took 14 years for Microsoft to address the server-in-the-home part of the vision with Windows Home Server (WHS), and the question that haunted it almost from the beginning was whether it was "too little, too late" or "too much, too expensive" for the mainstream consumer.

There is no question that WHS, now officially being discontinued with the release of Windows Server 2012, was a cool product.  It introduced a cool, though technically flawed, means of storage management called Drive Extender (dropped in the most recent version).  It has a backup/restore that really works for recovering from hard drive failure.  It provides network shared folders.  It allows for remote access.  It serves up multimedia via DLNA.  And it is a platform for third-party apps.

Sadly, adoption of WHS was light from the beginning.  How much of this was because of the product itself and how much was because of the marketing strategy is hard to tell.  WHS was a relatively small project within Microsoft, not one that would have the marketing budget to buy its way into shelf-space at Best Buy, run TV ads touting its virtues, or license a theme song from the Rolling Stones.  It counted on a bootstrap marketing strategy wherein a very modest initial effort led to enough adoption to justify a vastly increased marketing effort.  "If you build it they will come", but they didn't.  At least not in enough numbers to justify increased effort.  The second major release, Windows Home Server 2011, was greeted not with renewed effort but with Microsoft's major OEM partner HP dropping out of the business.  No other OEM stepped in to fill the void.  Even system builders stayed away (except, oddly enough, in the UK).  It was no longer a consumer product, but rather the low end of Microsoft's confusing line of (business) server offerings.  With the advent of Windows Server 2012 the most-used functionality of WHS, as well as that of the more successful (but still not a barn burner) Windows Small Business Server, is folded into Windows Server Essentials.  And any notion of a consumer product, or of having servers priced for consumer use, has been abandoned.

From the moment that WHS launched it was under assault from a range of alternatives, none of which were as good as the package but all of which were more than adequate for their individual tasks.  The Windows client had long had facilities for Shared Folders, and when Windows 7 added HomeGroups they really became easily managed by consumers.  Why set up yet another system in your house when you could easily designate one Windows PC to hold your shares?  You could even use Windows Backup, as mediocre as it is, to back up your various PCs to those shares.  Or why not buy a NAS (Network Attached Storage) drive (today a 2TB WD My Book Live runs around $150) and have a backup and share solution for perhaps 1/3 the cost of an equivalent WHS server?  And then came the problem of Disaster Recovery: what happens to your data when a wildfire sweeps down into your city and burns your house to the ground, as just happened in Colorado Springs?  You really want your key data backed up in the Cloud.  Remote Access?  Well, there are numerous solutions, including several from Microsoft.

The cloud, of course, is the real game changer.  A few years ago David Vaskevitch and I were discussing the Cloud as an alternative to WHS (and WHS-like alternatives).  David, a very serious photographer, was thinking about how much data he generates on a typical trip, and also how much data a family with a newborn might generate from video.  Looking at broadband speeds he could quickly demonstrate the impracticality of backing up all this data to the cloud.  The thing is, David is somewhat of an outlier, and video has not exploded to the extent he was considering at that time.  Few amateur photographers generate as much data as David.  Yes, people take a lot of video, but their style has adapted to the Internet.  They take shorter clips and want to share them, not hour-long home movies that no one actually ever looks at.  They take a lot of photos, but most of these are for sharing on the Internet anyway.  So while there are some people who will have the problem that David demonstrated, in terms of network bandwidth being inadequate for Cloud backup, the vast majority of consumers don't generate enough data on a regular basis for this to be an issue.
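To make the bandwidth argument concrete, here's a back-of-the-envelope sketch.  The data volumes and the 1 Mbps upload link are illustrative assumptions of mine, not David's actual numbers:

```python
# Rough upload-time arithmetic for cloud backup over a residential broadband link.
def upload_days(data_gb, upload_mbps):
    """Days needed to push data_gb gigabytes through an upload_mbps uplink."""
    megabits = data_gb * 8 * 1000            # GB -> megabits
    seconds = megabits / upload_mbps
    return seconds / 86400

# A serious photographer returning with ~200 GB of RAW files, on a 1 Mbps uplink:
print(round(upload_days(200, 1.0), 1), "days")       # ~18.5 days of saturated uplink

# A typical consumer's ~2 GB/month of short clips and sharing-sized photos:
print(round(upload_days(2, 1.0) * 24, 1), "hours")   # ~4.4 hours, trickled out overnight
```

The outlier's backup essentially never finishes; the typical consumer's trickles out overnight without them ever noticing.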

Each of the four answers I mentioned to the problems of Sharing and Backing Up data has its advantages and disadvantages.  And many super power users use more than one.  Paul Thurrott, for example, uses the Crashplan cloud backup service along with WHS.  I haven't switched to a full cloud backup service, but everything important has copies on some cloud service.  For example, I recently bought additional storage on Skydrive and uploaded all our digital photos there.  Our PCs actually dual-backup to the WHS and to a NAS drive (purchased because our WHS has had hardware issues).  Grabbing the NAS drive is on our list of what to do in a wildfire evacuation, something much easier than figuring out how to take the relatively non-portable WHS itself.

There will always be a place for LAN-based shares and backups for some power users (be they computer power users or semi-pro photographers/videographers), but the general market has peaked.  The trend, and it's a strong one, is to put our data in the cloud.  When you take a photo on your Windows Phone it is automatically pushed to Skydrive.  Apple copied that, and now a picture on your iPhone gets pushed to iCloud.  Most consumer email lives in Hotmail, Gmail, or Yahoo cloud stores.  The cloud is becoming a place where you can store, and even serve up, your music.  Streaming is already overwhelming local storage of commercial video, and user-generated video is increasingly stored in the cloud so it can be shared outside the LAN.  With both Google Docs and Microsoft Office's support for Skydrive, cloud storage of personal documents is becoming the default.

I know a lot of readers of this blog have their reasons why WHS was great, is great, and why it or a similar solution is still needed.  But anyone reading this is almost certainly part of the 1%.  99% of PC (and other computing device, such as Smartphone and Tablet) users will never adopt a WHS-style solution.  Most will not implement a LAN-based sharing, or backup, solution at all.  In fact, most don't do backups at all despite decades of being urged to do so.  However many, perhaps most, are using or will use cloud services for sharing and backing up data.

So goodbye WHS, RIP.  Those of us who love you will continue to run our servers until the hard-drive heads crash, the power supplies burn out, the memory chips generate parity errors, and Microsoft stops issuing security patches for your underlying OS.  But for the 99% your epitaph will be "why would I want one of those?"


Tumblr is back on my good side

In a piece I wrote a couple of weeks ago I brought up the problem of Tumblr hosting phishing/malicious web sites.  Well, after a few days of my feeding examples to Tumblr's support team, and their taking down the sites, I haven't seen new SPAM that links to Tumblr sub-domains.  I doubt my small stream of examples was enough to convince spammers to stop using Tumblr subdomains.  More likely Tumblr made some change to their processes to stop the abuse.

So I've re-enabled access to tumblr.com in my router and thank them for apparently addressing this problem.

The upcoming Vanity Fair article on Microsoft

I probably won’t comment much on the upcoming Vanity Fair piece on Microsoft, but did want to add some observations on the review system (which it seems VF is going to emphasize).

The basic concept of a curve-based review system is used across many high-tech (and probably other) companies.  Digital Equipment Corporation, which by most measures was a far kinder and gentler employer than Microsoft, used it as well.  And Microsoft used it during its highly successful periods, not just during the "lost decade" that VF is writing about.  Employees in any company hate these systems because they set up a competition between co-workers.   Employers like them because they force managers to do their jobs and identify (and appropriately reward) employee performance.

In a system that doesn't force managers to differentiate between employees, they have a tendency to rate everyone "satisfactory" and reward them approximately the same.  So your superstar, walks-on-water, saved-the-company gal gets an insignificant pay benefit over the 9-to-5, spends-more-time-at-the-coffee-machine-than-doing-anything-else guy down the hall.

The truth is that the Microsoft system of the 90s was far more brutal than the system of the 2000s!  But in the 90s people complained less because the financial rewards were so great.  Those thousands of competitive college hires were happy to be out proving they were the smartest, hardest-working software engineers on the planet.

What changed in the 2000s was three-fold in my view.  First, the attempt to make the system more transparent actually made it less so!  The old system very clearly communicated the stack ranking that had gone on behind closed doors.  The new system obscured it.  (The newer system had other quirks that made it suck, IMHO, but I won't go into them since I probably would be crossing a line.)  Second, the company went from lots of single 20-somethings whose lives revolved around work to people trying to focus attention on their families.  And third, in the 90s you were competing mostly to see who could get to millionaire status the most quickly.  In the 2000s you were fighting to avoid becoming layoff fodder.  The same system (conceptually) went from being primarily a competition for reward to being a fight for survival.

The rest of the discontent with the system really has nothing to do with the system itself but rather with poor management application of it.  For example, the VF abstract gives an example with a team of 10 people.  It is crazy to force a team of 10 into an absolute curve; a team that small will rarely mirror the company-wide distribution of talent, as the simulation below illustrates.  In fact, the curve probably only becomes absolute for organizations of 100 or more.  There are various techniques for addressing this, but any second- or third-level manager who forces a lower-level manager with so few people to absolutely follow the curve is failing to do their job.
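Here's a minimal simulation of that point, under assumptions I've made up for illustration (a forced 20/70/10 curve, and "true" talent drawn uniformly from the company-wide pool):

```python
# Simulate forcing a 20/70/10 rating curve onto teams of various sizes whose
# members' "true" company-wide percentiles are drawn uniformly at random.
import random

def misrated_fraction(team_size, trials=2000):
    """Fraction of people whose forced bucket disagrees with their true,
    company-wide bucket (top 20% / middle 70% / bottom 10%)."""
    misrated = 0
    for _ in range(trials):
        team = sorted((random.random() for _ in range(team_size)), reverse=True)
        for rank, pct in enumerate(team):
            if rank < team_size * 0.2:       # forced into "top"
                ok = pct >= 0.8
            elif rank < team_size * 0.9:     # forced into "middle"
                ok = 0.1 <= pct < 0.8
            else:                            # forced into "bottom"
                ok = pct < 0.1
            misrated += not ok
    return misrated / (trials * team_size)

for size in (10, 35, 100, 300):
    print(f"team of {size:>3}: ~{misrated_fraction(size):.0%} misrated")
# The smaller the team, the less it resembles the company-wide distribution,
# so the forced curve mislabels far more of a 10-person team than a 300-person org.
```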

When I was running a team of 35, my directs would assign their ratings and then we'd all get together to stack rank my org.  Then I'd look at my curve and, if I thought someone was getting an inappropriate rating, go to my boss and talk to him about it.  In every case where I really went to bat for someone, it turned out there was someone on another team whom the "curve" was pushing inappropriately high, and thus he managed the swap of ratings, since at his level he really did have to meet the curve.  When I was running a 300-person organization I also worked to make sure that someone wasn't being inappropriately rated because they were on a small team.  Of course some people still got ratings that weren't quite perfect.  If you are "on the bubble" you risk being pushed down (or bumped up!) a notch due to the curve.

The other failure of management is to wait until review delivery to let an employee know how they are doing.  Review results should not be a big surprise!  Sure the curve may push someone down or up a bit, but if they think they are succeeding wildly and then they get a “you’re failing” rating that is a management problem.  In many cases the manager has failed to let them know throughout the year how they were doing.  In some cases the manager did let them know, and they were in denial.  The review results are an objective measure of both absolute and relative performance.  It is up to the manager to provide subjective feedback at more frequent intervals so that the employee has a chance to adjust.  As much as this is common sense, every manager (and I include myself) ends up learning this from experience.  Or I should say, every good manager.

But the bottom line here is that little of this is Microsoft specific other than perhaps the shift from competition leading to great rewards to it being a fight for survival.  And that is indeed an issue that Microsoft needs to address if it is to remain a great employer going forward.


Acknowledging the aQuantive Mistake

Yesterday's announcement that Microsoft would write down the 2007 acquisition of aQuantive raised my opinion of Steve Ballmer another notch.  Steve makes mistakes, but he also owns up to them.  So let's travel back and see how Microsoft got into this mess.

In the late 1990s Microsoft was busy fighting the U.S. DOJ's antitrust action, and it was a major distraction for CEO Bill Gates and a number of other senior executives.  To keep the company running, executives not caught up in the day-to-day antitrust battle stepped up to fill the gap.  Steve Ballmer became President and later CEO.  As the financial pressures from the antitrust suit, the collapse of the Internet bubble, and then the post-9/11 economic downturn took hold, a lot of non-core or non-performing efforts were eliminated or cut back.  This was particularly true of MSN-related efforts.  Sidewalk (whose vision was to be what Yelp, Trip Advisor, and some of the other hot web properties of today are) was sold.  Expedia (then and now the leading on-line travel agency) was IPO'd and then Microsoft's majority stake sold.  Efforts to build a search engine were abandoned.  Eventually the pioneering Slate magazine was also sold.  Many other on-line properties became little more than portals to third-party content.   While on one hand this was necessary so Microsoft could refocus on its core businesses, on the other it was unclear how to generate significant revenue from what remained.  Expedia's transactional revenue aside, everything else relied on advertising to generate revenue.  And on-line ad revenue, already insignificant in the late 90s/early 00s, further collapsed in the post-9/11 recession.  It just looked like a horrible place to be.  As it turns out, Microsoft had just entered (and then left) the market too early.  By the mid-2000s online advertising revenue was on an upswing, particularly revenue from Search Click-Through.  And Microsoft had taken itself out of participation.  Google, on the other hand, had ridden this wave to perfection and proven it to be an economic model that rivaled the Microsoft-led PC ecosystem.

Now it's 2007 and Microsoft is trying to dig itself out of a number of problems.  First, of course, is that Vista has just shipped, ending the five-year Longhorn morass but at the cost of a poor release.  Microsoft has to rebuild the Windows team and figure out how to take the product forward.  "The Cloud" is emerging as something real, and Microsoft has to put in place efforts to address that trend.  Apple's iPod has brought it back from a near-death experience and turned it into a competitor for the hearts and minds of the consumer.  And Google, funded by click-stream ad revenue, has emerged as an across-the-board competitor.  Microsoft decides to act in the search and advertising space both because its economic model can add to Microsoft's bottom line and because of the need to blunt Google.

The problem with going up against an entrenched dominant player like Google is that head-on attacks are (perhaps) necessary but not sufficient.  Typically head-on attacks yield fractional market share gains.  So if you have a competitor with 80% market share and you gain 1/2% to 2% a year against them, does it matter?  With fractional market share gains Microsoft would neither reach profitability on its search efforts nor blunt Google.  And so it has been on a multi-track plan, trying to chip away at Google's market share in a head-on attack while simultaneously seeking to change the game as new paradigms unfold.  For the former, after bringing its Core Search capabilities up to par, it has spent $Billions on Search Default and Toolbar deals, partnerships with Facebook and Yahoo, TV advertising, etc.  As predicted this has resulted in only fractional market share gains.  And sadly, Microsoft has still not reached the knee of the curve for advertising revenue.  Because search advertising is an auction system, the price-per-click is determined by the number of bidders for any individual keyword.  The more searches you do the more advertisers want to use you, and thus the more bidding you have for keywords.  At some point you have enough searches, and enough bidders for those keywords, that your revenue and profit explode.  Microsoft doesn't have to beat Google for its Search efforts to pay off, it just has to make it past the knee of the curve on revenue per click.
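To see why more bidders mean a higher price per click, here's a toy second-price auction.  It's my simplification (real search ad auctions are generalized second-price with quality scores and other factors), but the bidder-count effect is the same:

```python
# A toy keyword auction: each advertiser privately values a click somewhere
# between $0 and $5, and the winner pays the runner-up's bid (second price).
import random

def avg_price_per_click(num_bidders, trials=10000):
    total = 0.0
    for _ in range(trials):
        bids = sorted(random.uniform(0, 5) for _ in range(num_bidders))
        total += bids[-2]          # winner pays the second-highest bid
    return total / trials

for n in (2, 5, 10, 25):
    print(f"{n:>2} bidders -> ~${avg_price_per_click(n):.2f} per click")
# Output climbs from ~$1.67 toward $5.00: more searches attract more bidders
# per keyword, the runner-up bid rises toward the top of the range, and
# revenue per click follows.
```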

One observation back in 2007 was that while Google was the absolute leader in search advertising, it was not a major player in Display advertising.  If memory serves, Yahoo was actually the Display advertising leader at the time.  Microsoft is thinking big and wants to be a major player in all forms of advertising, including Display since it is on a rebound.  Google is also working to expand its reach into Display.  Of course if Search is mostly a technical problem, Advertising is mostly a business problem (with some technology behind it).  And Microsoft doesn’t have much DNA in the advertising space.

With so much on its plate, and Google having such a big lead, Microsoft needs to put its balance sheet to work.  So it goes looking for acquisitions, and one of those is for help building up its Advertising DNA.  With few Digital Advertising companies out there, and Google having bought DoubleClick, Microsoft pays an outrageous amount for aQuantive.  It also recognizes that if it acquires Yahoo it would hit the knee of the curve on search share and revenue, and become the leader in Display Advertising.  And on a smaller scale it acquires travel meta-search company Farecast in an attempt to become the leader in the lucrative travel search category.

Of course the Yahoo acquisition attempt fails, and leads to a partnership that has yet to yield the desired benefits.  Farecast is now part of Bing Travel, but sadly only the fare prediction part of Farecast remains.  The actual fare search is handed off to Farecast's former metasearch competitor Kayak.

And aQuantive?  None of the expected benefits of acquiring aQuantive seem to have come to fruition.  Of course if Microsoft had paid a few hundred million dollars for aQuantive, which objectively was the most it was worth, then none of this would really matter.  It would purely be a footnote explaining why Microsoft had shaved a few pennies off its earnings this quarter.  Instead it highlights how poor Microsoft has been at big acquisitions and how desperate it was back in 2007.  And it raises a question about what Microsoft's overall Search and Advertising strategy is now.

Although the aQuantive write-down is making headlines today it is really water under the bridge.  The real negative aspect of this, and the other Online Services Division (OSD) acquisitions, is that it pretty much shut down medium-to-large acquisition activity across the rest of the company.  With so much money and management attention focused on aQuantive, Yahoo, and even Farecast, other divisions apparently dropped acquisition efforts that might have yielded far more concrete returns.   What Microsoft would have looked like had it pursued these other acquisitions rather than aQuantive et al is something that can only be guessed at (particularly since we don't know what other acquisitions would have occurred).

Yesterday’s write-down of the aQuantive acquisition signals the end of the “win search and advertising” at all costs era.  What still isn’t clear is what Microsoft’s current strategy and expectations in this space have become.  I have a guess, hinted at in some other blog entries, but it deserves a blog entry of its own.

Fortunately Microsoft's recent acquisitions have made a lot more sense, even if the prices raise eyebrows.  Skype is a really positive addition to Microsoft's "secret sauce", and Yammer looks to be a big help in keeping Microsoft relevant to its key information worker user base.  If aQuantive was a big mistake, it looks like Microsoft has learned lessons from it.

To me there are three phases in Steve's leadership of Microsoft.  In the first phase, while he was President and during his first few years as CEO, he mostly focused on keeping the ship from sinking in the face of antitrust and economic concerns.  Efforts started during this time were heavily influenced by that environment, often with positively ugly results (e.g., Vista).  Next came a couple of years of panic, where it seemed Microsoft had fallen behind on all fronts and a frantic set of efforts was launched to catch up.  The aQuantive acquisition was part of this.  It was an era where Steve and Bill still shared leadership of the company, and where business units had lots of freedom to prioritize their individual strategy and tactics over an overall corporate strategy.  Some things succeeded, like Windows 7.  But others….  Now we are in the third phase, where Steve has fully taken the reins and the Microsoft we are seeing is his Microsoft.   It's not all positive (particularly for employees), but for customers the 2012 product wave is probably the best in the company's history.  Microsoft is finally back.  So for me the aQuantive write-down is the last major step in Steve putting the panic phase behind him.  History is going to measure Steve's tenure as Microsoft CEO on what happens in 2012 (FY 2013 for those into financial measures) and beyond, not on what happened in the 2000s.


Should someone buy RIM?

My friend and former co-worker David D'Souza has been agitating for some time for Microsoft to buy RIM.  I'm always arguing against the move.  It's not that RIM doesn't have some good assets, it's that cleaning up the financial mess and dealing with the distraction of the acquisition is more likely to kill Windows Phone than save it.  Microsoft does not have the DNA to do this.  It hasn't bought and cleaned up any sizeable failing company, and it has almost no experience dealing with a large manufacturing company!  Not only that, any such acquisition is going to hurt margins, and perhaps do so for many years.  That's also one of the reasons that Microsoft has, so far, declined to acquire or invest additional funds in Nokia.  In both cases, should Microsoft (or anyone else) acquire the company they are going to have to spend incremental $Billions to lay off far more people than the companies themselves will, shutter more plants, support the phaseout of older and non-strategic products, etc.  Key executives and dozens of other senior employees will need to be moved into the new business, diverting attention from existing efforts.  What all this means is that something that seems to make strategic sense could actually be tactical suicide.  And for a CEO, business suicide.  Once the excitement wore off, Microsoft's stock price would tank and stay in the doldrums as investors realized what a distraction and resource black hole RIM would be.  And I think every potential acquirer of RIM (or Nokia) sees the same thing.

What Microsoft would be interested in from RIM is the backend business and some of the patents.  It could use the backend business to boost adoption of Windows Phone 8 and Windows 8 in secure-communications-oriented businesses.  In essence it could buy the customer base without buying the customer-base headaches of the Blackberry phones themselves.  But if it did that, then who would take on the phone business?  If Nokia were healthier then it would make perfect sense for them to take it.  Nokia could then produce a line of Windows Phone 8-powered Blackberries.  This dream scenario was rumored to have been explored.  I don't know if it fell through because RIM wasn't interested in selling for a reasonable price, or if Nokia's declining health made it difficult for them to take it on without more financial assistance from Microsoft.  Or if, when they factored in after-acquisition costs, they could never make the deal work.

So what about other players?  Well, they all have the same problem.  There is a piece of the RIM puzzle that they want, but not all of it.  And no one wants to deal with the mess.  Sure, RIM itself could clean things up and then pursue a sale, as happened with Motorola.  Google didn't buy the "old" Motorola with all of its problems; it waited until the cleanup was over, then bought the Android mobile phone company that was left.   I think something similar is going to happen with RIM.

If we think of RIM as having two primary businesses, a back-end infrastructure business and a device business, they will eventually be forced to split them.  The back-end infrastructure business has a lot of value to multiple players, but the device business?  I'm guessing it would be of most interest to a Chinese player who wants a leg up on penetrating western markets.  The U.S., Canada, and other western nations are unlikely to allow a Chinese company to buy RIM outright because they rely too heavily on the secure infrastructure part of the company.  So one scenario for RIM is to sell the device business to a Huawei or ZTE and make its play as a mobile communications infrastructure company.  At that point a Microsoft, Apple, Google, or other player might step in and acquire them.

Are there other possibilities?  Well, HTC is constantly chomping at the bit to take control of its own destiny and has threatened to build its own OS from time to time.  But despite an acquisition being possible on paper, there are problems.  HTC, while public with a large market cap, is actually a family-run company that probably doesn't have the management depth to take on such a large acquisition.  It is also a company with problems of its own, as its market share shrinks in the face of an onslaught from Samsung.

Samsung is frequently mentioned as a bidder and I think they keep looking at it.  But Samsung already has so many irons in the fire in the OS arena (Android, Windows Phone, Bada) that adding BBX to the mix seems like a big distraction.  And then there is that cleanup effort that no one really wants.  So I think they'd like the patent portfolio and the infrastructure stuff but don't really care about the device business, putting them in the same camp as Microsoft, Apple, and Google.

Which brings me to the bottom line.  I don’t think anyone should, nor wants to, buy RIM.  People do, and should, want to buy some of their assets.  And so many companies will hover over RIM waiting to see if they can somehow get the pieces they want without all the baggage.  But no one will buy the whole company, unless it is out of a bankruptcy court proceeding.  At least that’s my guess.


Critics of the Microsoft Surface just don’t get it

The commentaries about the recently unveiled Microsoft Surface Tablet/"Ultrabook"/PC thingies are revealing.  Particularly the few from OEM executives, and the larger number from supporters of the status quo.  My summary of them: they don't get "it".  No, not the Surface itself; they don't get why Apple has been winning with the consumer.  They remain in denial about the problems with the OEM business model.  True, the overall market share of Mac vs PC hasn't changed all that dramatically, but that is because enterprise purchases still remain almost exclusively focused on Windows PCs.  Retail sales, which largely reflect consumers rather than enterprises, have shown significant market share gains for the Mac.  And if you include the iPad in the mix then consumers are indeed running away from the PC.  But how much of this is Microsoft Windows' fault and how much is the OEMs' fault?

Let's first focus on the classic PC vs Mac battle.  When I talk to people who have switched from the PC to the Mac and listen carefully to what they say, I find that perhaps 60% of their reasoning has nothing to do with Windows itself, another 30% is the result of compromises Microsoft makes as a result of its OEM business model, leaving only 10% as true differentiators between OS X and Windows.  I'm going to get to my own analysis, but if you want a great blog entry that explains why nothing beats the Macbook Air take a look at this.

A few weeks ago a friend called me up and told me he'd switched to a Mac.  As usual in these cases I asked him why.  The first thing out of his mouth was a long diatribe on the horrors of Symantec's Norton Anti-Virus product and how it had messed with his system's performance.  The frequency of software updates, and the need to reboot, was another issue.  Buried in the details were all the third-party products with their own update mechanisms on the PC; third-party products that are unnecessary on the Mac.  Boot time was another one, though he was mostly comparing Windows XP to a very recent version of OS X.  And so on.  He even compared the performance of his Mac to his wife's recently obtained Windows 7 system.  But he had a high-end Mac while she had a low-end notebook.  Apples vs. Oranges.  When you analyzed his complete story you got to my typical 60/30/10 pattern.  All of his issues were very real, and his conclusions valid ones.  It doesn't matter to him which factors are or are not under Microsoft's control.  In the earlier days of personal computing, when users valued having a wide variety of systems, a wide array of price points, and the ability to tune (both hardware and software) to their heart's content, Microsoft's third-party-centric business model worked well.  At least the positives outweighed the negatives.   Now the negatives far outweigh the positives for most consumers.  They just want to walk in and buy a system that works, performs, is safe, and appeals to their usability and design sensibilities.

Walk into Best Buy and shop for a PC.  You find a dizzying array of systems with little apparent differentiation.  Dig in: whose keyboard and mouse do they have?  The OEM's.  Microsoft makes great keyboards and mice that have fantastic ergonomics and are well tuned to Microsoft Windows.  Almost all are sold after-market, because OEMs want to cut costs and/or offer an OEM-unique experience.  The OEMs ship mediocre to poor keyboards and mice, which is why Microsoft can sell so many replacements.  But in the store, or out of the box, the first thing you are confronted by on a PC is mediocre or worse input devices.  On notebooks things are even worse.  As described in the posting I linked to in the second paragraph, out of the box the trackpads on Windows notebooks don't work well.  And even after tweaking they still have issues.  Yup, that's my personal experience too; I have yet to find one that works well.  Now start up the PC and what do you find?  Each PC is different.  Boot times vary significantly.  What security software are they running?  What photo editing software is installed?  What is the default browser?  What toolbars and other add-ons are installed in that browser?  Who is the default search provider?  What media-playback software is being used?  How is the quality of the drivers?  Or the driver update mechanism?  Are Java or Adobe AIR, neither of which is really needed these days, pre-installed?  What desktop add-ons are pre-installed and running?  How much crapware is installed?  Every PC is different, and the experience is not dictated by Microsoft.  A Dell PC is a Dell PC, an HP PC is an HP PC, etc.  They just happen to use Windows as a common underpinning.

You can go to a Microsoft Store and buy a PC from the major OEMs that is at least somewhat configured as Microsoft would prefer (which it calls Microsoft Signature).  The OEM still controls the hardware, drivers, etc.; they just can't install crapware or replace Microsoft's own software with something a third party has paid them to install.  The out-of-the-box experience is better than what you get at Best Buy or other retailers (or the OEM's website), but it is still flawed compared to Apple.  It still has the OEM's poor keyboard and mouse, or poorly cobbled-together trackpad.  It still has their specific hardware configuration choices and drivers.  It still reflects their decisions, which tend to favor reducing costs over nailing the user experience.  The machine I'm writing this on reflects that.  The equivalent Apple product is the iMac, whose low-end processor is a quad-core Intel i5.  The PC I'm using has only a dual-core Intel i3.  Apple could offer a lower-priced iMac by using the i3, but they choose to focus on user experience over price.  While this system suits my purposes, many users will purchase it and later discover that it was inadequate for theirs.  That will not happen with the iMac.

The core problem for OEMs is this: they want to offer a unique OEM-specific experience as a way to differentiate from their competitors and squeeze a little extra margin out of otherwise undifferentiated systems, but they (a) can't go outside the same cost envelope as their competitors and (b) can't invest enough to produce an experience that customers will really be attracted to.  Most of the time what you see out of OEMs just seems like a hack.  This isn't just about PCs.  Wonder why Android Tablets haven't made a dent in the iPad?  It is the same factors that are causing PCs to lose ground to Apple products.

What Apple has shown is that consumers want systems in which every detail has been carefully thought through.  Where each component of the system, from all the hardware details to how the operating system works to the services behind them, is designed to work together.  They want innovation where it really does something for them.  They want some "cool factor".  They don't expect perfection, and in fact consider well-thought-through imperfection an improvement over randomness.

Why has the Amazon Kindle Fire gained some traction?  Price?  No, there have been other $200ish Android tablets and they’ve gone nowhere.  The Fire follows the modern playbook.  Amazon designs the hardware, the software user experience, and the services.  They didn’t just put a thin shell on Android, they hid it below a very Amazon-specific experience.   Are they as good at all this as Apple?  No, just better than all the OEMs.

Which brings us back to Surface.  Microsoft likes its OEM business model, but I think recognizes that it isn’t working too well with consumers.  Surface follows the modern consumer computing device playbook, OEMs don’t.  Microsoft hopes that Surface spurs OEMs to up their game, and at the same time is a backstop against a complete collapse of the OEM business model.  Think that unlikely?  Recall that 9 months ago HP was on a path to dump its PC business, and Dell is running away from consumers and towards business customers.  Five years from now we may very well find that OEMs are focused on business customers and have ceded the consumer computing business to Microsoft, Apple, and a few niche players (like Amazon’s Fire).

And, since many will ask, what of the smartphone business?  Well, it is on a similar trajectory, with perhaps a different timeline.  Apple is being Apple.  Google let Android spin so totally out of its control that it has now purchased Motorola and positioned itself to be able to offer the end-to-end experience.  Microsoft has partnered with Nokia in a way that gives it the ability to offer the modern end-to-end experience without the downsides of acquiring Nokia's corporate problems.  And nearly all of the profits in the vast Android smartphone market are currently accruing to one player, Samsung.  Which suggests that the Android smartphone market may split into two totally dominant businesses, Google/Motorola and Samsung, each offering its own version of the end-to-end experience that consumers desire.  The rest of the smartphone business is starting to look just like the PC OEM business of a decade ago.  It is becoming a race to the bottom.

With Surface Microsoft isn't attempting, nor does it desire, to undermine its OEMs.  What it is acknowledging is that the OEMs have been in decline for a decade and that, at least with consumers, that decline might be unrecoverable.

Four reasons we are losing the fight against Malware

It's one step forward, two steps back in the fight against Malware.  Every time it seems like we are making progress it becomes apparent that we keep attacking the tip of the iceberg, and below the surface Malware is thriving.  So let's take a look at four key reasons I think we are losing the fight.

The first one may surprise people, but it is our over-reliance on (completely inadequate) automation for discovering new Malware.  Let's take the recently uncovered Flame (aka Flamer) malware.  Once Flame was uncovered, evidence slowly emerged that it had been around for about five years before being discovered.  How do we know this?  Well, once Anti-Malware (AM) vendors (and other security organizations, which I'll lump under AM in this article) knew what to look for, they went back and searched through previously submitted samples (of potential Malware) and found they had samples of Flame in their collections dating back five years!  The automated tools that AM vendors used to evaluate the samples when they were submitted had failed to flag them as potentially being Malware.  Only when someone with a support contract forced their AM vendor to investigate suspicious behavior was Flame (and Stuxnet before it) discovered.

To clarify, let's explore how the process of finding new Malware works.  A user is going about their business and finds some executable file or URL they think is suspicious.  They flag it as suspicious using their AM software, or a link in their browser, or by going to an AM vendor's website.  So what happens with this report?  It is fed into an automated analysis system that looks to see if it can find any hints that the software is Malware (or the website malicious).  If it finds something, then the report is assigned to a Security Analyst to manually figure out what is going on.  If they decide it is Malware they write a signature to detect and block it, and that is pushed out to the AM software.  If the automated tools find nothing?  Then the sample is saved away and nothing else happens.  Customers reported Flame, but the automated tools never suspected it was Malware and no further action was taken.  The sketch below shows the flow, and the hole in it.
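Reduced to code, the triage pipeline looks something like this (all the names are hypothetical; the part that matters is the else branch):

```python
# The triage flow described above, reduced to a sketch.
from dataclasses import dataclass

@dataclass
class Sample:
    sha256: str
    source: str            # "user report", "partner feed", etc.

def automated_analysis(sample: Sample) -> bool:
    """Stand-in for the vendor's scanners and sandboxes. Returns True only
    if heuristics flag the sample; Flame-class malware is built to pass."""
    return False

def triage(sample: Sample, archive: list, analyst_queue: list) -> None:
    if automated_analysis(sample):
        analyst_queue.append(sample)   # a human investigates, writes a signature
    else:
        archive.append(sample)         # shelved; nothing else ever happens

archive, analyst_queue = [], []
triage(Sample(sha256="ab12...", source="user report"), archive, analyst_queue)
print(len(archive), "samples shelved,", len(analyst_queue), "sent to an analyst")
# Flame sat in exactly this kind of archive for five years, until a paying
# customer escalated and forced a human to take a look.
```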

One could attribute the failure of automated tools to uncover Flame and Stuxnet to the expertise of the Malware authors.  Quite simply, these (apparently state-sponsored) Malware authors have so much knowledge of how the automated detection tools work that they can craft Malware that won't be detected.   But what about the average Malware author?  They might not be able to craft Malware that can go five years undetected, but they can craft things that will go days or weeks.  They may not have as much knowledge of how the automated analysis tools work, but they can run their Malware through every AM engine and publicly available analysis tool out there (in a very automated way, btw) and make sure those tools won't immediately detect the new Malware.

I've gone through and found samples that one or more AM products said were Malware and submitted them for analysis to vendors who didn't detect them.  Those vendors' automated systems then analyzed the samples and reported back that they were safe.  False positives or false negatives?  Based on the "smell" of the samples I'd say they were false negatives.   So these pieces of Malware live on until a customer calls for technical support, and technical support decides to bring in a security analyst to really look at the problem.

Which brings me to the second reason I think we are losing the fight: inadequate information sharing amongst AM vendors.  There are numerous information sharing initiatives between AM vendors, community organizations, academic researchers, etc.  But I already mentioned the situation where one AM product detects a piece of Malware, another doesn't, and that situation doesn't change when I submit the sample.  Well, I believe the process is exactly the same with information sharing initiatives.  If Company A detects a piece of malware then it submits the sample to its partners.  They run it through their automated systems and things go exactly as I described above.  Company B gives no more priority (e.g., manual investigation) to Company A's report than if Jane Soccer-Mom had submitted it.  Only if a security analyst at Company A contacts Company B and says "we've got a live one here" does Company B really pay attention.

I see this even more clearly with attempts to block websites hosting Malware.   The simple explanation is this: it doesn't matter if Company A tells Company B that URL X is bad, Company B won't block it unless it determines the URL really is bad.  And it can't manually check every possibly-bad URL, so it relies on automated systems.  Catch-22.

The only way information sharing amongst AM vendors will ever work is if true trust relationships are established.  I’ll give an example.  WOT (Web of Trust) automatically treats the appearance of a URL (technically, a URI) on a SURBL as an indication the site is potentially dangerous.  It’s not definitive, and it may even yield false positives, but it does mean that WOT users gain very rapid protection against links in SPAM.  WOT’s community-based system can later refine the rating.  Unfortunately, trust relationships like this are currently rare.
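
To make that concrete: SURBL is queried through DNS, by prepending the domain to the list’s zone, and an answer in 127.0.0.x means the domain is listed.  Here’s a minimal sketch of that check (the zone name follows SURBL’s public documentation; treat the details as an assumption rather than gospel).

```python
import socket

def on_surbl(domain: str) -> bool:
    """Check a domain against SURBL's combined list via its DNS interface.
    An answer in 127.0.0.x means the domain is listed; NXDOMAIN means not."""
    try:
        addr = socket.gethostbyname(f"{domain}.multi.surbl.org")
        return addr.startswith("127.0.0.")
    except socket.gaierror:
        return False

# Roughly the rule WOT applies automatically: a SURBL hit means "potentially
# dangerous" right away, with community ratings refining the verdict later.
if on_surbl("example.com"):
    print("flag as potentially dangerous")
```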

Which brings me to reason three: the AM and other security vendors are wimps.  Really.  Let’s start with the search guys.  Both Google and Bing detect malware-hosting websites while they are indexing the web.  They will notify the webmaster, if they can, that the site needs to be cleaned.  They will give the user a warning if they attempt to access the site.  But they won’t actually block access to the website.  Why?  Well, quite often it is a legitimate website that has been compromised by hackers in order to distribute their Malware.  “Joe’s Lighting Fixtures and Home Furnishings” might go out of business if you truly blocked access to its site, and so Malware authors can continue to use it to distribute their wares with only minimal interference from the search vendors (and the corresponding URL filters).  If we were really serious about stopping Malware distribution we’d apologize to Joe but implement “Ghost Protocol” on his website until all Malware was removed.  That means we’d basically make it disappear from the Internet: it would not appear in search results, and all URL filters would block access to it.  Gone until clean.  Overreaction?  Ask the people who suffer identity theft or similar harm because they were permitted to keep accessing Joe’s site after it was discovered to be distributing Malware whether blocking access would have been an overreaction.

Wimpiness is also why information sharing is so ineffective.  Being on a SURBL doesn’t say a URL is malicious, just that it appears as a link in a lot of SPAM.  Being rated poorly on WOT just means that a bunch of users think the site has issues.  ClamAV has a high false-positive rate compared to other AM products, so it’s hard for vendors trying to avoid false positives to trust it when it claims something is Malware.  So the industry wimps out.  Each vendor has its own processes, mostly the failed automated processes, for deciding if something is truly bad and action should be taken.  These processes favor Malware authors and distributors.  That’s not the intent of the security industry, but it is the result of its practices.

The fourth reason is a direct follow-on to the third (and maybe should even be labeled 3b): the failure to punish domains for not enforcing good behavior on their subdomains.  Let us take my current favorite whipping boy, Tumblr.  Tumblr is a microblogging site.  In looking through my SPAM the last few days I’ve noticed that most of it contains links to subdomains of Tumblr (e.g., joe12343.tumblr.com) and of a few foreign sites that also host user-generated content.  If you check URL filters they will tell you the site is safe.  Why?  Because they rate the domain, such as tumblr.com, and not the specific subdomain that is malicious.  Tumblr is a legitimate and apparently great service with one problem: it isn’t sufficiently policing the content that users can make available there.  So the bad guys have figured out that they can use Tumblr to host malicious content without fear that URL filters will block access to that content.
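
You can see how that blind spot arises in just a few lines.  The sketch below uses the tldextract library to collapse a host down to its registered domain, which is roughly how many URL filters key their reputation data; the reputation table itself is made up.

```python
import tldextract  # pip install tldextract

# Hypothetical reputation store, keyed the way many URL filters key theirs:
# by registered domain only.
REPUTATION = {"tumblr.com": "safe"}

def rate(host: str) -> str:
    ext = tldextract.extract(host)
    # The filter collapses joe12343.tumblr.com down to tumblr.com...
    return REPUTATION.get(ext.registered_domain, "unknown")

# ...so a malicious subdomain inherits the parent's "safe" rating.
print(rate("joe12343.tumblr.com"))  # -> "safe", which is exactly the problem
```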

Sites that host user-generated content have to take responsibility for blocking users from using them to host malicious content.  And the security industry has to get over its wimpiness and hold these domains accountable.  If a major URL filter started blocking access to tumblr.com there would be outrage, but Tumblr would address the problem rather quickly.  If I were in charge I’d seriously consider giving Tumblr 30 days to get its act together or face the implementation of “Ghost Protocol”.

There are more reasons we are losing the fight against Malware, but those are the ones that have been bugging me the last few days.  I’m looking forward to comments telling me I’m wrong, and that it doesn’t work the way I describe it above.  I’d love for Tumblr to tell me how it really is working hard to block malicious content.  I wish it were two steps forward, one step back, because that would mean we’ll eventually win.  But right now it appears that for every step forward we make we discover that we’ve lost more ground elsewhere.  The Internet can’t go on that way for much longer.


The practical impact of not upgrading older phones to Windows Phone 8

I’m going to leave all (well, most) of the emotion about the lack of Windows Phone 8 upgrades for older phones to the side and just address the practical impact.  I’m not talking about the potential negative short-term impact on Windows Phone sales, but rather the practical impact on owners of Windows Phone 7.x devices.

Let’s start with enthusiasts, perhaps the audience that is most pissed off about the lack of a Windows Phone 8 upgrade for existing devices.  I put myself in this category, and once I calmed down (and I’ve had almost two months to do so since it became apparent there weren’t going to be upgrades for existing devices), I realized that Microsoft was probably making the right call in assuming enthusiasts would end up ok with its decision.  Why?  You’re telling me that you wouldn’t flush your new Lumia 900 down the toilet right now for a dual-core phone with a 1024×768 display, NFC support, a removable MicroSD card, and a Nokia PureView camera no less?  The way I’ve been handling phones the last several years is to buy one on contract every two years, then pay full price in the off years so I can have the latest and greatest.  My assumption when I picked up the Lumia 900 was that I’d wait a few months after Windows Phone 8 shipped to see what everyone was bringing to the table.  Ok, so I won’t keep the Lumia 900 the full 12 months that implies, and I’ll keep whatever follows it for longer than 12 months, until my next contract renewal.  But the generational leap from the 900 to a Windows Phone 8 device will be much larger than anything that appears in the 15-18 months that follow, so that will be ok.  I’ll still be on my one-phone-per-year program.  And once other enthusiasts calm down I think most will come to the same conclusion.  We want all that new hardware, and just upgrading an existing phone to Windows Phone 8 wouldn’t have brought that with it.  So we’ll find a family member who doesn’t care about having the latest and greatest to give the Lumia 900 to.  They’ll think it’s a really great gift and that Windows Phone 7.8 is just an awesome, modern OS that goes far beyond what they really needed or wanted.

Which actually brings me to the “Typical User” (an unregistered, un-trademarked, ambiguous, and some claim meaningless, term).  I’ve said this before: given the behaviors of Android phone owners it would be fair to assume that the vast majority of Windows Phone users don’t care much about upgrades.  I’ve asked many an Android user what version they are running and the near universal response is “I don’t know”.  I then ask if an update is available for their phone and the near universal response is “How do I update?”.  They buy a phone, it does what it does, and that is the way of life.  They don’t think of it as a general-purpose computing platform that gets new versions the way their PC does.  So how will a typical Windows Phone user react to the lack of Windows Phone 8?  They probably won’t care.  T-Mobile just commented that it is selling a lot of Lumia 710s to people moving from a feature phone to a smartphone.  Do you really think this audience knows, cares, or wants to know about updates?  No way.  Maybe the press can plant seeds of doubt for potential new purchasers, but the reality is that most people buying a new Windows Phone 7.x device won’t think about the update question unless it is shoved down their throat.

And then there is the most problematic audience.  The ones that don’t even have a name.  They aren’t enthusiasts, but they did drink the Windows Phone Kool-Aid.  They are fans.  They bought for the future as much as they bought for today.  And there is no future for them.  They aren’t going to buy a new phone until their contract is over and they can get another subsidized phone.  They are going to look on those able to run Windows Phone 8 with envy.  They are going to be really pissed at Microsoft, Nokia, and AT&T (or other carrier) for selling them the Lumia 900 as the flagship Windows Phone just a few months before telling them it had no future.  They are going to be vocal to all who will listen about how Microsoft et al screwed them.  I don’t know how big this audience is, but if it is large then Windows Phone is in trouble because they will chase away future purchasers.

There will be three classes of apps going forward.  One is the class of apps that run fully on both Windows Phone 7.8 and Windows Phone 8, and this includes the existing 100,000 apps in the Marketplace.  A second is apps that run on both but have some functionality that is only available on Windows Phone 8.  The last is apps that only run on Windows Phone 8, for example because they use native C/C++ code, or because the developer decides the Windows Phone 7.8 market is too small to bother with.  I’d predicted earlier that the Windows Phone 7.x Marketplace would top out soon after 100,000 apps because developers would switch their attention to Windows Phone 8.  While the exact number is fuzzy, I think the principle still holds.  Gaming developers, for example, will switch from using XNA to native code, which leaves out the devices running 7.8.  Of course one could argue that those devices are too slow to support the emerging games, and so Microsoft was doing users a favor by not bringing them to the older hardware.  But I think that will be lost on people.  What they will see is that the Marketplace grows, but not for them.  Once the number of Windows Phone 8 devices out in the world exceeds the number of Windows Phone 7 devices, which I suspect will happen rather quickly, developers will have no incentive to target the older devices.  Even current applications will introduce new versions that won’t run on Windows Phone 7.8.  Of course you could argue that users who don’t care about updates also don’t care much about new apps, and you’d have a point.  But there is a scenario where (for example) Pandora comes to Windows Phone 8 but not Windows Phone 7.8, and for Pandora fans who own older devices that will be a great source of frustration.  Again, perhaps this matters only to my third category, “fans who are not quite enthusiasts”.

On the other hand, Nokia seems to be going all out to bring goodies to Windows Phone 7.x devices to keep its momentum going.  So in addition to 7.8 it looks like we might get Nokia-specific functionality as well as more Nokia-exclusive apps.  I don’t know if any other device manufacturer will try to prolong the life of their existing Windows Phone lines, or just concentrate on selling Android until Windows Phone 8 is introduced.  All I know is that my Lumia 900 apparently is going to get a lot of goodies between now and when I replace it!

So is there a bottom line here?  When you take out the marketing gaffe I wrote about this morning it looks like Microsoft is thinking very clearly about real customers.  But clear thinking and good marketing are not the same thing.
