Is 2012 Microsoft’s Year?

It’s that time of year when reviews of 2011 and predictions for 2012 are hitting the blogosphere (as well as traditional media). Well, here is my Microsoft prediction for 2012 (or more precisely, Fiscal Year 2013): This is the year that Microsoft finally puts the “lost 5 years” behind them. It’s the year (forgive the baseball analogy) they “hit for the cycle”, and perhaps the home run in that cycle is even a grand slam. I think it is also the make-or-break year for Steve Ballmer. A year from now he will either be showing up on the covers of business magazines whose articles extol Microsoft’s return to glory, or investor pressure for his ouster will have become irresistible. So why is 2012 going to be the year of Microsoft’s return? Let’s put the year in context.

Microsoft doesn’t have Soviet-style 5-year plans, but its fate does seem to flow in roughly 5-6 year waves. For example, 1990-95 saw 16-bit Microsoft Windows (3.0/3.1/3.11) become the dominant desktop operating system, and saw Microsoft rise from follower to leader in information worker products. From mid-95 through 2001 we saw Windows move to 32 bits as well as switch to the NT kernel, and solidify its ownership of the desktop computing space. Office continued its dominance. And Microsoft became a leading supplier of server software on the backs of Windows Server, SQL Server, and Exchange Server, as well as a host of smaller products. It became a leading force in consumer products, both with CD titles and its MSN (including Hotmail) offerings. It had entered many areas, such as restaurant listings, entertainment, and travel, that are today amongst the most popular uses of the Internet. It launched the Xbox. And then the bottom fell out.

In the late 90s the U.S. Department of Justice began its anti-trust lawsuit against Microsoft, which resulted in the late 2001 consent decree. The E.U. piled on with an anti-trust finding in 2003. It’s hard to overstate the impact this would have on Microsoft, so I’ll just give a few examples. To begin with, the pressures of fighting the anti-trust battles led Bill Gates to turn over the CEO role to Steve Ballmer. And it led the top product executive, Paul Maritz, to depart. Volumes have been written about the former, but very little about the latter. Not only is it likely that many of the mistakes of the 2001-2006 period would have been avoided had Paul stuck around, he remains the favorite of most insiders as a potential replacement for Steve Ballmer as CEO. The anti-trust actions influenced business strategy dramatically, but they influenced product strategy nearly as much. I still recall a new VP reviewing what his largest team was working on and discovering that 25% of its resources were kept busy responding to Technical Documentation Issues (TDIs) as required by the anti-trust settlements. Hundreds, if not thousands, of Microsoft engineers were tied down by the creation of the required technical documentation, and then by responding to TDIs, from the early 2000s through 2011, when the load should have started to abate. When you combine the impacts of the anti-trust actions, the collapse of the Internet bubble, and the post-9/11 economic downturn, you get a lot of business decisions that would ultimately turn out to be mistakes. With little ad spending going to the Internet, Microsoft sold off Sidewalk, the precursor to services like Yelp, TripAdvisor, and others that are now part of daily life. It ended efforts to get into Search. It spun out Expedia, exiting the travel category that would grow to be huge on a transactional revenue basis as well as a key search advertising category. But most importantly for this discussion, the anti-trust actions caused a “swing for the fences” move that would lead to Microsoft’s worst failure and the lost five years.

With anti-trust efforts limiting Microsoft’s freedom of action, and Linux making a play for the desktop, Microsoft decided to do a release of Windows that would represent a seismic shift. Not only would the Longhorn project take Windows so far ahead of where Desktop Linux was trying to go that it would be hard to catch up, but other Microsoft products would simultaneously take similar (and related) leaps. Unfortunately for Microsoft the entire effort failed, and Microsoft was barely able to crawl out of the resulting crater to produce the Windows Vista release. Basically Microsoft spent five years and produced something without enough compelling value for users, and with too many negatives, for it to achieve widespread adoption. Along the way it forgot how to actually build software, and as it became increasingly desperate to fix the Longhorn damage it took its eye off other areas. Vista shipped at the end of 2006, five years ago.

For the last five years Microsoft has been focused on recovery. Windows 7, by any measure a success, was as much about re-learning how to build software as it was about the end result. Recovery meant realizing that the earlier decision to spin out or kill advertising and other e-commerce businesses had been a mistake, then pouring massive resources and management attention into re-entering those realms. It meant realizing that they’d let the Windows Mobile franchise go stale, just as smartphones were taking off, and revitalizing that space. And it meant realizing that the Cloud was going to become a threat to much of Microsoft’s existing business, and deciding Microsoft had to become a leader in Cloud Computing or risk irrelevance. And so, while there have been several unquestionable successes (e.g., Kinect) over the last five years, overall Microsoft has appeared to be a bumbling has-been.

The problem for Microsoft has been that digging out of such a deep hole was not something it could do overnight (though I’m not arguing they couldn’t have done it better). In most cases Microsoft had to turn the crank at least once to fix its previous mistakes, and as 2012 dawns it is finishing the second major crank: the turn that puts it back on track for delivering new and exciting things. So Windows 7 was the repair release, while Windows 8 is the return-to-leadership release. Windows Phone 7.0/7.1 (aka 7 and 7.5, with 7.5 being the real target release and 7 being what they had to ship to make the Holiday ’10 schedule) was the repair release; Windows Phone 8 is the strive-for-leadership release. Internet Explorer has crawled back from the neglect of the Longhorn effort (wherein it was supposed to be replaced) and is once again poised for real leadership in 2012 with IE10. You can see similar moves across the board. Take Hotmail. The original leader in web-based email had gotten quite stale. Over the last couple of years it has been updated dramatically, and with the round of updates they just finished rolling out it has once again taken a leadership position (in fact I’ve abandoned Outlook in favor of the web interface to Hotmail because it offers so many goodies not available through Outlook or other mail clients).

Of course it’s not just the things that Microsoft neglected that are poised for new leadership in 2012.  Servers are an area that, while significantly impacted by the “lost five years”, never took a step backwards.  And in 2012 we’ll see a major release of SQL Server early in the year, Windows Server 8 later, and a likely refresh of the Office servers at the end of the year.  SQL Server 2012 looks to be a leadership release (e.g., Columnstore), and what little we know of Windows Server 8 suggests that the team nailed what customers are looking for.  Overall 2012 is looking to be a good year for Microsoft’s business customers.  Mary Jo Foley lists her 10 “sexiest” teases for business in 2012; take those as just a starter!

In March of 2010 Steve Ballmer made a speech in which he said Microsoft was “all in” on the cloud. The offerings we’ve seen since that time were not what he was talking about; they were the result of efforts already underway. Steve returned from that speech and started changing strategies, budgets, and ultimately his leadership team to make “all in” a reality. It was the fall of 2010 when I had to make the rounds of partner organizations letting them know my teams were backing out of commitments for various on-premises features in order to accelerate cloud-related work. And it wasn’t until February of this year that Satya Nadella took over the Server and Tools Business, a leadership change specifically designed to accelerate STB’s cloud efforts. Hadoop for Windows Azure is one result of that acceleration, but it is the tip of the iceberg. In 2012 I expect we’ll see Windows Azure, SQL Azure, Office 365, and other cloud efforts make major leaps forward.

And what about Xbox and Kinect? With the introduction of Kinect at the end of 2010 (and thus the expansion beyond just hard-core gamers) and an Internet TV-oriented release of the Xbox software at the end of 2011, Microsoft has finally moved to stake out its position in the living/family room. This was the strategic target from day one of the Xbox, though it always seemed to be on the back burner as Xbox focused on meeting the needs of gaming enthusiasts. Today’s offering has a lot going for it, but it is still quite immature. Will there be a “next generation Xbox” in 2012? While initially there were rumors of such a 2012 release, most observers now think it is coming in 2013. I have no insight on this specific topic, but whether or not a next generation Xbox is in the cards for 2012, I suspect a lot of Xbox-related innovation is. We will almost certainly see the next rev of the Xbox software that takes the TV experience to another level. And beyond a rumored version of Kinect tuned for the PC experience (where you are sitting right in front of the screen) I suspect we’ll see an updated Xbox version as well. For example, one that handles a wider range of distances (e.g., there is already a 3rd-party lens you can add if you don’t have the 6-8 feet of distance Kinect currently demands) and a higher quality camera to support Skype videoconferencing.

And what of Skype? Microsoft (and specifically Bill Gates) has long believed in the importance of telecommunications and that VOIP changes everything. Despite numerous investments (adding video to Messenger, Lync, the ill-fated Response Point, etc.) Microsoft failed to find the secret sauce to become a leader in this space. It remedied the situation by buying industry leader Skype earlier this year. During 2012 I think we’ll see Microsoft exploit Skype to add pizzazz to nearly every one of its offerings that involves human interaction. In other words, Skype will join Kinect and TellMe as part of the “secret sauce” that defines Microsoft’s future.

Speaking of TellMe, I suspect that 2012 will be the year that Microsoft finally takes full advantage of that technology. It was already going down this road, but Apple’s Siri will spur it on. TellMe’s ability to recognize human speech and generate the equivalent text is every bit as good as, and probably better than, Siri’s. Using TellMe to perform Bing searches on a Windows Phone or Xbox works extremely well. It is on the back end, where the natural language text is translated into computer commands, that Siri has an advantage. Quite simply, Microsoft never wired the two decades of work it has done on natural language recognition into its operating systems, requiring the user to speak in the operating system’s command structure rather than in natural language.
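
To make that distinction concrete, here is a toy sketch in Python (my own illustration, not TellMe or Siri code; the command names are invented). A rigid voice-command grammar accepts only the system’s own phrasing, while even a crude natural language layer maps many phrasings onto the same underlying command:

```python
# Illustrative only: contrasts a rigid command grammar with loose intent matching.
# Command strings and phrasings are invented for the example.
import re

def rigid_command(utterance):
    """Old-style voice command: the user must speak the system's command structure."""
    commands = {"call voicemail": "dial:voicemail", "open calendar": "app:calendar"}
    return commands.get(utterance.lower())  # anything else simply fails

def natural_language(utterance):
    """Crude intent matching: many natural phrasings map to one command."""
    u = utterance.lower()
    if re.search(r"(check|call|get).*(voicemail|messages)", u):
        return "dial:voicemail"
    if re.search(r"(what|show|open).*(calendar|schedule|appointments)", u):
        return "app:calendar"
    return None

print(rigid_command("Could you check my voicemail?"))    # None -- rejected
print(natural_language("Could you check my voicemail?")) # dial:voicemail
```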

As you can see, during 2012 Microsoft will refresh nearly its entire product line. And this time the releases aren’t about fixing past mistakes or catching up to where they should have been a few years ago; they are about starting to re-establish leadership. Can Microsoft go all the way from Bum to Hero in a single pass? Probably not. But they can demonstrate they’ve gotten their Mojo back. And they can make Fiscal Year 2013 an exciting one for shareholders!

What could go wrong with this scenario? Obviously the products could turn out to be unsuccessful because they aren’t well received by customers. Or Microsoft could have great products but screw up the marketing. Or some of the predictions could turn out to be wrong (e.g., if a next generation Xbox isn’t coming until 2013 and Microsoft holds off on Xbox software improvements until the new hardware; or if something happens in the development of Windows 8 or Windows Phone 8 that delays them or causes what Microsoft delivers to be less than expected). Undoubtedly some of the predictions will be wrong, but hopefully just the minor ones. If that’s the case then the overall assessment remains: 2012 is going to be Microsoft’s year. And likely the first of a good five or so.


Why doesn’t Windows Phone sell?

Charlie Kindel has a nice blog posting on why Windows Phone has yet to take off. Robert Scoble, amongst many others, chimes in with his response. Now I agree with Charlie, and with many of the other critics as well! There are myriad reasons that Windows Phone has yet to take off. But I will point out that Charlie was right in the middle of things and not just an observer. What he describes is indeed factual (my brief time in Microsoft’s mobile arena confirmed what Charlie saw). But it isn’t necessarily all the facts, which is why I agree with Scoble and others as well.

When the Windows Mobile 7 reset occurred, Android was not yet available. And by the time the Windows Phone 7 plans were locked down only one device, the T-Mobile G1, was available running Android. Apple was cleaning up, and Microsoft veered sharply away from the Windows Mobile business model that Android was emulating to focus on as close to an Apple-like model as possible for a software-only player. Unfortunately for Microsoft, Android did take off while it was busy fashioning Windows Phone 7. In the U.S., Verizon Wireless decided it had to counter AT&T’s exclusive iPhone arrangement and came up with an Android-powered smartphone it marketed heavily as the Droid. The Droid campaign was so overwhelmingly successful that it spawned a family of phones, and just as importantly customers were walking into Verizon competitors AT&T, Sprint, and T-Mobile asking for a “Droid phone”. They walked out not with a (Verizon exclusive) Droid, but with some other Android phone. The iPhone had been countered. Now along comes Microsoft with Windows Phone 7 and no carrier to really champion it. Charlie does a good job of explaining why, but you can take it one step further and simply say that WP7 was too late for any carrier or manufacturer to really need it!

Now Microsoft has Nokia, and that is a great step. As I’ve said before, it means there is a manufacturer who will pour its best ideas into Windows Phone devices (and add-on software). But that isn’t enough. Having prioritized the consumer over both carriers and manufacturers, Microsoft needs consumers to demand a Windows Phone. And as others have noted, it just isn’t doing much itself to promote Windows Phone (choosing instead to fund manufacturers and carriers to push it). Consumers don’t know what a Windows Phone is nor why they should want one. And unless Microsoft takes the bull by the horns and convinces them, success with Windows Phone will remain elusive.


Deepsec 2011: Are Companies “Evil” When it Comes to Privacy?

Last month I attended the 2011 In-Depth Security Conference (better known as Deepsec). This was my first security conference (outside of Microsoft’s BlueHat) so I’m not sure exactly how to characterize it compared to the better known and larger Black Hat and RSA conferences. I had a few favorite sessions, such as Duncan Campbell’s keynote on terrorists’ use of encryption (bottom line: contrary to the fear-mongering we’ve been hearing since the Cold War, they aren’t using encryption very much at all), and the sheer fun of Kizz Myanthia’s pursuit of toys using technology from the James Bond films. But perhaps the session that made the biggest impression on me was Christopher Soghoian’s session titled “Why the software we use is designed to violate our privacy”.

As it turns out, Chris and I almost completely agree on what companies should be doing. But I found his explanations of why they don’t rather one-sided, and his characterizations of them as “evil” mostly off base. Chris looks at these issues from the standpoint of a researcher and advocate, without the apparent benefit of having had to deliver and support a high-volume software product or service. As a result he tends to put the blame for companies’ actions on lawyers and bean counters (i.e., “the suits”) while ignoring the engineering reasons for those actions.

Last January I wrote a blog post on the topic of full-drive encryption in which I, like Chris, called for full-drive encryption to be the default. However, I drilled into the practical difficulties of delivering this to consumers, while in his talk Chris not only ignored these aspects but, when I challenged him, made it clear he didn’t have a full grasp of them. I only worked specifically in the security arena at Microsoft for 18 months, but if I’d thought bringing full-drive encryption to consumers was easy I would have pushed like hell to make it happen. Instead my investigation of the topic led me to believe more ducks needed to be lined up. See my earlier post on the topic for why this is so hard, but to put a fine point on it: we are still struggling to make this foolproof for enterprises, and need a lot more evolution across the entire ecosystem for it to become practical for the average consumer. BTW, another interesting development in this area is the failure to date of self-encrypting hard drives to catch on. In other words, this is a complex area where the parties’ motives are good but the path to nirvana is a maze of twisty little passages.

Another part of Chris’ talk focused on Microsoft’s decision to drop the “InPrivate Filtering” feature in Internet Explorer 8 (IE8). Chris focused on the Wall Street Journal’s coverage of Microsoft’s internal debate, which described the conflict inherent in Microsoft being in the advertising business and how that caused InPrivate Filtering to effectively be removed from IE8. Chris called what happened “evil”, again a characterization I disagree with. The technology that Dean (Hachamovitch) and company came up with was both delightfully and problematically simple. They counted the number of different websites that referenced a third-party cookie, and if it exceeded a certain number (10 by default, if I recall correctly) they assumed it was a tracking cookie and blocked further access. That had two bad side effects. One was that, in a growing world of mashups, websites would suddenly stop working correctly because popular services they shared would have their cookies blocked even though they weren’t engaged in tracking. The second was that there was no way to discriminate between tracking cookies that the user might want (e.g., something they explicitly opted in to) and those they definitely would not want (e.g., a tracker with weak privacy protections).
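
To make the heuristic concrete, here is a minimal sketch in Python (the names and structure are my own illustration; IE8’s actual implementation was never published):

```python
# Illustrative sketch of a counting heuristic like the one described above.
# Names are invented; IE8's real implementation is not public.

THRESHOLD = 10  # distinct first-party sites before a third party is presumed to track

class FilteringHeuristic:
    def __init__(self, threshold=THRESHOLD):
        self.threshold = threshold
        # third-party domain -> set of distinct first-party sites referencing it
        self.seen_on = {}

    def observe(self, first_party_site, third_party_domain):
        """Record that a page on first_party_site pulled content from third_party_domain."""
        self.seen_on.setdefault(third_party_domain, set()).add(first_party_site)

    def should_block(self, third_party_domain):
        """Treat the third party as a tracker once it appears across enough sites."""
        return len(self.seen_on.get(third_party_domain, ())) > self.threshold

# The false-positive problem: a widely shared mashup service trips the
# heuristic even if it does no tracking at all.
h = FilteringHeuristic()
for i in range(11):
    h.observe(f"site{i}.example", "widgets.example")
print(h.should_block("widgets.example"))  # True -- blocked despite not tracking
```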

Looking at how InPrivate Filtering was “removed” tells you a lot about the state of the discussions inside Microsoft. They obviously occurred shortly before IE8 was to ship, because Microsoft didn’t really remove InPrivate Filtering; all it did was remove the check box for persisting it across browsing sessions. This was the absolute smallest change they could make so as to avoid potentially breaking IE8 and delaying its shipment. And, in fact, anyone with a little knowledge (or the ability to do a search with Bing or Google) could find and modify the registry entry to turn on InPrivate Filtering all the time. I did this and ran that way on my own PCs until IE9 became available. What I found is that the feature broke so many websites that I refused to turn it on for other members of my family. Putting myself in the place of the Microsoft executives debating whether to ship the full InPrivate Filtering, taking into account both the input from the advertising community and the evidence that the feature had usability issues, and being so late in the shipment cycle for IE8, I would have pulled InPrivate Filtering from the release and made it a goal to design a better solution for the next release. And that’s exactly what Dean (Hachamovitch) and company did in IE9. Buried in IE9’s Tracking Protection List (TPL) feature is the old InPrivate Filtering (in all its glory). It is now the “Your Personalized List” feature within TPL. However, you are far better off, and will have a much better browsing experience, selecting a list developed by third parties that apply some intelligence and criteria to the selection process. You get to choose which list, from ones that are extremely exclusive to others that allow tracking by sites that conform to a set of industry privacy principles. The latter addresses the objections the advertising industry raised about the IE8 feature, but it is your choice. The only thing left for Microsoft to do is enable TPL by default, something they’ll likely do once there is industry momentum behind a particular list.

So I don’t agree with Chris’ characterization of what happened around IE8 InPrivate Filtering as “evil”. Microsoft clearly would have been better off had the debate occurred before introducing the feature into beta. And I really wish they had been more forthcoming in explaining why the feature was removed and what they were going to do about it. I think that is really Chris’ objection to what happened. Had Microsoft come out and clearly said “InPrivate Filtering needs to be redesigned to address concerns raised by the advertising industry, including our internal people, as well as usability concerns discovered during beta” he would have been disappointed but understanding. This is one area where I agree with Chris on the topic of lawyers and other spin doctors. Companies have let the lawyers et al. control how candid they are in public communications, usually to their detriment.

A last area is that of turning on SSL by default for website access, something I addressed last April. Here Chris and I are in almost 100% agreement. So far Gmail remains the only major site to have done this. Facebook, Twitter, and Hotmail now support always using SSL, but they don’t do so by default, and they (Hotmail and Facebook) hide the feature rather deeply in their settings, where typical users will have trouble finding it. Again, I understand some of the usability, engineering, and cost reasons for not having gone straight to turning on SSL by default as Google did for Gmail. Cost-wise, of course, it requires many more web servers to handle the same number of users with SSL (https) as without (http). And it takes time to roll out all those new servers. But there are other considerations. For example, in the case of Hotmail, turning on SSL for the website broke access by Outlook, Windows Live Mail, Outlook Express, etc. Microsoft has to wait for a high-enough percentage of users to upgrade to newer versions that work with SSL turned on before it would want to turn on SSL by default. But it is important that having SSL turned on by default become the norm ASAP. Earlier this week I noticed that my cousin’s ex-wife’s Facebook account had been compromised. She kept changing her password but the account was soon compromised again. I discovered she was using public WiFi hotspots for most of her access and knew nothing about the SSL feature in Facebook. She also didn’t know what a strong password was. I told her how to turn on SSL in Facebook and explained strong passwords. Then I asked her what email provider she uses: it is Yahoo. Sigh, the only one of the big 3 that doesn’t offer SSL at all (except for login). I’m afraid her privacy will continually be at risk because there is no practical way to get her, or most other consumers, into a mode of communicating securely all (or even most) of the time. Unlike the other two areas, this is one where control of the situation is almost completely in the hands of the websites and doesn’t require changes to the rest of the ecosystem or major changes in user behavior. Every website should have SSL on either by default, or with a very easy to find (“in your face”) way to enable it if there are mitigating circumstances (like Hotmail’s Outlook issue). So I agree with Chris that this is an area where the industry’s behavior is just inexcusable.
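
For what it’s worth, the mechanics of SSL-by-default are the easy part; the server capacity and legacy-client problems described above are the hard part. Here is a minimal sketch in Python of the standard pattern (illustrative only; real sites do this at the web server or load balancer):

```python
# Minimal sketch of "SSL on by default": the plain-HTTP side of a site serves
# nothing itself, it just permanently redirects every request to HTTPS.
from http.server import BaseHTTPRequestHandler, HTTPServer

class RedirectToHTTPS(BaseHTTPRequestHandler):
    def do_GET(self):
        host = self.headers.get("Host", "example.com").split(":")[0]
        self.send_response(301)  # permanent redirect
        self.send_header("Location", "https://" + host + self.path)
        self.end_headers()

    do_HEAD = do_GET  # redirect HEAD requests the same way

if __name__ == "__main__":
    # The HTTPS side would also send a Strict-Transport-Security header so
    # browsers remember to use HTTPS and never retry plain HTTP.
    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()
```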

Finally, I want to address Chris’ belief that the lawyers are in charge. Sorry, no. But they do have a lot more influence than they used to. You can blame governments. When I joined Microsoft in the mid-90s it was rather hard to find a lawyer. Basically, if you were doing a contract, filing a patent, buying a company, or doing some other legal process you went and contacted them. But lawyers didn’t come to product design meetings, marketing meetings, staff meetings, etc. After the U.S. DoJ filed its anti-trust suit against Microsoft you almost couldn’t have a meeting without a lawyer present (and how much do you want to bet that the increased scrutiny of Google is leading to the same thing there). It’s more balanced at Microsoft these days, but lawyers have far more visibility and input than they used to. Still, the General Manager, CVP, or more senior executives get to make the actual decisions. The lawyers are advisors making sure you understand the risks of your decisions, not actual decision makers. And rarely is their advice black and white. If they are good and you are smart then you pay a lot of attention to their input, but in the end one can’t abdicate decisions to them. A company that does is doomed. I think what Chris is seeing is that companies are now tending to hide behind lawyers more in public. How much time does an executive have to spend working with governments, regulators, etc.? I will talk, candidly, to customers and analysts until I’m blue in the face. I’ll be a little more guarded with the press, but I’m happy to talk to them at appropriate times. But talking to government bodies holds little interest for me (or most other technology executives). You need specialists to do that job, and one of those specialties is Attorney at Law.

Chris and I agree on what we want to see happen, and I even agree with him that companies often live in complex environments with conflicting pressures (though I attribute more of these to practical considerations while he attributes them to “the suits”).  Where we mostly differ is on the hyperbole.


Will Microsoft buy Nokia, redux

Danske Bank has re-ignited speculation that Microsoft will buy Nokia, or more precisely Nokia’s mobile phone operations. It isn’t out of the question, but I don’t buy the “where there’s smoke there’s fire” argument that some are making. Back in the 90s a rumor would hit every few months that Microsoft was going to buy Sybase. Not only weren’t we in talks to do so, we weren’t even discussing the idea internally (ok, we did discuss it once, but that was not one of the times the rumors surfaced). So I personally know that people invent these things, either to manipulate a company’s stock, or in hopes that their pure guess becomes reality and earns them “analyst of the year” honors or a Pulitzer Prize, or for some similar motivation.

Microsoft will acquire Nokia if, and only if, (a) it determines that the rest of the ODMs are beyond hope of ever sufficiently backing Windows Phone and (b) Nokia has proven that it can produce phones that will beat the combined Apple + Android ecosystem. Add Google’s purchase of Motorola, so that Microsoft wouldn’t be the company introducing the notion of being both partner and competitor into the ecosystem, and you have a formula that might appeal to Steve Ballmer.

The dynamics have indeed changed enough that, unlike in previous postings, I’m no longer a total skeptic of the acquisition. Acquisitions are messy and often unsuccessful affairs. Even in the best case Microsoft would have to worry that attempting the acquisition would stall Nokia’s progress for a year or more. And that would be fatal. So some skepticism remains.

I’ve recently commented on how neither Samsung nor HTC is showing leadership on Windows Phones (i.e., their best features are introduced first on Android phones, and their Windows Phones are warmed-over Android designs). This was true both for the generation introduced with Windows Phone 7 (pre-Nokia/Microsoft relationship) and the one recently introduced with Windows Phone 7.5 (post-Nokia/Microsoft relationship), so we know this isn’t simply because of Microsoft’s relationship with Nokia. Combine that with how little Samsung and HTC seem to be doing to market their Windows Phones (something Microsoft reportedly gave them a lot of money to do) and I imagine Microsoft could once again be questioning its reliance on third parties.

Next, look at the success Nokia is having with the Lumia family. Even at this early stage it is clear that Nokia can build devices that stand up to Apple (and the Android cartel). As Nokia’s momentum around Windows Phone builds I have little doubt that the results will be spectacular.

In other words, it may be that the conditions under which Microsoft would seriously consider acquiring Nokia’s mobile phone operations have been met. Microsoft could do the acquisition and continue to offer Windows Phone software to other manufacturers. But now, instead of counting on them to make Windows Phone a success, it would fully take its fate into its own hands. If Microsoft is able to achieve success then other manufacturers will ride its coattails. If not, well, what other manufacturers do really doesn’t matter. For those who think the scenario of being both competitor and partner (or at least supplier) is far-fetched, consider this: if Apple were to license iOS, don’t you think that Samsung, HTC, and a dozen others would very quickly produce iOS devices despite having to compete with Apple’s iPhone?

On a closing note, if Microsoft is going to acquire Nokia then the recent changes around Windows Phone leadership make even more sense. It could be that Andy Lees is prepping to lead the acquisition and integration of Nokia. But that is entirely speculation on my part.


What’s behind Terry Myerson replacing Andy Lees as head of Windows Phone

I was as surprised as anyone at today’s news that Terry Myerson is replacing Andy Lees as head of Windows Phone. I don’t know if Steve Ballmer was dissatisfied with Andy, though I can see reasons (beyond the obvious) why he should be. Or if the evolution of the former Mobile Communications Business into the Windows Phone Division over the last few years has made the job less of a match for Andy’s skills. Or if Steve was faced with having an excess of senior executives in Windows Phone and a pressing need elsewhere. Or if this was a preparatory step to merging the Windows Phone and Windows divisions. Or all of the above.

Before taking over the Mobile Communications Business (MCB), Andy very successfully ran marketing for the Server and Tools Business (STB). Before that he held other marketing and sales roles at Microsoft. I honestly found it a little odd that he was given ownership of MCB since he had no product experience, but Microsoft had entered a period in which Steve was favoring marketing leaders over engineering leaders for key product roles.

It took Andy a fairly long time to recognize that Microsoft’s mobile business was going down the wrong path and to put in place the “Windows Mobile 7” reset that created Windows Phone. It is this delay that allowed the iPhone to solidify its lead, and even more importantly allowed Android to enter and take overall market leadership. Along the way the MCB business made numerous other mistakes, the biggest of course being Kin. MCB wouldn’t kill Kin because of commitments it had made to Verizon Wireless, yet by allowing Kin to ship in such a non-competitive state they undoubtedly damaged the relationship with Verizon every bit as much as if they’d cancelled it. Meanwhile, I don’t think there is anything they could have come up with that better said “Microsoft neither gets it nor is competent to build mobile devices”. Also, in the original plan Windows Mobile 6.x would continue life for a number of years after the Windows Phone 7 launch in order to address market segments that WP7 did not target. Somewhere along the way this plan was abandoned. But in the meantime lots of effort was wasted (e.g., an app store that never amounted to anything, multiple attempts to get Over The Air updates to work, etc.) on a dying business. In another marketing gaffe they actually sort-of renamed Windows Mobile 6.5 to be Windows Phone. I never got the point of doing that since they had a real sea change coming with Windows Phone 7 (and all the premature naming could do was hurt it). Fortunately people quickly forgot about the Windows Phone naming for Windows Mobile 6.5 and very quickly associated Windows Phone with Windows Phone 7 and later. There are other things, but the point is that even as the Windows Phone 7 team moved forward strongly to correct the earlier mistakes of the Windows Mobile world, the rest of MCB continued to bounce from large disaster to small. And so Andy Lees gets to bask in the glow of Windows Phone 7’s (partial) success, but he also has to accept responsibility for all of MCB’s failures during his tenure.

Meanwhile, if you take a look at MCB’s org structure over the years you see something interesting. At the time of the WM7 reset it consisted of an engineering team that would go on to build Windows Phone, a team devoted to Pink/Kin, a team devoted to mobile services for multiple platforms (including Symbian, iPhones, etc.), and a team devoted both to working with device makers and carriers and to continued Windows Mobile 6 development. Plus a marketing team, of course. Terry Myerson was brought in to run the Windows Phone team. Shortly after the Windows Phone 7 plans solidified, the services organization was blown apart and moved under Terry, with a focus on services for WP7. After Kin shipped it was cancelled, and the remnants of that team were moved under Terry. Windows Mobile 6 efforts soon ended. I haven’t followed all the organizational changes, but I could certainly sense that frustration with things like NoDo update delivery (where Terry’s engineering team completed their work but it was then up to the ODM/carrier team to drive the update delivery, and that turned into a disaster) would have led to Terry’s org taking over more responsibility for dealing with manufacturers and carriers. And so Andy Lees had gone from presiding over a number of semi-independent activities related to mobile communications to presiding over a marketing team for Windows Phone on one hand and Terry running everything else about Windows Phone on the other.

In Microsoft’s evolving “functional” product group model there is no place for an overall engineering manager role such as the one played by Terry. Take a look at the Windows organization as an example. Program Management, Development, Test, and Marketing all report directly to Division President Steven Sinofsky. IE ships separately and has its own owner, so a better generalization is to say that the Program Management, Development, and Test organizations all report in at the level of a shippable unit. Well, with MCB having evolved to be purely a Windows Phone organization, Terry’s role as just its engineering leader was (from an organizational philosophy standpoint) becoming superfluous. The natural evolution had to be either to force Terry out and have the functions report directly to Andy Lees, or to have Andy take on more or different responsibility with Terry becoming the full owner of Windows Phone. The former didn’t make sense given Andy’s history and the important role Terry has played in creating Windows Phone 7. So Steve took the second approach, but rather than expanding Andy’s role he moved Andy into a different one.

It is very easy to imagine what Andy’s new role is going to be. With a huge launch in 2012 for Windows 8 and Windows Phone 8, as well as, no doubt, related products, Microsoft could use a seasoned veteran to lead the multi-divisional launch efforts. But beyond the launch it isn’t at all clear what Andy’s role might be.

With this change it also appears that Steve’s experiment with placing marketing and sales people in charge of product organizations has come to an end. Recent appointments to President have come from the engineering ranks: Kurt DelBene for Office, Satya Nadella for STB, and now Terry Myerson is running Windows Phone (even if he isn’t a President).

Finally, with Windows Phone 8 likely adopting Windows 8’s MinWin as its base, and phones and tablets being part of a continuum, one could make a good argument that Terry and the Windows Phone organization should report to Windows President Steven Sinofsky. And this change to Windows Phone leadership might just be a preparatory step for that change. It certainly would explain not giving Terry the President title.


Update on the Nokia/Microsoft Windows Phone relationship

Although it is way too early to say anything conclusive, the evidence is growing that the Nokia/Microsoft relationship over Windows Phone is succeeding for both companies. The first of two Windows Phones introduced by Nokia, the Lumia 800, has rocketed to become the most popular Windows Phone to date despite not yet shipping in the U.S. The lower priced Lumia 710 just started to ship last week, which should add dramatically to volumes. At the same time Nokia is continuing to introduce the Lumia line in additional countries, with an expected announcement of a U.S. launch with T-Mobile coming later this week. Rumors of a Nokia Lumia with LTE support for both AT&T and Verizon coming early next year are growing. Meanwhile Nokia CEO Stephen Elop is talking about working together with Microsoft to establish Windows Phone as a preferred smartphone for businesses. Both companies have to be pretty happy with the early going in their relationship.

I’ve been thinking more about what is so different about the Microsoft/Nokia partnership vs. the other Windows Phone partnerships, and it is clear that Nokia’s commitment to 100% use of Windows Phone on its smartphones is the key. Take a look at Samsung’s and HTC’s Windows Phone offerings and you see that they are little more than warmed-over versions of devices those companies make for Android. Put another way, both companies take their best ideas and apply them to Android phones, then send a small team off to adapt those devices for Windows Phone. Only Nokia is applying its best ideas to Windows Phone first, and exclusively. And although Microsoft is throwing marketing dollars at all of the players, the campaigns Nokia is putting on in Europe are both more innovative and more visible than what Samsung and HTC are doing in any of their markets (including the U.S.). This is why Nokia well deserves to have more influence with Microsoft than any of the other players in the Windows Phone ecosystem.

It would be easy for a Samsung or HTC to have a relationship with Microsoft similar to what Nokia currently has (and indeed, before it became the initial launchpad for Android phones, HTC did have such a relationship); they would just have to put Windows Phone first. But neither will. Samsung is not only committed to Android, it also has its own Bada smartphone platform. And HTC is so committed to offering its own distinctive user experience that, if anything, it would try to add its own homegrown platform (à la Bada) rather than focus on Windows Phone. At least with Android HTC feels it can customize enough to have a very HTC-specific user experience. Both Samsung and HTC will stay in the Windows Phone market because, for very little incremental engineering work, it provides them with a nice incremental revenue stream. But absent a sea change (e.g., Apple’s patent assault on Android preventing them from shipping Android devices for an extended period, or their worst nightmares around what Google does with its wholly owned Motorola subsidiary coming true) they will never again take a leading position with Windows Phone. Most likely the real challengers to Nokia will be smaller Asian manufacturers who find the Android world too crowded and elect to work with Microsoft on Windows Phone as an alternative. In other words, someone who wants to be the next HTC and will ride Microsoft’s (and Nokia’s) coattails to get there.

I keep hearing from friends that the next wave of Windows Phones from Nokia will really blow me away. Certainly the current offerings from Samsung and HTC, while very cool, aren’t enough of an advance over my Samsung Focus for me to upgrade after only one year. And the current Nokia Lumia 710 and 800 are missing some things I really want (a 4″ or greater screen, a front-facing camera). So I’ve decided to wait to see what Nokia is bringing in 2012. In the meantime it looks like even Nokia’s modest initial efforts are making great strides in the worldwide smartphone market.


Kindle Fire is no iPad

Ever since I bought an original Sony Reader, and then when I purchased an Amazon Kindle, and then again when speculation began about the screen size of the original iPad, I’ve been fascinated by the idea of a 7″ tablet. Devices like the Sony Reader and Amazon Kindle were so conveniently small and light that I could carry them anywhere, even when toting around a (moderately) heavy notebook computer, and I thought that form factor would make an ideal general-purpose tablet computer. Well, the iPad came out with a 9.7″ screen, and I’ve really grown to love that form factor. Still, my interest in 7″ devices persists. I came this close to buying a Samsung Galaxy Tab just to try a 7″ device, but was stopped by three factors: (a) I don’t much care for the Android 2.x UI, (b) there were no tablet-specific apps for it, and (c) the $500 price point is not someplace I care to go for something I feared would spend most of its life sitting in a drawer. So I waited. Now, it turns out that I’m a big fan and user of the Amazon ecosystem, so when they introduced the Kindle Fire at $199 I figured there was finally a 7″ form factor device I would try. I’ve had mine for a few days now and wanted to give an initial report.

In previous postings I’ve talked about tablets being content consumption optimized devices (whereas PCs are content creation optimized), but this is really a continuum. On one end of the spectrum are devices that are not only optimized for consumption, but are optimized for consumption of one particular type of media. The Sony Reader, Amazon Kindle, and B&N Nook are book-reading optimized tablets (but tablets nonetheless). Some personal media players also fall at this end of the spectrum. At the other end of the spectrum would be high-end desktop PCs used for things like CAD/CAM, traders’ workstations, software development, computer-generated animation, etc. Right in the middle, spanning the consumption/creation optimization line, we have a group of things that fall into the “thin and light” notebook category. The MacBook Air and the new Ultrabooks are good examples. So where do the Kindle Fire and iPad fit into this taxonomy?

The space between a dedicated media consumption Tablet and a general purpose consumption Tablet has always seemed fuzzy to me, but the Fire and iPad bring a little more clarity.  The iPad is a true general purpose consumption oriented device that approaches, though doesn’t span, the line between consumption and creation.  More specifically, the iPad is optimized for running any consumption-oriented application.  The Fire is optimized to be just a step above the dedicated media consumption devices.  Sure the Fire can run most Android applications, but the experience is really optimized for consuming media from the Amazon cloud.  And it does a wonderful job of this.  But as you try to use it for other purposes the experience rapidly breaks down.

Let’s start with something as simple as using the Fire for email. The built-in email client sucks. To begin with, it is a pure POP3 and IMAP client, meaning poor (if any) support for Hotmail or Exchange (both of which are supported on the iPad and many Android devices). There is no calendar application. There is no calendar (obviously) or contact sync. Ok, let’s go look at 3rd-party applications that support these things (particularly the ActiveSync protocol used to talk to Hotmail and Exchange). There is only one such app that has been specifically tested for the Fire, it is $19.95, it doesn’t actually claim Hotmail support (though it works), and it has a UI only a dead rodent could love. They actually make a free version available for 30 days so you can test that it actually works with your mail server before you shell out the $19.95. Not exactly a confidence builder. And I found its calendar support buggy. Given the way I use my iPad (and my wife uses hers) the Fire would not be an adequate substitute.

The application library for the Fire is also pathetically small, as there are still few applications that Amazon has tested and verified work on it. Even the built-in Facebook icon just launches the browser and takes you to Facebook’s mobile web page. True, one can change a setting on the Fire and install most any Android app, but that takes you further from the experience, ecosystem, security, and safety that Amazon is trying to (compete with Apple on and) bring to the Android tablet market. It also runs you right back into the problem that although Android has a huge application library, almost none of those apps are designed for tablets. You’re basically running a smartphone app on the tablet.

If you have any experience with another tablet or a smartphone you’ll find the Fire’s lack of GPS a real issue. Just take something as simple as running the Weather Channel app. If you are used to having the option of getting the weather for your current location, you are out of luck. Ditto for apps for finding restaurants. The lack of GPS, even more so than of a camera (which the iPad “1” doesn’t have either), just reinforces that this is a media consumption device and not a general purpose tablet. So if the Fire is not intended to be a general purpose Android tablet, and that’s what you want, why not wait until January when you’ll be able to get one for $100 that includes features missing from the Fire such as cameras and GPS?

Which brings me back to the real reason I bought the Fire: to try out a 7″ device. From a physical form factor standpoint it is nicely small. My wife’s immediate reaction, though, was that it was kind of heavy for its size, and I agree. I’m not sure that you’d notice any weight advantage carrying it around vs. the iPad 2. Moreover, I find it much harder to keep in my hands than the ~10″ form factor devices. It is actually hard to hold with two hands, and keeps wanting to slip out of one. Like the first generation Kindle eReader, there is no way to hold the Fire securely without touching a control (in the Fire’s case, meaning the screen). Hopefully the case I have on order, which will sadly add weight and size, will solve the grip problem.

But my real problem with the 7″ screen size is that it doesn’t blow away using a smartphone (where 4″ and larger screens have become popular) for most experiences. I find the 9.7″ screen on my iPad an excellent substitute for paper or a large computer screen when running the Wall Street Journal app. When I run that app on the Fire there just isn’t enough screen real estate to make the experience work. This suggests to me that 7″ screens are going to be better served by either a smartphone-optimized user experience or a dedicated 7″ user experience. And I don’t know that many app authors are really going to target separate 3.5-5″ (smartphone), 7-8″, 9-10″, and (when Macs and PCs are thrown in) large-screen user experiences. I think most will choose to force the 7-8″ world into one of the other user experiences, and if that’s the smartphone experience then it just might not be enough for people to carry around both. Reading a book on the Fire is fine, but the iPad’s larger screen is better. Watching a movie on the Fire is fine, but the iPad’s larger screen is better. Etc. Of course rumors abound that Amazon will introduce a larger screen Fire in 2012, so my comments in this paragraph are not really trying to knock the Fire concept but rather just apply to the current model (and other 7″ devices). All this said, I still think there is room for 7″ devices. From the many people I know who will still carry a notebook around in their briefcase but want a small (and light!) tablet they can throw in as well, to women who want something that will fit in a smaller handbag, the 7″ form factor offers a solution. We’ll just have to wait to see if Amazon and others can differentiate these sufficiently from smartphones to convince people to carry both.

Now that I seem to have trashed the Kindle Fire let me tell you how great it is at the thing it was obviously designed for, consuming media from the Amazon cloud.  The experience reading a book or watching a movie, or indeed buying a book or movie (or music) on the Fire is so far ahead of where the iPad is that Apple ought to be embarrassed.  When you order your Fire, Amazon associates its serial number with your Amazon account.  So once you connect to a network the rest of “registration” and setup is automatic.  If it was a gift then the giftee will have to associate it with their account, but that is a simple login step.  You immediately see your media on the screen and can start consuming it.  Purchasing new media is a breeze, something Apple has intentionally made hard for 3rd parties such as Amazon to offer on the iPad!  As an Amazon Prime customer I can stream video to the device for no extra charge, and with the ease of doing so on the Fire that has become my favorite feature.

Basically the Kindle Fire is a multi-media consumption device that doubles as a general purpose tablet. The iPad is a consumption-oriented general purpose tablet geared to running a broad array of applications. If what you want is an awesome multi-media experience at a low price, and particularly if you are pretty devoted to Amazon’s services, the Kindle Fire is great (though if you aren’t in a hurry I’d wait for a larger screen version). But if you want a more general purpose device, the iPad is not only the better choice, it is in a completely different league. This positioning may change over time, with software updates and new versions of the Fire, but it is how things shake out now.

As for my Fire, the jury is still out on how I will integrate it into my life. Certainly in situations where I am carrying around a notebook I will likely leave the iPad at home and take along my Fire. And I may prefer to leave all personal information (other than that associated with my Amazon account) off the device, making it easier to take with me in situations with a higher risk of loss or theft. Or I may find that it doesn’t fit in between my Windows Phone and my iPad (or a future Windows 8 tablet) at all, and allow it to find its way to eBay. I’ll write about its fate in future postings.


Why is Microsoft really doing ARM support?

ARM is all the rage these days. Microsoft has long participated in the ARM ecosystem via its Windows CE-based offerings. Recall that its first real success in the PDA space was with Compaq’s iPAQ, which was based on Digital Equipment’s StrongARM processor. And now Windows Phone is focused on ARM processors made by Qualcomm (and, in the future, others). But why bring mainstream Windows to ARM? Most observers suggest this is because Intel can’t seem to make itself competitive in the low-power processor space. Intel, of course, thinks it can. Why else would it have jettisoned StrongARM (which it renamed XScale after acquiring the family from Digital) just as the mobile space was exploding? That Intel has failed to deliver to date is one explanation for Microsoft’s new-found love of ARM. But that isn’t a sufficient reason. A friend who recently joined MIPS pointed out that the majority of Android tablets sold to date have had MIPS processors in them, not ARM processors! So why didn’t Microsoft just resurrect its support of the MIPS architecture (which was an original target of Windows NT) rather than bless ARM?

It’s all about the ecosystem. The ecosystem around ARM has matured into one that rivals the x86. Even as Intel (or AMD) produces x86 processors that are competitive in the low-power space for Tablets, the ARM ecosystem will march on. And Microsoft wants a piece of the action. With the number of classic PC manufacturers shrinking it wants to make sure that OEMs growing up from the mobile device market (e.g., Nokia and HTC) can extend into the Windows PC space. ARM is the architecture of choice for the larger mobile device manufacturers, and so it is the right target for Microsoft to grow its own ecosystem.

Of course it isn’t just the mobile device manufacturers that Microsoft cares about. Ever since the x86 vanquished all other CISC processors (e.g., the Motorola 68000) and the P6 (Pentium Pro) proved the x86 could keep up with RISC architectures, semiconductor manufacturers have been looking for a way to compete with Intel in general purpose processors. AMD took advantage of a licensing quirk to compete directly in the x86 market. Everyone else was pretty much out of the game. Gradually both previous industry leaders, such as former #1 (and current #3) semiconductor vendor TI, and innovative newcomers, such as Qualcomm and NVIDIA, adopted the ARM architecture so they could target the growing mobile device market. What about other leaders? Motorola was a major player in the PowerPC RISC processor market. It spun off its semiconductor unit into Freescale which, lo and behold, is now producing the i.MX family of ARM processors. Samsung (#2) produces ARM processors, as does Toshiba (#4). STMicroelectronics (Europe’s #1 semiconductor maker): ARM. All the big boys, with the exception of Intel (and IBM, but they are a special case), are focused on ARM. With so many semiconductor firms focused on ARM, a great deal of innovation is bound to happen in that world. Microsoft can’t afford to miss out on the ability to take advantage of that innovation, and so this becomes the other reason for its support of ARM.

What about the current debate over supporting so-called Desktop Applications on ARM with Windows 8? Microsoft could just assume that x86-64 based Tablets will take up the slack for customers, primarily businesses, that want to support Desktop Apps. But this penalizes OEMs who want to focus on ARM. For example, if Nokia wants to stick purely with ARM and also wants to target enterprises then it will be at a significant disadvantage to Dell, HP, Lenovo, and the other big x86 devotees out there who can either stick with x86 or bifurcate their offerings into ARM-based consumer devices and x86-based enterprise devices. Indeed some, like Acer, have already done this with Android/ARM tablets for consumers and x86/Windows 7 Tablets for enterprises. Now Acer has the choice of ARM or x86 for consumer Windows 8 Tablets too.

And what if Intel can’t really make the x86 competitive with ARM? It could always buy MIPS so that it had a competitive architecture it also owned. MIPS was an early leader in RISC processors, and an original target platform for Windows NT, and its architecture remains extremely popular for microcontrollers. Even many of the ARM licensees I mentioned also license and manufacture MIPS processors as part of communications products and various microcontroller offerings. Recent developments such as this $100 Android tablet sporting a MIPS processor show that MIPS and its licensees are capable of playing in the Tablet space. Not only would Intel immediately have a major presence in Android tablets, it no doubt could convince Microsoft to (re-)add MIPS as an architecture upon which Windows runs. (Note that absent a player of Intel’s stature on the semiconductor front, or a Dell or Nokia on the mobile device front, committing to MIPS for general purpose mobile processors I doubt Microsoft will support that architecture. The costs simply outweigh the benefits.)

And so I now add another wrinkle to the discussion about supporting Desktop Applications on ARM (Tablets or otherwise). In an earlier post I said a lot of this had to do with what ISVs are telling Microsoft about their intent to port Desktop Applications to ARM. Now I add the other side of the coin. Do Microsoft’s OEMs, new or existing, care about having Desktop Applications on their ARM devices? Or are they telling Microsoft they are perfectly happy ignoring those applications entirely (e.g., players new to the Windows ecosystem) or addressing them purely with x86 (e.g., existing Wintel players)? That’s what Microsoft is weighing.


Update: Can Windows 8/Windows Phone 8 Ship in June 2012?

A few weeks ago I explored how Microsoft could release Windows 8 and Windows Phone 8 in June of 2012.  With yesterday’s announcement that the Windows 8 Beta won’t begin until late February, people are wondering about hitting 2012 at all, let alone June.  There is little chance Microsoft will miss the holiday 2012 selling season with Windows 8.  Not having a competitive Tablet in the market in 2012 is a potentially existential problem, and would make Windows Vista’s delays, deficiencies, and ultimate market failure seem minor by comparison.  But will Windows 8 ship (more precisely, reach GA) in June, Summer, or Fall of 2012?  If you read my previous arguments then Summer seems most likely.  I can (and in a moment will) make an argument for June still being feasible, but I have my doubts.  In fact, if Windows 8 does indeed reach GA in June then I will spend the entire month of July parading around in t-shirts with Steven Sinofsky’s picture on the front and pictures of his directs on the back.

Since the late 70s the industry has been trying to figure out how to eliminate the need to put the testing burden on customers (aka “Field” or “Beta” testing).  Take a product like DEC’s TOPS-10: I really don’t recall there being any formal testing.  You would write up a change on a listing (!) of the source code; the code would be reviewed at a weekly meeting and, if approved, edited into the source tree by release engineering.  They’d do the weekly build and deploy the result on the main development system.  Testing was simply by usage.  Over time the builds would deploy to additional machines.  And then we’d send it out to customers for field test.  Basically customers were the primary means of testing.

Fast forward a decade to DEC Rdb/VMS circa 1988 and things had evolved, but not by much.  When a developer checked in a change they wrote a test for it and checked it into a test suite.  There was no test plan, and no review of the tests the developer wrote.  Some did decent positive and negative test case coverage, but many did a quite minimal job.  We would build the product and run the test suite every night, and hold a meeting each morning to review test results and assign regressions to developers.  For Rdb/VMS V3.0 I’d pushed to get a contractor to independently write a more extensive set of tests, but after several months senior management decided not to renew the contract.  They just didn’t see the value.  So we had better tests, but not nearly the test suite I’d envisioned.  Once we were done with development, off the product went to customers for field test.  Again the vast majority of testing was left to customers.  One problem with this approach is that it is hard to know when field testing should end.  You basically live with a plan that says you’ll RTM when the bugs stop being reported.
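As an aside for readers who haven’t lived this: a “positive” test case verifies that valid input produces the correct result, while a “negative” case verifies that invalid input is rejected rather than silently accepted.  A minimal sketch of the distinction in Python (the parse_quantity function is invented purely for illustration; nothing here resembles Rdb’s actual test harness):

    import unittest

    # Invented function under test, standing in for a real product change.
    def parse_quantity(text):
        """Parse a non-negative integer quantity from a string."""
        value = int(text)  # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value

    class QuantityTests(unittest.TestCase):
        # Positive cases: valid input yields the expected result.
        def test_parses_plain_integer(self):
            self.assertEqual(parse_quantity("42"), 42)

        def test_parses_zero(self):
            self.assertEqual(parse_quantity("0"), 0)

        # Negative cases: invalid input is rejected, not silently accepted.
        def test_rejects_negative(self):
            with self.assertRaises(ValueError):
                parse_quantity("-1")

        def test_rejects_garbage(self):
            with self.assertRaises(ValueError):
                parse_quantity("forty-two")

    if __name__ == "__main__":
        unittest.main()

A developer who writes only the first two tests has done half the job, which is exactly the “minimal job” problem I just described.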

One particular story highlights the flaws in relying on customers to perform product testing.  About a year after Rdb/VMS V3.0 shipped, a major customer sued DEC over a bug that caused massive data corruption.  This customer had previously been one of our best, and indeed was a field test customer for V3.0.  During the discovery process it was found that the customer had discovered the bug during field test, but the report had sat on their system administrator’s desk rather than ever being submitted to the Rdb team!  Months later they put V3.0 into production and….  The bug required a particular stress condition that only they, amongst all our field test customers, achieved.

Fast forward another decade as Microsoft SQL Server 7.0 was being readied for release.  We had separate test teams writing complete test suites of positive and negative test cases for the entire product.  We did code coverage analysis to see areas that needed more test attention.  We had several massive stress tests designed to push the limits of the product (exactly the thing that would have uncovered the data corruption problem back with DEC Rdb/VMS).  We took traces of many dozens of customer systems and ran them regularly against our builds.  We worked closely with ISVs to run their software and test suites against our builds from very early in the process.  And we had internal Microsoft IT systems running in production on builds from pre-beta onwards.  We gave out builds, called IDWs, to internal development partners (e.g., Visual Basic, Access, etc.), ISVs, customers who were part of the Early Adopter Program, book authors, and some others.  A sea-change had clearly occurred, but we still had a reliance on beta testing to finish finding bugs and help us polish the product.
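For anyone who hasn’t seen one, a stress test is conceptually simple: run many concurrent operations for an extended period and then verify the system’s invariants still hold.  A toy sketch of the idea in Python (the Ledger class is an invented stand-in; a real stress run drives the actual product at vastly greater scale and duration):

    import threading

    # Invented stand-in for the system under stress.
    class Ledger:
        def __init__(self):
            self.balance = 0
            self.lock = threading.Lock()

        def deposit(self, amount):
            # Without this lock the read-modify-write below can race;
            # that is exactly the class of bug a stress run flushes out.
            with self.lock:
                self.balance += amount

    THREADS, DEPOSITS = 50, 10000

    def worker(ledger):
        for _ in range(DEPOSITS):
            ledger.deposit(1)

    ledger = Ledger()
    threads = [threading.Thread(target=worker, args=(ledger,))
               for _ in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # The invariant check: every deposit must be accounted for.
    expected = THREADS * DEPOSITS
    assert ledger.balance == expected, (
        "corruption: %d != %d" % (ledger.balance, expected))
    print("stress run passed:", ledger.balance)

The Rdb corruption bug described earlier was of precisely this flavor: it only manifested under a level of concurrent load that no in-house test ever generated.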

Jump forward yet another decade and there had been more evolution (again using SQL Server as an example).  Attention to specifications, and to having complete plans prior to any coding, is more evident and reduces the amount of rework (which tends to generate more bugs than the original coding) needed later in a project.  Features can’t be checked in until they are completed end-to-end (e.g., tools support, replication support, etc.) and fully tested with both positive and negative cases (for 7.0 we required positive test cases at check-in, with complete negative test cases coming later).  More tools (e.g., PREfast) are available that eliminate coding errors.  The testing processes developed for SQL Server 7.0 have continued to evolve.  Builds are kept close to shippable state at all times (whereas with SQL Server 7.0 we would allow the bugs to back up and then spend months fixing them in a separate stabilization milestone in order to have a beta- or release-quality build).  And the SQL Server team has eliminated a formal beta.  The IDW builds of old have evolved into Community Technology Previews (CTPs) that are made available from time to time during the course of development.  And the intent of the CTPs has evolved from being primarily about finding bugs to being primarily about getting people to start using new capabilities as soon as possible.  Yes, there remain some intensive customer testing programs, such as TAP/RDP (the successor to EAP), and ISV engagement.  But what they’ve tried to do is get away from a philosophy where you roll the product into a release-like thing called “Beta”, based on some arbitrary date, and throw it over the wall to customers for finding bugs.  It’s this philosophical change that is the real difference between CTPs and Beta.
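To make “keep the build shippable” concrete: the discipline is typically enforced mechanically at check-in time, with a change rejected unless the product still builds and the test suite still passes.  A hypothetical gate sketched in Python (the build and test commands are invented placeholders, not actual Microsoft tooling):

    import subprocess
    import sys

    # Hypothetical check-in gates; the command names are placeholders.
    GATES = [
        ["build.cmd", "/rebuild"],          # product must build cleanly
        ["run_tests.cmd", "--suite=full"],  # positive and negative suites
    ]

    def checkin_allowed():
        for command in GATES:
            if subprocess.run(command).returncode != 0:
                print("gate failed:", " ".join(command), file=sys.stderr)
                return False
        return True

    if __name__ == "__main__":
        sys.exit(0 if checkin_allowed() else 1)

Gating every check-in this way is what lets bugs be fixed as they are introduced, rather than piling up for a months-long stabilization milestone.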

There are major differences between the processes used by SQL Server and by Windows (e.g., Windows’ controlled information release policy precludes SQL Server’s CTP process), but the Windows processes go even further in trying to avoid having customers be the means by which bugs or design deficiencies are discovered.  Attention to design and specifications takes dramatic precedence in the current Windows processes.  Ask most developers within Microsoft and they are shocked by how few actual coding weeks and milestones there were in the Windows 8 development cycle.  Amongst other things this forces work to be done in smaller chunks and hopefully keeps the overall product from being destabilized.  The amount of testing that goes on after each coding milestone is extensive.  Customer input is something they collect early in the project, not after they are nearly done.  You see this in how often they quote telemetry in the Building Windows 8 blog, though telemetry is not the only input they rely on (e.g., usability testing, feedback from the previous release, etc.).  Thousands of Microsoft employees have no doubt been using Windows 8 as their primary operating system for many months, and by the February beta it will probably be tens of thousands.  In other words, most of the heavy lifting of making sure a release of Windows is ready for customer production use is done long before it goes to beta.  Beta, in actuality, is becoming a formality and not a required part of a release process.  In the extreme you see this with on-line offerings, such as from Google, where things stay in beta for years and the primary purpose of the label is as a way to say “we reserve the right to pull the rug out from under you at any time”.  So what does “Beta” mean for Windows 8?

Beta is mostly about getting the ecosystem ready for the release.  This started with the Developer Preview (and earlier private builds to key partners), but Beta will be where they say to partners “we are done, get yourselves ready”.  Sure they will continue to work on bugs, both internally discovered and reported by beta testers.  But that is actually a secondary goal.  They will avoid changes that impact OEMs, developers, or documentation (because of localization lead times), or anything else they feel could destabilize the release.  Basically Windows 8 Beta is more a traditional RC0 than a traditional beta.  So, can you go from RC0 to RTM in 3-4 months?  Of course.  And that, combined with what I talk about in the earlier blog entry, would still allow for June availability.

The fly in the ointment of my justification that June is still possible is how dramatic some of the changes in Windows 8 are.  I’ve been playing out two scenarios in my head (as though I were running the project).  The first is that feedback (on both the new user experience and the new application model) from the Developer Preview and other activities is of sufficient quality and quantity that I’m comfortable not having any runway to react to input once the Windows 8 Beta starts.  That is the scenario in which a June (or shortly thereafter) RTM occurs.  The other scenario is that I’m uncomfortable with the feedback received to date.  In that case I would want to build in some opportunity to react after the Beta, leaving me to target more of a September RTM and GA.  The Windows philosophy is clearly more aligned with the first scenario.  But only Microsoft has the data to know if they are on track or might need to go with the latter scenario.

I think it is most likely that Microsoft received enough feedback from earlier activities to know Windows 8 needed a bit more work, but not that it required a traditional beta.  This is all but confirmed by the late February beta date being so far beyond what most observers expected.  It seems like Microsoft added another milestone in order to fully accommodate the feedback now rather than ship the beta on some original schedule and then add a beta update to pick up changes it was still working on (which was the traditional approach).  But I’m also thinking that even if they make a June RTM it will be late June and thus GA will move into July (or later if my earlier assertions about changes to the end game are incorrect).


Will Windows 8 on ARM support Desktop Apps?

Paul Thurrott has thrown some fuel on the fire regarding Microsoft’s plans to allow Desktop (aka existing) apps to run on ARM-based Windows 8 tablets (and notebooks).  Basically he is reporting that one reliable source is telling him that Desktop apps will not be supported, while another is telling him that they will be supported.  Which source is right?  Probably both.

Consider that Microsoft has (at least) three different audiences for Windows 8 tablets: Entertainment-oriented Consumers, Power-User Consumers, and Enterprises (including small business).  Many consumers, like those who purchase an Apple iPad or Amazon Kindle Fire, are obviously willing to sacrifice the ability to run legacy apps in order to have an optimized consumption-oriented experience.  Meanwhile other consumers are avoiding existing tablets specifically because they want to be able to run one or more legacy applications.  Most importantly, Enterprises from small to large have legacy business applications that they consider critical to put in the hands of their employees.  Windows 8 tablets become extremely attractive to them as a way to deploy tablets with new applications while still giving employees access to legacy applications.  And as everyone is already aware, Windows has long come in multiple editions.  Thus Microsoft has the opportunity to create different editions of Windows 8 to address these varying user requirements.

In comments on another of my posts people have brought up the question of whether Microsoft will price Windows 8 in a way that allows OEMs to offer consumer tablets that are price-competitive with Android and iOS tablets.  Once again I go back to the editions point.  Microsoft must have an edition of Windows 8 whose OEM pricing takes the cost of the operating system largely out of the competitive equation.  We’ve seen Microsoft do this before.  Recall that Netbooks were originally conceived as Linux devices, priced so cheaply that they couldn’t absorb the cost of Windows in the Bill of Materials.  Microsoft responded with a lower-priced Windows Starter Edition, and pretty quickly over 80% of Netbooks were shipping with Windows rather than Linux.  We’ve also seen Microsoft offer the Office Home and Student Edition at a very attractive price, constructed of applications and licensing conditions that let consumers stick with Office without significantly cannibalizing sales of higher-priced editions of Office to businesses.  SQL Server Express Edition is free even as Enterprise Edition costs tens of thousands of dollars per processor.  Microsoft knows how to use pricing and licensing to simultaneously open up markets and maintain its margins.  And they will try to do this with Windows 8 on Tablets (ARM or x86) as well.

A very sensible licensing structure for Microsoft would be to offer OEMs, as an entry-level product, a low-priced Tablet Edition that does not include support for Desktop apps.  At the same time most if not all other editions of Windows 8 would ship with the ability to support Tablets as just another configuration, and thus include the ability to run Desktop apps on them.  That would be true for x86 or ARM.  So what you would see is that the base price for Windows 8 Tablets includes the Tablet Edition, but if you wanted the ability to run Desktop apps you could upgrade to “Windows 8 Home Premium”, and if you wanted the ability to also hook up to your corporate Domain you would upgrade to “Windows 8 Professional” or “Windows 8 Enterprise”.  In other words, the model would be just as it has been with Netbooks, which are offered with Windows Starter Edition but where many users upgrade to one of the more feature-rich editions.  In the case of Windows 8 Tablets the Metro-only vs. Metro-plus-Desktop application capability is a very natural line for Microsoft to differentiate on.
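To make the differentiation concrete, here is a tiny sketch in Python of how edition-gated capabilities would line up under my proposal (the edition names echo Windows 7’s naming and my speculation above; this is not an announced SKU list):

    # Hypothetical mapping of Windows 8 editions to capabilities.
    EDITION_CAPABILITIES = {
        "Tablet":       {"metro_apps"},
        "Home Premium": {"metro_apps", "desktop_apps"},
        "Professional": {"metro_apps", "desktop_apps", "domain_join"},
        "Enterprise":   {"metro_apps", "desktop_apps", "domain_join"},
    }

    def can_run(edition, capability):
        return capability in EDITION_CAPABILITIES.get(edition, set())

    assert can_run("Home Premium", "desktop_apps")  # upgrade unlocks Desktop
    assert not can_run("Tablet", "desktop_apps")    # Metro-only entry SKU
    assert can_run("Professional", "domain_join")   # enterprise upgrade path

Whatever the actual SKU names turn out to be, the structure mirrors what Microsoft has done for years: capabilities and price rising together along the edition axis.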

If Microsoft does this successfully it will be healthy for their bottom line.  They will make less on pure consumption devices, but this is an area that is nearly all new business.  Meanwhile Tablets that are truly replacing notebooks, as well as Tablets used by Enterprises, will yield Microsoft the same revenue and profit they get out of Windows 7 PCs.  Over the long haul, as more apps move to the Metro model, Microsoft will have to find new ways to differentiate between editions if it wants to maintain margins.  But that is a problem for Windows 9, 10, and 11.  For Windows 8 the Metro/Desktop App split would be perfect.

The really interesting question is around which legacy apps will actually run on ARM-based systems.  Will, for example, SAP recompile their client bits for ARM?  How many versions back will they do this?  It’s an important question because an Enterprise may not want to upgrade to a newer version of SAP just to get ARM support.  What about Intuit’s products?  For small businesses the ability to run QuickBooks, and for consumers Quicken, would be huge in deciding whether they want support for Desktop Apps.  What are these and other ISVs telling Microsoft?  If there is one fly in the ointment of my analysis it is that what the ISVs tell Microsoft could alter its plans.  If ISVs only want to support ARM with new Metro apps then Microsoft could decide not to offer ARM Desktop App support.  But I doubt this is the message they are getting from ISVs.  If we are going to see ARM-based Notebooks then ISVs will want to be there, and that means the apps would work on Tablets as well.

So the bottom line is that we are likely to see Desktop App support for ARM-based Windows 8 systems, but that most consumers will experience Windows 8 Tablets using a Desktop App-free Windows 8 Tablet Edition.
