What will the follow-on to Windows Server 2012 be called?

Let me start with “I have no idea”.  Don’t like that answer?  Well, I’d bet on Windows Server 2012 R2 based on post-2003 release history.  But let’s explore this further.

With the latest revelation that Windows Blue is likely to be called Windows 8.1, I ended up in an amusing Twitter stream of speculation about Windows Server naming.  Microsoft has maintained a consistent naming convention for server (of all flavors) releases since 2000.  I’ll give some history of this, speculate on what happens this year, and speculate a bit on the future.

Back in the mid-90s, when Microsoft decided to name both client operating systems and Office (e.g., Windows 95 and Office 95) by year rather than version number, Jim Allchin declared that his products (Windows NT, SQL Server, etc.) would never adopt that naming convention.  I want to say that Jim said “Over my dead body” but my memory is fading enough that I won’t promise he actually said that.  Anyway, Jim didn’t die when, 5 years later, he agreed to adopt the calendar-based naming convention for Microsoft’s server products.

Prior to 2000 each of the server products was on its own development schedule and had little in the way of coordinated marketing.  But as it turned out, 2000 was going to be a big year for product launches, with most of the server products bringing something to the table.  Paul Flessner, who had become head of the Server Applications Division, and others believed that marketing individual products to IT departments was no longer the way to go and that Microsoft should put more emphasis on marketing the family over the individual products.

There were two problems with marketing the 2000 wave of products as a single family.  The first was that it wasn’t really developed that way.  Of course there were the usual efforts by each team to coordinate certain features and usage scenarios, but coordination wasn’t as front and center in the product planning as it would have been had the family concept existed when we started, rather than coming out of marketing later.  The second, and easier to fix, problem was that product naming conventions gave little guidance as to the relationship between the products.  That could be fixed, and was fixed, by adopting a common family naming convention.

This was the coming-out party for .NET, and so the naming convention that was adopted was .NET yyyy.  Yes, oh so briefly, SQL Server 8 had become SQL Server.NET 2000.  All the other server releases followed this pattern.  My SQL Server 8 developer conference was hijacked to become the .NET Developer Conference (“Featuring SQL Server.NET 2000”), a mistake for which I will never forgive Paul 🙂  It didn’t take long before the .NET was dropped from the names of all the products, but they retained the convention of using the year rather than a version number to indicate specific releases.  So SQL Server 2000, Commerce Server 2000, Exchange Server 2000, etc.

Over the years this has been rejiggered in a number of ways.  For a while all these products were marketed together as the “Windows Server System”, which included Windows Server itself as well.  And while no serious attempt was made to force all the products to release in waves, as the Office team does, there has been an increased level of coordination.  For example, STB maintains a set of criteria that all of its products must meet in order to be allowed to ship.  Think of it like an internal logo program that is used to achieve some consistency across server products.

Initially management of the Server family was purely virtual.  While Paul Flessner ran the Server Applications Division (which at the time owned Exchange Server too) he did not own Windows Server (which remained in the Windows organization) nor the Developer Division.  But he did have an overall business ownership role across them.  Microsoft is not a good organization for matrix management, so eventually all of the servers were consolidated into what we now know as STB, with Eric Rudder as its leader.

Bob Muglia took ownership of Windows Server, reporting to Eric, and sought to accelerate its development (which had been slowed by the Windows team’s effort to transition the client OS to the NT kernel, followed by the Longhorn debacle).  Bob instituted a system in which Windows Server would release more frequently than Windows itself, alternating releases without kernel changes with those in which Windows had a release of its own (and thus a revised kernel).  These non-kernel releases were given the designation R2, the first of which was Windows Server 2003 R2.  In time the kernel vs. non-kernel differentiation became meaningless, and R2 simply became a way to designate a minor (or perhaps more appropriately positioned as a “.5”) release.  Other server products adopted the convention, although they have not used it very extensively.

Without any significant changes in branding, we have STB on-premise products with a fairly clear naming convention.  The product is either the product name followed by the year, or the previous product release name followed by R2 (or R3, R4, etc., though that has rarely happened).  That means a Windows Server Blue would be named either Windows Server 2013, Windows Server 2014, or Windows Server 2012 R2.  Let me explain the 2013 vs. 2014 thing.  There is some concern that a product shipping in the 4th calendar quarter of the year would seem dated just a few months later if it used the actual year of release in its name.  So sometimes Microsoft will use the subsequent year in the name.  But my bet on Windows Server Blue is that it will use the Windows Server 2012 R2 name.

Although I’ve heard some rumbling that Windows Server Blue might actually bring more dramatic improvements to Windows Server than Windows Blue is bringing to Windows 8, I have my doubts that they could bring so much to the table that they’d want to use a major version name.  Not only that, but given the conservative upgrade types that most IT leaders are, Microsoft might want to send a message of stability rather than one of change.  Depending on what kinds of changes are in Windows Server Blue, they may be able to get IT departments to switch deployment efforts midstream from a Windows Server 2012 to a Windows Server 2012 R2 effort.  It is unlikely they could get them to switch midstream to a 2013-named release.  So I think the dynamics point towards Microsoft using the R2 naming convention in this case.

There is a third reason for Microsoft to use the R2 convention, which is that the entire server product family branding scheme is getting rather dated.  At some point Microsoft will want to re-brand all of the products to better communicate their appropriateness for cloud and hybrid environments in addition to on-premise ones.  Is 2013 the year in which they will do this?  Overall for the company it is a year of consolidating the position they established with the 2012 product wave.  So maintaining a notion of stability, where changes are more incremental and meant to mature the 2012 wave, makes the most sense.  But if I factor out Windows 8 and focus on how solid releases like Windows Server 2012 and SQL Server 2012 are, then a rebrand of the server products seems more reasonable this year.  Still, I’m betting against it.  Why?

As many have noted, the value of the “Windows” brand is in decline.  For one thing, it doesn’t carry much cachet with consumers any more.  For another, Windows 8 isn’t your father’s Windows, and future versions will make this even more obvious.  So Microsoft could in fact be on the cusp of a more dramatic re-branding than just changing how server products are named.  If that is in the cards then it makes no sense for the server products to change before the company has figured out an overall re-branding.  And given that I think they won’t want to make such a disruptive change in 2013, the server naming conventions likely won’t change in 2013 either.

For me the bottom line is that Windows Server 2012 R2 is the most likely name for Windows Server Blue, followed by Windows Server 2013 as a second possibility.  Anything more dramatic wouldn’t be a total shock; I just don’t expect it.  Anything further afield, but not part of a major re-branding, would just be silly.

(Update: I forgot to mention that Windows 2000 originally got that name because it was supposed to be the follow-on to Windows 98 SE.  When the team was unable to finish all the app and driver compatibility work, a final follow-on in the Win 9x family was added to the plan.  Since Windows 2000 had already taken that name, the Windows 9x release was called Windows ME.  So the Windows 2000 naming was not the result of the server naming scheme described above, but rather the bridge from the client OS use of that scheme to the server use of it.)


Apple TV and Xbox 720 to usher in new era of A/V-control interfaces

As a computer guy I’ve always hated the control interfaces that the Consumer Audio/Video (A/V) Equipment companies have put on their components.  They just can’t seem to get it right.  The Zenith-invented ultrasonic remote control interface dominated the field for over 20 years until it was replaced by Infrared (IR) around 1980.  IR remains the primary way our remote controls talk to  A/V equipment and pieces of A/V equipment talk to each other (e.g., how a standalone DVR changes the channel on your cable set-top box).

I’ve long wondered why the Consumer Electronics industry hasn’t updated to more modern control technologies than IR.  It isn’t that they haven’t made (fairly minor) stabs at it.  Occasionally you’ll see a piece of equipment from the 1990s sporting an RS-232 interface, though usually it is more for programming purposes than actual control.  More recently equipment has been sporting Ethernet and Wi-Fi connectivity, though in most cases that is for data (e.g., streaming a movie from the Internet) and not control.  Radio Frequency (RF) remotes have also had a niche status; although one might have expected RF to replace IR long ago, it hasn’t made much of a dent.

If you don’t understand the problems with IR let me give a brief summary.  It requires line-of-sight to use wirelessly, and expensive repeaters, extenders, blasters, and other paraphernalia for interconnecting equipment.  It is horribly unreliable, with devices interfering with one another as well as cabinet design and room lighting causing problems.  It is slow, requiring you to hold a remote pointed at the same place for several seconds as it spits out a sequence of commands to operate multiple devices.  This last point caused my mother so many problems (she’d end up with the TV on and the DVR off, or vice versa, and be unable to get them back in sync) that we took away her DVR.  It really is time for this ancient and insufficient technology to die.
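To put rough numbers behind the “slow” complaint, here is a back-of-the-envelope sketch.  The timing constants are those of the classic Philips RC-5 IR protocol (14-bit Manchester-encoded frames at 1.778 ms per bit, repeated roughly every 114 ms); the macro contents and repeat count are hypothetical, purely for illustration.

```python
# Why IR "macros" keep you pointing the remote for seconds.
# Timing constants are from the Philips RC-5 protocol; the macro
# below and the repeat count are hypothetical.

BIT_TIME_MS = 1.778          # one Manchester-encoded bit
FRAME_BITS = 14              # 2 start + 1 toggle + 5 address + 6 command
REPEAT_PERIOD_MS = 113.778   # spacing between repeated frames

def macro_duration_ms(commands, repeats=3):
    """Total time to blast out every command, with each frame
    repeated a few times in the hope the device catches one."""
    per_command = FRAME_BITS * BIT_TIME_MS + (repeats - 1) * REPEAT_PERIOD_MS
    return len(commands) * per_command

# Hypothetical "watch TV" macro: TV power, DVR power, input, channel 07.
macro = ["tv_power", "dvr_power", "tv_input", "digit_0", "digit_7"]
print(f"{macro_duration_ms(macro) / 1000:.2f} s")  # ~1.26 s of line-of-sight
# Miss one frame (someone walks past) and the TV is on but the DVR is off.
```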

Of course the Consumer Electronics industry evolves very slowly, unless given a kick in the rear by the Computer Industry.  And once again it looks like Apple will be the one to apply boot to backside.  My sources are telling me that Apple intends to introduce, along with its forthcoming Apple TV, a new proprietary control technology called FireIR.

Apple’s FireIR will reportedly continue to use IR technology, but with a new proprietary high-speed protocol to replace RC-5 and the other protocols currently used in Consumer Electronics.  The FireIR protocol will be protected by Apple patents and Apple will require all connected equipment to use it.  For example, Apple will not license FireIR for use in universal remotes that support multiple protocols.  So in addition to your remote, your CD player, DVD player, stereo receiver, etc. would all need to be replaced by units that support FireIR.  The iPhone 6 as well as future iPods and iPads would of course support FireIR, completing the perfect Apple Consumer Electronics ecosystem.

I’m sure that most of you are wondering whether Microsoft’s next-generation Xbox 720 will support FireIR, and friends on the Xbox team are telling me no.  That’s not a surprise, as it seems unlikely that Apple will license FireIR to competitors.  Microsoft is working on its own alternative using technology developed by Microsoft Research.  WiNCE (Wireless Next – Consumer Electronics) is a wireless version of RS-232 that supports very high-speed data transfers in addition to a remote control protocol.  Whereas Apple will combine FireIR control with 802.11ac for data transfer, Microsoft is hoping to supplant both with WiNCE.  Microsoft will license a subset of WiNCE to the Consumer Electronics industry, and submit that subset to the IEEE for standardization.  Of course the Xbox 720, Windows 9, Windows Phone 9, and other Microsoft products will support a richer superset of WiNCE that Microsoft hopes will lead to deployment of completely Windows-centric homes.

Personally I think that both Apple and Microsoft are crazy and that these developments are simply the ultimate example of the “greater fool” theory at work.  My understanding is that the Consumer Electronics industry doesn’t want either of them having this much power and is looking to establish its own replacement for traditional IR.  My bet is on Samsung’s proposed NG-TCOAS.  NG-TCOAS, or Next Generation – Two Cans On A String, is a universal technology that can be implemented easily and cheaply by all types of Consumer Electronics equipment.  Samsung will, of course, bake support into its own variant of Android as part of a strategy to wrest control of that operating system from Google.  But that is a story for a future blog post.


Google Reader and other customer tragedies

(This was written weeks ago but not published until now.  I’m going to largely leave it as originally written, with an update at the end.)

I’m pretty upset about Google’s decision to drop support for Google Reader.  I think it is a mistake, and one that will come back to bite them (and actually already has).  I’ll get to more details of that a little later, but first let’s examine this on general principles.

All products and services must die at some point.  Maybe, like Bonomo Turkish Taffy, they will be reborn, but at some point a company has to decide it is no longer worth offering the product or service.  Often they hold on too long, leading to failure of the organization itself.  Other times they have to look at the costs of continuing the business.  Those costs include actual costs, which may exceed the revenue received, and opportunity costs.  Opportunity cost captures the tradeoff where putting $1 into business A produces $1.20 in return while putting it into business B produces $1.80.  So you stop investing in A and invest in B.  And that probably sends A into a death spiral.
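To make the arithmetic concrete, here is a tiny sketch.  The per-dollar returns come from the example above; the five-year compounding horizon is my own hypothetical addition.

```python
# Opportunity cost, compounded. The $1.20/$1.80 returns are from the
# example above; the 5-year horizon is hypothetical.

def compounded(return_per_dollar: float, years: int) -> float:
    """Value of $1 re-invested at the same return each year."""
    value = 1.0
    for _ in range(years):
        value *= return_per_dollar
    return value

print(f"Business A: ${compounded(1.20, 5):.2f}")  # ~$2.49
print(f"Business B: ${compounded(1.80, 5):.2f}")  # ~$18.90
# Every dollar parked in A instead of B forgoes the difference, which
# is why investment in A stops and the death spiral begins.
```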

There is also the problem of management attention.  The biggest complexity around management is not the number of people in your organization nor the size (in $) of the business, it is the number of different things you have to manage.  Be good at managing 100 pet stores and you can manage 1,000 or 10,000.  Be good at managing 100 pet stores and try to simultaneously manage 100 women’s clothing stores and you will fail.  You just don’t have enough cycles to do both businesses justice.  Get into a conglomerate like GE and the complexity goes up a couple of orders of magnitude.  Those who can manage the diversity of businesses become CEOs (and get paid a lot for that ability).

But even great CEOs, let alone mediocre ones, hit their limit.  They reach the point where they can’t understand or manage the diversity of businesses.  They realize that this problem is not just theirs, but that the next level or two down in the management chain is struggling with the same problem.  They realize that they aren’t managing opportunity cost very well.  And they start to narrow the focus of the company.  And when they do that, products and services have to go.

Perhaps the most persistently popular posting on my blog is one discussing the demise of Microsoft Forefront.  Forefront TMG was a successful product, but it had to go for all the above reasons.  Forefront as a business had to go for all the above reasons.  It’s not that Microsoft couldn’t have successfully competed head-to-head in the security products business, it is that the business was at the bottom when evaluated against management distraction and opportunity cost.  Most of the capabilities live on in the businesses that needed them, could justify the investment, and had the management cycles to manage them in the context of other products.  Spam filtering in the context of an email server is not a distraction, it is a core capability.  Meanwhile, what can be lost in all of this is the customer.

Once you have even one customer you have a problem, because you can’t discontinue a product or service without pissing off that customer.  It doesn’t matter if you have 1 or 1,000 or 1 million, the reaction is going to be the same.  So TMG customers are pissed off.  When Microsoft discontinued Money, large numbers of people were pissed off.  Customers don’t care that the customer base isn’t growing, or that in fact it is shrinking.  What they care about is that they are using the product or service, that they invested heavily in it, and that there is no wholly suitable replacement.

How upset customers will be is one of the factors you have to consider in deciding to kill off a product or service, because it can impact other product lines.  If you are in the Dishwasher and Jet Engine businesses and you decide to kill off your dishwasher line then it probably doesn’t have any impact on Jet Engine sales.  If you are the supplier of both computer servers and computer storage to large enterprises and you kill off your server business it may very well tank your storage business as well.

Besides outright killing a business, are there alternatives?  Sure, you can sell or spin off the business.  It is questionable how much goodwill you retain as a result.  If you picked DEC Rdb over Oracle because you didn’t much care to do business with Oracle, but then DEC sold Rdb to Oracle, how happy with DEC were you?  If, as I’d agitated for, we had spun Rdb out from DEC as its own company, would customers have been happier?  Now instead of dealing with a large well-resourced company they’d have been dealing with a small one with very limited support resources.  One that would probably have failed or been acquired by yet another company they didn’t want to do business with.  It’s lose vs. lose vs. lose.

I actually tried to sell Forefront TMG when it became clear it was no longer strategic.  The financials of the sale didn’t make enough sense, and my argument that it was the best thing for customers was somewhat debatable.  On the good side, the pressure from my efforts led to a modest increase in funding that allowed TMG to soldier on a few more years.  Did Microsoft re-think that “sell it” idea last year when they finally decided to end-of-life TMG?  Would customers be happier if TMG were sold to a small company that would give it a modest ongoing life?

Google was long accused of having a product strategy that consisted of throwing jello at the wall to see what would stick.  Of course it’s not binary.  There is no stick vs. no-stick; everything sticks, a little.  A couple of years ago they decided to get real about their product portfolio and start trimming things that hadn’t really stuck nor had a way forward.  As a manager and as an investor (though not in Google, fwiw) I applaud them for this.  It should, over time, make them a much stronger company.  But every time they drop one of their offerings they piss off customers.  So far it has been worth the tradeoff, but is killing off Google Reader going to cause grave bodily harm?

I used to manage RSS feeds using Microsoft’s My.live.com page, one of their own initial jello-throwing efforts around the creation of Windows Live.  Of course they never reconciled having both My.live.com and My.msn.com, and eventually they killed My.live.com.  My.msn.com can actually function as an RSS reader, but while My.live.com could export your feeds as an OPML file, my.msn.com couldn’t read an OPML file!  So I created a Google account and imported my feeds into Google Reader instead.  For years now that has meant I grab my first cup of coffee, sit down at the computer, and look at Google Reader.  Moreover, it has meant that whatever else I do through the day I’m always logged into my Google account.
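For anyone who hasn’t bumped into it, OPML is just a small XML format listing feed subscriptions, which is why a reader that can’t import it strands you.  A minimal sketch of reading one (the sample content and feed URL are hypothetical):

```python
# Minimal sketch of parsing an OPML subscription list -- the format
# My.live.com exported but my.msn.com couldn't import. The sample
# document and feed are hypothetical.

import xml.etree.ElementTree as ET

OPML = """<opml version="1.0">
  <head><title>My feeds</title></head>
  <body>
    <outline text="Hal's Blog" type="rss" xmlUrl="http://example.com/feed" />
  </body>
</opml>"""

# Every <outline> carrying an xmlUrl attribute is one subscribed feed.
for outline in ET.fromstring(OPML).iter("outline"):
    if outline.get("xmlUrl"):
        print(outline.get("text"), "->", outline.get("xmlUrl"))
```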

I’ve used other Google services since creating the account so I could use Reader, but for me none are particularly sticky.  When Reader goes one thing is certain: I will no longer log in to my Google account except when absolutely necessary.  That isn’t a statement of protest, that is just a statement of reality.  Without Reader I have no reason to leave my browser logged in to Google.  It also leaves me reliant on just one other Google service, Google Voice.  Now the truth is that I use Google Voice for just one thing, to get speech-to-text versions of voicemail, and that means it isn’t a very sticky service (particularly since my mobile provider now offers speech-to-text, for a modest charge).  Meaning I could very easily end up relying on no Google services, and dropping my Google Account entirely.

Google Reader also happens to be something that is heavily used within the community of “influentials”.  So while other services that Google has dropped were met largely with a whimper, this one led to an explosion of protest.  Adding fuel to the fire, Google had driven most competitors out of the market with Reader, so there is a feeling that something nefarious is going on here.  Did Google pre-plan to monopolize the RSS Reader market so they could then kill it?  I highly doubt it.

Many have pointed out that RSS feeds are somewhat of a dying breed, because people use Twitter, Facebook, and other social networks as an alternative.  I partially agree with that assessment, though I personally have a problem with it.  I may want to follow a tech blogger’s occasional technology posting, but that doesn’t mean I want to see a couple of dozen of his tweets every day about everything from his political viewpoints (which I may or may not agree with) to what his cat had for breakfast, just so I know when he’s updated his blog.  And in fact, multiply that by the 100 or so blogs I follow and the Twitter stream becomes so large that I can’t find the announcements of blog entries (or read the tweets of those I really want to follow).  So I see RSS feeds retaining an important, if modest, niche.  At least until Twitter comes up with better filtering tools.

Hey Twitter, if you want someone to help you build an RSS feed replacement into your service, drop me a line.

From a business standpoint, why should Google have kept Reader alive?  One reason really: that “Google Account” is super-valuable.  It is a means for getting someone to use other services.  Moreover, it is the ultimate means of performing tracking.  As long as I’m logged in to a Google Account while using Google services, including search, they can track my behavior.  And because it is first-party tracking, none of the attempts to block tracking apply.  TPLs (Tracking Protection Lists)?  They don’t apply to first-party cookies.  Safari and other browsers’ outright blocks on third-party cookies?  No effect on first-party tracking.  Identities are important, and Google seems to have forgotten that.
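Here’s a rough sketch of why that matters.  Third-party cookie blocking hinges on comparing a cookie’s domain to the domain of the page you’re on; log in and use a Google service directly and Google-set cookies are, by definition, first party.  The domain check below is a simplification of what real browsers do (they consult the public-suffix list), and the domains are just examples.

```python
# Simplified sketch of first-party vs. third-party cookie
# classification. Real browsers use the public-suffix list; this
# suffix comparison is an approximation for illustration.

def is_third_party(page_domain: str, cookie_domain: str) -> bool:
    """A cookie is third party if it belongs to a different site
    than the page the user is actually visiting."""
    return not (page_domain == cookie_domain
                or page_domain.endswith("." + cookie_domain))

# An ad network's cookie embedded in someone else's page: blockable.
print(is_third_party("news.example.com", "adnetwork.example"))  # True
# Google's own cookie while you use Google search: first party,
# untouched by TPLs or third-party cookie blocking.
print(is_third_party("www.google.com", "google.com"))           # False
```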

Of course for heavy users of Google services none of this probably matters.  I doubt very many people who use Gmail as their primary email service are going to move to a competing provider.  But for the hundreds of millions of us who use Hotmail, Yahoo Mail, or an ISP’s mail and maintained a Google Account for use of various ancillary services the death of those services means we have no reason to maintain a relationship with Google.  And in the long run that will hurt them.  Not fatally of course, but potentially enough that it would have paid for them to keep services like Reader alive.

Products and services eventually have to die, as painful as it is for users.  If I weren’t a user of Google Reader I’d simply look at this and wonder if Google really did a good business analysis before making the choice.  And if they did, then I applaud them for making the hard decision, taking their lumps, and moving to put all their weight behind more strategic initiatives.  But since Google Reader was one of the most important services in my daily life, it’s hard to be that dispassionate about it.  I feel totally screwed over by Google.  It adds to my view that Google is not a good company to do business with.

So as I prepare to move on from Google Reader I’m also thinking about what would happen if I dropped my Google Account entirely.  What would be the impact, both short-term and long-term?  That re-evaluation of the business relationship is exactly what happens any time any company decides to discontinue a product or service.  What Microsoft has done with Forefront made complete strategic sense.  But there are certainly many corporate customers who no longer view Microsoft as their provider of security software, even though Microsoft still offers most of what they need.  And there are no doubt a few who became wary of choosing non-security Microsoft products over leading third-party alternatives, though that caution will pass with time.  As will much of the impact of the negative reaction to the end of Google Reader.

(And now the update.)

I’ve moved on from Google Reader to Newsblur, which overall seems like a better cloud-based RSS reader to begin with.  I went with a paid account, both because it better met my needs and because Newsblur still isn’t taking new free accounts as they struggle to scale.  I’ve found one app on Windows Phone that supports Newsblur, but none on Windows 8.  So I’ll be using the web on the latter for now.  I’m sure in the coming weeks we’ll see multiple apps support Newsblur, as developers of apps that targeted Google Reader otherwise face the demise of their apps.

Although I’d originally signed up for Google Voice as a potential second line or business telephone I barely used it that way.  Skype is a much better option for my needs.  So I was using Google Voice as a voicemail service, but I’ve switched that back to my carrier’s voicemail.  I also question the future of Google Voice as Google just isn’t acting as if it is a strategic offering.  And so rather than committing to it further I decided it was better to pull the plug before Google’s next “spring cleaning”.

I’d toyed with Google+, but long ago concluded that it really offered me nothing over my established social network on Facebook.  I can’t think of a single person I want to be “friends” with that is on Google+ but not Facebook (but most of my Facebook friends are not on Google+).  I could have split my personal networking on Facebook and business oriented networking on Google+, but I already do that with Twitter (which I use almost exclusively for technology networking).  So I stopped all use of Google+ many months ago.

I looked at my Gmail and discovered that, other than a few distribution lists, the only individuals contacting me there were the results of accidents (i.e., I’d mistakenly sent something out from that account and people had replied to it).  And all of those individuals have my Hotmail address already, so it didn’t really matter.  I could drop the Gmail account with no real repercussions.

In one of those bizarre privacy-violating situations, I realized that Google had connected my account to a brother’s Picasa album.  For Google this shows the value of having someone logged in to their Google Account all the time.  For me it was disconcerting.  I don’t use Picasa myself.  Another area where people are (unknowingly?) sucked into the Google Account (lack of) privacy realm is YouTube.  I don’t post to YouTube myself, but because I was always logged in, Google could track every YouTube video I watched.

If I ever want to post to YouTube I’ll create a new account that I use just for that purpose, and not leave myself logged in except when I’m posting.  Ditto if I ever decide to use Picasa, though I see that as much less likely.

Free of all need for a Google Account I deleted it.  I am now completely Google-Free.


Windows Blue Buzz

With the leak of a build (apparently of development milestone 1) of Windows Blue, the blogosphere is abuzz over what is coming in the next few months.  Most of that buzz is about nice, modest improvements to the Modern (yes, I’ve conceded this is what it should be called) environment.  The usual trolls abound.  And then there are some bloggers who are just being bizarre.

If you go back and read my earlier posts you know I’ve been expecting changes like allowing more concurrent apps on-screen than just the one full-size view plus one snapped view.  I’m really happy to see that coming.  And I’m very happy that Microsoft will apparently make even greater use of SkyDrive to sync state and back up data in the cloud.  I also expect there are numerous improvements that won’t really appear until the second, and final, development milestone.

Windows Blue is a completion release, taking Windows 8 to where Microsoft wanted it to be but couldn’t fit into the available timeframe.  Its primary influence is almost certainly things that Microsoft had on its Windows 8 planning list but that fell below the line.  How does that happen?  Well, you’ve got functionality (and performance, etc.), time, and resources as the principal variables in any development effort.  Resources are hard to vary once you are into a project, so that leaves schedule (time) and functionality to trade off.  Windows 8 used up every inch of the schedule available.  Originally it was planned as a three-development-milestone release (which is where rumors of an April 2012 RTM date first came from).  Somewhere mid-release they realized they needed an additional development milestone, which pushed RTM out a few months.  But with “Holiday 2012” as a hard deadline, that left many things undoable in a “V1”.  Windows Blue picks those things up.

Windows Blue is also the first place where Microsoft could really react to the feedback coming from the Developer, Consumer, and Release Previews.  Pretty much what we have in the market today was cast in drying concrete by the time the Developer Preview hit the market.  At best, minor tweaks came out of the Developer Preview and bug fixes out of the later previews.  In planning Windows Blue, Microsoft would have taken both customer feedback and telemetry from the previews into account.  Post-RTM usage would come too late to influence Blue and will instead factor into Windows 9.

I find two topics coming out of the blogosphere rather bizarre.  The first was an expectation by some that Microsoft would abandon Modern and return to the classic desktop.  This is bizarre because it would mean a decision to turn Windows into a pure legacy offering with a slow death ahead of it.  But hey, the 5% of the overall user population who are in the “desktop is the future” camp would have been happy.  Anyway, there was not a snowball’s chance in hell of that happening.  Microsoft may want to provide a solution optimized for the 5% for years or decades to come, but that isn’t the future for the other 95%.  The right answer for the 95% is to rapidly evolve Modern to meet their needs.

The other side of this bizarre coin is those claiming that Microsoft will abandon the desktop by Windows 9.  I’ve pointed out that the move away from the desktop is a 5-10 year evolution.  Many other observers think it is well over 10 years.  Windows 9 would be only two years into that cycle.  David Vaskevitch used to point out that people tend to overestimate the amount of change that is possible in 2 years and underestimate the amount of change that will occur in 10 years.  Once again I think he’s right.

The other thing about moving away from the desktop is the standard logic error so many people make: “A is B” does not mean “B is A”.  Microsoft will move quickly to make the desktop superfluous for those who just need Modern applications.  That says absolutely nothing about removing the desktop for those who have a need for it.  The desktop will disappear from Windows RT relatively soon, perhaps by Windows 9, because once you have a Modern Office and Modern system utilities it is superfluous.  It will live on in mainstream Windows for many years to come because customers have a need to run desktop applications.  But that will be a continually shrinking market, and at some point Microsoft will use various moves (pricing, licensing, etc.) to accelerate its decline.  I don’t expect any of that to happen with Windows Blue or Windows 9.  Maybe Microsoft will ramp up the pressure to narrow the desktop focus to only things that really need the desktop by Windows 10.  Maybe.

In the meantime we’re just a few months away from a pretty significant evolution forward on Windows 8.  Some of that evolution, in the case of first party apps, is already rolling out.  By the start of the summer it looks like anyone who wants will be able to try out Windows Blue, and by the fall we’ll all be enjoying Windows 8 the way Windows 8 was always meant to be.


Does Windows RT have a place in the sun?

There have been a lot of articles/posts lately questioning the future of Microsoft’s Windows RT.  My biggest problem with the anti-Windows RT thinkers is that they take such a short-term focus.  Windows RT, just like the Windows Runtime (WinRT) it is named for, is a long-term initiative.  We are currently looking at V1 of both, and people are trying to project that Windows RT (aka Windows on ARM, for its most obvious origin) will die, apparently before even reaching its first birthday.  While Microsoft could have a change of direction, I’d bet that the path they started on is very much intact.

Later this year we’ll have V2 of both WinRT and Windows RT.  Next year we’ll likely see V3.  And as we all know, it typically takes Microsoft (and most others, actually) three releases to get something right (or more precisely, to cover the landscape sufficiently for most people to feel comfortable it covers their needs).  I think Microsoft will give WinRT and Windows RT the full three releases before considering if they’ve gone in the wrong direction.

I’ve written about this topic a number of times over almost a year and a half.  Back in December of 2011 I wrote about why Microsoft was doing ARM support and in January of this year I pointed out that Windows RT was not specifically about ARM.  Windows RT is about producing a legacy-free version of Windows for the future.  And in between these two I gave some history of the Windows 8 effort that further explains the origins of the WinRT and Windows RT.  Numerous other posts offer additional insights.

Let me summarize all this.  The Win32 application model is broken and unfixable.  Microsoft has been looking to replace it for over a decade.  The first version of that replacement, the Windows Runtime, or WinRT, was released as part of Windows 8.  To scope the effort, the first release focuses on what was necessary to address the tablet market, though it is not tablet-specific.  The name Windows 8 is used to mean versions of Windows capable of running both legacy Win32 apps and new WinRT apps.  The name Windows RT is used to mean a version of Windows only capable of running WinRT apps.

In the long run WinRT expands to cover far more of the application space currently addressed by Win32, and Windows RT becomes the mainstream version of Windows.  Windows RT and Windows “8” eventually switch roles, with the non-RT version of Windows becoming the niche offering for those who need legacy Win32 capabilities.  Does this happen in Windows “9”, Windows “10”, Windows “11”, or Windows “18”?  Who knows?  But it will happen.

Critics of Windows RT, particularly those who claim it has no role in life, are too busy looking in the rear-view mirror.  They are commenting on V1, not thinking about where it will be by V3.  But it is the latter that really matters.

Could Microsoft change direction and run to some other alternative to WinRT and Windows RT?  Sure.  Will they?  I doubt it, at least not without seeing the path they are on through another 2-4 years.  If at that point the “re-imagination” of Windows has failed, we’ll have to see a re-imagination of Microsoft as a whole, probably away from the broad client-computing realm.  They don’t want to go there, and the key to avoiding that fate is in making Windows RT succeed.


It’s a good thing I have anti-malware software installed

Along with those who believe NASA faked sending men to the moon and those who deny the Holocaust, there is a group of people who argue that traditional anti-malware (or anti-virus) software is useless and unnecessary.  These AV-deniers believe that simply by avoiding things like playing web-based games, downloading software, visiting porn sites, etc. they can avoid being infected by malware.  Further, they believe that anti-malware doesn’t work because it does poorly against zero-day attacks, that is, new and unknown malware.  They ignore that something like 98% of the malware running around on the Internet consists of old, well-known attacks.

This morning I needed to look up the opening time for a Seattle-area furniture store called Dania.  So I got on my trusted computer and went to “wwwDOTdaniaDOTcom”.  This was a big mistake, because that website contained a drive-by malware download.  Fortunately Windows 8’s Windows Defender caught and blocked the attempted download.  If not for having anti-malware software on my system it might have been infected with a Trojan that enabled remote control of my computer.  Was Dania Furniture responsible for having a compromised web site?  No, their actual website is www.daniafurniture.com.  What percentage of people would try the harmful URL rather than the safe and correct URL?  I’m guessing the vast majority.

Good, “old-fashioned” anti-malware is a necessary, if not sufficient, tool for protecting computers from being infected with malware.

One interesting point is that URL filtering was not effective in blocking my attempt to access the bogus site.  In this instance I had both Norton DNS and WOT, as well as SmartScreen of course, in the path of my attempt to access the site.  None of the three blocked the attempt.  I subsequently checked OpenDNS and it too had no inkling that this site was harmful.  Google Safe Browsing also wouldn’t have blocked access to the site.  Of course, I’ve reported it to all of the above, and hopefully they will investigate and block access in a timely manner.

You need to use all the tools at your disposal to avoid malware, including your brain, and even then you won’t be 100% protected.


WinFS, Integrated/Unified Storage, and Microsoft – Part 4

I knew that the Cairo Object File Store (OFS) was in trouble my first week at Microsoft.  I’d been asked to attend a design meeting for OLE DB that would start at 6PM.  On Friday.  Why such a seemingly important meeting would be scheduled for Friday evening would soon become apparent.  OLE DB was being created as Microsoft’s new unified storage API, and it would be an API for OFS.  So you can imagine my confusion when the Program Manager from OFS reported he had to delay completing a part of the OLE DB spec because of “his day job”.   In what shape could OFS be if they couldn’t make specification of its API, the API that Developer Division would focus tools support on and evangelize, part of someone’s day job?  That was just the first hint of trouble.

My initial job at Microsoft was to take the 100,000-foot vision for Microsoft in the Enterprise (Servers in particular) that David Vaskevitch had presented to Bill, and gained approval for, and drive it to a set of real engineering plans.  The second clue that OFS was going down was that David (who along with Bill and Jim Allchin was one of the primary executives pushing for Integrated Storage) didn’t include figuring out how OFS fit into the Enterprise plan in my effort.

The third clue was the definitive one.  Because I was new to Microsoft (and thus could be objective) I was asked to intervene in a spat between the Exchange team (working on the first version of Exchange Server, née Exchange 4.0) and the JET-Blue database engine team over the performance of the Mailbox Store.  What I learned along the way was that the intent was for Exchange Server to be built on OFS, but since OFS wasn’t ready Exchange was doing its own interim store for Exchange 4.0.  The plan of record was for the second version of Exchange to move to OFS.  However, in an email discussing the performance of the existing mailbox store, the Exchange General Manager mentioned that he didn’t think Exchange would ever move to OFS.  While the OFS project was still alive, it was clear to me that everyone in the company had already written it off.

I didn’t pay any attention to OFS and some months later word came down that it had been canceled.  I can’t really tell you what happened with it.  Was it a conceptual problem?  A timing problem?  Or mostly an execution problem?  I don’t know.  My guess, based on the subsequent Integrated Storage efforts, was all of the above.

While OFS was fading from the scene my efforts included a focus on another part of the unified storage puzzle.  Going back to DEC, I’d put a lot of focus on the question of Integration (a single image store) vs. Federation (making separate stores appear as a unified one).  OLE DB was a piece of that puzzle and we put significant resources into its ability to support Federation.  I proposed and led the acquisition of Netwise so we could bring mainframe data into the picture.  Later, when I was running the Relational Engine team for SQL Server 7.0, I promised David that I’d find a way to get heterogeneous query into the release even though it was “below the line” and I had no resources to do it.  It was one of those calculated risks (making a personal commitment that wasn’t supported by the official commitments) that paid off.  Soon other stores within Microsoft started exposing data via OLE DB, including Exchange and Active Directory.  Federation and OLE DB would be another topic, so I won’t say more about it here.
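Before moving on, here’s a flavor of what that heterogeneous query capability looks like to a developer: a single T-SQL statement joining a local SQL Server table to a table on a linked OLE DB data source via a four-part name.  This is only an illustrative sketch; the server, linked-server, and table names are all hypothetical, and I’m driving it from Python via pyodbc purely for the example.

```python
# Illustrative distributed (heterogeneous) query: SQL Server's query
# processor federates a join across a local table and a linked OLE DB
# data source. All names here are hypothetical.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=Sales;Trusted_Connection=yes;"
)

# MAINFRAME would be a linked server defined over an OLE DB provider;
# the four-part name lets one statement span both stores.
QUERY = """
SELECT o.OrderID, c.CreditLimit
FROM   dbo.Orders AS o
JOIN   MAINFRAME.Finance.dbo.Customers AS c
       ON c.CustomerID = o.CustomerID;
"""

for row in conn.cursor().execute(QUERY):
    print(row.OrderID, row.CreditLimit)
```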

For a while I thought that federation was going to end up being Microsoft’s answer to the unified storage riddle, but the itch to create an integrated store was one that still had to be scratched.  About midway through development of SQL Server 7.0, Peter Spiro and I were summoned by Jim Allchin and told that we were being locked in a room with representatives from Exchange and Windows until we came out with a design for Integrated Storage.  The minimum lockup was set at two weeks (and Jim was serious about this; even though we finished our work a couple of days early he wouldn’t let us return to our regular jobs).

The best part of Jim locking us all away was that it was the first time that the leaders of the unstructured, semi-structured, and structured storage communities had really gotten together to level-set on how these three storage types worked.  Before you get too shocked by this keep in mind that in any earlier meeting, such as during OFS’ conception, Microsoft was not a significant player in the structured storage world.  The discussions at the time would likely have been around how you could build Access on top of OFS, not on how the guys who create engines for structured storage were going to build or contribute to an integrated storage engine.  Also, I should point out that intensive two-way discussions had always gone on between the file system group and structured and semi-structured communities.  It was getting the structured and semi-structured guys in a room for a long enough time for them to really understand one another that was novel.

At the end of a few days of level setting we got down to defining an integrated store.  We named the design proposal we came up with JAWS, which if memory serves came from Jim Allchin’s Windows Storage.  We all returned to our day jobs, which in Peter’s and my case meant the death march that was SQL Server 7.0.  I think a week or two went by and then David called me over to let me know that Bill and Jim had decided to proceed with JAWS and that I was a candidate to lead the effort.  I declined, and the other candidate (a Product Unit Manager from Exchange who had been one of their participants in the design) took the role.

While I was a strong believer in integrated storage, and very interested in JAWS, there was just no way I would leave the SQL Server 7.0 effort at that point in the cycle.  And given the success of 7.0 and the SQL Server business I don’t regret my decision one bit.  But at the time I was worried I was making a horrible career choice, because Integrated Storage was of greater strategic importance to Microsoft than the database business.

Being buried in SQL Server 7.0 meant I didn’t get to follow what was going on with JAWS, but after a year or so of development the powers that be took a look at it and decided it wasn’t going to work.  Without the ability to get any support from the underlying system, JAWS had turned into a complex and very heavyweight layer on top of SQL Server.  The project was cancelled with the idea that the SQL Server team would pick up responsibility and create a lighter-weight solution as part of the SQL Server 2000 release.  That responsibility fell to me, and I was back in the Integrated Storage business.

SQL Server 2000 was initially thought of as a 12-18 month release because we were concerned we’d missed something in SQL Server 7.0, and wanted to be ready to respond quickly to customer feedback.  The Integrated Storage work thus created a tension between our thinking of what was best for the database business vs. what was best for the Integrated Storage charter.  We allowed the release to stretch into the 18-24 month range to accommodate this, but it wasn’t without creating inter-team issues and planning miscues.

Besides doing design work on turning SQL Server into a usable store for semi-structured and unstructured data types we worked with potential clients around Microsoft to figure out who was going to be our customer.  For example we hoped that Outlook would sign on to use SQL Server 2000 as a replacement for its PST files.  But although we had the official charter for coming up with an Integrated Storage solution, an alternative effort was underway in the Exchange team.

The JAWS project had largely been staffed out of Exchange (plus new hires) and when it was cancelled many of them returned to the Exchange team.  They pursued creating an Exchange File System as part of Exchange Server 2000.  And this is where politics does rear its ugly head.  At some point a Microsoft reorganization brought Exchange, Office, and Developer Division together under Bob Muglia but left SQL Server in a different reporting chain.  Bob’s organization rallied around the Exchange File System creating a proposal that would have Office use it as their store, Developer Division create a new code management system on top of it as well as provide tools support for apps written against the store, etc.  As a result the key clients that would have used a SQL Server-based Integrated Store were lost to us.

In a meeting with Bill to decide the direction for Integrated Storage he had to choose between two options.  One was the technology base that he thought was the right one for the long-term vision of Integrated Storage, but it was a store with no one committed to use it.  The other was a solid plan and commitment to deliver something that unified the unstructured and semi-structured worlds within Microsoft.  Bill chose to let the Exchange-based plan proceed, but also encouraged us to continue to work on SQL Server as the basis for a future Integrated Storage solution.

While some of what we were working on for SQL Server 2000-based Integrated Storage continued, such as a plan to evolve full-text indexing over the course of a few releases, we mostly put Integrated Storage on the back-burner and turned our attention to the database business.  Plans for Win32 file API access to Blobs were pulled from the release, for example.

The Exchange File System work continued until shortly before Exchange 2000 was to be released.  The powers that be then took a look at what had been done and decided that it was not the right thing to promote as Microsoft’s Integrated Storage solution.  The grand plan was rolled back at the last minute, and responsibility for Integrated Storage once again fell to the SQL Server team (including reorganizing Exchange into the Server Apps organization that SQL Server was part of).

At this point I was ready for a break from the storage world.  Not only had I been focused primarily on database software for 25 years, I’d spent 5 years in death march mode.  So I decided to go help David Vaskevitch with his new efforts to launch Microsoft into the small business software arena.  Before making the change I drove creation of the engineering plan for Yukon (SQL Server 2005), during which the deal to acquire Great Plains Software was made.  When I moved over to work for David, my first assignment was to drive the engineering side of integrating Great Plains into Microsoft.  A few months into this effort I took a long-planned two-month sabbatical.  Just before leaving it became apparent that, as it made no sense to have two Senior VPs in the small business software arena (Doug Burgum had joined as part of the Great Plains acquisition), David would be taking a new position.  Bill wanted him to take a CTO role, and before I left for sabbatical I told David I’d go with him if he took that position.  So I returned from sabbatical to find myself working for the CTO.

The first thing on David’s plate as CTO was to revitalize the Integrated Storage effort.  And so I returned from sabbatical to find the main thing on my plate was Integrated Storage!  In the six months since I’d left SQL Server they’d made some progress on this front.  A couple of people from Exchange had joined the SQL Server team and created something called Mighty Mouse, essentially a SQL Server image suitable for use inside Windows.  And the Exchange team had started design work on a redesign based on SQL Server.

While I was on sabbatical David had neatly partitioned the problem in an attempt to simplify the effort, with SQL Server as the store and a set of schema definitions for People, Places, and Things that we wanted to promote across Microsoft and to third-parties.  So, for example, we could have a common definition of Contacts that would be used by the Windows Shell, Outlook, Exchange, Great Plains, etc.  And we could evangelize SAP and others to use our schema as well.  For the next few months we would engage many teams around Microsoft in an effort to get them to adopt SQL Server and the P.P.T. schema.

A plan for Integrated Storage was finally coming together.  The Windows Shell team had decided to build the new Longhorn shell on Mighty Mouse and use the P.P.T. schema.  Exchange was on board.  The SharePoint team, already users of SQL Server in an idiosyncratic way, was on board for a redesign.  Outlook was back to taking a serious look at switching to SQL Server for a PST replacement.  Active Directory was committing to doing their future work, starting with a meta directory, on SQL Server.

Why the apparent success this time?  I think two things were different.  First was the previously mentioned shift in industry-wide attitudes towards SQL databases.  Whereas during the mid-90s they were considered suitable just for large-scale data processing applications, by the turn of the century they were accepted as a storage medium for a broad array of storage needs.  Second, rather than having to deal with a future Integrated Storage file system, projects were looking at SQL Server itself.  We’d ended up with a much more incremental strategy: move everyone to SQL Server (plus some common schema), then evolve it to the full Integrated Storage solution.

I agreed to move back to the SQL Server organization to become the Integrated Storage General Manager.  This looked like it would work out both professionally and personally.  I could build an organization and deliver the first version of Integrated Storage in Longhorn, put designs and a plan in place for the more complete Integrated Storage solution in Blackcomb, and then retire to Colorado on the timeline my wife and I had set.  Sadly this is not how things would play out.

Microsoft had decided that NT would become the base for mainstream Windows, and the plan was to have one release to merge the two streams and then follow that up with another major release to move Windows forward (i.e., in the direction that Cairo was originally intended to go).  Windows 2000 was supposed to be the merged release, but finishing up the application and device compatibility work proved to be more than could be accomplished.  So the plan now became a quick turnaround release called Whistler to finish the merge, and then follow that up with Blackcomb as the leap in user experience, etc.

Whistler became Windows XP, and during its development the Windows team decided they needed to do another modest release before moving on to the major overhaul that was envisioned for Blackcomb.  This release was named Longhorn.  While Longhorn work started based on the original modest requirements and schedule, a number of factors led to a change in thinking.  Microsoft couldn’t afford to keep delaying the reinvention of Windows, and so effectively Longhorn took on the Blackcomb requirements.

From an Integrated Storage perspective the new direction for Longhorn meant that the incremental strategy would have to be replaced by a full-on Integrated Storage File System.  For me personally it put the effort out of scope.  I’d already worked out with my wife that if Longhorn slipped (as releases often do) she’d move to Colorado and I’d stay in Redmond for a few months to finish up.  But now Longhorn had officially moved out a year, and unofficially everyone believed it would take at least two years beyond the original schedule.  I briefly considered being a commuter, but that wasn’t going to be viable (as I’d have to be in Redmond five days a week, every week, and often on weekends) for as long as Longhorn would take.  So I bowed out and Peter Spiro took on the Integrated Storage responsibilities.  The accelerated Integrated Storage effort would eventually get the name WinFS.

Shortly before I left the effort Hailstorm was cancelled and some of its charter and teams (e.g., synchronization) moved into my organization.  Basically the distractions to finally delivering on Integrated Storage were being removed.  But as the Integrated Storage charter grew so did the problem of deciding what to address in the first release.  For example, could you address client, server, and cloud storage all in one release cycle?  No, at least not equally.  So this became one of many tensions in the system.

A few months ago someone asked me why Exchange wasn’t built on SQL Server.  And earlier in this post I mentioned that they’d actually started such an effort.  But with the creation of WinFS they were stuck between a rock and a hard place.  Should they target SQL Server or WinFS?  WinFS was prioritizing client first, which suggests they should have continued with their SQL Server port.  But then Exchange would have faced the prospect of porting once to SQL Server and later to WinFS, so I believe they decided to wait for a server version of WinFS.  Apparently they are still waiting.

Longhorn itself turned out to be too aggressive an effort and to have too many dependencies.  For example, if the new Windows Shell was built on WinFS and the .NET CLR, and WinFS itself was built on the CLR, and the CLR was a new technology that itself needed a lot of work to function “inside” Windows, then how could you develop all three concurrently?  One story I heard was that when it became clear that Longhorn was failing and they initiated the reset, it started with removing the CLR.  Then everyone was told to take a look at the impact of that move and what they could deliver without the CLR by a specified date.  WinFS had bet so heavily on the CLR that it couldn’t rewrite around its removal in time, and so WinFS was dropped from Longhorn as well.

The WinFS project continued with the thought that it would initially ship out of band from a Windows release before being incorporated into a future one.  But now it had two problems.  First, it was back to having no Microsoft-internal client committed to using it.  And second, they eventually concluded that there was no chance in the foreseeable future of shipping WinFS in a release of Windows.  With the move of Steven Sinofsky, who had been a critic of WinFS, to run Windows, that conclusion was confirmed.  WinFS was dead.

Of course the SQL Server team had learned a lot about the needs of non-traditional database applications, and created a lot of technology, while working on WinFS.  And they had a strong ship vehicle of their own in SQL Server.  So they decided to incorporate much of what they’d learned on WinFS into future releases of SQL Server as part of making it a broader Data Platform.  For example, Sparse Columns, released as part of SQL Server 2008, is a feature the semi-structured storage community had been asking for since the JAWS discussions over a decade earlier.  The Entity Data Model, FILESTREAM, Semantic Search, etc. are all outgrowths of the long history of work on Integrated Storage.
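For readers who haven’t seen Sparse Columns, here is a minimal sketch of what the feature looks like in practice.  This is my illustration, not anything from the WinFS codebase: the table, column names, and connection details are all hypothetical, and it assumes a reachable SQL Server 2008-or-later instance plus the pyodbc driver.

    # Minimal Sparse Columns sketch (hypothetical table and connection details).
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=Scratch;Trusted_Connection=yes;"
    )
    cursor = conn.cursor()

    # SPARSE columns store NULLs at essentially no cost, so one table can
    # declare many optional attributes -- the semi-structured "property bag"
    # shape the JAWS-era designs wanted -- without paying for the empty ones.
    cursor.execute("""
        CREATE TABLE Items (
            ItemId      INT            PRIMARY KEY,
            Title       NVARCHAR(200)  NOT NULL,
            Author      NVARCHAR(100)  SPARSE NULL,  -- only set for documents
            FocalLength FLOAT          SPARSE NULL,  -- only set for photos
            Props       XML COLUMN_SET FOR ALL_SPARSE_COLUMNS  -- view of populated sparse values
        )
    """)
    conn.commit()

The column set gives applications a single XML view over whichever sparse attributes happen to be populated, which is about as close as a relational table gets to the item-with-properties model Integrated Storage had always envisioned.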

In the final part of this series I’ll speculate a bit on the future of Integrated Storage.


What if Microsoft had done Windows 8 differently?

A comment by Matt Rosoff on CITEworld’s re-publication of my post about Microsoft’s current approach to developer (and other customer) engagement inspired me to do a thought experiment.  What if Microsoft had approached “Windows 8” differently?  There are a number of scenarios they could have followed, so let’s explore a few of the more likely ones.

Most of the criticism of Windows 8 is about how Microsoft attempted to bridge the gap between a modern touch-based, tablet-centric (if you will) UI and the traditional desktop.  Now this is actually a complex situation, because we aren’t just talking about UI but also about a new app model.  In theory you could have one without the other.  In previous posts I’ve talked about why the overall re-imagination of Windows, including all these elements, was important.  So as you think about the various alternative scenarios you have to consider that not all of the modernization would necessarily have occurred.

What if Microsoft had kept tablet support separate from desktop Windows?  That is, what if Microsoft had three different versions of Windows (Windows Phone, Windows Tablet, and Windows Desktop) with a common underpinning (NT) but separate user experiences?  Windows Tablet would have been the full-on “Metro” experience, including finishing the job of completely eliminating the Desktop.  Windows Desktop would have been more of a 7.5 product, combining the underlying architectural improvements in Windows 8 (faster boot, Secure Boot, better multi-monitor support, SmartScreen, etc.) with minor tweaks to the Windows 7 desktop UI experience and Win32.  What Windows Desktop would NOT include is the Start Screen, Contracts, or the ability to run Windows Store apps.  Why not?  Well, the claim is that desktop users hate those, so why not keep the environment clean?  (And if it did include them, then it would just be Windows 8 as we know it!)

What would have happened in this scenario?  Well, we know what would have happened to Windows Tablet, because Windows RT is Windows Tablet V1.0.  Windows RT is struggling due to the relative paucity of Windows Store apps.  The main criticism of Windows RT is that it can’t run desktop apps.  But the ability to run desktop apps is exactly what gives you the “jarring” and unnatural experience that people complain about.  So analyzing the first few months of Windows RT suggests that a pure tablet OS from Microsoft would have failed.

As I mentioned, a Windows Desktop 8 would have seemed more like a “.5” release than a real next generation.  One reason is that a lot of resources were tied up in the Tablet effort.  But the bigger reason is quite simple: we’re exploring this scenario precisely because many users claim they don’t want the paradigm to change much, and as long as that is true most of what you do is pretty minor.  So what would Windows Desktop have done in terms of the market?  Nothing.  Absolutely nothing.  The decline in traditional PC form factor sales would have continued, at a rate no higher or lower than we are already seeing.  People who need a new PC would buy a new PC independent of whether it ran Windows 7 or Windows Desktop 8.  No one would rush to buy a new PC just because of Windows Desktop.  Corporate adoption of Windows Desktop would be no quicker, because corporations would stick to their existing schedules.  It might actually be slower, because they were busy deploying iPads or other tablets and, with Windows Desktop 8 not offering a tablet alternative, they could afford to skip it.  Consumer upgrades of Windows 7 systems to Windows Desktop 8 would probably be a bit more robust, but that was already a fairly insignificant business for Microsoft.  Basically, it wouldn’t have moved the needle on the health of the overall PC business.

So the bottom line for the above scenario is that Microsoft would be, at best, in the same position it is in today, and more likely in much worse condition, having completely bombed out in the tablet space.  They would have been tagged with “they don’t get it” and relegated to the dustbin of technology leadership.

For our next scenario let’s take a minor variation on the above and suggest that Windows Tablet was not the Metro experience of Windows 8 as we now know it, but rather an evolution of Windows Phone 8.  Now this is interesting in that Windows Phone already had some momentum amongst developers and a well-regarded user experience amongst end users.  But it had no traction in the market.  And its Achilles’ heel is the same application library problem facing Windows RT.  So I don’t see how things would be different in this scenario.  Windows Desktop would be a highly regarded addition to a shrinking market segment.  Windows Tablet would be Android 3.0 from a market perspective (a phone-on-steroids experience rather than one evolved for the tablet, an OK phone app library but almost no apps designed specifically for a tablet, etc.) and critics would be quick to hammer home the comparison.

If Windows Phone market share was exploding then a WP-tablet strategy might have been more successful short-term for Microsoft in the tablet space.  But with Windows Phone struggling, this strategy would not have yielded short-term (nor probably long-term) success in tablets.

Next up would be the strategy the most frantic Windows 8 critics believe Microsoft should have followed: retain the classic Windows user experience and evolve it to (optionally) be more touch friendly.  Add a mode that makes the Start Menu more touchable.  Add spacing and size to common controls, as Office 2013 does and as Windows Mobile 6.5 did.  Improve on the existing app model, perhaps by basing it on .NET or a somewhat evolved Win32.  Etc.  It all sounds good until you consider two things: (A) been there, done that, and all I have to show for it is a T-shirt, and (B) it assumes that the resulting technical mishmash would be cleaner, less jarring, and better received than today’s Metro/Desktop duality.

On the first point, Microsoft has a long history of trying to evolve the Win95 desktop UI model to address mobile computing.  When Windows CE was created, the UI was modeled on Windows 95, Start button and all.  The Handheld PC made no headway against the Palm Pilot, and much of the criticism of the HPC was aimed specifically at its use of the desktop paradigm.  So for the Pocket PC Microsoft started to move away from trying to look like Windows 95, and the more it did that the more successful it was!  This culminated in Windows Mobile 6.5 (which itself borrowed from the abandoned “Book of 7” Windows Mobile 7 plan), which went the next step in making that UI family truly finger friendly.  It was too little, too late, as Apple had already redefined the entire user experience and app model expectations.  Windows Phone 7 dispensed with WM compatibility in order to leapfrog the iPhone.

Of course Microsoft also tried to evolve mainstream Windows to be more touch friendly, from the Tablet PC work itself to Origami to having Windows 7 support modern capabilities (capacitive multi-touch) at its core.  What they didn’t do was try to alter the basic Windows desktop UI model or fix the app model (so that, for example, uninstall actually worked, or apps were sufficiently isolated to keep them from interfering with one another).  Lots of Windows 7 PCs and convertibles were available with touch capability.  None sold in great quantity.  Few that were sold actually saw fingers hitting screens.  In fact, once the iPad was out, the notion that Windows 7 was touchable was considered laughable.  And that makes the notion that one could have sufficiently evolved the existing desktop UI model to be competitive highly questionable.

The second part of this scenario is the leap of faith that, when you were done evolving the existing user experience, the result would be any better than what Microsoft got by moving to the Metro experience.  I happen to believe it would be the opposite.  The closer you got to something that would work in the tablet market, the more bizarre an evolution it would seem to desktop users.  The more you catered to desktop users, the more it would seem like a kludged force-feed of the desktop PC onto a tablet.

My actual bottom line on this strategy?  It would have accelerated the decline of the PC business while failing to gain any traction in tablets.  All the criticisms leveled at Windows 8 would have been repeated, with only modest changes in wording.

So if the alternate scenarios are worse than what Microsoft actually did, yet Windows 8 has not yet captured the hearts of the masses, what “scenario” would work?

Here is the problem, plain and simple: Windows 8 is a V1 product and it needs to be a V2 product.  Take the Start Screen vs. Start Menu debate.  It isn’t that Microsoft needed to retain the cascading Start Menu, it is that it needed to provide a reasonable alternative for desktop workstation users.  In previous blogs I’ve thrown out an example: why isn’t there a snapped view for the Start Screen?  Then, when in the desktop, Start could bring up the snapped view (assuming a monitor that supports it, which is overwhelmingly the case) instead of losing the desktop to a full-screen Start Screen.  Why, on high-resolution monitors, can’t you have multiple snapped views, or even a couple of “full screen” views?  That would mitigate the desktop user complaint that Metro is not suitable for large monitors.  I don’t think Microsoft completely ignored these questions (as I’ve mentioned before, I once saw a Windows 8 build demoed with a snapped view on the left, another on the right, and a full-screen view in the middle); I think they just didn’t make the cut for “V1”.

Overall I think the direction Microsoft chose, and the decision to force-accelerate the move to the new app and user model, was the correct one.  What I would question more are some of the individual tradeoffs made to ensure the release was in the market for the holiday 2012 shopping season.  And, as my previous post discussed, is Microsoft’s current penchant for secrecy making this situation worse?  Imagine if Microsoft were already talking about, and perhaps even demoing, my idea for a snapped Start Screen (or some other alternative).  And publicly promising Windows 8 purchasers a free upgrade to the release containing it.  And even talking about that release shipping later this year.  I doubt Microsoft would have lost a single Windows 8 sale.  In fact, I think the change in criticism (from “Windows 8 is a disaster for desktop workstation users” to a softer “some users would be better off waiting a few months for the update”) would actually boost overall Windows 8 sales.  When it comes to computing devices, even consumers often buy in to where you are going more than they buy the specific product.


Now I get it, the Consumer guys at Microsoft are just plain wrong!

I’m an Enterprise guy.  In the mid-70s I set a career goal of dethroning IBM from its then-total dominance of the enterprise computing space.  By the end of the 80s I needed a new goal, and it’s always been around furthering the use of computing in the Enterprise.  So I like to think, and have plenty of evidence to support the claim, that I know enterprise computing as well as anyone on the planet.  But outside of being a consumer, and applying common sense and observation, I claim no particular expertise in bringing technology to consumers.  So I tend to give the so-called experts in the consumer area the benefit of the doubt when they advocate some behavior that violates my sensibilities.  But I’ve concluded they are wrong.  Horribly wrong.

In the Enterprise space we realized something decades ago: customers don’t so much buy your existing product, they BUY IN to your strategy.  Your product can have numerous weaknesses and even look bad against the competition in some critical areas, but as long as the CIO and other key decision makers like where you are going they will still choose you.  And so we have always been willing to tell our customers, often but not always under NDA, where we were going.  It worked.  They bought.

But the consumer experts, using Apple as an example of a successful strategy, have argued that you don’t say anything until shortly before shipping.  There are good reasons for this.  You want maximum press coverage very close to availability.  You want to avoid the Osborne Effect.  You want to avoid over-promising and under-delivering.  A CIO might understand that you had to change plans because of strategy changes, technology shifts, or just engineering expediency.  Consumers aren’t so forgiving.  Once you tell them about something they aren’t very tolerant of a failure to deliver.  Basically, in the consumer realm “shock and awe” rules.

Problem number one with “shock and awe” is that it doesn’t work when you need developers and other partners in order to succeed.  Apple doesn’t say anything about the next iPhone until days before availability, but it does release the SDK for the new version of iOS months in advance.  Microsoft did this for Windows Phone 7.0 and 7.5, but for Windows Phone 8 it didn’t release the SDK in advance.  So four months after Windows Phone 8 devices started shipping we still see very few apps that take advantage of the new features.  That’s a FAIL, Microsoft.

The latest catch-phrase being attributed to the Windows Phone team is “shut up and ship”.  Really?  That’s what you did for Windows Phone 8 and it didn’t work.  You alienated developers.  You alienated power users.  You alienated the faithful.  You alienated the influencers.  And you are continuing down that path.  True, the volume buyers don’t care a lot about these things.  But they take their cue from those who do.

Of course this doesn’t just apply to the Windows Phone team, but to Windows as well.  Microsoft had a history of saying too much too far in advance and then being unable to deliver.  The purpose of the PDC was to give developers an advance peek at what was being worked on, and to get their feedback.  This gave both developers and Microsoft time to react before a product was finalized.  The disaster that was Longhorn let Steven Sinofsky bring his philosophy to Windows, and it was the complete opposite.  Don’t talk to people, even under NDA, about what you are doing until it is almost fully baked.  Don’t let developers talk to customers.  Impose secrecy (and impose it even on enterprise customers, which is a horrible mistake).  What did this get Microsoft?  A bunch of rookie mistakes, including the failure to address the Start Menu/Desktop situation in a way that traditional form factor users find acceptable.  80%, 90%, perhaps 99.99% of the dissatisfaction with Windows 8 is tied up in this one issue.  Failure to disclose intent early, and respond to the resulting feedback, is holding Windows 8 back.

And the Windows 8 mistake continues.  Early disclosure of the direction for Windows Blue and Windows 9, or whatever the next couple of releases are called, would go a long way towards assuaging power-user discontent.  It would also give Microsoft feedback on whether what they are doing is sufficient, in time to actually react to it.  But no, that is not the philosophy of the Windows team, nor of Microsoft in general these days.  The irony is that when Microsoft was considered at the height of its arrogance it was actually listening to customers very closely.  Now that it is fighting for its right to continue to be called an industry leader, it displays the greatest sense of arrogance: “We know what’s right for you, and we’ll tell you what that is when we attempt to shove it down your throat”.  If my Microsoft friends don’t believe that is their attitude, it’s because they are on the inside looking out.  The view from outside is enlightening.

Microsoft needs to improve customer engagement, dramatically.  And the first thing that will take is recognizing that the consumer teams’ attitude towards communicating futures is just plain wrong.  That doesn’t mean abandoning controlled information release, it means applying it more intelligently.  It means disclosing platform direction early.  It means bringing developers on board early.  It means giving general technology direction to the public early (à la what BillG used to do) without talking about specific releases or products.  It does not mean releasing every detail of a product in advance.

Microsoft has to make a clear differentiation between platform and product.  They need to nurture the ecosystem around the platform.  That requires openness.  They need to reserve “shock and awe” for the product.  The problem I see with the consumer guys is that as much as they say the word “platform”, they don’t get it.  Even those who used to get it now seem to have amnesia.  They drank the consumer Kool-Aid.

Let me contrast three strategic thrusts going on at Microsoft: Windows, Windows Phone, and Azure.  Windows and Windows Phone are in the “shut up and ship” camp.  Azure is in the ENGAGE camp.  It seems like every week Scott Guthrie is announcing new Azure technology previews or releases.  Everything about Azure is exciting.  Amazon, Salesforce, and a few others defined cloud computing.  Azure is displacing them.  It has the Big Mo.  Let me make this clear: AZURE IS GOING TO WIN the cloud computing infrastructure and platform battle.  Meanwhile Windows and Windows Phone continue to alienate their ecosystems.  It is unclear if Windows Phone will ever amount to a significant third ecosystem.  It is unclear that Windows will be able to halt an overall market share decline against iOS and Android tablets.  Azure developers are excited.  No, it’s beyond excitement.  Windows and Windows Phone developers?  Not so much.  They are, at best, conflicted.  Azure is doing platforms right.  Windows and Windows Phone?  They prefer to “shut up and ship”, even at the risk that no one will care what they ship.

It turns out that the consumer ecosystem as well as the community of influencers craves direction and interaction with the vendor every bit as much as Enterprise CIOs do.  Put simply, the consumer guys are wrong about secrecy.  At least at Microsoft.  I hope they figure it out soon, because “shut up and ship” is not helping their cause.

In closing let me remind everyone that I think Windows Phone and Windows 8 are great products.   It is the failure to engage with the ecosystem in a way that Microsoft well understands, and continues to do successfully in the enterprise (STB) space, that I’m criticizing.  Forget that conventional “shock and awe” consumer wisdom.  It was wrong.  Return platform evangelism, including willingness to discuss “futures”, to the forefront and watch Windows and Windows Phone adoption explode.


Losing patience with Windows Phone

I have to say that my patience with Windows Phone is wearing thin.  There are two reasons for this.  The first is the lack of real progress on the application front.  The second is the feeling of abandonment surrounding the Nokia Lumia 900 (or more broadly, Microsoft’s inability to get even a minor update like Windows Phone 7.8 fully rolled out).

Back in April and June of 2012 I wrote blog entries bemoaning the lack of real-world apps on Windows Phone.  I won’t remake those arguments, so I suggest you read those pieces to understand my frustration.  The bottom line is that Windows Phone does not help me manage my life nearly as well as an iPhone (or Android-based phone) would.  It has been 11 months since I wrote that first blog entry and, as far as I can tell, NONE of the real-world interaction apps I was missing have appeared on Windows Phone.  And I include both WP7.x and Windows Phone 8, so it isn’t that moving to a WP8 device would help.

The second issue is less important, but actually related to the first.  Microsoft “screwed the pooch” on the rollout of Windows Phone 7.8.  It is now March and AT&T still hasn’t offered me the update for my Lumia 900.  Well, right now I think updates are on hold while Microsoft fixes a bug with Live Tiles.  But every day that goes by I care less and less about the update.  Basically, thanks for the cosmetic improvements, but if and when 7.8 actually comes to my Lumia 900 it won’t make the device measurably more useful to me.  Microsoft has managed to turn WP7.8 from something intended to mollify WP7 device owners into something that rubs their noses in the lack of upgradeability of devices they acquired less than a year ago.

The reason I feel these two issues are related is simple.  If WP8 solved my app problem I’d be salivating over WP8 devices and would forget all about the WP7/WP8 transition.  Instead I just view WP8 devices as offering more major-but-cosmetic improvements that don’t get me even 1% of the way towards closing the usefulness gap with the iPhone.  They aren’t giving me what I really need, and thus any momentary lust for new technology is overwhelmed by the fact that I’d shell out $500+ of my own money and, after a few weeks of technological masturbation, be just as unsatisfied as I am today.

So next month, when I would normally do my mid-contract upgrade, I’m not sure what to do.  One thing is clear: the certainty that I’ll be getting a new Windows Phone is gone.  I just don’t see the compelling value proposition.
