Updating my Desktop Apps on ARM entry

I noted that one of my predictions, an edition for Tablets (SoC-based systems), appears to be true, while failing to mention that my prediction that Microsoft would support desktop apps on ARM was (Office 15 aside) wrong.  I don’t know if this was always the plan, or if plans changed.  One possibility is that Microsoft originally had a plan for supporting desktop apps on ARM but growing confidence in Metro/WinRT led to the decision to drop desktop app support.  That explanation would account for why there was so much conflicting information out there, and why Microsoft was being so coy until today.

Posted in Computer and Internet | 1 Comment

Windows on ARM (WOA)

I was going to hold off on further Windows 8 comments until the Consumer Preview shipped, but since Steven Sinofsky posted detailed information on Windows 8’s ARM support today I thought I’d make a few remarks.

The most important clarification that Steven made today is that Windows on ARM (WOA) won’t be supporting desktop (aka Win32) applications except for a version of Office 15 (and internal Windows utilities).  If you need to run legacy desktop applications, for either business or personal reasons, then you’ll want to use an x86-based system.  I don’t see this as a significant problem for users, as the expectation is that both Intel and AMD will be producing x86 System-on-a-Chip (SoC) offerings competitive with ARM SoCs later this year.  And OEMs will be producing x86 tablets that are competitive with ARM tablets.  Sinofsky made clear that the ARM-based systems will have branding that differentiates them from the x86-based systems so that consumers won’t be confused about which to buy (i.e., they won’t buy an ARM-based machine thinking it can run legacy applications).  That’s an important revelation.  A related, and important, revelation was that Steven seemed to say there will be a single edition of Windows 8 for SoC-based (as opposed to classic motherboard-based) systems, be they ARM or x86, and that it would be priced competitively.  In an earlier blog posting I postulated that Microsoft would produce a tablet-specific edition of Windows 8 that was priced to allow Windows tablets to compete with Android.  I’d say Sinofsky confirmed my speculation.

The biggest overall takeaway from the blog posting is that ARM-based systems have both feet in the new world while x86-based systems can have one foot in the new world and one in the legacy world.  One of the biggest challenges we face as an industry is that our legacy holds us back from moving forward.  On x86 it can take years, or decades, to get rid of the legacy Microsoft no longer wants.  ARM, however, has no legacy, and Microsoft sees no reason to bring the unwanted parts of the Windows legacy there.  So ARM-based systems get a fresh start.  Sinofsky points out one of the most obvious benefits: nearly all of the legacy issues that caused Windows to have security problems over the years will be absent from ARM-based systems.  Gone, too, are most of the things that cause battery life problems, system instability, and cross-application interference.  That’s the good news.

The bad news, or potentially bad news, is that without legacy application support it is hard to see why a consumer would opt for a Windows 8 ARM-based tablet over an Apple iPad.  Sinofsky is obviously very confident that the number and quality of Metro applications in the Windows Store will grow very fast and negate the significant advantage that Apple’s iPad will initially enjoy.  Most of my friends, both current and former Microsoft employees, are skeptics.  As much as they love Windows 8, they think that Microsoft’s advantage for the first couple of years lies in the ability to run legacy apps.  And so they don’t see how ARM-based tablets can succeed initially.  I tend to be in this camp.  Microsoft has to hit the 100K Metro app mark before ARM-based systems will start to be attractive relative to x86-based ones, and that could take a while.

One of the more controversial revelations today is that Office 15 will be the only Win32 app allowed on WOA systems.  This is a compromise I’m sure Sinofsky didn’t want to make, but given the choice between no Office (because it couldn’t be moved to WinRT in time) and this weird situation, he gave in.  The big controversy is both “hey, how come I can’t run my Win32 app too” and “Is this legal?”.  The former question I think Sinofsky addressed well; the latter he didn’t touch on, but I will.  It’s the difference between “IS” and “HAS”.

Microsoft isn’t a monopoly; Microsoft was found to have a monopoly in a specific market.  And that market wasn’t “computer operating systems”, it was specifically operating systems for Intel processors.  The reason for this is simple: the government has to narrow the definition of the market enough to show monopoly power but keep it broad enough to justify sticking its nose in it.  If, in the 90s, one were to consider all computer operating systems then Microsoft did not have monopoly power.  How many Power/PowerPC processors ran Windows?  MIPS?  SPARC?  Alpha?  VAX?  ARM?  S/360?  Not many.  So if you included servers, embedded systems, minicomputers, mainframes, and PDAs in your market definition then Microsoft did not have a monopoly.  The government solved this by narrowing the market definition to operating systems running on Intel processors.  Ok, that was then and this is now, so could the definition change?  Well, the latest numbers on what I consider the total Personal Computing market space (desktops + notebooks + tablets + smartphones) show Microsoft with as little as 40% of the total market!  Its share in the tablet space is around 1.5% and its share in Smartphones is also in the single digits.  Throw in servers and embedded systems and Microsoft’s overall share of the computer operating system business is potentially as low as 10% (a total guess; it could be lower).  In other words, any attempt to redefine the market makes the notion that Microsoft is a monopoly look silly (and almost certainly fails in court, should Microsoft decide to fight such an attempt).  Sticking with the adjudicated claim that Microsoft has an operating systems monopoly on Intel processors means that Microsoft’s freedom of action on Intel processors is limited, but those limitations don’t apply to its operating systems on ARM processors.
So locking down ARM-based systems with Apple-like restrictions that on x86-based systems would violate the consent decree, and even allowing Office to use Win32 APIs while denying those APIs to third parties, is legal.

In truth most of Microsoft’s competitors will probably be happy that Microsoft is not providing the ability to run Win32 apps with WOA.  The main competitor to Office is Google Apps, and the combination of great HTML5 support and lack of Win32 support seems to play into Google’s hands.  In fact Windows 8’s overall focus on HTML5 is great news for Google.  Creating a WinRT version of the Chrome browser might be a challenge for them, and that could create some friction, but overall I think Google should be happy about Microsoft’s moves.  What about OpenOffice and its relatives?  Well, they are all multi-platform offerings, so they should be able to do WinRT ports if they desire.  And to be clear, although Office 15 is not a Metro app it has been developed specifically for the Windows 8 environment; that is, for touch, good power and resource utilization, etc.  So even if Microsoft desired (or were forced) to allow 3rd-party Win32 apps comparable to Office 15 to run on WOA, it could likely impose those same requirements on them.  In other words, legacy apps would still not be supported; rather, what would run is a bastardization of Win32 that met rules similar to those imposed on WinRT.  It appears Microsoft considered this approach and decided instead to bite the bullet and accelerate the move to WinRT.

The bottom line for WOA, or for Windows 8 in general, is how successful Microsoft is at quickly building a large library of Metro apps.  There are moves they could have made, such as directly supporting Windows Phone apps on Windows 8, that could have caused that library to near instantaneously exceed 100K apps.  They’ve chosen the more difficult route of forcing apps to be converted, rewritten, or written from scratch for Metro/WinRT if they want to run on WOA (and in general if they want to be great tablet apps).  In the long run that is the right decision, but it sure adds a lot of risk to Microsoft’s Windows 8 ambitions.

Posted in Computer and Internet, Microsoft, Windows | Tagged , , | 5 Comments

A perspective on Big Data, NoSQL, and Relational Databases

Other than a brief comment on Map-Reduce back in 2008 I’ve avoided commenting on the topics of NoSQL or Big Data.  That’s for two reasons: first, I really didn’t have much of an opinion, because, second, I have intentionally stayed away from the database arena for several years.  The reason I stayed away, by the way, is that I had other interests I wanted to explore and every time I stick my toe back into the database world I get pulled in 200%.  But a friend kept bugging me for an opinion on NoSQL, and so I’m going to give it.

Once upon a time there wasn’t a Software Engineering discipline, and few schools taught Computer Science.  Programming was largely self-taught and narrowly focused.  For example, a physicist might teach themselves Fortran so they could perform some computations for a problem they were working on.  As the use of computers expanded in the 60s and 70s there was a deep division in the ranks of computer programmers, with a small number of “System Programmers” supporting a very large number of “Application Programmers”.  System Programmers tended to be deep technologists, perhaps with Computer Science educations, while Application Programmers had modest (self-paced or occupational-style) training and were more subject matter experts (e.g., they understood retailers’ business processes) than computer scientists.  The System Programmers would, for example, write libraries of routines that isolated the Application Programmers from the gory details of the underlying systems.  As software technologies advanced, for example with the introduction of database management and transaction processing systems, the gap between the average Application Programmer’s abilities and the demands of these new software systems grew to the breaking point.  And there weren’t enough System Programmers to bridge the gap.  One response was that colleges started to focus more on teaching programming and related technologies (e.g., data structures), producing both more Computer Scientists (and later Software Engineers) and more information technologists (e.g., BAs in Management Information Systems).  The other was for vendors to subsume the role of the System Programmer and create new generations of software that were more flexible and more easily used directly by Application Programmers.  It turned out Ted Codd’s Relational Model could address this gap in the database world, and so in the 1980s the race was on to turn the relational model into practical products.
By the mid-1990s Relational Database Management Systems (RDBMS) had become the predominant enterprise database management system, and by the mid-2000s were dominant in every aspect of computing from mobile phones to the largest data centers.  SQL, which had become the standard (but not only) language for formulating database requests, is now part of the technology that even the self-taught hobbyist programmer learns and uses.  RDBMS, and SQL, had won the day.
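The shift the relational model enabled can be sketched with a toy example, using Python’s built-in sqlite3 module (the table and data are made up for illustration, but the division of labor is the point):

```python
import sqlite3

# An in-memory database stands in for an enterprise DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("acme", 120.0), ("acme", 80.0), ("globex", 50.0)],
)

# The Application Programmer states *what* is wanted; the RDBMS
# decides *how* to navigate storage and perform the aggregation --
# the work a System Programmer once had to do by hand.
rows = conn.execute(
    "SELECT customer, SUM(amount) FROM orders "
    "GROUP BY customer ORDER BY customer"
).fetchall()
print(rows)  # [('acme', 200.0), ('globex', 50.0)]
```

That declarative contract, rather than any one product, is what let Application Programmers use databases directly without a System Programmer in between.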

One of the problems with having a standard (e.g., ISO SQL 2008 is the current formal standard, although ISO SQL 1992 remains the most widely implemented), be it formal or philosophical, is that product evolution can be slow.  For example, in the case of distributed systems the SQL single-system image philosophy (meaning the application program can’t tell a distributed database from one running on a single system) has kept this technology from advancing much beyond where it was in the early 1990s (in fact it has actually regressed, with fewer vendors devoting much energy to it).  Meanwhile business problems have been moving fast, particularly the explosion of data being generated and captured over the last decade: the so-called Big Data problem.

Big Data (an explosion of data volumes, sometimes needing real-time processing, and often not lending itself to the structuring rules and processes implemented by RDBMS) went from a theoretical discussion to an overwhelming problem in less than a decade.  Those who needed to address this problem quickly created their own solutions, such as Google’s Map-Reduce and the Hadoop open source project it inspired (now the focus of most Big Data efforts).  These solutions became known as NoSQL.
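The core idea behind Map-Reduce is simple enough to sketch in a few lines.  This is a single-process toy, not Google’s or Hadoop’s actual implementation; the real systems distribute the map and reduce phases across many machines and handle failures along the way:

```python
from itertools import groupby
from operator import itemgetter

def map_phase(documents):
    # Map: emit (key, value) pairs -- here (word, 1) for each word.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Shuffle: group pairs by key; Reduce: sum each group's values.
    for key, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield (key, sum(v for _, v in group))

docs = ["big data big problems", "big wins"]
counts = dict(reduce_phase(map_phase(docs)))
print(counts)  # {'big': 3, 'data': 1, 'problems': 1, 'wins': 1}
```

The programmer writes only the two phase functions; the framework owns partitioning, shuffling, and fault tolerance, which is exactly the appeal for Big Data workloads.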

I find a couple of things interesting about NoSQL.  The first is that custom, special-purpose, or non-relational efforts are nothing new, but NoSQL is the first to gain real traction.  Many Software Engineers have preferred to attempt their own data storage solutions over using a packaged one, but generally they fail to understand just how difficult it is to reproduce the technologies already incorporated in RDBMS.  And most non-relational attempts at products get superseded by relational systems.  For example, Object-Oriented Database (OODB) products had slow adoption (outside the CAD/CAM world) and the relational guys eventually incorporated object-oriented features, condemning pure OODBs to the dustbin of history.  The same has largely happened with dedicated XML databases.  (Note this is similar to what happens with Moore’s Law.  Most times you build special-purpose hardware you find that Moore’s Law allows general-purpose processors to surpass it in performance and cost within a generation, rendering the special-purpose hardware obsolete.)  But NoSQL has, so far, defied this trend.

The second thing about NoSQL that is interesting is that in a world in which people (i.e., programmers) are expensive but computers are not, a people-intensive technology should not be gaining traction.  What’s changed?  Well, first of all we’ve gone from a world in which most programmers learned from a self-paced programming course to one awash in hundreds of thousands (if not millions) of programmers with Computer Science/Software Engineering (or similar) degrees.  And we have a set of companies, including Google, Microsoft, Facebook, etc., that are willing to hire thousands of the best and brightest of these people.  And so when it comes to solving an immediate, high-value problem the path of least resistance is to throw people at it.  And right now a lot of companies have both a Big Data problem that is overwhelming them and the expertise to use NoSQL technologies to address it.

But “regression to the mean” applies to the Big Data problem just as it has to earlier problems.  Most organizations can’t afford (or don’t have) the talent to exploit today’s NoSQL technologies, and even those that do will grow tired of the expense.  The race is on to make the Big Data problem more tractable for organizations with less expertise and fewer resources than the Googles and Facebooks of the world.  Most of those efforts build on NoSQL.  Even Michael Stonebraker’s VoltDB, the relational world’s first real shot across the bow of the NoSQL movement, has announced integration with Hadoop.  Stonebraker was also the leader of the RDBMS industry’s counter-assault on OODB with his Postgres research and Illustra product (as well, of course, as being one of the original RDBMS pioneers).  Will lightning strike twice (or rather, a third time)?

One thing that NoSQL has going for it is that the big relational vendors (Microsoft, Oracle, and IBM) have all adopted Hadoop for their Big Data efforts.  No matter what their long-term plans in Big Data might be, they saw the rapid customer uptake of the Hadoop technology and didn’t want to be left behind.  Because Hadoop is open source it was easy to jump on board.  Not only that, they have a lot of customer needs to meet, and trying to advance transaction processing and data warehousing capabilities to meet those needs is already taxing their ability to evolve RDBMS products.  Having a separate effort around Big Data (e.g., as Microsoft did for its OLAP store that evolved into Analysis Services) is one way to allow quick movement without risking killing their core product.

The key question on the table is what the long-term approach to the Big Data problem will be: will NoSQL dominate, or will it be supplanted by another generation of SQL-based products?  When I first looked at Map-Reduce back in 2008 my view was that Google had essentially extracted a primitive you’d find in a distributed RDBMS and exposed it for direct use by programmers.  Will Hadoop simply become a processing environment that most people exploit through a SQL RDBMS?  Certainly that’s the way the relational world seems to be going (e.g., Microsoft SQL Server and Rainstor as well as VoltDB).  Although their offerings are currently basically connectors, I think in the long run relational database vendors will treat Hadoop as an operating system service that they hide under the covers of new capabilities in their core product offerings.  Moreover, I think they’ll add many more Big Data features to their core relational product offerings.  The combination will make SQL-based rather than NoSQL-based solutions the primary way most organizations attack their Big Data problems.
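The reason layering SQL over Hadoop is so natural is that a simple relational aggregate decomposes directly into a map and a reduce.  A toy illustration of the equivalence (for a basic GROUP BY only; real SQL-on-Hadoop engines also handle joins, optimization, and much more):

```python
from collections import defaultdict

# SELECT region, SUM(sales) FROM t GROUP BY region, as map+reduce.
table = [("east", 10), ("west", 5), ("east", 7)]

def map_fn(row):
    region, sales = row
    yield (region, sales)          # emit grouping key and value

def reduce_fn(key, values):
    return (key, sum(values))      # the SQL aggregate (SUM)

shuffled = defaultdict(list)
for row in table:
    for k, v in map_fn(row):
        shuffled[k].append(v)      # the framework's shuffle/sort step

result = sorted(reduce_fn(k, vs) for k, vs in shuffled.items())
print(result)  # [('east', 17), ('west', 5)]
```

A SQL front end can generate the map and reduce functions mechanically from the query, which is why hiding Hadoop under a relational surface is an engineering problem rather than a research one.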

Now the real reason I come to this conclusion is not because I’ve written a couple of RDBMS, but because of all the startups in the Big Data space.  There are dozens, perhaps hundreds, of companies trying to make Big Data a more approachable problem and bring it to a larger audience.  They are looking to solve many of the same problems that led the industry to abandon navigation-based DBMS for relational DBMS.  They are starting to add features from SQL to Hadoop.  NoSQL will eventually get to a place where it offers an environment that is comparable to, and yet arbitrarily different from, a SQL RDBMS.  At which point, if the RDBMS vendors do their job, one will question why not simply use a SQL RDBMS to begin with?

If I did the Gartner thing and assigned probabilities to outcomes I’d assign my prediction a probability of 0.6, meaning I’m not all that confident.  The momentum behind NoSQL is rather strong, and with the relational vendors helping it along it is quite possible that alternative solutions will gain unassailable positions before RDBMS-centric solutions can catch up.  But I am placing my bet nonetheless.

Posted in Computer and Internet, Database | Tagged , , , , | 2 Comments

Surviving on a smaller piece of the pie

I always cringe at the phrase “post-PC era”, primarily because it has such a poor definition.  Does it mean that desktop devices no longer matter?  Does it mean that devices with keyboards no longer matter?  Does it mean that people will no longer create content, a primary differentiator between the classic PC and the trendy Tablet, but just consume it?  Following on from that, does it mean Information Workers are passé?  Or are people simply trying to say that Microsoft’s run is over?  And just what does that mean?

In truth we have crossed an important threshold in computing, in that the “PC era”, epitomized by the 1990s in which two companies totally dominated the core of personal computing technology, is over.  Microsoft and Intel had 90%+ shares of their respective areas for a very long time.  But in any modern definition of personal computing their market shares have declined dramatically.  I’m not talking about the rise in market share of Apple’s Mac, although that certainly adds to the idea that the world has become less homogeneous.  A modern definition of personal computing would have to incorporate both Tablets and Smartphones in addition to desktop and notebook computers, in which case both Microsoft’s and Intel’s market shares have already (by unit volume) dropped from 90+% to more like 50%.  That’s the new reality, and the fortunes of both companies depend on how well they understand that reality and what they do about it.  This is also a key point for analysts, reporters, and observers.  Most still look at Microsoft and Intel through the lens of companies that so totally dominated the market that their only possibilities are to (figure out how to once again) dominate or fade into oblivion.  That is nonsense.

Former GE CEO Jack Welch observed that you want to be number 1 or 2 in each business you are in, and Steve Ballmer is a fan of Welch’s business philosophy.  And so while Steve has said Microsoft always strives to be number 1 in each of its businesses, it is clear he is trying to adapt the company to a world in which being number 1 doesn’t mean having 90+% market share.  For example, we shouldn’t think of Windows 8 as a “Hail Mary” pass by Microsoft to regain its former dominance in operating systems.  No one is naive enough to think that they can sweep away Apple’s iOS, or keep Android from achieving a significant market share in tablets.  But they can take enough tablet market share to maintain their overall #1 position in (non-Smartphone) personal computing operating systems.  And perhaps (eventually) become #1 or #2 in the more narrowly defined tablet space.  Analysts also keep predicting that Windows Phone will achieve the #2 position in Smartphone operating system market share, a prediction many find far-fetched.  If true, however, not only would this meet Jack Welch’s bar for a business you want to be in, but it would allow Microsoft to maintain its overall #1 position in personal computing operating systems using the broadest definition of that market (desktops+notebooks+tablets+smartphones).

Take a look at Microsoft’s various businesses.  Server and Tools (STB), its second largest business last quarter, is neither number one overall nor does it have the market share that the Windows and Office businesses achieve.  But it has been the most consistent growth business within Microsoft throughout this century.  The Interactive Entertainment Business (IEB), which primarily means Xbox, has achieved the leading position in gaming consoles but is by no means dominant (or even totally secure in its number one ranking).  Still, its position is strong enough that Microsoft has a good shot at having a long run as the leading home entertainment system vendor.  Take Bing, which has crossed the number 2 threshold.  It is slowly, very slowly, gaining market share, and Microsoft can now focus on new opportunities (e.g., mobile) as a means of side-stepping Google’s browser-based search market share as well as generating greater revenue per search.  That’s a far cry from its position a few years ago of having to spend most of its energy just trying to create a credible offering.  It doesn’t need to take the number one position away from Google (not that Microsoft wouldn’t like to do so) to be successful.  Microsoft Business Solutions (aka Dynamics) was envisioned as a potential $10+ billion business when Microsoft acquired Great Plains Software and Navision a decade ago.  After years of trying to consolidate the product lines, entering the SaaS business, wavering on pushing up scale to address larger enterprises (vs. its original SMB target), and trying to decide if it is a supporting player to the Office business or a business in its own right, could the Dynamics business finally be ready for a breakout?  A $10B Dynamics business would provide another healthy dose of diversification to Microsoft’s revenue stream, as well as boosting both the STB and Office businesses.  But Microsoft isn’t going to have the dominance in this space that SAP had in the 90s.
And then there is Windows and Office.  While they are likely to remain the premier businesses for Microsoft for some time to come, in the long run they will start to look more like the rest of the portfolio over time.  Meaning, leadership but not dominance.

Besides Microsoft’s increased focus on businesses beyond Windows and Office, there are other signs that Microsoft has (finally?) recognized that the era in which it totally dominated the businesses it was in is over.  Most recently we are seeing a wave of support for non-Microsoft mobile platforms and browsers.  Xbox Live on iOS and Android, as well as Microsoft CRM on iOS, Android, and BlackBerry, are a bit of a shock to observers.  Microsoft CRM support for Firefox, Chrome, and Safari also seems a bit counter-strategic.  Bing has been producing clients for non-Microsoft devices for a couple of years now.  Plus we can expect Skype to continue to support the broadest possible array of clients rather than pull back to focus purely on those running Microsoft operating systems.  And it looks like Linux support is coming to Windows Azure.  This all runs very counter to the thinking of the 90s or even 2000s, when there was constant tension between those who thought multi-platform offerings made sense for their business and those who wanted to keep the focus purely on the Windows platform.  If only one business were coming out with an iOS client one could write this all off as that unit having decided to buck corporate culture, but with most businesses apparently looking for ways to use popular non-Microsoft platforms to reinforce their business objectives it seems likely that a broader corporate culture change has taken place.  No doubt it’s still Windows first and foremost, but it’s now culturally acceptable to offer targeted non-Windows operating system support when that is important to the overall business case.

The bottom line here is that people need to stop thinking of Microsoft as the dominant, and domineering, PC software company of the 90s and start thinking about it as the GE of the high-tech world: a company that is in a lot of market segments, and wants to be number one in those segments, but that will never again achieve the market dominance it once had.  It will be different from GE in that Microsoft is a collection of mutually reinforcing businesses rather than a conglomerate of independent businesses.  It will also be different from its former self in that it can no longer rely on its dominance to leave the “messier” parts of the business to the ecosystem.  Microsoft will have to take on more of the end-to-end solution, as it did with Xbox, in order to succeed.  These changes will, over the course of the next decade, make Microsoft a very different company from the one whose image still inhabits our brains.  It might also make it a more successful one.

Posted in Computer and Internet, Microsoft | Tagged , , | 2 Comments

Forget Mix, Remember TechEd

Now that Microsoft has formally killed its Mix conference, let’s talk about how it is going to bring Windows Phone 8 and updates on Windows 8 to developers.  First a little on Mix, then a discussion of the difficulties of conferences in general, then a discussion of what all the reports I’ve seen so far miss, and a prediction.

As others have mentioned, it was time for Mix to die because its original purpose, convincing Web Developers to use Microsoft technologies, no longer warrants a conference.  Mix was started in an era when there was extensive HTML fragmentation and Microsoft was trying to get developers to focus on the Internet Explorer variant of HTML, when AJAX (much of whose technology Microsoft had invented) had become the new new thing yet Microsoft was considered anti-AJAX, and when the real threat for web domination was being posed by Adobe Flash.  Microsoft responded to Flash technologically by introducing Silverlight and Expression Blend, and Mix was invented as a major tool for reaching out to web developers and, even more importantly, the design community that had coalesced around Flash.  With HTML5 effectively slaying both fragmentation and Adobe Flash (and Silverlight), the justification for Mix ended.

For the last two years Mix was in transition away from its web roots towards being a Windows Phone developer conference, and I’d expected this to continue (with a name change).  This wasn’t a planned thing; it was a matter of convenience.  Given the planned Fall 2010 release of Windows Phone 7, a Spring 2010 developer conference made sense.  And since the Windows Phone 7 app model was based on Silverlight, and its design/development environment was based on the same tools as Silverlight, why not use the spring Silverlight-oriented conference as the launchpad?  The schedule and justification remained the same for the Mango release and Mix ’11.  And so my expectation had been that Microsoft would repurpose and rename Mix and use its time slot for a new mobile-developer-oriented conference.  That this didn’t happen is most likely another indication of the synchronization and synergy that is building between Windows Phone 8 and Windows 8.  Why have a separate Windows Phone-oriented developer conference if you are going to have a common app model, tool set, etc.?  Future Windows Phone-specific information should just be a track within a larger Windows developer conference.

What is bothering me right now is a timing issue.  With both Windows 8 and Windows Phone 8 likely hitting RTM this summer or fall isn’t a spring developer event of some form needed?  I think the answer is yes, and I can guess at how Microsoft will address this problem.

Setting up a big conference is a nightmare, starting with acquiring a location.  For conferences as big as PDC/Build, space is such a huge problem that you generally reserve it a few years in advance.  Taking a little tangent here: when I was running the Quests technical strategy process at Microsoft we had a very tight schedule that was created a year or more in advance to lock down executive calendars.  The entire process had to fit between Labor Day and a set of Executive Briefings (SteveB et al) that were scheduled in December.  This included a conference of the top few hundred technical leaders in the company that had to take place two or three weeks before the Executive Briefings.  One year a late decision came down to hold a PDC, and the only week a venue could be secured was the week of the Quests conference.  With so many technical leaders involved in PDC we agreed to move the Quests event, but the only date in the narrow range we had to hit for which we could secure a venue was the same week as a big event Microsoft Research (MSR) holds in China each year.  After discussing this with MSR leadership I decided we’d have to live with diminished MSR participation at the Quests event.  To prevent this scenario from happening again the team behind PDC went back to reserving space a few years into the future.  I don’t know if they are still doing so, but it makes sense that Microsoft already has a location locked down for a potential Build conference this fall.

For any conference that Microsoft would like to hold at which 1000 or more people will attend, it is likely that space is already locked down.  Mix would have been a case where the team locked down the “Mix ’12” space a year ago, so it would have been very convenient to just co-opt that space and create a “Spring Build” conference this year.  It could still happen (assuming they’d locked down space for Mix in April, a new conference announcement at Mobile World Congress (MWC) would allow barely enough time to get people signed up), but Microsoft’s statement that there would be a single developer conference all but nixes the idea.  Mix was a much smaller conference than Build, and (unless Microsoft planned this at least a year ago) wouldn’t have the space reserved that it would need for the larger joint audience.

While a spring conference of some sort isn’t out of the question, what about a fall event?  A fall event makes sense and I can imagine a large developer conference at that time.  There is only one problem: such a conference would be too late to influence the initial set of applications for Windows Phone 8 and Windows 8.  In this day and age you can educate developers through other means, such as virtual conferences held over the Internet, a road show of events in cities around the world, and other online training and resources.  No doubt Microsoft will use all of these.  But that still leaves a gap where you are trying to build developer excitement and give developers direct access to product group personnel.  The articles I’ve read about the Mix cancellation have been saying the only event currently on the calendar is MWC, but they are wrong.  The biggest annual event on Microsoft’s calendar is still there: TechEd 2012 is scheduled for June 11–14 in Orlando.

It used to be that TechEd (North America) was the Microsoft event of the year.  The rule at TechEd is to focus on shipping or soon-to-ship products, and so periodically a PDC was held to introduce developers to longer-lead-time products.  Last year’s Build conference met this need for Windows 8.  With Windows 8 and Windows Phone 8 very close to shipping, it makes a lot of sense for TechEd to include quite a few sessions on them.  TechEd is also an IT (including IT developers) oriented conference.  That’s an audience Microsoft has not focused on enough the last couple of years, and so re-emphasizing TechEd as part of the Windows 8/Windows Phone 8 launch process would be an excellent move.  In fact, with IT-oriented developers amongst the strongest critics of Metro, one might claim it is imperative that Microsoft strongly court this community.  And I can’t think of a better way to do so than to turn TechEd into a major Windows 8 event.  Likewise, one of the biggest complaints about Windows Phone is how it largely abandoned Enterprise use in favor of a consumer focus for Windows Phone 7 (with only a modest improvement for Mango).  Windows Phone 8 likely returns the Enterprise to being an equal focus, once again making TechEd an ideal forum for re-engaging the IT audience.

TechEd has other advantages as well.  It is not just a single US-based event, but rather a series of worldwide events.  For example, TechEd North America in mid-June is followed closely by TechEd Europe in Amsterdam in late June.  In some regions (e.g., India holds its TechEd significantly before the North America event) this leaves TechEd out of sync as a Windows 8/Windows Phone 8 launch event.  But the content remains relevant throughout the year (so it can still be used at TechEd India 2013).  No doubt Microsoft would use other means to reach developers in those regions.

About the only downside I see to using TechEd as the springboard for Windows Phone 8 developers, and as an update for Windows 8 developers, is that it isn’t appropriate for gaming or other purely consumer-centric app developers.  But nothing is perfect, and you can use other events or tools to reach those for whom TechEd is not the best forum.

So, is TechEd the next really big event on the calendar for Microsoft-oriented developers focused on Windows 8 and Windows Phone 8?  That’s the conclusion I’m coming to as I read the tea leaves.  I won’t be surprised if I’m wrong.  But if I’m right I do think it would turn out to be an inspired move on Microsoft’s part.


Posted in Computer and Internet, Microsoft, Mobile, Windows, Windows Phone | Tagged , , , , , | Comments Off on Forget Mix, Remember TechEd

Untangling some of the people moves at Microsoft

Yesterday Mary Jo Foley broke the news that Dave Cutler, the father of Windows NT (and of DEC’s RSX-11M and VAX/VMS), had moved from the Windows Azure team (of which he was one of the founders) to the Xbox team.  Other bloggers have noted various other moves of long-time employees to the Interactive Entertainment Business (IEB) as well.  What’s going on here?

Generally speaking, three things are going on.  The first is a continuing set of reorganizations and strategy changes that have caused an unusual level of turnover in the senior ranks.  The second is the “Sinofskyization” of Microsoft’s culture and organizational structure, which has reduced the opportunities for senior people in many organizations.  And the third is that IEB is one of the growth areas in the company; with the increased focus on entertainment in addition to gaming, and a next-generation Xbox apparently on the way, that organization is one of the few with exciting work for senior people.

While Microsoft is always in continuous reorganization, some have bigger long-term impacts than others.  Back in 2009 Microsoft decided to eliminate the Connected Systems Division (CSD) and merge its responsibilities with the SQL organization to create the Business Platform Division (BPD).  As BPD went through various organizational and strategy changes many of the former CSD leaders found their projects and responsibilities altered or cancelled and decided to seek opportunities outside BPD.  Technical Fellow Brad Lovering left Microsoft.  Technical Fellow John Shewchuk moved to DAIP to drive cloud identity-related efforts.  And a surprising number of ex-CSD people made their way to IEB, such as Distinguished Engineer Don Box and at least three former CSD General Managers.

Next up is “Sinofskyization”.  Steven Sinofsky’s management philosophy does two things that severely impact senior people.  One is to collapse the management chain, eliminating most general management roles (including Product Unit Managers, General Managers, and the occasionally used Group Manager role) and functionalizing (Dev, Test, PM) up to the Corporate Vice President (CVP) or higher level.  The other is to eliminate the role of (standalone) Architect.  I am not going to go into all the reasons for this, some of which make good sense to me and some of which I disagree with, but if you are a senior person the message often is “you aren’t wanted here”.  You can see this quite dramatically in the Windows organization, where nearly the entire architecture staff fled a few years ago.  Because Steven’s philosophies have a history of producing high-quality software releases in a reliable fashion, they are spreading throughout Microsoft.  The Server and Tools Business is currently implementing them, leaving many former GMs, PUMs, and Architects seeking new roles either within or outside their current organizations.

Of course the last thing is that IEB is a place doing exciting things and moving rapidly in the process.  So if you are looking for something new to do it becomes a rather obvious place to go.  One of the “core competencies” historically required to work at Microsoft is “a passion for technology”, and so most Microsoft engineers are already into gaming or leading-edge home entertainment systems even if their day job is all about systems and enterprise software.  It doesn’t take much of a leap to combine your professional expertise with your personal passion and go build the next generation of home entertainment systems.

And how does all this relate to the news that Dave Cutler and at least one other senior member of the Windows Azure team have moved to IEB?  While I don’t know any specifics, one can certainly read various tea leaves.  Windows Azure started off as an incubation outside the normal Microsoft engineering structure, but has now gone through a number of organizational changes such that it is part of the Server and Cloud Division with Windows Server exec Bill Laing in charge.  Technical Fellow Mark Russinovich moved from Windows to the Windows Azure team and now appears to be its technology leader.  And no doubt there is now a strategic focus on keeping Azure and Windows Server more in sync.  Anyone who joined the Windows Azure effort for its somewhat free-wheeling freedom to innovate might be looking elsewhere to satisfy that urge.  Also, as Sinofskyization spreads across STB, a good role for Dave Cutler would become less clear.  Sinofskyization tries to eliminate the need for, and thus the influence of, heroes, and Dave is the hero’s hero.  Moreover, ever since giving up his management role (as head of Windows NT) Dave has worked on whatever interested him at the moment.  He may have just decided that what interested him about Windows Azure is now done, and was seeking something else interesting to do.

Sadly the movement of senior people around Microsoft has little to do with executive decisions to move key resources to places they are needed.  Rather it is because senior people are finding their existing roles eliminated and few interesting roles to be had.  What this means for Microsoft in the long run is one of the great debates amongst current and ex-employees, and a debate that likely won’t be settled until the next seismic shift in the industry.

Posted in Microsoft | Tagged , , , | 5 Comments

Skype: Get real about what to expect after the Microsoft acquisition

There is a lot of noise in the system this week questioning when we’ll see the first fruit of Microsoft’s acquisition of Skype.  I think it is important to put this acquisition in context, because I find people’s expectations a bit naive.

Skype has been part of Microsoft only since mid-October 2011.  Sure, the acquisition was announced in May 2011, and certainly discussions between the companies on ways to work together could have been going on for many months before that.  But on that day in May, when the ink on the agreement was still wet, a funny part of (U.S., at least) anti-trust law kicked in.  Neither company was allowed to make any changes to their product plans, strategies, business plans, etc. until the transaction closed.  The only allowable activities were planning for what would happen once the acquisition was complete.  This is meant to keep the two companies “whole” in case the transaction is blocked by regulators.  But sadly it means that anything Microsoft and Skype want to do together couldn’t even be started until mid-October.  Why haven’t we seen anything out of the acquisition yet?  Well, it’s only been 3 months!

While the impact of anti-trust law is not a Microsoft-specific factor, other factors are.  As soon as you become part of Microsoft the pressure becomes intense to make your product a Microsoft product.  How quickly can you move to the proper installation technology?  Help technology?  Meet globalization guidelines?  Address all of the legal requirements Microsoft is subject to around the world?  And perhaps most importantly, bring your products into line with the Security Development Lifecycle (SDL) process.  You can continue shipping existing products without addressing these issues, but as soon as you release a new one you have to be on the path to it being a Microsoft product.

No, you don’t have to have everything “Microsoftized” in a single release.  For example, on the SDL front you likely have to do an evaluation to see where you are, address anything serious, and then seek exceptions to ship with more minor issues (e.g., false positives in your code).  Those exceptions don’t last forever, so after that first release you are going to have to devote resources to finishing the SDL work as soon as practical.

There is more flexibility on fronts other than the SDL, but there will still be pressure to address them quickly.  Let me take a legal example: the placement of international borders and the names of places.  Where these are in conflict, Microsoft (and other companies) have had to make decisions on how to present them (e.g., on maps) that balance the requirements of the conflicting parties.  Failure to do so can result in Microsoft products, and not just the offending product, being banned from sale in those countries.  Imagine releasing a Microsoft Skype and having China ban the sale of Windows and Office because Skype offended the Chinese government (even though as a standalone company Skype was allowed to slide on addressing China’s concerns).  In addition to the immediate negative business impact, the PR hit would be unacceptable.

Beyond the integration impacts I’ve mentioned, it is also important to decide what you want to do that immediately says “we’re a Microsoft product now”.  In other words, show some positive benefit of the combination.  Usually that means some kind of real integration with other Microsoft products.  The most obvious one for Skype would be to support the use of Live ID as an alternative to Skype’s own identity system.  Is it necessary that they do this in a first release?  No.  Would it immediately send a message and add hundreds of millions of users to Skype’s user base?  Yes!  Are they doing this?  Eventually, of course.  But would they hold the first release to make an initial stab at it (e.g., still require creation of a Skype ID but allow it to be linked to a Live ID)?  I would.

I expect new Microsoft Skype products to start shipping later this year, perhaps as early as this spring (depending on what they were working on prior to May 2011 and how much they try to Microsoftize things).  But we all have to keep our expectations in check.  The acquisition process is not pretty, and the real work of integrating the companies and the products has only been going on for 3 months.

Posted in Computer and Internet, Microsoft, Mobile, Telecom | Tagged , , | 5 Comments

Microsoft Trustworthy Computing (TwC)

Today marks the 10th Anniversary of Bill Gates’ Trustworthy Computing email to all of Microsoft.  Most consider this a transitional event for Microsoft, in particular being the point at which Security assumed its proper position as the most important of the many “ities” that products must address.  Prior to this memo, or more specifically a decision by the Windows team to halt development of a new version while it took about a year to secure Windows XP (with Service Pack 2), Security had just been another one of the “ities” that products addressed.  Yes development teams took it seriously, but not seriously enough.  When I joined Microsoft in 1994 I found its attention to Security lacking compared to what we’d had at DEC (for quite a number of years).  After the start of the Trustworthy Computing (TwC) initiative no sizable commercial product organization on the planet took it more seriously than Microsoft.

TwC has led to a number of great changes in how Microsoft builds products and responds to threats, but in many ways I think the biggest change was in making it OK to break backward compatibility to address serious security issues.  This was the biggest struggle the Windows team faced in creating XP SP2.  It was also the biggest challenge we faced in SQL Server.  I know I’ve written about the blank SA password problem before, but this is an appropriate time to re-tell the story.

Sybase had originally shipped SQL Server with the SA, or System Administrator, login password defaulting to a blank (meaning no password at all).  Microsoft, in its porting role, had retained that default.  Over time many organizations had failed to create passwords for the SA account, and many millions of lines of scripts and application code depended on the SA account not having a password.  I really wanted to change that situation, but as we worked on SQL Server 7.0 there were many bigger fish to fry and we didn’t need to introduce yet another compatibility problem.  For SQL Server 2000 I owned the central Program Management team and thus the Security PM reported to me, as well as the Setup team, so we took a serious look at the problem.  What we decided to do was to retain the ability to have a blank SA password, but change that from being the default to one that the person installing SQL Server (be it a new installation or an upgrade) would jump through hoops to select.  Then in a later version we’d look at eliminating the SA login entirely.  And so SQL Server 2000’s Setup implemented this change.  Meanwhile we also wanted the management tools to issue a warning every time someone connected to a server using SA/nopassword, but they were far behind schedule and didn’t implement this functionality.  In the pre-TwC world this was an acceptable tradeoff, whereas in the post-TwC world it would not have been.

In truth we could have been even more draconian in our attempts to eliminate SA/nopassword.  For example, we could have forced you to actually run a separate setup.  Or disabled features when you didn’t have a password.  Or created a mapping mechanism that actually meant “grab the password from this other location” so that code would not need to be changed yet a password would indeed exist.  But at the time addressing this problem, particularly as it stemmed from bad practice by users rather than an actual bug in the system, wasn’t paramount.

SQL Server also had two different authentication modes, Windows Authentication and Mixed-Mode (i.e., Standard plus Windows) Authentication.  The entire SA/nopassword scheme was part of the legacy Standard authentication.  Ever since Microsoft first ported SQL Server to Windows NT, the philosophy had been to not enhance Standard Authentication as a way to encourage movement to the more secure Windows Authentication.  And so SQL Server 2000 didn’t have any of the modern protections, such as requiring complex passwords, for Standard Authentication.  Over time we realized that Standard Authentication wouldn’t die because of scenarios where Windows Authentication was not practical (e.g., access from a Unix system), and so for Yukon (SQL Server 2005) a proposal was made to enhance Standard Authentication.  My guess is that this feature would have been cut had it not been for the TwC initiative.  Once Microsoft’s overall priorities shifted towards favoring security work, the bad situation with Standard Authentication could not be tolerated, and thus SQL Server 2005 brought it into the modern era.  Note that there were third-party tools that added good password management to SQL Server, and we did investigate licensing and shipping such a tool with SQL Server 2000.  However, the philosophy of wanting to deprecate Standard Authentication kept this effort from bearing fruit.  Instead I co-opted an MCS consultant to write and make available a series of stored procedures that analyzed SQL Servers for bad practices, like SA/nopassword.  These were the precursor to the Best Practices Analyzer (BPA) released a few years later.
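To make the "complex passwords" point concrete, here is a minimal sketch in Python of the kind of checks a password policy applies: minimum length, multiple character classes, and a ban on trivial values like a blank password or one equal to the login name.  This is purely illustrative; it is not SQL Server's actual policy code, and the specific thresholds are my assumptions.

```python
def password_meets_policy(password: str, login_name: str = "sa") -> bool:
    """Illustrative password-complexity check, loosely modeled on the kind
    of policy rules SQL Server 2005 began enforcing for SQL (Standard)
    Authentication logins. Not Microsoft's actual implementation."""
    if not password:                              # blank: the SA/nopassword problem
        return False
    if password.lower() == login_name.lower():    # password equal to the login name
        return False
    if len(password) < 8:                         # illustrative minimum length
        return False
    classes = [
        any(c.islower() for c in password),       # lowercase letter present
        any(c.isupper() for c in password),       # uppercase letter present
        any(c.isdigit() for c in password),       # digit present
        any(not c.isalnum() for c in password),   # punctuation/symbol present
    ]
    return sum(classes) >= 3                      # require three of four classes

# password_meets_policy("")           -> False  (the blank-SA case)
# password_meets_policy("Tr0ub4dor!") -> True
```

The interesting design point is the first two checks: they are exactly the bad practices the stored procedures mentioned above scanned for, independent of any complexity rules.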

SQL Slammer was the SQL Server team’s personal wake-up call on the importance of the new security efforts.  For SQL Server 2000 we’d introduced a new networking model to eliminate the complexities of configuring SQL Server’s network libraries.  This was part of a prime directive, focusing on ease of use, that had guided us for both SQL Server 7.0 and SQL Server 2000.  Unfortunately a small coding error in the new networking support was exploited to create SQL Slammer and effectively take down the Internet.  The bug itself is something that hopefully would not have occurred in the post-TwC-memo environment, due to the tools and practices introduced with the Security Development Lifecycle (SDL).  But the more profound effect was for Microsoft to backtrack on ease of use and disable network access to SQL Server by default.  This is most noticeable in the Desktop (now Express) Edition, which for a number of reasons (most notably that an Enterprise could have thousands of copies running without a clue that they were there) was the vector that really let SQL Slammer get out of control.  I’d been the “godfather” of the new networking approach (and of MSDE 2000), so SQL Slammer weighed very heavily on me.  In one of the more poignant events of my career, the developer who’d written the buggy code contacted me and apologized for it.
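Slammer-style bugs come from trusting sizes that arrive on the wire.  As a generic, hedged sketch (in Python, with a toy packet format of my own invention, not the actual C networking code), the safe pattern is to validate every declared length against the bytes actually received before parsing:

```python
import struct

MAX_NAME = 128  # assumed fixed-size buffer limit in this hypothetical parser

def parse_packet(data: bytes) -> str:
    """Parse a toy packet: 1 opcode byte, a 2-byte big-endian name length,
    then the name. Each declared length is checked against the bytes
    actually present -- the check whose absence (with a fixed-size stack
    buffer, in C) is what Slammer-style overflows exploit."""
    if len(data) < 3:
        raise ValueError("truncated header")
    opcode, name_len = struct.unpack_from(">BH", data, 0)
    if name_len > MAX_NAME:
        raise ValueError("declared name length exceeds buffer limit")
    if len(data) < 3 + name_len:
        raise ValueError("declared length exceeds bytes received")
    return data[3:3 + name_len].decode("ascii", errors="replace")

# parse_packet(b"\x04\x00\x05hello") -> "hello"
# parse_packet(b"\x04\xff\xffx")     -> raises ValueError
```

The SDL-era tools and practices mentioned above are largely about making the omission of checks like these either impossible (safe string/buffer APIs) or detectable (static analysis, fuzzing).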

The truth is that the bug exploited by SQL Slammer had in fact been patched a few months earlier.  But this was prior to Microsoft Update, and patch application was a manual process.  Few customers had installed the patch, and even those who had missed the “embedded” MSDE installations, which it turned out were nearly unpatchable.  For the SQL Server team, finding a way to automate patch distribution and installation became a priority, and with TwC pushing all Microsoft products to address this problem it led to the creation of Microsoft Update as an extension to the existing Windows Update.

The MSDE patchability problem was itself a side effect of our being forced to adopt Windows Installer prematurely.  We were under tremendous pressure to move to Windows Installer for SQL Server 2000, but realized we couldn’t make the move in time.  However, MSDE needed a new setup anyway, and so we placated the powers that be by using Windows Installer to create it.  Unfortunately its style of allowing one installation to be embedded within another became the patching problem for SQL Slammer (as you really needed the top-level Setup, usually a third-party application, to distribute the patch).  And it actually made the SA/nopassword situation worse.  Early versions of Windows Installer had no way to keep parameters from ending up in plain text in a log file, and so MSDE’s setup couldn’t allow a password for the SA account to be specified when it was invoked.  It would perform a silent install without a password, and then it was up to the top-level setup (or a user) to change the password to something sensible.  When the SQL Server team brought this issue up with the Windows Installer team, resolution was pushed off to a future release.  This would never happen in the TwC era, but then I doubt a team operating under the SDL would ever have allowed their design to have this problem in the first place.

The security problems that started surfacing in the late 90s had other early impacts on SQL Server.  For example, changes in Outlook to keep malware from spreading via email broke our SQLMail feature.  While that led Kurt Delbene (then Outlook GM) and me to have a couple of heated arguments (which did result in a workaround), it was a good early example of starting to favor security over backward compatibility.

Over the last 10-12 years security has gone from being just one of many important characteristics to address, to a characteristic with a fairly high minimum bar that must be met if you expect customers to use your product.  As I’ve written several times, things are far from perfect, and today’s minimum bar is still unacceptably low.  But you have to crawl before you walk, and walk before you run.  We were crawling before TwC; now the industry is walking.  Hopefully soon we’ll graduate to running.

Posted in Computer and Internet, Microsoft, Security, SQL Server | Tagged , , | 2 Comments

Is 2012 the year people break the Cable habit?

Will this be the year you break the cable habit and move entirely to getting your “TV” entertainment over the Internet?  The Wall Street Journal’s Kevin Sintumuang has.  For those who watch a very limited amount of TV, have a great Internet connection, and can either do without live sports coverage or pick enough up over the air, it is a fine option.  For the rest of us, though, the conditions aren’t yet ripe to make the move.  I think the trend away from traditional cable towards Internet-based non-sequential programming continues to spread slowly for about five years and then hits the knee of the curve.  But it will be at least a decade before Cable truly starts to fade into the sunset.

Cable has two problems.  The biggest is that it forces you to pay for a lot of programming that you never watch.  My favorite personal example was receiving a letter from my then Cable company announcing that we were getting five more channels and as a result our monthly cable fee was going up.  Those channels consisted of three more home shopping channels and two Spanish-language channels.  We don’t watch home shopping channels and we don’t speak Spanish, and since we couldn’t opt out of this change it was nothing more than a (substantial) price increase.  That was 15 years ago, and the trend continues: now we have hundreds of channels but there is “nothing on”.  Not only do Cable (and Satellite) companies keep adding channels few want, and raising prices to cover them, but existing channels blackmail the Cable companies.  Want to carry Channel X?  Then you have to carry (and pay for) Channels W, Y, Z, A, B, and C from the same company (e.g., NBC Universal or Disney or…).  Or, pay us more for Channel E or we’ll cut your viewers off from their favorite programming.  ESPN and other sports channels now represent such a large chunk of cable programming costs that Cable companies are considering creating packages that don’t include sports programming at all!

The second dynamic is the “there is nothing on” one.  Of course that’s not true, but it always seems that way.  My wife and I have a terrible time finding something we both want to watch.  We both love House and it’s just about always on, but the episodes are all repeats.  Worse, we didn’t watch the first few seasons and we always find the repeats are ones we’ve seen and not the ones we haven’t.  Or take Starz, which is about to start a new season of Spartacus, a show I didn’t watch in its original 2010 season.  A friend suggested the series and now I want to watch the old season before the new one starts.  Cable is not good for this; streaming or Video-On-Demand services are.

Now, the dynamics holding us, and the world, back from a rapid move away from Cable are extensive.  Of course the Cable companies are fighting the move in various ways (e.g., their own Video-On-Demand services, included in the package, attempt to mimic the benefits of Internet-based services).  And content providers are conflicted, fearful of angering their primary channel (Cable) by making content too easily available over the Internet and worried that they won’t be able to make as much in the Internet world as in the Cable world.  But there are two bigger dynamics at play: the current state of the Internet infrastructure (at least here in the U.S.) and, perhaps most importantly, Age.

Let’s get the Internet infrastructure out of the way first.  Very few Americans have Internet service capable of providing reliable, high-quality video streaming.  Most don’t even have it available to them.  And even when they do from a spec standpoint, the infrastructure behind it doesn’t have the capacity to support everyone who wants to stream video.  My 10 Mb/s home service usually gets at least 6 Mb/s to my Internet provider, but their connection to the rest of the Internet frequently seems to overload so that I’m getting 3 Mb/s or less.  Often during busy periods it might drop to 1-2 Mb/s.  And so a service that in theory should allow for reliable watching of high-definition programming rarely is capable of doing so.  Most Internet video either degrades dramatically in quality for periods of time (with Netflix, for example) or stalls or even reports the loss of the Internet connection (pretty much all streaming services other than Netflix) during the course of a program.  Even the 25 Mb/s FIOS service at our second home has these problems, though less frequently.  These problems at best degrade the overall viewing experience and at their worst ruin an otherwise well-planned evening.  Until most people can reliably watch streamed media (or quickly download it to local storage and then reliably watch), they aren’t going to start dropping their Cable or Satellite service.
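The arithmetic behind that frustration is simple.  As a rough sketch (the bitrate and overhead figures below are my illustrative assumptions, not measurements): if an HD stream of this era needs roughly 5 Mb/s sustained, a nominally 10 Mb/s line that sags to 3 Mb/s at peak simply cannot hold the stream without buffering or quality drops.

```python
def stream_ok(sustained_mbps: float, stream_mbps: float, overhead: float = 1.2) -> bool:
    """True if the connection's sustained throughput covers the stream's
    bitrate plus a safety margin for protocol overhead and jitter.
    The 20% margin is an illustrative assumption."""
    return sustained_mbps >= stream_mbps * overhead

HD_STREAM_MBPS = 5.0   # rough, assumed figure for early-2010s HD streaming

# On a nominal 10 Mb/s service:
# stream_ok(6.0, HD_STREAM_MBPS) -> True   (off-peak: 6.0 >= 5.0 * 1.2)
# stream_ok(3.0, HD_STREAM_MBPS) -> False  (busy hour: 3.0 < 6.0)
```

The point of the margin parameter is that "my plan says 10 Mb/s" tells you almost nothing; it is the sustained busy-hour throughput that determines whether an evening of streaming works.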

But the biggest factor hindering the move away from Cable, and the one that will eventually accelerate it, is the age of the customer.  Or rather, generational viewing habits.  The remaining members of the “Greatest Generation” are never going to make the switch.  For Baby Boomers, who grew up alongside Cable, our viewing habits are too well ingrained for most of us to make the switch entirely.  We are going to augment our Cable with these services, and we may even move to reduced-content Cable programming packages.  But by and large we like channel surfing and instant gratification too much to give them up.  The younger you are, though, the less attractive Cable becomes.

We are now 20 years into the DVR revolution and 15 years into the DVD revolution.  More realistically, we are about 10 years into the general acceptance of both.  When I talk to parents I find that very few allow their children to just turn on the TV and search for something to watch.  Generally they record specific programs on their DVR that they want (or will allow) their children to watch, and supplement that with DVD (or more recently Internet-based) programming.  A simple look at the timeline shows we are just entering the period in which millions of kids who’ve never been hooked on the Cable habit leave the nest and make their own decisions about the entertainment they bring into their homes.  Getting their entertainment on demand over the Internet is going to be the norm.  Cable is going to be a hard sell.

My prediction of five slow years followed by a knee in the curve, and then a real decline in Cable in about ten years, is predicated on both improvements in Internet infrastructure and the generational shift that is underway.  It is also predicated on content providers “falling into line” and making it easy and cost-effective for consumers to obtain their content online.  For the next few years they’ll offer some resistance to protect Cable and DVD sales, but eventually enough consumers will give up on programming they can’t get over the Internet (without a Cable subscription) that content providers will be forced to treat it as a first-class citizen.  There are other things that need to develop as well, such as better ways to create virtual channels and channel guides, and improvements to sampling programming, but those will come along soon enough.  Even ignoring the dozens of other options, a battle supreme is emerging between Microsoft’s Xbox, Google TV, and whatever comes next from Apple to define the future “TV” viewing experience.  At least one of them will nail the viewing experience to the degree that most people, even those unwilling to drop Cable, find Internet-based TV truly compelling.

So the bottom line is that 2012 will be another year for early adopters, but for the vast majority of us Cable or Satellite will remain our primary means of bringing entertainment into the home.  Many of us will follow the 2012 Presidential elections on the Internet, but mostly turn on Cable TV to view debates and watch the results roll in.  But in 2016 a significant minority, and perhaps most first-time voters, will use only the Internet to follow the elections and their results.  Because that is the only thing they’ll have.

Posted in Home Entertainment | Tagged , , | 7 Comments

Will Enterprises (aka Business) buy Windows 8?

There has always been a significant divergence in PC software buying behavior between Consumers and Enterprises.  Today we see lines at Apple stores to purchase the latest ithingy when it goes on sale at midnight, but in 1995 it was people lining up to buy a Windows 95 upgrade.  And when Windows 7 came out, the consumer PC market shifted to it immediately.  But the Enterprise story is completely different.

Enterprises tend to shift operating systems, and other software, very slowly.  First they have to evaluate the impact on their installed base of hardware.  Some will be upgradeable; others will need replacement.  Since we are talking about large numbers of PCs, often tens of thousands (and in some cases hundreds of thousands), it can take considerable time (i.e., years) to obtain enough budget to refresh an organization’s PCs.  Next they have to make sure their applications run properly on the new operating system.  This itself requires dedicated staffing for a considerable period of time, particularly if applications need to be modified.  And they may have to wait for a 3rd party to support the new OS, and then look at the impact of upgrading to the 3rd party’s latest version.  Then they need to manage the logistics of rolling out the operating system upgrade, potentially with new hardware, potentially concurrent with new applications, and potentially with new training for employees.  And before they start any of this they need to be convinced that the new operating system offers enough benefit for the cost and effort it will take (which, for example, is why so few businesses deployed Windows Vista) and that it is stable.  Put all of this on a timeline and it’s a multi-year process, and often a many-year process.  Which has led to an interesting dynamic: new operating systems come out before Enterprises finish rolling out the previous version, and thus they usually skip a version.  Enterprises universally upgraded to Windows XP, most skipped Windows Vista, and most are now in the midst of their Windows 7 upgrade cycle.  Thus the conventional wisdom is that most Enterprises will skip Windows 8.  I think the picture has become far more complicated.

Windows 8 has two strikes against it for Enterprises.  First is the life-cycle problem I describe above, and second is that with a dramatically new user interface the employee re-training requirements are the highest since the introduction of the Chicago shell in Windows 95.  On the other hand, Microsoft continues to show that it understands and is avoiding key mistakes it made with Windows Vista, such as requiring major hardware upgrades and breaking application compatibility.  Windows 8 will run on all hardware that Windows 7 runs on, and will likely run better on it than Windows 7 does.  Application compatibility also remains extremely high, meaning existing Windows 7 applications won’t need work to run on Windows 8.  Windows 8 also likely won’t have dependencies that force infrastructure upgrades (e.g., you can likely deploy Windows 8 clients without upgrading your Active Directory version).  Microsoft has avoided the third strike.

Are there good enough reasons to rush out and upgrade all the PCs in your shipping department, call centers, factory floors, retail outlets, or even Information Worker desktops to Windows 8 after you’ve just deployed Windows 7?  No.  But I am now going to argue that this is unimportant, because I believe most businesses are going to deploy hybrid Windows 7/Windows 8 environments.

Enterprises are impacted by the shift to tablet computing as much as Consumers, and in many ways perhaps more so.  The problem they face today is that as their Information Workers, or business units, want to incorporate tablets, the real choices are iPads or a dizzying array of highly fragmented (from an OS version and UI standpoint at the very least) Android devices.  Application compatibility with their existing apps?  None.  Training applicability with their corporate PC setup?  None.  Management compatibility with existing practices?  None.  Ability to enforce information security policies? Limited.  Ability to write new applications that work on their desktops and notebooks as well as their tablets? Limited (to websites, basically).  Expertise within their development organization to write applications for iOS and Android devices?  Limited.  And the list could go on.  The bottom line is that, from an IT perspective, a Windows 8 tablet is far easier to incorporate into their existing environment than any other option.  So while they may wait for “Windows 9” before taking the Enterprise through a full upgrade cycle, Enterprises will begin targeted deployment of Windows 8 surprisingly quickly.

If Microsoft is really lucky (and has done a great job), this will start with the CEO (and other CxO-level executives) deciding they want a Windows 8 tablet and essentially forcing IT to let them put it on the corporate network.  That is what happened with iPhones and iPads being allowed access to corporate email.  It will then spread to other executives, and hopefully IT will find supporting them fairly easy.  Other employees will have purchased Windows 8 tablets for personal use and will pressure IT to let them use the same technology at work.  IT will be forced to develop policies for allowing and supporting Windows 8 tablets within the environment, which then opens up Windows 8 deployment more generally.  In other words, this time it will be Microsoft riding the “consumer-driven” IT trend.

More likely, though, is the path of a business unit having a requirement to deploy tablets (e.g., traveling sales or service reps) and IT responding defensively with a preference for Windows 8 tablets.  And once IT has the capability in place to support Windows 8, and the policies to support bringing it into the Enterprise, business units will start to drive non-tablet deployments.  The pace at which this happens is perhaps the most debatable point in this entire blog posting because it depends on how quickly Enterprise-oriented ISVs write Windows 8 Metro applications.  And that is a chicken-and-egg problem: Enterprise ISVs aren’t going to switch focus to Metro until they have confidence that their customers are going to start deploying Windows 8, and customers won’t commit to deploying Windows 8 broadly until the applications they need exist.

I want to be clear that I’m not claiming fast deployment of Windows 8 in the Enterprise; I’m only claiming Enterprises won’t skip it completely (as many expect).  Adoption will be near non-existent in 2012 and tepid in 2013.  But by the time Windows 9 ships, most Enterprises will already have deployed Windows 8 to a surprisingly broad part of their employee population.  And then they can do the full-cycle rollout of Windows 9 to replace both Windows 7 and Windows 8.

In the end it doesn’t matter for Microsoft if Enterprises move completely to Windows 8 or retain Windows 7 as their desktop/notebook operating system.  What matters is that as companies deploy tablets they deploy Windows 8 tablets over competing offerings.  A hybrid Windows 7/Windows 8 environment is total victory for Microsoft.

And what could go wrong?  First, I completely dismiss a Vista-like scenario in which Microsoft ships something that has lots of negatives and no compelling value.  The biggest threat to Windows 8 in the Enterprise is that, in a Consumer-driven IT world, consumers reject Windows 8.  In that case the pressure on IT to make alternate devices, particularly Apple devices but also Android devices, equal partners to PCs in their computing environment could become irresistible.  For those Enterprise IT guys who are frustrated by Microsoft’s consumer focus these last few years, that is the scenario Microsoft is up against.  To win in the Enterprise it must have a strong position with consumers.  Failure to do that puts Windows in the same camp as MVS (z/OS), VMS, Solaris, etc.: Enterprise-oriented operating systems that can continue to be important and produce healthy revenue for a decade (or decades) beyond their prime, but that long ago lost their central role in the future of computing.  And Microsoft is just not ready for that to be Windows’ fate.

Posted in Computer and Internet, Microsoft, Windows | 3 Comments