I’ve long been thinking of doing a series of postings on both Microsoft’s and the industry’s use of telemetry, and was about ready to start when I realized I’d rather put a cart before the horse. Many have scratched their heads about Windows RT, and in particular its lack of support for third-party “desktop” apps. Ultimately I think Windows RT is the result of heavy reliance on telemetry. Those who have bones to pick with Windows RT will, of course, think of the adage “there are three kinds of lies: lies, damn lies, and statistics,” since it is statistical analysis of telemetry that we’re really talking about. On the other hand, reliance on statistical analysis may explain why the end-user reaction to Windows RT, and to Windows 8 overall, seems much better than that of pundits and power users. It’s hard to be positive about something when you are looking at it from a perspective more than a couple of standard deviations from its design center.
Lots of inputs go into every decision, most importantly data. In all decisions data is in short supply (thus the adage that management is the art of making decisions with incomplete data). One might imagine that if you had perfect data then decision-making would become easy, because the answer would be obvious. Typically you are forced to fuse data from multiple sources, and the more sources, the more potential error is introduced into the analysis. Often those sources are themselves the output of analytical processes rather than raw data, introducing their own errors. Thus you get some of life’s truly head-scratching bad decisions, like New Coke.
But what if you had near-perfect data? What if instead of small sample sizes, limited sampling techniques, reliance on anecdotal data, etc. you had a sample of such overwhelming size and accuracy that you knew, without a doubt, it represented reality? Could you make better decisions? In particular, could you make better decisions when high risk and complexity were involved? Well, that is something the grand Windows 8 experiment will eventually tell us.
When Windows Phone 7 was being designed the App Platform team would look at the top 100 iPhone apps to make sure that the platform could do a good job of supporting them. Customer usage patterns were thus driving decisions, but it was an indirect data set. The Windows 8 team could look at this same kind of data, but it could also look at the massive amount of telemetry Microsoft collects from those who opt-in to its Customer Experience Improvement Program (CEIP).
Anyone who followed the Building Windows 8 blog could see how heavily Microsoft relied on CEIP telemetry in making decisions about the Windows 8 user experience. Dropping the Start Menu in favor of having even desktop-focused users jump into the Start Screen is an example of a decision driven by what telemetry told Microsoft about average user usage patterns. Of course, if you are in the minority of users with a usage pattern far from the average, then you aren’t happy about what Microsoft did. And just because the data told Microsoft how people actually did things doesn’t mean the design decisions it made in response were the right ones. The data didn’t say users wanted a jarring transition out of the desktop whenever they needed to start a new application; Microsoft could have made design tweaks that stuck with the Start Screen idea but made those transitions less jarring.
It may be a little off topic, but let me give you a simple design change that might have made the Windows 8 experience smoother. I don’t think it violates anything one would learn from the telemetry; rather, on a statistical level, it would make even those out around two standard deviations happier. Provide a snapped view of the Start Screen that can be invoked from the desktop. I’m guessing that half of the negative Windows 8 reviews would switch to neutral or positive.
So let’s go back to Windows RT and the Surface, and how telemetry might have figured into the key design decisions there. Recall Netbooks. Five years ago the notion took hold that most computing was moving to the web, and thus all users needed was an inexpensive web browsing device. Moreover, it was recognized that these devices would often be secondary devices that complemented rather than replaced existing PCs. Since Netbooks didn’t have to run existing Windows-based apps, Netbook manufacturers initially focused on Linux as the Netbook OS. The rapid unit growth in Netbooks forced Microsoft to pay attention, initially offering a lower-cost Windows XP and then introducing Windows 7 Starter. Despite costing (by my observation) about 15% more than their Linux-based counterparts, both from OS licensing costs and the need for slightly beefier hardware, Windows-based Netbooks eventually captured over 90% of the Netbook market.
Netbook market share growth then topped out, with Netbooks becoming a sizable niche within the PC space, before being hit by a triple whammy. By 2009, Apple’s introduction of, and rapid growth in, the iPhone’s App Store had provided an alternative to the notion that all apps would simply become web sites, and the iPhone offered decent web browsing as well. Then the distinction between Netbooks and “Thin-and-Light” Notebooks blurred as user demand for both converged on the 11″ screen size. Finally, Apple’s introduction of the iPad provided a much better alternative to the Netbook for a device focused on web browsing, while bringing the iPhone’s App Store along for the ride. Netbooks all but disappeared.
Windows 8 design started before the introduction of the iPad. Even as late as the iPad 2 launch, many analysts still considered Tablets to be no more than a Netbook-like niche. And until very recently the impact of Tablets on PC sales came entirely from the shift of the previous Netbook market to Tablets. So, from a hard-data perspective, what Microsoft mostly had to go on in late 2009 and 2010 when designing Windows 8 and Windows RT was the CEIP telemetry.
Let’s step further away from Windows 8 overall and focus just on Windows RT. It was clear by 2009 that a huge ecosystem was growing around ARM processors. Microsoft had been tracking, and even working on, porting Windows to ARM since early that decade. Deciding to port Windows to ARM was the easy part; deciding what to do with it is where the telemetry probably came in. And let me be clear: this isn’t based on any knowledge of what actually happened, but I’d put money on it being within one standard deviation of the truth.
With the Windows Phone team focused on phones, and ARM processors clearly not being up to the task of (nor having any advantage in) powering full Notebook or Desktop PCs, it was pretty clear that the Windows on ARM design center was the classes of devices between the two. At the time only one class of device had substantial market share between phones and notebooks: the Netbook. Again, the iPad didn’t exist yet, so neither its precise characteristics nor user acceptance of them was known. But the arrival of tablets with user experience characteristics similar to the iPhone’s could be predicted, especially since Microsoft’s internal push on Natural User Interface, as well as its own Tablet PC experience, pointed in that direction. So it was pretty obvious that Windows on ARM needed to target Netbooks, touch-enabled Netbooks, and tablets with characteristics similar to Netbooks. How, in the absence of what we know today (the disappearance of the classic Netbook and the rise of the Tablet), could Microsoft make design decisions? Telemetry.
Why did 90+% of users choose to pay more for a Windows-based Netbook than to go with a Linux-based Netbook? If these devices were simply used for web browsing, then the user behavior doesn’t make sense. We can speculate on this, of course: familiarity with the UI, compatibility with devices such as printers, the ability to run Windows applications (even though that is counter to the original idea behind Netbooks), etc. And analysts can survey customers and make their claims. But Microsoft? Microsoft has precise data from the CEIP.
Microsoft could look at data and see how much users printed and what printers they used. Microsoft could see how often they used the USB port and what they did with it. Microsoft could see how often they docked the netbook to make use of larger monitors and better keyboards and mice. Microsoft could see how often they used WiFi, hardwired Ethernet, or 3G. Microsoft could see what percentage of the time they used the web browser and what types of web sites they visited. Microsoft could see what other applications they ran and how much time they spent using them.
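To make that concrete, here is a minimal sketch of the kind of aggregation such telemetry enables. The actual CEIP schema and pipeline aren’t public, so the record format, app names, and numbers below are purely illustrative assumptions:

```python
from collections import defaultdict

# Hypothetical CEIP-style session records: (app process name, minutes of
# focus time). The real CEIP schema isn't public, so these field choices,
# app names, and numbers are illustrative assumptions only.
sessions = [
    ("iexplore.exe", 95), ("iexplore.exe", 120), ("winword.exe", 30),
    ("excel.exe", 15), ("photoshop.exe", 2), ("iexplore.exe", 80),
]

def usage_shares(records):
    """Aggregate per-app focus time into each app's share of total usage."""
    totals = defaultdict(int)
    for app, minutes in records:
        totals[app] += minutes
    grand_total = sum(totals.values())
    return {app: minutes / grand_total for app, minutes in totals.items()}

# Print apps from most-used to least-used, as a share of all focus time.
for app, share in sorted(usage_shares(sessions).items(), key=lambda kv: -kv[1]):
    print(f"{app:16s} {share:6.1%}")
```

Run over millions of opted-in machines rather than six made-up records, a report like this shows exactly where users actually spend their time, with no survey bias in sight.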
And what do you think Microsoft got from the CEIP telemetry? I’m guessing that they saw the vast majority of Netbook usage was web browsing, with use of Microsoft Office representing a much smaller but still substantial portion. And then I’m guessing they saw a dramatic fall-off, with no other apps registering as significant. Netbooks were basically web-browsing-plus-Office machines. Then they looked at the web usage and saw that a great deal of it matched the kinds of “consumption” apps that were popular on the iPhone and that they were going to target with the new Windows 8 “Metro” app model. And they saw heavy use of traditional Windows features like broad peripheral support, network connectivity, etc. Combine the actual usage data on Netbooks with the emergence of Natural User Interface and the reinvigoration of local apps demonstrated by the Apple App Store, and you have Windows RT.
Some have asked why Windows RT doesn’t have the ability to run arbitrary x86 applications via emulation. Well, first, that doesn’t seem technically viable. DEC’s Alpha ran x86 apps via emulation, but recall that in any given semiconductor generation the Alpha was faster than the equivalent x86. That allowed it to run emulated apps with reasonable performance. In any given semiconductor generation ARM processors are notably slower than the equivalent x86 (though to date they’ve been more power efficient), so emulating x86 apps on ARM would make most apps unusable. But perhaps more importantly, if the data from Netbooks shows that users didn’t run those apps even on a native x86 machine in this class, why would you need to emulate them on ARM?
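The arithmetic behind that viability argument is easy to sketch. The two factors below are assumptions chosen for the sake of the illustration, not measured data, but they show why the compounding is fatal:

```python
# Back-of-the-envelope illustration of why x86 emulation on ARM looked so
# unattractive. Both factors are assumptions for the sake of the arithmetic,
# not measurements of any real chip or emulator.
arm_vs_x86_native = 0.5   # assume a contemporary ARM core at ~half x86 speed
emulation_overhead = 5.0  # assume binary translation/emulation costs ~5x

effective_speed = arm_vs_x86_native / emulation_overhead
print(f"Emulated x86 app runs at roughly {effective_speed:.0%} of native x86 speed")
# ~10% of native. The Alpha could absorb a comparable overhead because it
# started out faster than x86; ARM starts out slower, so the compounded
# result is an unusable application.
```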
Emulation isn’t attractive, but why not support third parties who wanted to port their x86 “desktop” applications to ARM, by providing the tools and allowing installation of third-party desktop apps? Well, of course there are those issues around power consumption, memory use, security, etc. that the new app model addresses but desktop apps would largely still suffer from. Microsoft could have made third-party developers pay attention to those issues, as Microsoft Office did. One problem is that it would have detracted from efforts to get third parties to write to the new app model. And more to the point, why bother if the data from Netbooks showed that users didn’t actually run those apps on this class of machine?
It is unlikely many users ran Photoshop on Netbooks. If they used Netbooks for photography, then they likely used lighter-weight apps of the type that were appearing on the iPhone, and that Microsoft expected would quickly appear in its own Windows Store. As they analyzed the telemetry from Netbooks I think they found this to be the pattern, with the Netbook experience proving that there was little actual customer usage of arbitrary desktop applications on a device in this class.
So take a look at Windows RT, or even better the Microsoft Surface, and realize what it is. The Surface is the intersection of the Netbook and the iPad. It brings exactly what most users liked about Windows on Netbooks into the modern era, while dispensing with the parts of the Windows world that Netbook users simply didn’t take advantage of. It is exactly what users, through their actual usage data extrapolated from the historical Netbook world into the modern device world, told Microsoft they wanted.
Want another possible proof point? Domains. On one hand it seems odd that you can’t join a Windows RT device to a domain; on the other, how many Netbooks were domain-joined? Microsoft may have had many reasons for not including domain join in Windows RT, but whatever those were, it could look at the Netbook data and conclude that the ability to join a domain was not critical to this class of device. Take a look at many of your own questions about Microsoft’s Windows RT decisions and you’ll likely find the answer is in Netbook usage patterns.
The use of telemetry may explain why Windows 8, Windows RT, and the Surface seem to do better with average users than with the pundits and power users out around and beyond two standard deviations. Windows RT and the Surface are designed around actual usage data from a segment of the computing spectrum that was also derided by many pundits and power users, a segment that garnered (as I recall) about 20% of PC unit volume before being obliterated in the “post-PC” shift. If Microsoft has used its wealth of telemetry to build something that nails the real-world usage scenarios that originally made Netbooks popular, while also being roughly as good as the iPad for the scenarios Apple optimized for, then they have a huge winner. Even if pundits and power users don’t seem to like what they’ve done.
And if Windows RT fails? Well, it could be the result of pundits and power users convincing the target audience not to give it a chance. Or it could be the result of poor design decisions made despite having excellent data. Or it could be a series of marketing, sales, and partner missteps that have little to do with the product itself. Or it could be that particularly vicious form of lies known as statistics.