Another review of a Microsoft product (or in this case, service), another black eye. The most recent controversy is over Bing, but let’s face it: a lot of Windows 8’s problem is that the preponderance of reviews panned how its new user experience played out on traditional desktop/notebook devices. That negativity, echoed by the Power User class the reviewers are part of, spread to the general public and became a major drag on acceptance of Windows 8. People who have never seen or used Windows 8 walk into computer stores asking for Windows 7 machines. A computer store I was in yesterday keeps a third-party Start Menu add-on in stock and offers it to every buyer of a new PC. “Does it make Windows 8 work just like Windows 7?” asked one buyer. “Yes,” said the sales rep. How could Microsoft have avoided this? Let’s time travel back to the pre-Internet days for a solution.
When I joined Microsoft in early 1994 I discovered what I then considered a somewhat odd way of driving product development. I would sit in Bill’s reviews of product plans, and many plans listed “Win Reviews” as a key goal of the release. As an enterprise guy I found this odd, because at the time reviews played a minimal role in that space; for end-user products, though, reviews were critical. And whereas enterprise software teams could talk directly to a high percentage of their (existing or potential) customers (CIOs, VPs of Operations or Development, DBAs, etc.), end-user teams could not, so reviewers became a key proxy for end-users.
Recall that we are talking (effectively) pre-Internet. The dominant force in communicating information about computer hardware and software was a handful of print magazines. As the PC era reached its pre-Internet peak these publications had grown to the size of small (and sometimes not so small) phone books. And they were filled with reviews of new products and comparisons of competing products. Trying to decide between Microsoft Word, WordPerfect, and Lotus Ami Pro? Or Windows vs. OS/2? Articles in these magazines were going to weigh heavily in your decision process.
The major publications took this one step further by getting into a “lab war”, building out hardware labs and hiring technical staff so they could dig more deeply into products. Even enterprise-oriented products like SQL Server got drawn into fighting lab wars in a couple of publications. But whereas for SQL Server this was mostly a marketing activity (supporting reviewers by providing resources to help them make sure they’d configured the product correctly, understood the new features, etc.), for end-user products making sure you came out on top in reviews became a product driver.
So “Win Reviews” drove actual product requirements. This meant understanding what reviewers would care about, what they liked and disliked about various products, how you might wow them with your new release, etc. It meant engineering the product with reviewers in mind as the proxy for your end-users. One could debate whether that was really the right thing to do for end-users, and I believe that’s one reason why “Win Reviews” fell out of favor as a product requirement driver. Circa 1992, reviewers were probably a fair representation of the end-user community. By 2002, PCs were so ubiquitous that reviewers represented only the Power User niche, and catering to them made it difficult to address the needs of the broader user base. Somewhere between those two dates, worrying about reviews moved out of the product requirements arena and became a purely outward-focused marketing effort.
For the most part this transition away from letting reviewers drive product requirements was a good thing. It was facilitated by dramatic growth in the ability to communicate directly with the user community. Of course there was the much-mentioned explosion in telemetry. And Microsoft (through a growing sales force and closer partnerships with OEMs, ISVs, consultants, etc.) greatly increased its direct communications with customers. Plus the Internet provided a means for users to express themselves directly. The way I found out how badly I’d missed the boat by leaving Declarative Referential Integrity (DRI) out of SQL Server 7.0 was by following newsgroups and forums and getting blasted about it. The formal communications channels had not brought it up at all! So adding DRI became one of the top priorities for SQL Server 2000.
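For readers who don’t live in databases, DRI simply means declaring integrity rules such as foreign keys in the schema itself, so the engine enforces them rather than relying on triggers or application code. Here is a minimal sketch of the idea, using SQLite from Python purely for illustration (not SQL Server 7.0/2000 syntax):

```python
import sqlite3

# Declarative referential integrity: the FOREIGN KEY clause lives in the
# schema, so the engine itself rejects orphaned rows. No triggers or
# application-side checks required. (Illustrative SQLite, not SQL Server DDL.)
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces FKs only when asked to

conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("""
    CREATE TABLE orders (
        id          INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customers(id)
    )
""")

conn.execute("INSERT INTO customers VALUES (1, 'Contoso')")
conn.execute("INSERT INTO orders VALUES (100, 1)")       # fine: customer 1 exists

try:
    conn.execute("INSERT INTO orders VALUES (101, 42)")  # no customer 42
except sqlite3.IntegrityError as e:
    print("Rejected by the engine:", e)                  # FOREIGN KEY constraint failed
```

Without DRI in the product, customers had to hand-build that enforcement with triggers, which is exactly what they were blasting me about in the forums.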
Now jump forward to the modern era. On the one hand, you might expect reviews to play a smaller role than ever in driving product definition. But consider the marketing side of things. When a market participant is the overwhelmingly dominant player it has little incentive to worry about reviews; reviews, in essence, can only benefit the underdog. When markets are highly competitive, however, reviews can make a significant difference in purchasing decisions. Today every segment Microsoft plays in is competitive, often highly competitive, with significant areas in which it is the underdog. Even in the area where it is dominant, desktop computing, it faces tremendous competition from its own legacy. Windows Vista had to compete with Windows XP. Windows 8 has to compete with Windows 7. Reviews matter.
Consider where Windows 8 would be today if the preponderance of reviews had lauded it as a great follow-on to Windows 7. Consider if “Win Reviews” had been among Windows 8’s goals. It would have taken only a small number of concessions to the purity of the evolution, and/or prioritizing a few further steps along the evolutionary path, to swing the balance of opinion on Windows 8 from negative to positive. Windows 8 is a great release, but to many reviewers, and the Power User class they are part of, it seemed like Microsoft was intentionally rubbing their noses in excrement. And they’ve repaid the favor by lumping Windows 8 in with Windows Vista (a totally ridiculous comparison).
We see this in other areas as well, security being a prime example. Microsoft Security Essentials/Windows Defender hasn’t been doing well in published reviews by testing organizations. When you look at why, you discover a disconnect between the methodology the testing organizations use and the way Microsoft thinks about its collection of capabilities. The methodology often bypasses parts of Microsoft’s offering while exercising the equivalent parts of its competitors’ offerings, so Microsoft comes across as having the weaker product. Microsoft treats this as a marketing problem, trying to get testing organizations to change their methodology and trying to explain the situation to the public. What it hasn’t done is step back and ask “what do we need to change in the products to win the reviews?”
Given where Microsoft is in its various markets, what it needs right now is for the preponderance of reviews of all of its products to be overwhelmingly positive. You can’t get that via outward-focused marketing activities alone. What Microsoft needs is to take a tip from the PC software market of 20 years ago and make “Win Reviews” part of its product planning process.