The change in Windows strategy

There have been a number of rumors out there about the change in direction coming to Windows as a result of the “One Microsoft” reorg, and some opinions about what the rumors mean (e.g., is Microsoft backing away from the whole Metro/Modern thing?).  So I thought I’d put my stake in the ground on what is going on.

Microsoft now sees that there are two different market sub-segments for Windows.  The first is a mobile experience that is touch first and crosses the divide from phones up through tablets.  It is an environment in which, as suggested, the primary way of approaching the device is through the touch-centric and other natural UI we’ve come to expect à la iOS and Android.

The second is a more classic desktop experience that is mouse-and-keyboard first, although it can have touch and can run the mobile apps.  This environment boots into the desktop, has a real Start menu (or a replacement for one), and assumes you live in the desktop day in and day out.

Gone is the notion that the touch-first mobile environment takes over the desktop and thus the desktop only exists as a transition tool.

I don’t know how much progress we’ll see on this front in updates to Windows 8.1, or if the real shift will wait for Windows 9.  I expect that it is mostly something we don’t see until Windows 9, where Microsoft could also introduce significant packaging and licensing changes, update documentation and marketing materials, etc. to match the new model.

This will of course be welcome news to most users, although I think the devil is in the details.  For example, will Microsoft remove the desktop from the mobile SKU?  Will the desktop SKU be able to run in full mobile mode to support 2-in-1s?  Or will the differentiation be more stark?  Will mobile (aka Metro or Modern) apps always run on the desktop in the desktop SKU and in the full/snap mode in the mobile SKU?  Will the mobile SKU get a more powerful windowing model that is independent of the desktop?  What improvements will the Win32 development environment see now that the desktop is considered an ongoing, vibrant application environment?  Is ARM still limited to only the mobile SKU?  Is a Surface-style device a mobile device or a desktop device in this world?  And so on.

What we don’t know far outweighs what we know.  I think what we do know, or can safely assume, is that apps written for Windows Runtime will continue to play a major role on both the mobile and desktop SKUs.  But beyond that, it’s all speculation.

And that’s why Microsoft might need to reveal its Windows 9 (aka Threshold) plans at Build 2014.  With the rumor mill already active, Microsoft needs to give developers information on its future direction or watch as they back off Windows-related development pending that direction.  And that would be a potentially fatal blow to the Windows franchise.

 

Posted in Computer and Internet, Microsoft, Mobile, Windows | 14 Comments

Is Windows 8 the new Vista? (Redux)

Paul Thurrott has been posting recently that inside Microsoft people are referring to Windows 8/8.1 as “Vista”.  Of course many outsiders (but not me) have referred to it as being as bad as Vista ever since Windows 8 was revealed to the world.  I have a problem with the analogy, though I understand why insiders would now be using it.

The problem I have with the Windows-8-as-Windows-Vista analogy is the quality problems that Vista had.  It just didn’t work.  That’s not a problem Windows 8 had, so I really don’t like the comparison.  For me it’s a visceral thing.

Another problem Windows Vista had was that it offered little compelling user value.  Yes, the security was much better, but the pain level it forced on users was higher than the perceived benefit.  This was the result of many things, a key one being a failure to balance the benefits of the User Account Control (UAC) feature with its potential intrusiveness.  A second was that UAC exposed the architectural flaws in many applications (which required them to run as administrator), and Vista took the blame for the application vendors’ tardiness in addressing them.

But the third problem is one where the Windows 8 and Windows Vista analogy does work for me.  They both exhibited an arrogance toward users that needlessly alienated them.  That is, they didn’t offer enough compelling value for the majority of users compared to the pain they exacted.  They both expected users to accept change and pain just because Microsoft said it was good for them.

In Windows Vista’s case it was literally forcing change without explanation or any significant benefit.  In Windows 8’s case it was forcing desktop users to accept changes that were necessary to enter the tablet market, and that pointed to a future direction for all users, before those changes provided any benefit to desktop users.  Dropping the ability to boot into the desktop?  Totally arbitrary, unnecessary, and insulting in the eyes of most users.  Dropping the option of a cascading Start Menu?  Almost as bad, though I continue to believe that if the Start Screen had been designed a little differently it could have been accepted.  Still, it’s clear that what users wanted, and Microsoft could easily have offered, was Windows 8 with a “Tablet First” mode and a “Desktop First” mode.  Then subsequent releases could have made the distinction between those modes more nuance than reality.

The feedback on this was clear from the first appearance of the Windows 8 Developer Preview, but the Windows team’s reaction was the opposite: the ability to turn on boot-to-desktop and the cascading Start Menu were removed after that preview and its feedback.  Power users saw it as pure arrogance.  And while I think most were irrationally negative on the new Start Screen and other aspects of Windows 8, the Windows team showed little sensitivity to their input.

And therein lies a key problem with Windows 8 and the development philosophy of Steven Sinofsky.  The secrecy.  The unwillingness to bounce things off customers early enough to make changes.  A worship of schedule and process above wisdom and expertise, even if the result is the wrong thing shipped on the promised timeline.  Steven will no doubt dispute this, but this is how those outside his sphere see his way of developing software.  And in the case of Windows 8 it caught up with him, in a really big way.

And that’s why insiders are equating Windows 8 and Windows Vista.  There is no more damning thing they could do.  Steven’s leaders threw Vista in the face of anyone who criticized their decisions or the processes by which they got there.  It was Microsoft’s version of playing the “race card”.  It was almost impossible to defend a position in the face of the Vista card.  Windows 7 seemed to prove the Sinofsky way of doing things was the right way.  He became President of the Windows Division with almost free rein over Windows 8, and the ability to win almost any battle with other divisions.  The result?  Failure by most definitions, complete failure by some.

Calling Windows 8 “Vista”, and getting ready to name a release Windows 9, is much more than an attempt to distance Windows from Windows 8.  It is a way to distance Windows, and Microsoft overall, from the Sinofsky era.  Rumors that Microsoft may discuss Windows 9 (aka Threshold) at Build 2014 are, if true, an external acknowledgement of this.

Meanwhile “Windows 8” has probably become the new “race card” at Microsoft.  A symbol of how process over product fails.  A symbol of how failing to listen to your customers results in product failure.  A symbol of how failing to exploit collective wisdom and experience results in product failure.  And unfortunately, as much as we can hope otherwise, a way to shut down discussion rather than pay attention to constructive criticism.

Let me be clear that Windows 8, like Windows Vista, has far more good in it than most critics (internal or external) are willing to acknowledge.  And the software engineering process focus that Sinofsky brought to Windows was a sorely needed change from the disarray that led to Vista.  Windows 7, almost certainly the best release of Windows ever, couldn’t have happened without Windows Vista.  Windows 7 also couldn’t have happened without great development processes.  It sounds like Windows 9 will likewise build on Windows 8, and on solid processes that have been remolded to (hopefully) merge the best of the Sinofsky-era processes with the best of the pre-Sinofsky era and subsequent learnings.

But if you want to know the real lesson from all of this, it’s that the Windows team doesn’t have, and has not had since Windows 95, good processes for doing revolutionary advances of Windows.  The much-beloved Windows XP and Windows 7 releases were done with completely different philosophies and processes, but were very much incremental improvements over their predecessors.  The two releases intended to revolutionize Windows, Longhorn/Vista and Windows 8, both failed largely because they didn’t have processes suitable to their scope.  In the former one could argue there was too little process discipline, and in the latter too much.  Yes, there are many other factors, but the lack of appropriate processes for revolution rather than evolution stands out as a point of commonality.

So is Windows 8 the new Vista?  I still don’t like the analogy, but if making it helps the Windows team create a truly great Windows 9 release, then I’m all for Microsoft using it internally.

Posted in Computer and Internet, Microsoft, Windows | 24 Comments

Windows XP: Apocalypse Now

I’ve been sounding the warning about the coming demise of Windows XP for the last three years, and this is the last warning before it actually happens.  In 2011 I wrote about why XP should die.  In August of last year I reminded people that the apocalypse was approaching.  On April 8, 2014, support for Windows XP, and for all Microsoft technologies running on Windows XP, comes to an end.  Is this really an apocalyptic event?  Perhaps so, at least for many people.

Keep in mind that there are several reasons why a PC might continue to run Windows XP past the April 8 deadline, and they include both high-risk and low-risk scenarios.  A home PC that is connected to the Internet and used for reading mail, surfing the web, playing downloaded games, and/or exchanging other kinds of files is part of the perfect storm that is coming if it still runs Windows XP.  On the other hand, a Windows XP (likely embedded) system that is not connected to the Internet and never has external media (CDs, USB memory, etc.) connected to it is minimally vulnerable, and we probably shouldn’t panic over it.  I emphasize “never” because the greatest piece of malware of recent years, Stuxnet, didn’t get onto Iranian centrifuges via the Internet.  Viruses can still spread the old-fashioned way, through any kind of media that touches multiple machines.

So I’m going to continue to urge that Windows XP systems be upgraded, or more likely replaced, ASAP.  Preferably before April 8.  But in future blog entries I’ll also opine on how to keep your Windows XP systems moderately protected after that date.  I’ve seen others write on that topic, but I think they fail to recognize (or at least point out) the pitfalls of their suggestions.  I’ll try to do a reality check as I make mine.

One of the big problems with securing a Windows XP system against the apocalypse is that many such systems are tied to Windows XP in some difficult-to-change way.  For example, a common suggestion is to switch from Internet Explorer 8, which will no longer be patched, to Google Chrome, which will be supported on XP for an additional year.  But many corporate apps were written specifically for IE8 and won’t work with Chrome.  Or it might be that Windows XP is running on an embedded system and you can’t install Chrome even if you want to.  You need the vendor to do it, and they have not rewritten their app to work cross-browser.  And they want you to buy a new system instead, or perhaps they have gone out of business entirely.  So you are stuck with Windows XP and stuck without the ability to implement most suggestions for making it more secure.

That’s the kind of thing I hope to dig into in blog posts over the next couple of months.

I once again urge you to migrate off Windows XP in the next two months if at all possible.  But if not, I will give you my take on ways to survive for a while in a world in which your system is the prey of choice.

Posted in Computer and Internet, Microsoft, Security, Windows | 20 Comments

Accessories are the Windows 8 tablet Achilles’ heel

One of the most frustrating aspects of the last three months in the world of Windows tablets is the non-availability of accessories.  Oh, plenty were announced, but even today they can be hard to find.  Or impossible to find.  Or they don’t live up to the hype.

The Dell Venue 8 Pro (DV8P) was one of the most in-demand tablets of this holiday season, but Dell has struggled with the availability and quality of its accessories.  The active stylus that was announced with the tablet became available, received poor reviews, and has now disappeared from the Dell website.  Hopefully this is a temporary measure while they address the stylus’ quality issues, because I know a lot of DV8P purchasers were looking forward to this accessory.

When the DV8P was launched there was also the promise of a keyboard accessory.  The limited description suggested that the keyboard would fold over onto the display and act as a cover.  After months of waiting, the Dell Tablet Wireless Keyboard finally started shipping in the last week or so.  Mine arrived yesterday.  It consists of a case (different from the Dell Folio) and a Bluetooth keyboard.  The case includes a cover flap that folds behind the tablet and under the holder for the (unavailable) active stylus.  The keyboard can hold on to the outer case cover using magnets, but those magnets are so weak you might be tempted to think Dell actually left the feature out.  The keyboard is used totally detached from the cover or device; the magnets are purely intended to make the combo easier to carry.  But the weakness of the magnets, and the flexibility of the cover, mean you can never trust the keyboard to reliably cling to it in transport.  Dell really needed to add a strap to hold everything together.

The new case is a tad lighter than the Folio (6.2 oz vs. 6.4 oz) but only allows you to prop up the DV8P at a single viewing angle.  The Folio is much more flexible for viewing angles, but won’t allow the keyboard to sit right up against the tablet, which probably explains the design change.  I find the new case’s stand design results in too much give when you try to touch the screen, leading to unrecognized touches and gestures.  The keyboard itself weighs 8 oz.  So add the case and keyboard to the 13.5 oz DV8P and you are carrying around a 1 lb 12 oz package.  That’s a little on the heavy side, as you’d have to figure a more Surface-like keyboard cover would allow for similar functionality at 4-8 oz lighter.  But it’s quite competitive when you look at the iPad Mini plus a third-party keyboard case.

My biggest problem with the keyboard arrangement is that it is very much in the way when you don’t want to use it, making the DV8P clumsy as a tablet.  Or you must have somewhere you can put down the keyboard out-of-the-way.  So the bottom line is that this is a keyboard you’ll probably leave in your briefcase or handbag for retrieval when you absolutely need it, not something you’ll keep with the DV8P at all times.  In that case any Bluetooth keyboard would do as well.  And the Folio is probably a better case.  So I rate this one a miss on Dell’s part and don’t really recommend it.

For a while I was considering getting a Nokia Lumia 2520, but I’ve heard its battery keyboard cover made the combination quite weighty.  So I’ve been trying to see one before buying but they are nowhere to be found.  A sales rep at the Microsoft Store said they weren’t out yet.  A sales rep at an AT&T store said they’ve seen exactly one, and the product manager at AT&T responsible for them won’t answer the store’s requests for information about availability.  One piece of good news for Microsoft is that the AT&T rep said the 2520 is doing well for them.  They only get one or two per shipment and those are immediately snatched up.  I think this is a case where AT&T wants to do a good job selling the 2520 but is being held back by Nokia.

At this point the 2520 has probably missed the window where I would buy it as I’d rather wait to see what a Surface with LTE support, or another OEM, brings to the market for my next 10″ class tablet.  So Nokia loses a sale because they couldn’t get the accessories into the market in a timely fashion.

Then there is the Surface Pro dock many people have been waiting for.  Microsoft always said it was an early 2014 item, despite a few trickling through the system in 2013.  Well, it’s early 2014 and I still can’t get a dock.  Microsoft also announced a powered version of the Type Cover for early 2014, but I haven’t heard a peep about it since then.  The Microsoft Stores (and Best Buy, etc.) I’ve visited have plenty of first generation Touch covers available, but the much improved second generation Touch Cover 2 is rarely seen.  I know a lot of first generation Surface owners who wanted the Touch Cover 2 as an upgrade, so availability problems cost Microsoft some money last quarter.

A lot of 8″ tablets have micro-USB connectors, so you need an adapter cable to connect USB peripherals.  These cables are nearly impossible to find in a brick-and-mortar retail store.  I’d almost suggest that one should be included in the box with the tablet, but of course that makes no sense when you are trying to absolutely minimize the price of the tablet.  I’d rather see the price of the DV8P and similarly specced devices driven to the $199 range than burden them with costs that prevent it.  But it sure would make sense to offer the micro-USB-to-USB cable as an option when ordering these tablets, and to encourage retail stores to carry them.  My favorite example of the problem is my Fitbit.  It requires a USB dongle attached to the Windows device.  So even with the Metro version of the Fitbit app supporting syncing, I can’t use my DV8P to sync until I get my hands on that cable.

The good news is that accessories are finally starting to trickle into the market, both from the device vendors and from third parties.  You can tell how popular the DV8P has become by the number of vendors now claiming to have custom cases for it.  Too bad those aren’t showing up in brick-and-mortar retailers.  But the sad truth for 2013 is that vendors left a lot of money on the table by failing to have accessories in the market when people were buying devices.  And they probably lost device sales as well when buyers were comparing Windows 8 tablets against the iPad/iPad Mini and Android devices with their very rich accessory ecosystems.

Now that Windows 8 tablets have somewhat broken through, it’s critical that device makers focus on having a rich accessory ecosystem that can flood the market with appropriate accessories very near to device introduction.

Posted in Computer and Internet, Microsoft, Mobile, Windows | 10 Comments

The Smart TV Market

I (and many others) have opined on the need for Microsoft to have a device to compete in the lower-priced mass market segment for smart TV add-ons.  I want to refine my thinking and explain why Microsoft may not have done so yet.  At the same time I’ll put Microsoft and Apple into the same boat.

First off, I believe we need to start really analyzing Microsoft in the context of a separation between its devices and services businesses.  As I’ve pointed out in previous postings on this topic, one of my fears about Microsoft’s current positioning is that the services side (e.g., Xbox Video) is not represented on non-Microsoft smart TV offerings.  So, for example, it makes more sense to buy Season 3 of Downton Abbey on Amazon Prime than on Xbox Video, because with the Amazon offering you can watch on a much broader range of devices.  You might watch some episodes on your Internet-connected TV (e.g., in the bedroom where there is no gaming console), some on your iPad, some on your PC, and others on your family room Xbox One.  You can’t do this with Xbox Video.  And that will continually push people towards Amazon and away from Microsoft.

Of course this doesn’t necessarily argue for Microsoft delivering a sub-$100 device as a way to promote Xbox Video, but it does illustrate that devices and services are mutually reinforcing businesses.  By pushing customers away from Xbox Video and towards Amazon, Microsoft is making tablets like the Kindle Fire family more attractive, and leaving more of an opening for Amazon to introduce its own smart TV offering.  So I consider it crucial for Microsoft to make Xbox Video (and Music) broadly available services that appear in TVs as well as in Roku and similar devices.  As far as I’m concerned that is a necessary component of long-term success in the entertainment space.

Now let’s switch gears to the devices world, and here is where things get very interesting.  To begin with, sub-$100 smart TV add-ons have not been particularly successful, for two reasons.  One is that many TVs sold in the last few years have come with this functionality built in.  For example, I have a mid-range LG TV purchased five years ago that is Internet connected.  The user interface is primitive (e.g., its Netflix app can only play things from your queue), but I still use it to watch video.  It was on special when I bought it, and the Blu-ray player they gave us with it was also Internet connected.  I put that on the TV in our second home’s bedroom.  So that condo now has two smart TVs without me investing in any special smart TV add-on boxes.

The second reason sub-$100 smart TV add-ons haven’t taken the world by storm is that the early-adopter target community for smart TVs had a huge overlap with the gaming console ownership community.  So they experienced the smart TV revolution primarily through their Xbox 360 or PS3.  Take a software update or buy a new box?  The software update was the easier path by far.

So now let’s talk about smart TV capabilities as I think of them.  I see three classes of smart TV capabilities: Basic, Enhanced, and Transformative.  Five years ago all the focus was on Basic capabilities, like enabling you to watch Netflix on your TV or display some content from your home network.  Since few TVs actually had this capability (two years ago about 25 million smart TVs were in Americans’ hands, with about half actually connected to the Internet), devices like the Roku and Apple TV came along to add these capabilities to existing TVs.  There continue to be improvements to what constitutes basic capabilities, but the point is that with most new TVs containing the basic ability to view Netflix, Hulu Plus, Vudu, and other services, the market for low-priced add-ons in the “Basic” segment is at best modest.  In effect, Basic capabilities are now free (though the services themselves, of course, are not).

Then I think there is a market segment for an Enhanced experience over Basic.  For example, the ability to use gestures or voice to control your TV.  Or the ability to control the TV from other devices, primarily devices made by the same manufacturer as the TV.  The cable and satellite companies are also adding the ability to integrate multiple set-top boxes in your home, making their own video services more attractive, and integrating media beyond just the content they offer.  Over the next few years I think these Enhanced experiences will take over the mainstream of the market, meaning that they essentially become free as well.

OK, that was a lot of background on my thinking, so let’s jump into Microsoft, Apple, and the market segment they both want to address: what I call the Transformative segment.  There is continual talk about when Apple will bring its “real” Apple TV product to the market, and it’s become a bit of a joke.  Although Apple entered the market with a Basic product, they quickly realized how uninteresting that is.  It does address my earlier point about the need to make Xbox Video universal as it pertains to iTunes.  And given that Apple has distanced themselves from industry standards for media sharing in favor of AirPlay, it AirPlay-enables TVs as well.  But otherwise it is a me-too product offering no functionality outside the Basic capabilities found in smart TVs.  Apple wants to transform the big-screen home entertainment experience, and it makes no sense for them to introduce another product in the TV space until they have something they believe can successfully do that.

Microsoft is in the same position as Apple with one key differentiator: it has the Xbox family as an already-in-market device.  The Xbox 360 led the way in the Enhanced category I described above.  And Microsoft has made numerous attempts over the last 20 years to transform the TV experience.  From being one of the early creators of video-on-demand technology, to an early player in the DVR space, to trying to integrate living room media experiences with Media Center and its original 10-foot experience, to being a player in software for set-top boxes, to….  The Xbox One is the place where Microsoft is bringing all these efforts together to create a transformative experience.  And while I very harshly criticized the state of the initial software, they are clearly on track to lead in this space.

If Basic, and soon Enhanced, capabilities are essentially free, then it makes little sense for Microsoft to offer a low-cost device unless it is actually part of the Transformative segment of the market.  Looking at the initial release of the Xbox One software, we can see that Microsoft struggled to bring a transformative experience into the $500 gaming-dominated part of the market, so it makes sense that they couldn’t (or wouldn’t want to) bring an immature solution to a less tolerant mass market with a less expensive device.  I’m still hoping that next year we’ll see both software updates that address my complaints and further the experience Microsoft is trying to promote, as well as a lower-cost addition to the device family.

I’m not expecting Microsoft to introduce a sub-$100 device in 2014, for two reasons.  The first is that it may not be possible from a cost perspective to introduce a transformative device at that price point for another couple of years.  The second is that absent competitive price pressure from other transformative devices, there is no reason for Microsoft to take the margin hit.  Those who want the transformative experience will pay for it, though for non-gamers you might need a price point about half of where the Xbox One is today.  So could we see something like a $299 offering in 2014?  Sure.

By the way, I’m sure you are thinking that I haven’t defined what I mean by “transformative”, and I have to say that is because if I knew exactly what it was and how to do it, I’d have formed a startup and be on my way to a multi-billion-dollar IPO!  But there are two things I think of as necessary components.

The first thing is a new user experience that puts streaming media on the same level as broadcast media and unifies the experience.  It must make finding, organizing, scheduling, and viewing of all this content seamless.  It must allow for extreme personalization of the experience at both a household and individual level.  It must bring the gamut of technologies we have become familiar with in the last twenty years (e.g., search, natural language speech recognition, apps, etc.) to bear on the problem, and be designed for rapid exploitation of the technologies that will be introduced and mature over the next twenty years.

The second thing is that the transformative TV experience is an end point in an ecosystem-driven environment, rather than being standalone or even the center of the entertainment experience.  It’s the whole “three screens and a cloud” thing on steroids.  My phone, tablet, PC, smart watch, smart glasses, and other devices will be equal parts, along with the TV, of the overall experience.  One implication of this is that ecosystem ubiquity is likely to be a strong factor in success or failure.

There are only three players who I believe have the right set of assets to succeed in creating this transformative experience: Microsoft, Apple, and Google.  Not that others won’t try, but only those three are in a good position right now.  Microsoft is already in market, has a strong set of assets, and is probably the most committed to the concept.  Apple is obviously working on it and will eventually find something to wow us with.  They can expect great success with the (large number of) faithful, but it’s unclear if they can create something that brings in a massive number of new customers as they did with the iPhone and iPad.

Google wants to transform TV and keeps throwing things at the wall to see if anything sticks.  With Google TV they made friends (e.g., Sony) in the OEM community but alienated the content owners.  A big part of Google’s problem is finding an answer that fits in their business model.  If they can do that, maintain the OEM relationships, and come to an understanding with content owners, then they’ll be a formidable competitor.

Right now Microsoft is in a good position, but if they don’t solidify it fast they could easily go from first place to third.  On the devices front they need to evolve the Xbox One media capabilities quickly.  And they need to address a price point more appropriate for media consumption by non-gamers.  On the services front they need to make Xbox Video and Music more ubiquitous by getting them into TVs, set-top boxes, Roku and friends, Sonos, (non-Microsoft-Auto-based) vehicle entertainment systems, etc.  That will help make the ecosystem ubiquitous without Microsoft having to play, or succeed, in non-strategic segments of the devices business.

Microsoft has the opportunity to win this, but they have yet to demonstrate they can pull it off.  2014 is likely the pivotal year, perhaps the last year, for them to go from contender to presumed winner.

Posted in Cloud, Computer and Internet, Google, Home Entertainment, Microsoft | 8 Comments

More on Credit Card data breaches

This is an unfortunate follow-up to my posting a few days ago.  This morning one of my credit card providers called because there had been a compromise of my credit card information by a third-party.  This can’t be Target, because (a) it was for a card I haven’t used in a few months and (b) I didn’t shop at Target during the period that has been publicly claimed as when they were breached.  In fact, this is a card that (a) I phased out of usage about 6 months ago and (b) had been replaced because of a previous breach not long before that.  Frustrating to say the least.

Of course I now have to contact the few parties who were still using that card for a recurring charge.  I wasted a half-hour on the phone with the bank and I still have more time to waste cleaning things up.  And they aren’t all websites, so phone calls will be required Monday morning.  This one won’t be a biggie because of my previous phaseout, but it still will cost me an hour or two of time.

There are two things that would really help make this situation better, and neither necessarily involves “chip and PIN” cards.  The first is to give me a way to provide anyone with a unique (though potentially recurring for that specific merchant) credit card number tied to my account.  Then an individual breach doesn’t have to impact any other merchant.

Note that single-use credit card numbers have been tried before with little success, but that was before the age of smartphones.  And they literally were single use, rather than single merchant, making them unattractive for typical web usage scenarios.  Now I could just have an app on my smartphone that gives me a unique credit card number.  I could request it for Single Use or Single Merchant (Recurring Use) and then hand it out appropriately.
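To make the single-merchant idea concrete, here’s a rough sketch of how an issuer might bind virtual numbers to individual merchants.  This is purely illustrative: the function names, the “vault” structure, and the Luhn-based number generation are my own assumptions, not any real issuer’s API.

```python
import random

def luhn_check_digit(partial: str) -> str:
    """Compute the Luhn check digit for a partial card number."""
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:  # these positions get doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def issue_virtual_number(merchant: str, recurring: bool, vault: dict) -> str:
    """Issue a Luhn-valid virtual card number bound to exactly one merchant."""
    partial = "4" + "".join(random.choice("0123456789") for _ in range(14))
    number = partial + luhn_check_digit(partial)
    vault[number] = {"merchant": merchant, "recurring": recurring}
    return number

def authorize(number: str, merchant: str, vault: dict) -> bool:
    """Approve a charge only if the number exists and the merchant matches."""
    entry = vault.get(number)
    if entry is None or entry["merchant"] != merchant:
        return False
    if not entry["recurring"]:
        del vault[number]  # single-use numbers die after one charge
    return True

vault = {}
n = issue_virtual_number("Example Utility Co", recurring=True, vault=vault)
assert authorize(n, "Example Utility Co", vault)     # legitimate recurring charge
assert not authorize(n, "SomeOtherMerchant", vault)  # a breached number is useless elsewhere
```

The point of the sketch is the last line: a number stolen from one merchant’s database simply doesn’t work anywhere else, so one breach no longer forces a card replacement.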

For physical card use there is an alternate solution which is to use 2FA.  You’d enable the feature with your bank in which case they wouldn’t approve the charge until you accepted it on your smartphone (via either an app or SMS).  If this feature were enabled then they wouldn’t automatically force you to cancel and replace a card in case of a data breach.
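A minimal sketch of that opt-in approval flow, with all class and method names invented for illustration (nothing here reflects a real bank’s systems):

```python
class Bank:
    def __init__(self):
        self.requires_approval = {}  # card -> bool (has the holder opted in?)
        self.pending = {}            # charge_id -> (card, amount)
        self.next_id = 0

    def enable_2fa(self, card: str):
        self.requires_approval[card] = True

    def request_charge(self, card: str, amount: float):
        """Merchant submits a charge; hold it if the cardholder opted in."""
        if not self.requires_approval.get(card):
            return ("approved", None)
        self.next_id += 1
        self.pending[self.next_id] = (card, amount)
        # In reality the bank would push a notification (app or SMS) here.
        return ("pending", self.next_id)

    def cardholder_responds(self, charge_id: int, accept: bool):
        """Cardholder accepts or rejects the held charge from their phone."""
        self.pending.pop(charge_id)
        return "approved" if accept else "declined"

bank = Bank()
bank.enable_2fa("4111-xxxx")
status, cid = bank.request_charge("4111-xxxx", 59.99)
# status == "pending": nothing is charged until the holder responds
result = bank.cardholder_responds(cid, accept=True)
```

Because every charge on an opted-in card needs the holder’s explicit acceptance, a stolen number is of little use, which is why the bank wouldn’t need to force a replacement after a breach.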

There you have it.  The current system is broken.  Totally broken.

Posted in Computer and Internet, Privacy, Security | 15 Comments

The New Windows Organization

The departure of Jon DeVaan and Grant George reignited my interest in writing about the organizational structure that Terry Myerson has put in place for Windows.  Now realize I’m doing this with very little information and lots of assumptions, but I think what I say here will be mostly right.

First a dive into history.  Originally there was a Windows (9x) team and a Windows NT team.  The Windows NT team built a Server product and a client product called Windows Workstation, but the priority was on Server.  Then the Windows team was merged into Windows NT and the Client business began to dominate the merged organization.  To fix this Windows was broken into three organizations: Core OS, Windows Client, and Windows Server.  Core OS consisted of most of the shared components of Windows, though there was sharing between the Client and Server organization as well.  For example, Printing was the responsibility of Windows Client even though it was important to Windows Server as well.

Originally Windows Core, Client, and Server were really three branches of one Division but over time they gained more independence.  While Windows Core remained a technology organization, Windows Client and Windows Server became business units.  Then Windows Server moved from Windows into a new Server and Tools Business.  This structure continued through Windows 7.  When Steven Sinofsky became President of Windows and Windows Live at the beginning of the Windows 8 effort he eliminated the separate Core OS organization and merged it with Windows Client.  He then had the combined organization report to him at a functional level, leaving no identifiable individual below him responsible for delivering Core OS functionality to Windows Server, or to other “clients” of the Core OS functionality.

Even when Core OS existed as its own Division (COSD) the Server guys often found themselves buried by the requirements and priorities of Windows Client.  But with the organizations merged the pain level grew dramatically.  Fortunately Bill Laing, the CVP who ran Windows Server before it was functionalized to Satya Nadella and currently runs the Development function, had worked out processes for Server to work with the Windows organization.  But this did result in considerable overhead, stress, and lack of agility.  And then the situation became intolerable on a company-wide scale.

With Azure, Windows Phone 8 and Xbox One building on the core of Windows the lack of a separate Core OS organization went from being something that the Server guys just had to deal with to something at the heart of friction within Microsoft.  This isn’t the real topic of this post so I’ll stop here.  Microsoft now builds several products including Windows (Client), Windows Phone, Xbox, Azure, Windows Server, and other things on a common Core Windows.  And they are all moving to share a larger set of common services, not just the lowest levels they share currently.

Last summer’s reorganization put the three end-user oriented Windows offerings into a single organization, combining Windows (Client), Windows Phone, and Xbox into a new organization led by Terry Myerson (who had led Windows Phone).  Then Terry did his top-level reorg, and we can learn a lot about how he is thinking from that reorg.

Terry essentially split Windows into three (ignoring the consumer services work under Chris Jones):  Windows and Windows Phone, Xbox, and “Core OS”.  Wait a moment you say, what is this Core OS thing?  Well, that is the thing that Terry kept functional to him.  The work that is shared across all Windows platforms belongs to the organization that has David Treadwell running Program Management, Henry Sanders running Development, and Mike Fortin running Test.  And this is also where I think most of the analysis of the departures of Leblond, DeVaan, and George falls short.

Using Jon DeVaan as an example, he ran development for all of “Windows Core” and “Windows Client” (for lack of a better way to say it).  Henry only has development for Windows Core.  So it’s a much narrower job.  Ok then, so why Henry over Jon?  First of course is the question of whether Jon was even interested in a significantly narrower job than he had for Windows 8 and 8.1.  Then, of course, is the point that Jon owned all (Dev, PM, and Test) of COSD for Windows 7.  So this new assignment would have been much narrower than anything he’d been responsible for in many years.  My guess is that he wasn’t interested, though I have no information to support that view.  Meanwhile for Henry, one of the former development leaders in COSD and head of development for Windows Phone, this is a very natural next step on his career path.

A similar story exists for both Grant/Mike and Antoine/David, though the Antoine story is probably more complex.  Keep in mind that Antoine had only been leading Windows PM for a few months at the time Terry took over.  And he’d only moved into Windows (from Office) at the start of Windows 8, to create the Windows Store.  So besides being on a course that had him with shrinking responsibilities from the days when he owned all of Office client, which I’m sure wasn’t attractive to him, he wasn’t a platform guy.  Meanwhile Treadwell has been one of the platform leaders, across multiple Microsoft organizations, for many years.  A lot of people went home and celebrated when they discovered he’d be running PM for the Core OS.

With Core OS taken care of Terry created two client teams, one for Windows and Windows Phone and one for Xbox, and set them up with their own leaders instead of having them report functionally to him.  It’s pretty obvious that the Xbox client team would have a leader from Xbox given the unique requirements of that platform.  So I’ll say no more about it.  Let’s dig in on the Windows and Windows Phone organization a bit.

The first thing that should be obvious from putting Windows and Windows Phone in a single client organization is that Terry (and Microsoft) now think of them as flavors of the same OS.  In the future, that is, the story will be more like (if not identical to) the iOS and Android stories than today’s bifurcated OS story.  Then the obvious question is who to lead the effort.  Let’s take the available executives and consider them.

Jon DeVaan could certainly have been a contender from the standpoint of the broad set of efforts he’s led in the past.  But look more recently at Jon’s participation in the Windows 7, Windows 8, and Windows 8.1 efforts.  Jon led COSD for Windows 7 and then the development discipline for 8/8.1.  Meanwhile the Client effort is a Program Management-dominated effort.  Design, user experience, relationship building, etc. are at its core.  That doesn’t sound like Jon (not that he couldn’t lead such an organization, by putting the right person in charge of PM for example), but Terry already had one of the gods of these things available to him.  Joe Belfiore.  Given the scope of the organization Joe was a pretty obvious choice to lead it.

What about Antoine?  I think I already covered that in the discussion above.  All other factors aside, he just didn’t have enough time to prove himself in the OS space, making Joe a much stronger candidate.

The bottom line on all of this is that Microsoft has returned to the general structure of a Core OS organization along with a set of client organizations.  In doing so the responsibilities of all of Terry Myerson’s direct reports narrowed significantly in one or more dimensions from the reporting structure that Steven Sinofsky and then Julie Larson-Green had in place.  And that left Jon DeVaan, Antoine Leblond, and Grant George without appropriate positions in the Windows organization.

Posted in Computer and Internet, Microsoft, Windows, Windows Phone | 10 Comments

Data Breaches ARE a big deal

I didn’t catch his name, but a little while ago I heard the “co-founder” of Square say on Bloomberg TV that the Target data breach was no big deal.  He said that people didn’t lose any money as a result, either because their account was never actually charged or the bank covered any losses.  Ok, but this misses the point.  And sure, anyone in the credit card business (and that is how Square makes its money, so reluctance to use credit cards would harm Square rather directly) is going to try to minimize this thing.  But it shouldn’t be minimized; it’s a serious problem that requires a serious response.

Let’s do some simple math to illustrate this.  Whenever one of my credit cards has been compromised, either directly or in a data breach, the credit card company cancels and replaces the card.  Now this can result in numerous complications.  Let me give examples.

We were in Thailand last month when my wife noticed she had voicemail.  It was the bank for one of her credit cards reporting that it had been compromised in a data breach.  They were cancelling her card and sending a replacement.  Great, so now a new credit card would be sitting at home in the U.S. but her credit card was useless on our trip.  Fortunately we carry multiple credit cards on our trips in case of just such a situation.  Imagine if we’d actually followed conventional advice and taken only a single card?

Want to extend this?  If you used a Debit Card at Target at least one bank limited use of those cards until they could issue replacements.  Now imagine you are away from home, and away from a branch of your bank, when this happens.  Or it’s just a weekend.  For the sake of an extreme example let’s say you were out of the country.  And you were relying on ATMs to get local currency for your trip.  And now your debit card either doesn’t work or is limited to small withdrawals.  Worse, if you use it multiple days your bank will assume it is stolen and block it.  How do you get the cash to complete your trip?

Besides the point that just the occurrence of a data breach can cause significant real-world repercussions, data breaches lead to a huge cost in time, effort, and incidental expenses.  My wife was on the phone with her bank for about 30 minutes, and that was just the initial effort, while the hotel car service (which charged by the hour) waited to take us into town.

Let me broaden this example.  First, there are the charges that are in-flight when you cancel a card.  You applied for a policy under Obamacare and gave a credit card for the initial payment, a convenience many insurance carriers provided.  But that was submitted on paper (even if then faxed or scanned and emailed, because that’s what they also required) and they don’t charge the card until close to the due date.  When they do go to charge the card the charge fails because the card has been cancelled.  How much time and effort does that take for you to correct?  Or in the worst case, what if you miss the due date and end up without insurance?  Data breaches have consequences!

Note, the Obamacare scenario is not so far-fetched.  We sent our new insurance carrier information before we left the country and returned very close to the deadline.  Had a problem occurred, and the insurance carrier only notified us by U.S. Mail (which seems to be the only form of communication they understand), we could have ended up without insurance on January 1st.  My highest priority on our return was to collect the paper mail from the post office and find confirmation from the insurance carrier that we were insured.

Keep in mind all the places that have your credit card information for automatic bill pay, or just making life simpler.  Another example, you go to use Amazon’s One-Click and it fails because the credit card was cancelled.  One-Click just became a thousand clicks as you go through the screens to enter a new credit card.  So you go logging in to web site after web site changing your credit card information.  In some cases you need to do it by phone.  In others, by filling out and mailing paper.  In my experience the total time expended on recovering from a credit card breach adds up to between 1/2 and 1 day of effort.  And that’s assuming no actual identity theft or serious fraud occurred.

Let me quantify this on a larger scale.  Assume a median U.S. income of $50,000/year.  Assume 210 work days per year for a daily income of $238.  Take the lower end of my time expenditure range and it cost (“time is money”) the average person $119 to deal with the data breach.  It also cost them data on their data plan, postage, the cost of phone calls, and perhaps opportunity costs (e.g., the price of the item you were trying to buy on Amazon went up while you struggled with the inability to use your credit card).  A more realistic estimate of what the data breach cost the average consumer is on the order of $150 per credit card.  In costs that neither Target nor your bank nor anyone else is going to reimburse.  And the co-founder of Square says “no big deal”?

If we play this out then 40 million credit/debit cards compromised at Target turns into a non-recoverable cost of $6 Billion to Target’s customers.  And the co-founder of Square says “no big deal”?
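For what it’s worth, the back-of-the-envelope math above checks out.  Spelled out in a few lines (all inputs are the assumptions stated in the text):

```python
# The back-of-the-envelope math from the text, spelled out.
median_income = 50_000       # assumed median U.S. income, $/year
work_days = 210              # assumed work days per year
daily_income = median_income / work_days
print(round(daily_income))   # ~238 $/day

half_day_cost = daily_income / 2
print(round(half_day_cost))  # ~119: "time is money" cost of a half-day of cleanup

per_card_estimate = 150      # rounded up for postage, calls, opportunity costs
cards_breached = 40_000_000  # cards compromised in the Target breach
total = per_card_estimate * cards_breached
print(f"${total / 1e9:.0f} billion")  # $6 billion in unreimbursed consumer cost
```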

$6 billion is a big deal.  $150 is a big deal to the average person.  And even if you don’t quantify this financially, wasting a half-day of your life every time a business you’ve entrusted financial information to fails to protect it, is a big deal.  A VERY BIG DEAL.

Posted in Computer and Internet, Privacy, Security | 14 Comments

Jon DeVaan and Grant George

I’m not surprised to see that Jon and Grant are leaving Microsoft, though I don’t know that this constitutes “wrath” on the part of Terry Myerson.  More on that in another post.  Now just a few comments on Jon and Grant.

I interacted with Jon a little in his time in Online Services and then we worked together extensively (on Quests) when he was running the Core OS group during Windows 7.  Steven Sinofsky got the public credit for how good Windows 7 is, but most of the improvement that you see (over Vista) is because of Jon.  The quality improvements, the focus on improving memory usage and boot time, bringing the relationship with OEMs on OS support back from the dead, etc. are Jon.  Even the Service Pack work that made Vista usable is Jon.  I’ve rarely found executives in our industry who were as devoted to producing great software as Jon.

I can’t recall if Jon was a VP when I joined Microsoft, but if not he became one soon thereafter.  It seemed like all of his peer executives left Microsoft long ago, and ever since the online services days I wondered if Jon would “retire” after completing a particular project.  This was especially true after Windows 7 and then Windows 8, but in each case it appears there was another big thing that interested him enough to continue on.  So I’m not surprised that a reorg that left him searching for something that would make staying at Microsoft worthwhile would lead to his leaving.  Or, was it some indication from Jon that he wanted to leave part of the trigger for Terry’s reorg decisions?  We may never know.

I never worked with Grant, though we met back in the 90s when we were teammates as part of one of Microsoft’s leadership development offsites.  Grant led the two largest and most critical test organizations in his career at Microsoft, and the results were obvious to any observer.  The quality of Office and then Windows 7 and 8/8.1 was outstanding.  With my limited visibility I’d say Microsoft has lost a great test leader.  But it is also clear that having led the organizations he did, there were likely no roles of similar scope available in the current org structure.

With the departure of Jon and Grant Microsoft has lost a significant portion of its senior leadership talent.  This may be a natural part of the evolution of the company, but that doesn’t make it any less sad.  I wish them both well in their future endeavors.

 

Posted in Microsoft | 5 Comments