An OEM (HP) is making my life difficult. Good!

When Microsoft first revealed the Surface and Surface Pro a little lust crept into my heart.  While I realize I have a ridiculous number of computers, most of them old things that serve limited special purposes, I spend most of my computing time on three.  One is a fairly recent HP TouchSmart 520 that is actually my wife's but occupies a very convenient location in our house.  The second is a Toshiba Portege R705 that is my primary work computer.  If you see me on a consulting project that's what I'll have with me.  The third is an original iPad that I take everywhere with me.  Of course it isn't getting iOS 6, got painfully slow with more recent iOS 5 updates, has a slower 3G radio, and its battery life is down noticeably.  My own desktop is an old Dell that I find actually painful to use, which partially explains why blog entries like this are usually written on the HP or Toshiba.  So with the exception of my wife's HP, my technology base is due for a refresh.

When Microsoft revealed the Surface it immediately went to the top of the "must have" list.  Ever since then I've been wondering: Surface, Surface Pro, or both?  OK, why this dilemma?  Well, I really need something extremely portable to replace the iPad, and the Surface looks like it would meet that need.  I mean, I literally take the iPad everywhere with me and now feel "naked" leaving home without it.  So a replacement can't add significantly to size or weight.  But I need a modern replacement for the R705 too.  One that can run x86 desktop apps, support Visual Studio, let me spin up a VM or two, and be a reasonable device for writing documents and creating PowerPoint presentations.  And still be a good tablet when I'm traveling.  The Surface Pro looks like it is potentially the best compromise here.  But it also looks like it might (with the Type Cover) be a little too heavy and a little too big for a take-everywhere device.  Of course without seeing them in person it is really hard to know for sure.

Then there is the Application problem.  Out of the gate the Surface, and other ARM-based Windows devices, will be handicapped by a lack of “Modern” applications.  Meanwhile the Surface Pro and other x86 devices will always have the rich (if not touch-centric) library of desktop apps available.  So it is a bit of a leap of faith to get a Surface, and a no-brainer to get a Surface Pro.  Decisions, decisions.

Last week at IFA many OEMs revealed their initial set of new Windows 8/Windows RT devices.  I didn't see much that reduced my lust for the Surface/Surface Pro until I saw PocketNow's video of the HP Envy x2 convertible.  Wow, I think I've found a replacement for the R705!  Now my dilemma is that with its larger 11.6″ screen the tablet portion of the Envy x2 is probably too big to be my take-everywhere device.  So that still leaves an opening for the Surface (or maybe even the Surface Pro).  My head hurts.  And when my wife reads this, serious eye-rolling will no doubt occur.

So why is the HP Envy x2 so much more exciting than previous, or other recently announced, convertibles?  It's HP's attention to detail.  HP has been in the convertible/hybrid business since Microsoft first introduced Tablet PCs a decade ago.  So they understand better than anyone what customers have liked and disliked about these kinds of devices in the past.  And by combining that with good observation of what users like about modern tablets such as the iPad, they've come up with something that hits the sweet spot.  For example, the docking and/or hinging mechanisms have always been the mechanical Achilles heel of this segment of the market.  The Envy x2 looks like it might have finally nailed it.  Putting a micro-SD slot in the tablet portion means it has the storage expandability needed for more serious use than is typical of tablets.  The keyboard adds a full-sized SD slot, another battery, and a good set of connectors.  And build quality is reported as being excellent.  Basically, whereas previous convertibles have been either decent notebooks with some tablet capability or tablets with a (mechanically unsound) keyboard dock, the Envy x2 looks like it will be both a good notebook and a good tablet.  Maybe even great.

There will be the inevitable comparisons of the Surface Pro and the Envy x2.  My initial impression is that the Surface Pro is a tablet that is (with the Type Cover) able to reduce or eliminate the need for a notebook in many scenarios.  But it is definitely a tablet-first approach.  The Envy x2 is more balanced towards notebook usage scenarios and much more suited as an alternative to an Ultrabook or MacBook Air for someone who also wants a tablet.  So someone thinking “I want a tablet but I need to be able to create documents….” will go for the Surface Pro.  Someone thinking “I need a new notebook but it would sure be nice to have a tablet as well” will lean towards the Envy x2.

Of course until I see them in person I really won't know what will work for me.  It's just great to see one of Microsoft's OEMs apparently nail a spot on the portable computing spectrum.  A spot that hasn't been well served in the past, and hasn't been served at all by Apple or the Android ecosystem.  And what about the other announcements at IFA?  It's too early to tell.  PocketNow's access to the Envy x2 let them show it off rather extensively compared to other devices.  It may be that when late October rolls around there will be many more devices (announced at IFA or yet to come) that I'll actually see and consider buying.  For now though the Envy x2 lives up to its name.


Vendors are missing out on a malware fighting technique

There is a growing controversy over the business of selling zero-day exploits, that is, exploits for software vulnerabilities that the vendor doesn't yet know about and thus hasn't patched.  I say controversy because it is perfectly legal in the U.S. and many other jurisdictions for someone to discover a zero-day, not report it to the software vendor, and then sell information about it to third parties.  And there is a rising chorus of calls for government to intervene in this practice.  It occurred to me that software vendors are ignoring an existing legal tool that would let them crack down on these practices.

I went and scanned both the Windows 7 EULA and GPLv3 and can find no language that prohibits someone from disclosing a zero-day exploit to a third party, nor any that requires disclosure to the software vendor.  Recall that software is not sold, but rather licensed, and the author retains substantial rights over your use of the software.  So it should be possible for software vendors to include language in their licenses that makes it a violation of the license to sell zero-day exploits.

There are models that could be followed for creating restrictions on zero-day exploit disclosure.  Many years (actually decades at this point) ago Oracle added language to its database system license to prohibit disclosure of benchmark results.  Other database vendors eventually followed.  This later spread to other software and, for example, the Windows 7 EULA places some restrictions on the publishing of .NET benchmark information.

It even seems to me that the GPL, as well as other Open Source licenses, could be modified to limit disclosure of zero-day exploits.  One would think that mandating disclosure to the original copyright holder before any other disclosure is in the spirit of the GPL.

Can this work?  Well, it can help.  Legitimate entities like Vupen have little choice but to adhere to licensing restrictions or face crushing legal consequences.  And while black hat hackers will largely laugh at these restrictions, it does open up another avenue for targeting their activities with the legal system.

And what of those calls for further government regulation?  Well, I'd say chances are 99% that any new law or regulation will exempt sale or other disclosure to government entities.  And while that might seem OK, it has many negatives.  First, it keeps alive the business of selling exploits.  Second, it creates a loophole that allows sales to government entities that may be less than friendly.  Third, it legitimizes keeping vulnerabilities in software unpatched to allow for cyber "warfare" or other government-sponsored attacks.  Fourth, it could lead to all kinds of unintended consequences such as bringing more software under munitions-control regulatory schemes.

Even if you dismiss my concerns about the negative consequences of additional government regulation, any such regulation will leave gaps that my proposed solution can fill.  For example, a new U.S. law will have limited impact on foreign actors, but a software vendor can create contractual obligations that apply in most jurisdictions without requiring new authority from the governments of those jurisdictions.


Anatomy of a Startup – PredictableIT Part III

With writer's block out of the way it's time to finally finish this story.  I apologize in advance for not polishing this more, but the stops and starts have caused problems.  And I either push this out now or who knows when it will see the light of day.

Let's start with our value proposition.  When my partner first mentioned his consulting engagement he had a great story to tell.  He'd interviewed the client's CEO, who said "I told them I wanted the systems backed up every day, but I have this feeling it isn't happening".  When he asked the office people who were supposed to be running the (VAR-developed) backup procedures, they said "The CEO thinks we are backing up the systems every day, but it isn't happening".  This was also the height of fear and (justifiable) worry about the security of desktop systems, and few customer PCs were being reliably patched, running up-to-date anti-virus/spyware/malware software, etc.  And customers were paying VARs on a time-and-materials basis to come in periodically and update software, remove malware, etc.  This was an unpredictable and often large expense.  So turning a very unpredictable IT expense, an unconstrained security problem, and endless headaches into a predictable service seemed like a business opportunity.  Give us $99 or $129 (or whatever, depending on package) per user per month and we take care of the rest.  So we decided to go for it.

The customers we were targeting were small businesses, those who didn't have IT staffs of their own.  We initially targeted 5-25 users, though we took on some "Friends and Family" with fewer users (e.g., my wife's horse boarding business) and were in discussions with potential customers in the 25-50 range.  Small business is a notoriously hard nut to crack, which is why you don't tend to see dominant players in the market.  But that also makes it a place one can target without getting bulldozed by the Microsofts, SAPs, Oracles, etc. of the world.  Oh, they target small businesses to various degrees, but their success rate is poor.  For example, Microsoft's efforts in this 5-25 user category have pretty much all fallen flat.  bCentral was abandoned.  A few years later another attempt was made as "Office Live Small Business".  It too was discontinued.  Two attempts at small business accounting software have come and gone.  Now Microsoft is targeting this market with Office 365, and the jury is out on whether it will finally succeed.  However, niche offerings such as QuickBooks do well, and so we looked to incorporate those into our service.

We also discovered that small businesses could be quite complex.  A principal often owned multiple businesses and didn't want complete isolation (e.g., he might use a single email account with multiple addresses).  While the general employee base had yet to discover smartphones (recall this was circa 2005), often the principal had (the Palm Treo line was particularly popular).  Or try to imagine the shared calendaring problems of handling two fractional business aircraft where the owners, pilots, aircraft, etc. all had to be involved in scheduling.  People think small businesses are much simpler than large enterprises, and they are, but often not by the magnitude most assume.  A small medical clinic has to conform to HIPAA every bit as much as a huge hospital chain does.

We did talk about an alternate service offering, remote management of desktops for example, but concluded that was something for the future.  We also considered how another company might emerge to do that as a competitive approach to ours.  We found some nascent attempts, but they didn't really seem like threats.  We also considered how Microsoft's efforts to automate patching and address security and other cost-of-ownership issues might eventually eliminate much of the problem we were trying to address.  Well, here we are eight years after our initial discussions, and while Microsoft has addressed many of the problems (good automated patching, building anti-malware into the system, etc.) third-party software is still years behind.  The situation, though significantly improved, hasn't really changed.  This is pretty much what we expected, and we didn't believe it harmed our overall value proposition, though we clearly understood our message would have to mature just as the PC software ecosystem matured.  In that regard we seem to have gotten it right, as the Cloud, VDI, etc. finally seem to be living up to the promise that we'd anticipated.

As you read through the previous two blog postings you probably were thinking you saw things we could have avoided doing in the first release of the service offering, and as I've mentioned before, on some level you are right.  But now imagine an environment where, for example, we hadn't implemented the virtual machine system (VDI) for running (semi-)arbitrary apps.  No QuickBooks?  There are one or two users per company (often including the principal) whose desktops can't be locked down.  No CRM solution?  There go all the salespeople's desktops.  Etc.  So how do we offer any value proposition around patching, backups, security, etc. when a significant portion of the desktops at the customer live outside our solution?  We couldn't.  And so we solved for all the key general business software and could even have handled vertical industry software.

In fact we really only had one out-and-out failure scenario, the graphic designer at one of our customers.  There was just no way to run Photoshop on a server and make it usable via Terminal Services.  Recall that we are talking 2005 with Windows Server 2003; technology for accelerating graphics (RemoteFX) wasn't added to Terminal Services until Windows Server 2008 R2 SP1 in 2011, six years later!  That user was allowed to have a non-locked-down PC, though they still did non-Photoshop work through Terminal Services.

So with that out of the way it's time to describe one of our big mistakes: how the initial system was developed.  As I've mentioned previously, we started with a plan that required just a website and ended up building a fully automated ordering, billing, provisioning, etc. system.  When we started we contracted out the web work to a developer who had experience in tech support and had done web development work.  Unfortunately, as our needs changed we stuck with the same developer.  That was mistake 1(a).  Then when we realized, as we moved on to some more advanced work, that it was stretching his abilities we considered replacing him, but instead helped him a bit and things seemed to improve considerably.  Not replacing him when we realized he was struggling was mistake 1(b).  Then when we realized he was struggling again we were going to replace him, but we were so close to the finish line and had no prospects to replace him that we decided to tough it out.  We knew we were going to rewrite the stuff for Phase 2 anyway, so we just needed to get to launch of the service asap.  That was mistake 1(c).  Finally we concluded it just wasn't ever going to be done, so we let the contractor go and I dropped all my other duties and became the full-time developer.  It took me about a month to clean up and rewrite a lot of the modules so we could launch the service.  And every time I looked at another module I asked myself how I could have let this happen.  I should have just written the system myself.  Of course then we would have had a similar problem on the operations side.

Our failure to fire the contract developer early and find an alternate way to build the system, either by doing it ourselves or finding a more appropriate contractor, is the thing that my partner and I beat ourselves up about the most.  Would it have changed the outcome?  Probably not.  But we would have either failed faster (and thus spent less money doing so), or ended up with an asset we could have sold to recoup a little of our investment, or changed paths, or….  For example, our automated provisioning was something a lot of people were looking for, but the code we had from the contractor was not something we could have gotten $0.02 for.  I'm guessing (and I'm fairly confident) that if I'd written the code we could have sold it for at least what we invested in the company.

Now to the two real business issues.  Let’s start with the cost structure.

When you looked at our cost structure you could basically break it down into four major buckets:

  1. Microsoft Licensing (particularly the Services Provider License Agreement, or SPLA) model and costs
  2. Data Center Cost of Operations
  3. Support
  4. Everything else

First we'll dig into the Microsoft SPLA.  I haven't looked at how it works today, but it remains a real issue for people trying to create services.  The SPLA is Microsoft's model for how a service provider licenses Microsoft software for renting out as part of a service.  It's complex, but essentially you are paying a fee per month per (named) user for each piece of software (and sometimes functionality within software).  The problem for us was really around what Microsoft made available under the SPLA, how they priced it, and the lack of any way to transfer existing licenses into the SPLA world.

Pricing first.  Because this is a rental model it makes no sense to tie it to a specific software version.  That is, you rent Microsoft Office, not Microsoft Office 20xx.  In Microsoft's packaged product world this is done via something called Software Assurance (SA).  With SA you pay a certain amount per year and get any new versions that come out during the term of your contract.  For this Microsoft charges a premium compared to just purchasing a specific version of the product.  Well, SPLA pricing is based on SA pricing.  There is only one problem with this: small businesses don't (or at least didn't at the time) buy Software Assurance.  Not only that, but small businesses would typically go without upgrading for much longer than Microsoft assumes in its pricing, further raising the effective cost.  That is, SPLA pricing was based on the life of a product being some number of months (say 30, though I don't recall the actual number), but a typical small business might go twice that before buying a new version of Office.  So a business comparing the cost of going into Staples and walking out with Microsoft Office Professional to the cost of renting it from us would discover that they paid tremendously more to rent it from us.  And that was before adding in our cost of operations, support, G&A, etc.

But it gets worse.  Microsoft doesn't license everything under the SPLA.  Using Office once again as our example, they only licensed Office Standard and Office Professional.  You could not license individual Office applications like Word, nor could you offer Office Small Business under the SPLA.  So now our customers were comparing the much lower price of Office Small Business Edition at Staples et al. to the Office Professional they were forced into with us.

Third was that Microsoft offered no way to credit a user for already having a packaged product license.  So let’s say a company had a bunch of PCs running Office 2003 and they were now investigating using our service.  They were going to pay to re-license Office 2003 even though they already owned it.  That they would automatically get Office 2007 when it came out for no additional cost was nice, but they just couldn’t get past the idea that they were paying twice for Office 2003.

So from the start Microsoft's licensing policies handicapped us tremendously.  While we knew pretty early on that the SPLA presented us with challenges, we didn't know how serious this would be until we started actively marketing our service.  Early research indicated that we needed a $99 per user entry point, and we figured out we could offer this even with the SPLA's issues.  Later we would discover how deeply some people would dig into our pricing, and discover how much they were effectively paying Microsoft.  And as I looked into how we might build a second-generation system I made an interesting discovery.  I'd done a design for a much lower-cost data center and was trying to see how it impacted our ability to offer a much lower price of entry.  Plugging various lower costs of operations into our business model produced far less improvement than I expected.  Finally, out of frustration, I just zeroed out our operations costs and to my surprise we still weren't profitable much below $99/user.  Further analysis showed that because the SPLA model had no scalability built into it we eventually became nothing but a means of transferring money to Microsoft.  There was no opportunity to charge for our added value!
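
If you want to see why zeroing out operations costs barely moved the needle, here is a toy model of the per-user economics.  Every number in it is invented purely for illustration (I'm not publishing actual SPLA rates, and I no longer remember ours anyway); the structure is the point: per-user license and support fees set a floor under your price that no amount of data center efficiency can lower.

```python
# Toy per-user monthly cost model -- all numbers are made up for illustration.
spla_fees = {                      # hypothetical per-user, per-month fees paid to Microsoft
    "Windows Server / Terminal Services": 12.00,
    "Office Professional": 38.00,
    "Exchange": 8.00,
    "Other server products": 4.00,
}
support_cost = 23.00               # blended support cost per user per month (assumed)
other_costs = 5.00                 # G&A, billing, customer acquisition, etc. (assumed)

for ops_cost in (20.00, 10.00, 0.00):   # data center cost per user: original, redesigned, free
    break_even = sum(spla_fees.values()) + support_cost + other_costs + ops_cost
    print(f"ops = ${ops_cost:5.2f}/user  ->  break-even price ~ ${break_even:.2f}/user/month")

# Even with the data center cost driven to zero, the break-even price stays around $90/user,
# because the per-user license and support fees don't shrink as operations get more efficient.
```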

From the start we'd considered moving away from a pure Microsoft software offering in order to lower our cost structure.  For example, we could offer OpenOffice instead of Microsoft Office as part of an entry-level service offering.  The problem was that customers weren't aware of OpenOffice and we didn't feel like our business model offered a way to sell them on the idea.  This theme, that we were taking on a huge customer education problem in everything we did, is important.  It would become key to our decision to shutter the business.

What fascinated us about SPLA pricing becoming such a dominant factor in our cost structure and deliberations is that we'd always expected the cost of operations to be our pain point.  After all, we were creating a service.  And indeed our initial design and use of a managed hosting company was a cost problem.  And it was going to get worse.  First of all, our initial offering benefited from "Friends and Family" pricing, but future systems would not.  We'd anticipated that, but hadn't anticipated that the hoster was changing its business model in a way that would impact our assumptions.  In particular, they pretty much eliminated their hardware rental model in favor of a lease model (essentially transferring the financing of hardware from their balance sheet to their clients' balance sheets).  Finally, they just weren't geared towards being what we would today call a Cloud Data Center.

As I mentioned in one of the earlier posts, we found that we were duplicating a lot of the services that in theory we were getting from the Managed Services Provider.  Patch management?  I was doing it.  Anti-spyware?  I was doing it.  Anti-SPAM?  My partner was doing it.  Etc.  We realized that they were expensive and we weren't getting much for our money.  So we looked at moving to another hosting situation.  We found much cheaper alternatives for raw hosting.  I designed a storage and backup system that quite literally would cost us 1/10th what we were paying the managed hoster.  We'd always had a problem with the managed hoster's Internet connectivity (a historical artifact that I won't go into), and the new hoster solved that by offering connectivity through multiple network providers.  Basically, we were excited about the opportunities for creating our second-generation hosting environment.  Unlike the SPLA situation, this was something that was really within our control.

In our initial business modeling we were shocked to discover how dominant support costs would be.  The problem was that, given how new the environment was to the customer, they would need lots of handholding.  This would be particularly true in the first few months after a customer moved to our system.  After that we expected that someone inside their organization would have enough expertise to help new users, and our support costs would drop.  Still, we modeled high costs because we couldn't be certain of when (or even if) and by how much they would actually drop off.  We invested in the trouble ticket system and wrote knowledge base articles early on so we could point users at those.  We tried to refine the line between what constituted included support and what would be paid time-and-materials assistance.  We looked at innovative ways to farm out support (e.g., people who would help do migrations).  But through to the end support remained our big unknown.  It was the one area where the more we succeeded, the closer we might be driven to failure: each new customer demanded attention that we potentially couldn't deliver.  We could add machines to handle the load fairly quickly.  We'd automated things tremendously.  But we couldn't add trained support people quickly or economically.  Particularly if we succeeded wildly.

The rest of our costs were the usual and we kept them low.  I mentioned at one point that our  business model was to avoid taking on fixed costs, employees, etc.  My partner’s brother was our accountant and his firm gave us friends and family rates.  We shared an office.  We answered our own phones (and hosted our own Asterisk-based PBX, which we also offered to customers as a service).   We incorporated rational customer acquisition costs into our plan, including compensation for third parties such as VARs who brought us customers.  There is nothing here that really stands out other than to say that we were very complete in modeling and understanding our overall cost structure yet quite frugal about spending that wouldn’t move the business along.

Even as we discovered potentially fatal roadblocks, like the SPLA situation, we pressed on.  We knew we were pioneering Software-as-a-Service and that the world would change as we did so.  I made the rounds at Microsoft, including with senior executives such as Bob Muglia, explaining what we were trying to do and where Microsoft was killing us.  In some cases (basically technical issues) they offered assistance, but I didn’t walk away with anything on the business front.  I didn’t expect to.  What I expected was to get Microsoft thinking about these issues so that once we were successful I could come back and really press them to make changes.  The Microsoft people were really interested, from the Terminal Services development team to the SPLA licensing guys to executives such as Bob.  Now granted I had contacts, for example a long history with Bob, that made this easier.  But the SPLA guys, for example, didn’t really know about that history.  They were just interested in talking to one of their customers.

Sadly I had less luck with other software companies.  Intuit, for example, was totally uninterested in discussing the licensing roadblocks they'd put in front of companies trying to host their products.  At first I thought it was me, but then a competitor called to ask me if and how we were going to host multi-user QuickBooks.  I discovered from her that no one had figured this out, and that Intuit hadn't been interested in talking to anyone about it.  I've said it before and I'll say it again, Microsoft was way ahead of other software companies in trying to address the hosting market.  But we knew others would eventually have to come around.

With our system ready to go and a path forward understood, or at least the roadblocks well understood, we launched our service.  For the first few months we knew this would be largely experimental in nature because we didn’t know exactly what would work.  And we’d previously shelved our attempts to hire a VP of Marketing and Sales, which I’ll get to shortly.

Initially I concentrated on search-engine optimization and search advertising while my partner concentrated on following up on inquiries, converting trials into paid customers, working with VARs and other potential partners, and a test direct mail campaign.  We learned a lot, much of which I've already mentioned (e.g., potential customers really latching on to the fact they'd already paid for Microsoft Office and not wanting to pay a second time).  We also learned that startups weren't a good target for our offering because they had a different focus on costs.  The CEO of one startup told us quite simply that they were willing to take the risk of losing data or having machines infected with viruses rather than burn through their cash to protect against those possibilities.

And we learned that search advertising wasn't all it was cracked up to be.  At the time people talked in terms of "pennies per click", but that was already no longer true.  One issue we had was that keywords that might be uniquely associated with a business such as ours had absolutely no search traffic.  Literally, it was so bad that Google would shut down our campaign for lack of results.  So we had to look at keywords that lots of others were competing for, like Anti-Virus or Backup.  The only directly applicable keywords with reasonable activity were things like Hosted Exchange.  These kinds of keywords were going for several dollars, even $15-20, per click!  Ads that did a good job describing what we offered got light clicks, while ads that offered a more general description of us solving a problem got heavy clicks.  But those heavy clicks came at the expense of being poorly qualified.  Paying $20 just to have the person hit "Back" in the browser when they read the first few lines of our website was quite discouraging.  We got better at this as time went along, and also focused more on SEO instead of paid advertising, but it was also another lesson in the maturity of the market.

Customers weren’t looking for the solution we were offering.  That was a lesson we got from studying which keywords were being searched on.  It was also a lesson we learned from our direct mail campaign.  Indeed the strongest interest we got was from IT professionals, such as VARs, who were looking for ways to offload the headaches their customers were causing.  We’d always expected this community to become a channel for our product, so we started to focus more attention on it.

As we approached the need for another capital infusion we reflected on everything we’d learned.  We came to one key conclusion, that given customers weren’t looking for a solution like ours we were going to have to educate them.  And that was going to be a multi-million dollar effort, not something we could self-fund.  As we thought about how much money we’d need to raise, and what it would mean for our ownership and control of the company, we realized it made continuing on unattractive.  My partner was willing to put in one more chunk of money to see if we could reach a more attractive position before accepting outside investment.  I was semi-willing, though I was being asked to return to Microsoft and was really tempted by the opportunity.  We decided to pull the plug rather than throw good money after bad.

As I look back there is a lot we did right and a lot we should have done differently.  Conceptually we got to a great place, even though the code was pretty much throw-away.  Had we started with the volume market as our target initially we would have made a number of different decisions.  We probably wouldn’t have taken on the consulting customers and their distraction.  We wouldn’t have put development of a complex system into the hands of a web developer (and I would have really led the development effort).  We would have accepted outside investment (which we were offered, but turned away), both to have a proper development and operations team and dedicated marketing/sales personnel with substantial funding for a marketing campaign.

Or we could have stuck to the original vision for a modest cash-generating business and probably done quite well.

The truth is we had one of the greatest experiences of our work lives.  It was fun.  We learned a huge amount.   We tried some very unconventional things.  We got in early (too early sadly) on what is now one of the hottest areas in computing.

I’d do it again.  Hopefully with a better ending.

 


What a different Best Buy experience

My wife and I have a love/hate relationship with Best Buy. For a long time they've been the only complete bricks-and-mortar game in town, but the shopping experience has been horrific. When you are neither the low-price leader (a la Wal-Mart and Costco) nor the service/experience leader (a la Nordstrom, the Apple Store, etc.) then you're in trouble. And they are. The question is, can they remake themselves into something that leaves them the last nationwide survivor of the dedicated cross-vendor technology chains?

Comparing our experience buying a camera a few years ago with buying one this weekend offers some hope that Best Buy is navigating a path forward. A few years ago, and this experience has been repeated multiple times for various products, we went in looking for a camera.

Like most products at Best Buy, the demo units were connected to security devices attached to coiled wires. This made it impossible to actually hold the camera as it would be held in real life, feel its weight, etc. Meanwhile, you would be repeatedly accosted by salespeople who, it would turn out, knew nothing about cameras. Worse, when you told them you wanted to hold the camera without the security device they would tell you it wasn't allowed. If you pestered enough of them long enough they would find a manager who would come remove the security device so you could really hold the camera. Questions? Sorry, we don't know how to really answer those! And so, frustrated, and not really wanting to repeat the experience of trying to get a different camera removed from its security device, we left and went to a nearby camera store. We bought a camera for more than it would have been at Best Buy and much more than it would have been from Amazon, B&H, or a half-dozen other reputable web stores.

Indeed, in most of the cases where I've walked out of Best Buy without something, and then purchased it at another bricks-and-mortar store or (most likely) online, it has had little to do with a modest price difference. Either the Best Buy shopping experience made me want to run home and take a shower, or they didn't have what I wanted.

Our plan this weekend was to go camera shopping at the local camera store we've had good experiences with. But we decided to stop at Best Buy first to get an idea of prices. Surprisingly, we were greeted by an experience closer to an Apple Store than the Best Buy of old (though clearly they could do more). There were greeters to direct us. While we were looking at a (broken) demo unit a manager came by and asked if we needed help. He then went to find "Clint", the guy who really knew about the cameras. And know them he did: Clint was able to answer our questions about different models and what the trade-offs were. And when we mentioned the broken demo unit Clint simply said "Let me open a box and get one out for you". A short while later we'd decided on a camera and were loading up on accessories and plunking down my credit card. Clint went one better. I had read a review suggesting the use of a wrist strap instead of the included neck strap and asked Clint about it. He pulled out a box with wrist straps that were left over from demo units and offered me one for free.

Yes I could have saved about 7% at Amazon, but the service was worth the extra cost. Sure when I know I want the RouteCo 15237-QX router I’m just going to buy it online. But when I need to hold and try something, I want an experience like we just had buying the camera. And I’ll pay a modest premium for it. So if this experience was indicative of where Best Buy is going they may just have a shot at survival. At least I’m going to feel a lot better about going there when I need to buy consumer electronics.


Is Microsoft the big winner in Apple’s win over Samsung?

Bill Gates would be the first to point out all the cases where Microsoft's competitors made a key mistake that let Microsoft succeed.  Office is perhaps the most notable example, where competitors' reluctance to support Microsoft Windows took Word and Excel from second-tier status to leadership as customers shifted from DOS to Windows.  The Xbox 360 triumphed over the PS3 because Sony created a platform that was both late to the market and too hard to develop for.  Borland started a price war that Microsoft was better positioned to endure.  Etc.  That is why Microsoft historically keeps the pressure up even when its cause looks hopeless.  One could probably apply this to both the Windows Phone and Bing efforts in today's world.

If OEMs have to pay both Microsoft and Apple patent royalties then Android could be more expensive than Windows Phone.  Google may remove popular (and expected by consumers) features from Android to avoid it or its OEMs paying Apple royalties, but that will diminish Android’s competitiveness.    So indeed Windows Phone has the potential to benefit from Apple’s win in this case.

OEMs and Carriers will no doubt hedge their bets by giving Windows Phone a little more prominence over the next 12 months while the dust settles.  But I doubt the big players actually run away from Android.  It’s up to Microsoft to take advantage of this disruption to Android’s trajectory.  For example, by HEAVILY promoting Windows Phone 8 itself rather than leaving promotion completely up to the OEMs and Carriers.  We’ll just have to wait and see if that’s what they do.


What’s Next for Microsoft

We all know that in the coming few weeks there will be many more details available on Microsoft’s 2012 product launches.  Final Windows 8 details, Surface details, the launch of Windows Phone 8, the launch of Office 2013, the Xbox Dashboard update, etc.  But what comes next?  Forget everything you think you know about Microsoft, because it seems likely that 2012 was just the tip of the iceberg in terms of change.

A lot of the news I expect to hear between now and the end of the year is around reorganizations.  The Server and Tools Business (STB) recently reorganized many of its divisions into a functional organization reporting to President Satya Nadella.  Soon we'll hear of more reorganizations, including changes in Steve Ballmer's direct reports.  That this is an ideal time for reorganizations should not come as a surprise; the surprises will be in the details.  Today's Microsoft organizational structure largely reflects the "old Microsoft" and now it needs to reflect the "new Microsoft" and what it needs to accomplish.

There may be some huge surprises in the upcoming reorganizations but the only thing I’ll speculate on is what happens to Windows Phone.  It makes no sense as a standalone business reporting to Steve, so where does it go?  Speculation, even before the Entertainment and Devices business unit was split apart with Robbie Bach’s departure two years ago, was that Windows Phone would eventually become part of the Windows business unit reporting to Steven Sinofsky.  This still seems likely.  However, Microsoft has also made previous attempts to combine various telecommunications offerings together into a single business unit and it is possible they’ll try that approach again.  Recall that the smartphone business was once part of a broader Mobile Communications Business (MCB), but as part of gaining focus on Windows Phone MCB was consumed by its child and non-WP efforts abandoned.  However, with the acquisition of Skype one could imagine combining Windows Phone, Skype, Lync, and perhaps other telecom-oriented assets (e.g., Mediaroom) into a single Telecom Business Unit.  It’s a long-shot, but it makes sense.

It should also be no surprise that Microsoft has already started some planning work on the next version of Windows.  Early disclosures about Windows 8 included comments about work done prior to Windows 7 RTM.  Rumors are calling this Project Blue or Windows Blue and wondering why it isn’t called Windows 9.  Some are speculating that’s because it is something between Windows 8 and Windows 9.  The explanation for this apparent change in project naming schemes can run from extremely simple to wildly speculative.

Let's start with the simple.  Sometime (relatively late) in the Windows 8 cycle it was decided to give Windows on ARM a different name and some different characteristics (no Win32 apps, no domain join, etc.).  The "Windows 8 Project" thus produced two products, Windows 8 and Windows RT.  Calling the next project "Windows 9" thus makes no sense, even if the Windows on x86 product is called Windows 9.  You may now just have a project called "Windows Blue" that will produce two products, Windows RT and Windows 8.1/9/whatever.  That would be the simple explanation.

The more complex explanations run a broad gamut.  Microsoft could be signaling that it is considering a naming scheme change for the x86 product and thus using a number in the project name makes no sense.  Or the name “Project Blue” may encompass more than just Windows, a topic I’ll get to later.  Most likely it is also an acknowledgment that everything we know about Windows (and other product) release cycles is outdated.

I've blogged previously about how Microsoft might change its release cycles to match modern expectations, but we won't really see what they've decided to do until about 12 months from now.  I believe Microsoft is going to standardize on 6-month releases for Cloud offerings and 12-month releases for packaged, or on-premises, products.  They'll do this for four reasons.  First, technology is in a rapid change phase and the 24-36 month cycles that had evolved over the last couple of decades no longer accommodate that change.  Second, this is the expectation that major competitors (Apple, Google) have set, particularly for consumers.  Third, as Microsoft provides common products for Cloud and on-premises use it can't allow the two to deviate too far from one another.  A six-month cloud release cycle and a 24-36 month on-premises release cycle means that customers would have to implement hybrid solutions using products that are as many as six releases apart.  Fourth, with iOS dominant and Android ascendant, Microsoft has to become far more agile if it wants to see its "re-imagined" Windows bet pay off in the tablet space.

Before all the enterprise types freak out about 12-month release cycles, keep in mind that back in the 80s this was more typical for enterprise products.  At one point DEC's Rdb/VMS was releasing every 6 months!  The key here is to keep the change from release to release very well controlled so that neither apps nor user training are broken.  It is more of a slow evolutionary approach.  From an IT perspective these releases, despite considerably more functional change, should look closer to what is required for deploying a Service Pack than what is required today for a new release.

The last topic I wanted to cover is a change which many may think has already happened, but it is just starting.  Although it looks like Microsoft did a very good job of coordinating and planning its 2012 releases, the reality is that it was simply serendipity.  For example, the Office team did not start out with Windows 8 support as the high-order bit in its planning process.  It was focused on the Cloud, beating Google, doing more around Information Protection, its perennial stalking horse of Collaboration, etc.  Oh, and we have to support touch.  Oh, and there is Windows 8 so we better have a release somewhere around the same timeframe to support it.  This is why, for example, there are no pure Windows Runtime versions of the Office client apps other than OneNote.  Besides the teams reporting to Steven Sinofsky (Windows, Windows Live, IE), the only Microsoft organization to truly align its planning and priorities with Windows 8 was the Developer Division.  Everything else was serendipity and/or late plan changes to take advantage of the momentum building behind Windows 8.

I believe this is changing, with Microsoft moving to more broadly align planning across its product family.  As an example, if Windows RT is a demonstration of where Microsoft wants to go with Windows, then allowing desktop-free environments is a priority and requiring the desktop for Office is an embarrassment (to say the least).  Joint planning would identify elimination of the desktop requirement as a goal and the need for the Office team to produce pure "Metro" versions of its products on the same timeline as Windows releases.  Likewise, on the Enterprise side the various Server products have never coordinated their planning and are now likely to move to do so.

Note that coordination is a lot easier when product cycles are 12 months long than when they are 24-36 months long.  The shorter cycles eliminate much of the schedule risk of taking dependencies on one another, and even more importantly they eliminate much of the strategic problem of having to meet your own customer requirements but not being able to wait for the other product's next 24-36 month cycle (which in the worst case could mean 4-6 years for you to address a customer requirement).

Which brings me to one very speculative question, what if “Project Blue” isn’t the name for the next release of Windows but rather the name for the next wave of client products that includes a release of Windows?

Over the next few months we will see Microsoft reorganize, change its planning processes, and replace its historical release paradigm, to adapt to the changing world.  It’s a new Microsoft, and seeing how it changes to create and release software is almost as important as knowing what it will produce.

 


The $199 Microsoft Surface

Reports surfaced (no pun intended) today that Microsoft was going to offer the Surface at the $199 price point.  This should come as no surprise really, but read on for the catch.

I telegraphed how Microsoft could reach the $199 tablet price point in my June posting on Windows RT Pricing.  There are four important data points to consider when looking at the validity of the $199 rumors:

  1. Microsoft learned from the Xbox gaming console business how to sell hardware at a loss in order to make money selling software (accessories, etc.) for it.  They understand this business model, and they understand how to make it work for them even though it doesn’t work for traditional OEMs (because OEMs have no significant software/service revenue stream).
  2. The tea leaves increasingly indicate that Microsoft is moving to a subscription model wherever they can figure out how to do so.  The Office 365 Home Premium offering is the latest evidence of this.  The Zune Pass and Xbox Live Gold are other consumer examples.  And Microsoft is reportedly working on a streaming media service that could debut this fall.
  3. Most of today's $199 tablets are either explicitly or implicitly subsidized offerings.  Some are carrier subsidized with a traditional cellphone-like contract, others are subsidized by the media services that you are expected to buy.  Amazon charges $199 for a Kindle Fire because they expect most people to consume books, movies, and music from Amazon.  Ditto for the Nook Tablet and the B&N store.  Ditto for the Google Nexus 7 and Google Play.
  4. Microsoft has been experimenting with a $99 Xbox offering that requires a 24-month subscription to Xbox Live Gold.

So it is completely within expectations, and in fact the $99 Xbox deal is just telegraphing it for all who are willing to listen, that Microsoft is going to offer the Surface for $199 when you sign up for a TBD subscription of some sort.

I’m sure that the subscription offering will make Microsoft’s take from a Surface sale a minimum of $399 over a two-year period.   That would match Apple’s iPad entry point (using the previous generation iPad) as well as the entry point for ~10″ tablets being established by Android OEMs.  This would give Microsoft an edge versus its Windows RT OEMs selling to customers who want the Microsoft services, but leave the OEMs positioned to compete for those who’d rather forgo the subsidy because of their commitment to other ecosystems (e.g., Amazon).  Another possibility is that Microsoft will offer the OEMs a commission for signing up customers to the Microsoft subscription offering and the OEMs will have the choice of pocketing that commission or using it to subsidize their own Windows RT tablets!  OEMs or no OEMs, Microsoft would be in the tablet market at a very competitive price point with very compelling hardware.
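
To sanity-check that claim, here's the back-of-the-envelope math.  The subscription price and term are pure guesses on my part (nothing has been announced); the point is just how a modest monthly fee turns a $199 device into a $400+ commitment.

```python
# Back-of-the-envelope subsidy math; the fee and term are my assumptions, not announced figures.
hardware_price = 199        # rumored subsidized Surface price
monthly_fee    = 8.50       # assumed price of the bundled Microsoft subscription
term_months    = 24         # assumed contract length

total_take = hardware_price + monthly_fee * term_months
print(f"Microsoft's minimum take over {term_months} months: ${total_take:.2f}")
# -> $403.00, right around the $399 entry price of an unsubsidized ~10" tablet
```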

So go ahead and believe the $199 price for Surface.  Just remember you’ll probably also be committing yourself to a subscription for that amount or more over two years.


Acer’s JT Wang is part of the problem

I find it hysterical that Acer Chairman JT Wang is the most vocal critic of Microsoft getting into the hardware business with the Surface.  You see, I still haven't experienced Windows 8 in portrait mode and it is his company's fault.  Acer chose not to release a 64-bit driver for the accelerometer in the Iconia Tab W500 tablet (without which it is locked in landscape mode), the very machine I use for testing Windows 8.  I'm not talking about a Windows 8-specific driver.  They never released the driver for Windows 7.  If they had, it would work just fine with Windows 8.  It is exactly this kind of crap that is forcing Microsoft to get into the systems hardware business.

JT Wang should stop harping on Microsoft and focus on fixing his own company’s poor execution.  If Acer did a proper job on Windows tablets then Microsoft wouldn’t have done the Surface in the first place.

 


A data point on the lack of Windows Phone 8 Upgrades

For those upset over Microsoft's decision not to upgrade existing Windows Phone 7 devices to Windows Phone 8 (but rather to offer them a new Windows Phone 7.8 update) I offer this tidbit:  nine months after replacing Android 2.3 with Android 4.0 ("Ice Cream Sandwich"), and more recently 4.1 ("Jelly Bean"), new non-upgradeable Android 2.3 phones are still being released.  And only about 16% of the total population of Android phones has been upgraded to 4.0 or 4.1.  That's less than the number running ancient versions of Android such as 2.1 and 2.2.

When you compare Microsoft's plan to Apple's history then Microsoft doesn't look so hot, but when you compare Microsoft to Google's Android it looks like Microsoft is making the right decision.  Or at least they have a massively good data point to support their decision.  And Microsoft is at least providing an update for all their earlier devices, even though it doesn't bring all the Windows Phone 8 goodies with it.  That puts Microsoft somewhere between Google and Apple in the consumer-friendliness dimension.

I’m still upset that Microsoft isn’t going to give me Windows Phone 8 for my Lumia 900, but at least I know I’m no worse off than if I were living in the Android world.


You can’t defeat SPAM when legitimate mail looks like SPAM!

One of the subtle changes in Microsoft's new outlook.com replacement for Hotmail is that the messages about suspicious mail have changed.  In Hotmail messages are simply described as suspicious; in outlook.com it is now clearly stated that the "sender failed our fraud detection checks".  Not only that, whereas Hotmail displays this text in "warning yellow", outlook.com displays it in "danger red".  So when I switched to outlook.com and noticed that many messages in my Inbox were now labeled as failing fraud detection (and only showing up in my Inbox because I'd placed the sender's domain on my safe list) I decided to investigate further.

Before I dig in, let's discuss the meta issue.  The world is awash in SPAM.  While SPAM started out as being primarily (semi-)legitimate Unsolicited Commercial Email (UCE), it has now become primarily a distribution mechanism for malware (usually by getting you to follow a link to a malware distribution website) and phishing scams.  Attempts to fight SPAM have really focused on two things: one is attempting to determine if the mail is from a legitimate sender, and the other is content analysis of messages to see if they are SPAM-like.  The former is an architectural nightmare given the origins of the Internet, and the latter is prone to all kinds of failure.  For example, a message from a wife to her husband of "Big plans.  Don't forget to take your Viagra before leaving work :-)" would likely be flagged as SPAM by content filters.  To combat this problem mail systems contain overrides, such as ignoring content filters if the sender is one of your contacts or on your "safe sender" list.  But this places a burden on the user of scanning their junk folder periodically to see if something has ended up there inappropriately and adding the sender to their safe sender list so future mails from the sender go to the Inbox.  And worse, it means that mail systems have to allow potential phishing and malicious emails into the Junk folder just in case they are actually legitimate mail.
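
To make those mechanics concrete, here's a grossly simplified sketch of the decision flow I'm describing.  No real mail system is this crude; it's just the shape of the trade-off between content filtering, safe-sender overrides, and the Junk folder.

```python
# Grossly simplified sketch of the filtering flow described above -- not any real system's logic.
def route_message(sender, content_score, contacts, safe_senders, spam_threshold=0.8):
    """Decide where an incoming message lands: 'inbox' or 'junk'."""
    # Override: mail from contacts or safe senders bypasses the content filter entirely.
    if sender in contacts or sender in safe_senders:
        return "inbox"
    # Otherwise fall back to content analysis, which produces both false positives
    # (the "Viagra" note from your spouse) and false negatives (well-crafted phishing).
    if content_score >= spam_threshold:
        return "junk"    # parked in Junk rather than deleted, in case it was legitimate
    return "inbox"

# The same borderline message goes to Junk or the Inbox depending only on the safe sender list.
print(route_message("spouse@example.com", 0.9, contacts=set(), safe_senders=set()))                    # junk
print(route_message("spouse@example.com", 0.9, contacts=set(), safe_senders={"spouse@example.com"}))   # inbox
```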

To really solve the SPAM problem you first have to solve the problem of determining the legitimacy of senders.  Unfortunately since the Internet was designed as a research project authentication of email senders and messages was not designed in, and we’ve been paying the price ever since it was opened up for general use.  While dreams of every email message being authenticated may be just that, dreams, various techniques for allowing senders who wish to authenticate their mail have been proposed and somewhat implemented.  The problem is, not enough senders are properly and completely using these techniques.

If the bulk of legitimate senders were fully using already existing authentication techniques (SPF, Sender-ID, DKIM, DMARC), then SPAM-filtering systems could get much more aggressive about just deleting SPAM rather than putting it in your Junk folder for you to look at.  For example, I get some email from my bank that is fully authenticated and some that isn’t.  Because some isn’t, SPAM-filters can’t really be sure of the difference between real mail from my bank and phishing mail that looks like it is from my bank.  So occasionally real mail from my bank goes into my Junk folder and phishing mail that looks like it’s from my bank ends up there too.  Occasionally phishing mail actually makes it into my Inbox.  If every mail from my bank was known to be authenticated then the SPAM filters could more easily determine what a phishing attempt was and make sure it never reached me.

So why does my bank, or any other legitimate sender, ever send an unauthenticated mail?  Because businesses, even small businesses, are surprisingly complicated.  In addition to their own email systems, almost all use third-party bulk mailing services or allow partners to send mail on their behalf.  So if you look at their SPF records in DNS, which is the most widely used authentication technique, you find they usually specify their own mail servers as legitimate sources of email from them and then "~all".  "~all" is also known as "soft-fail", meaning that what they really are saying is "there are other legitimate parties sending mail on our behalf but we don't know who they are, so we can't help you decide which are legitimate and which aren't".  And this is why I find so many messages being marked as failing fraud detection checks, and why so many legitimate mails go into the junk folder: they hit the "soft-fail" condition when the actual sending server is evaluated against the purported sender's SPF record.
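
Here's a stripped-down illustration of what a receiving mail system does with such a record.  Real SPF evaluation (RFC 7208) handles include:, a:, mx:, macros, DNS lookups, and more; this sketch only handles ip4: networks and the catch-all qualifier, which is enough to show how "~all" turns an unknown sending server into a soft-fail instead of a clean pass or a hard reject.

```python
import ipaddress

# Stripped-down SPF check, for illustration only.  A real evaluator (RFC 7208) also handles
# include:, a:, mx:, redirect=, macros, and the DNS lookups behind them.
def check_spf(sending_ip, spf_record):
    ip = ipaddress.ip_address(sending_ip)
    for term in spf_record.split()[1:]:                  # skip the leading "v=spf1"
        if term.startswith("ip4:"):
            if ip in ipaddress.ip_network(term[4:]):
                return "pass"                            # listed as a legitimate source
        elif term in ("+all", "?all", "~all", "-all"):   # catch-all qualifier
            return {"+all": "pass", "?all": "neutral",
                    "~all": "softfail", "-all": "fail"}[term]
    return "neutral"

record = "v=spf1 ip4:140.98.0.0/16 ~all"                 # simplified version of the record shown below
print(check_spf("140.98.12.34", record))                 # pass -- one of their own servers
print(check_spf("203.0.113.7", record))                  # softfail -- e.g. a third-party bulk mailer
```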

Why not add the third-party's servers to the sender's SPF record, essentially authorizing them to send mail on the sender's behalf?  Let's say you create a customer advisory board for your product and want to be able to send out notices to the group.  Maybe you even want it to be a discussion list.  You go out and find an inexpensive third-party bulk email service you can use for this.  How do you, some junior product manager buried 12 levels deep in the organization, get IT to change the corporate SPF record to make the third party a legitimate sender of email on the company's behalf?  You can't.  They won't.  They'll laugh at you.  Really.  Not just because they don't want to change the corporate DNS entry every time an employee goes outside the box, but also because they can't authenticate the third party just for you.  Adding it to the SPF record means receivers will think any mail coming from the third party claiming to be from your organization is legitimate.  And without a corporate-level agreement with the third party that violates corporate security.  For a small business things are similar, except that the real problem is "what's an SPF record?"  In other words, while their ISP or IT consultant probably created an SPF record for their primary mail server, no one inside their organization even knows what an SPF record is, or who to contact to change it.  Instead they tell people to add "foo@ourlittleorganization.com" to their safe sender list so mails don't go to the junk folder.

For those who want a real example of what I’m talking about, here is one from an organization that should be sophisticated enough to address this issue.  The IEEE Communications Society.  Here is how the email looks:

So, notice the “This sender failed our fraud detection checks” message.  When we view the message source we find that this message was sent with a service called Magenta Mail, which you can see here (though you may need a magnifying glass):

And when you look at the SPF record for comsoc.org you find no mention of the Magenta Mail servers and a soft-fail (~all) indicator as the catch-all case for this:

v=spf1 ip4:140.98.0.0/16 a:conan.comsoc.org a:cmsc-ems.ieee.org
a:cmsc-ems2.ieee.org a:cmsc-ems3.ieee.org a:comsoc-listserv.ieee.org
mx:hormel.ieee.org mx:lemroh.ieee.org include:ieee.org ~all

Most mail systems will throw this into the Junk folder unless you add comsoc.org to your safe sender list.  And even when it is on the safe sender list Hotmail, I mean outlook.com, will warn you it is suspicious or potentially fraudulent.

When I switched to outlook.com and started noticing that a lot of mail from a few organizations was marked as "failing fraud detection" I investigated and found many were using the third-party mailer Constant Contact.  I contacted the organizations, as well as Constant Contact, about this.  According to Constant Contact they provide a feature called Constant Contact Authentication to address the problem their users have with changing their own SPF records.  They also mentioned that many of their users don't use this feature, which is exactly what I was seeing.  They are going to look at ways to further encourage users to turn it on.  BTW, Constant Contact Authentication works by turning the problem on its head and giving you a new domain that authenticated emails come from.  They also document how you can make them an authenticated sender of email for your organization (via SPF, Sender-ID, and DKIM) as an alternative.  And Constant Contact's website says that eventually all their users will either have to use Constant Contact Authentication or make them an authenticated sender.

If all bulk mailing services went down the path of  requiring their clients to use an authentication mechanism it would represent a huge step forward in cleaning up the SPAM mess.  A legitimate service like Constant Contact might still be used for the traditional UCE type of SPAM, but you could trust that it was safe to use the unsubscribe link.  And abuses reported to the authorities could be traced back to the actual sender.  Much of this mail, like the IEEE ComSoc example above, falls into what Microsoft now calls the “GreyMail” category.  Microsoft’s tools for managing GreyMail would work more effectively with proper authentication.

If email authentication were really widespread then mail systems' SPAM filters could adopt a more aggressive approach.  They would only put authenticated email into your Inbox.  And they would be more comfortable just throwing away potential phishing and other harmful mails rather than putting them in the Junk folder.  With no unauthenticated mail in your Inbox, and a very small number of mails in your Junk folder, the Internet would be a much safer place.

So here is a little call to action.  First, if you are a user of a bulk mailing service, please make sure your mails are being properly authenticated.  Second, if you get unauthenticated mails from a legitimate sender, please contact them and ask them to fix this problem.  One of the organizations I contacted turned on Constant Contact Authentication the day I brought the problem to their attention.  Another is taking a look at it.  This suggests that with a little more user pressure we could make email much better, much sooner.

 
