Anatomy of a startup – PredictableIT Part I

Back in 2004 a friend had told me about moving a small VC firm and one of its portfolio companies to a hosted environment (at a Managed Service Provider, which was another of the portfolio companies).  Not just servers, mind you; they had locked down all their PCs (turning them into Thin Clients) and were using Microsoft's Terminal Services offering to run all client software in the data center.  We quickly decided that it would be interesting to apply this model more broadly and formed a company, PredictableIT, to do so for Small Businesses.  Initially our thought was that this would be a modest-volume, income-producing business, not your everyday attempt to change the world and create the next Microsoft 🙂  Basically we would take what he had done on a consulting basis and productize it.  We'd self-fund the operation and try a business model that all but eliminated fixed costs.  For example, we'd have no employees and rely on contractors and other third parties for all services.  We'd use the Managed Services Provider, who would rent us machines, rather than own any hardware (other than some development/test systems).  We'd build a website and some tools to help with marketing, sign-up, and deployment.  Of course, just as few battle plans survive the first engagement with the enemy, so too do few initial business plans survive their engagement with the real world.  And so this is our story.

By early 2005 we'd started PredictableIT and done all the initial things one needs to do.  We were an LLC, had a bank account, had made the first capital infusion, and had found office space in an Executive Office setup.  More importantly, while my partner remained in California during the first couple of months of operation, he soon moved to Colorado so we could spend 60+ hours a week sharing an office and getting on each other's nerves.  Just kidding about the "getting on each other's nerves" part.  We'd worked together before and knew that we could handle so much togetherness.  Most importantly, we'd convinced the two companies that were doing this on a consulting basis to transfer their custom solution to us and become our first customers, so we were up and running pretty much on day one.  We had even arranged it so that they would continue to bear the full cost of the Managed Services Provider's systems until we started offering the service to other customers.  At that point they would stop paying for the systems and start paying us based on our standard monthly per-user rate, which would greatly reduce their costs.  Not only that, but while we absorbed the overhead of supporting the current systems, any custom work required was done on a Time and Materials basis.  We weren't paying ourselves for this work, so effectively the T&M work helped fund the company.  This would turn out to be a mixed blessing, as the distraction it caused led to delays in creating our real service offering.  But for the time being we had a very modest cost structure, with our biggest expense being paying a contractor to build our website.

The offering we were creating seems pretty simple at first glance.  Users would replace their PCs with Thin Clients (or lock down their PCs to operate as thin clients) and use Terminal Services to run client software on our servers.  We would also operate a Hosted Exchange mail system for them.  But as you look at this environment it quickly becomes apparent how complex it is.  For example, Exchange did not include an anti-SPAM system at the time, and the third-party software available was both expensive and not geared to multi-tenant environments.  Moreover, even Exchange did not fully support multi-tenant environments at the time; it was able to do so primarily via hacks that were extremely fragile.  So we had to put together and manage a SpamAssassin-based anti-SPAM system in front of Exchange.  Initially we manually provisioned Exchange for each new customer, but realized immediately that this was so fragile that we needed to automate the process.  The same was true for the directory structure on our Terminal Servers.  Misconfiguration exposed one company's data to another, and so automation was required to prevent operational error!

Likewise, at the time Anti-Spyware was in its infancy and the Anti-Spyware and Anti-Virus software categories were separate.  Our Managed Service Provider included Symantec Anti-Virus software in their offering, but not any Anti-Spyware.  Even though Symantec added Anti-Spyware support in a newer version, the Managed Service Provider didn't upgrade.  Since we were the only customer who really cared about this capability they just prioritized the upgrade very low, so low that it didn't occur during the life of PredictableIT.  The result was that I had to build (and operate) our own Anti-Spyware solution using a variety of half-solutions.  Why didn't we just buy an existing Anti-Spyware solution?  Well, that's a business plan issue I'll get to in Part III.  Not that there were many choices that clearly claimed support for Windows Terminal Services!  Indeed, we spent a lot of time dealing with the fact that much third-party software didn't work in a Terminal Services environment, something else I'll cover in a future installment.  And keep in mind that back in the Windows Server 2003 days Terminal Services was an incomplete solution, with most users opting to license Citrix in order to complete it.  Even simple things like handling remote printers didn't really work in Terminal Services!

And then there was the Managed Services Provider.  I already mentioned that they didn't provide Anti-Spyware, but that is just where the weaknesses began.  Their patching policy, for example, was unacceptable for our environment.  They would take weeks after Microsoft released a patch to deploy it to their systems, and then they would apply only the most critical fixes.  Because we had lots of client users across multiple companies using our systems we needed them patched rapidly and completely.  I had to take on the testing and deployment of Microsoft (and other) patches rather than leaving it to the Managed Service Provider.
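
To give a flavor of what that provisioning automation involved, here's a minimal sketch of per-tenant directory setup.  This is an illustration in modern Python, not our actual tooling (icacls itself postdates our 2005-era environment); the root path, folder names, and security group are hypothetical.

```python
import subprocess
from pathlib import Path

TENANT_ROOT = Path(r"D:\Tenants")  # hypothetical root for per-company data

def provision_tenant(company: str, security_group: str) -> None:
    """Create a company's folder tree and lock it down to that company's AD group."""
    base = TENANT_ROOT / company
    for sub in ("Home", "Shared", "Profiles"):
        (base / sub).mkdir(parents=True, exist_ok=True)

    # Strip inherited permissions, then grant access only to administrators
    # and the company's own security group.  The point of automating this is
    # that cross-tenant exposure becomes impossible by construction rather
    # than depending on an operator getting the ACLs right every time.
    subprocess.run(
        ["icacls", str(base),
         "/inheritance:r",
         "/grant:r", "Administrators:(OI)(CI)F",
         "/grant:r", f"{security_group}:(OI)(CI)M"],
        check=True,
    )

if __name__ == "__main__":
    provision_tenant("ExampleCo", r"PREDICTABLE\ExampleCo-Users")
```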

While the basic idea of the service was quite simple, the details of making it work in a way that could easily be resold to many users, and be supportable in that environment, were actually daunting.  We inherited a Phase 0 operating environment out of the consulting project, then spent a great deal of effort creating the Phase 1 operating environment that we could use to take on and support other customers.  In fact, when I think about how I spent my time over the course of the PredictableIT experience I would say that over 50% was dedicated to creating and actually running the operating environment.  My title was President, but I now think I was more VP of IT Operations!  My partner was CEO, yet he too spent a lot of time on the operations side (e.g., he set up and maintained SpamAssassin and a few other things we ran on Linux while I handled the Windows systems).

Thinking back on other time commitments is pretty revealing overall.  Keep in mind that these things are bursty, so there were times when both my partner and I were 90% working on the business plan, or 90% working on testing, or 90% working on some marketing experiments, etc.  But on an average basis I was 50% operations, 10% T&M, 20% Program Manager, 10% Test, 5% Dev, and 5% everything else.  My partner took the brunt of the T&M work and I suspect it turned into 30% of his time.  The rest was probably 10% operations, 30% Program Manager, 10% Test, and 20% everything else.  If you see problems with these percentages then so do I.  But I'll leave that for the end of the series.

Having real customers and real customer problems taught us a lot in those first few months.  We realized that in order to take on more than a very few additional customers we'd have to automate a lot of things.  And as we thought this through we also realized that once automated we'd be able to take on vast numbers of customers.  And as we plugged this into our business model and realized the potential economies of scale we got stars, or rather Clouds, in our eyes.  In other words, the work required on our part to handle a few dozen clients was not much different from the work to handle thousands of clients.  So we changed our focus.  We were going to go big.

In Part II I'll explore the system and service we actually built, and in Part III I'll cover the business aspects themselves and offer a retrospective on why we shuttered the company.

100,000 Apps and I can’t find anything to install

Back at the beginning of April I wrote a blog post that posited Windows Phone's biggest App problem was the absence of key applications for interacting with the physical world.  So I found it really interesting that in the same week the Windows Phone Marketplace broke the 100,000 app barrier, one of my friends commented over dinner that he had "just about given up on Windows Phone".  When I asked why, he replied that the apps he really needs, like those to manage his Fidelity and Schwab accounts, are still not available.  Ah, more anecdotal proof that Microsoft needs to focus more attention on wooing this class of application developers.  To be clear, my friend loves Windows Phone, but in the end all that really matters is what you can do with it, not the beauty of the interface.

Anyone who keeps tabs on the Windows Phone space has certainly seen its momentum building ever since Mango was released last fall.  The app marketplace is growing fast.  New and more exciting devices are hitting the market and having apparent success, particularly the Nokia Lumia line, and some markets are real bright spots (e.g., Windows Phone is more popular in China than Apple's iPhone, although Android leads by a wide margin).  Soon Windows Phone 8 will be revealed, and along with whatever goodies it brings itself, the likely integration with Windows 8 and Xbox will hopefully make it the must-have mobile OS.  Of course there are dark clouds on the horizon, such as the upgrade policy for existing Windows Phone devices, the introduction of IOS 6 (which may steal some ideas from Windows Phone, such as Live Tiles), and even Microsoft's own recognition that IOS and Android are platforms they can't ignore.  But even with these dark clouds things seem to be looking up for Windows Phone.  Except for that physical world interaction app problem.

At the end of dinner my friend and I agreed that we are both going to give Windows Phone a little more time.  We want to see what happens with Windows Phone 8 and whether Microsoft can use it to bring more physical world interaction apps on board.  But if that doesn't happen, then 12-18 months from now I'll be writing about my personal experiences with some other platform.  I may love my Windows Phone, but the novelty is wearing off.  There are now over 100,000 apps in the Marketplace, but not the ones I need.

Uber Failure

I truly love the Internet and Apps and Mobile and the resulting improvements they make to my quality of life.  But I have to tell you about two recent failures that leave me a little less enamored of today’s technology.  The first involves one of my old standbys, Yelp, while the other involves newcomer Uber.  First my Yelp disappointment.

I've been using Yelp for a few years now to find restaurants, and it has proven itself a worthy if not perfect tool.  I've responded to Yelp's usefulness in finding good restaurants by adding dozens of restaurant reviews of my own.  Last year I added a review of a non-restaurant, the cleaning service I use for my Seattle-area condo.  A few days ago I glanced at Yelp's entry for this cleaning service and noted that its rating had dropped to 2 1/2 stars, which seemed rather low to me.  I read the reviews and discovered mine was not being shown.  In fact Yelp was displaying only 10 reviews (all but one of which were fairly poor).  Then I noticed a small link at the bottom labeled "filtered reviews".  I clicked on the link and discovered 44 reviews that Yelp's automated filter had decided should be hidden and removed from the business' rating.  Sure enough, mine was amongst them.  OK, now I was annoyed on two fronts.  First, as a long-time and frequent Yelp reviewer I can't understand why an automated filter would decide to remove my review.  I do understand why it would take notice of an influx of new reviewers who suddenly came in and reviewed a business, as this might represent an attempt to manipulate the business' rating.  But that didn't explain why my review, or those of other heavy Yelp users, were removed.  Yelp won't say how the filter works and offers no way to appeal the automated filter's decision.  And what would happen if you included my review and those of other heavy Yelpers?  This business's rating would go up dramatically.  In other words, the filter is removing good reviews and keeping bad reviews (although Yelp claims that should not be the case).  Since I've never seen this problem with restaurants I'm going to chalk it up to immaturity in Yelp's attempt to expand beyond restaurant reviews.  Personally, this situation means I'm not going to write any further non-restaurant reviews on Yelp, nor will I use Yelp to find non-restaurant businesses.
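
To see why the filter's choices matter so much to a small business, consider a quick back-of-the-envelope calculation.  The star values below are hypothetical stand-ins (I didn't tally the actual filtered reviews), but the shape matches what I saw: a few poor visible reviews and a much larger pile of filtered, mostly positive ones.

```python
def average_stars(ratings):
    """Plain average, as a stand-in for however Yelp computes its stars."""
    return sum(ratings) / len(ratings)

# Hypothetical numbers matching the shape of what I saw:
shown = [2] * 9 + [5]                       # 10 visible reviews, all but one poor
filtered = [5] * 30 + [4] * 10 + [2] * 4    # 44 hidden reviews, mostly positive

print(f"Rating as displayed:     {average_stars(shown):.1f} stars")
print(f"Rating with all reviews: {average_stars(shown + filtered):.1f} stars")
# Rating as displayed:     2.3 stars
# Rating with all reviews: 4.1 stars
```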

The second disappointment involves my attempt to use one of the hot new services out there, Uber.  Uber is a service that lets you use your cell phone (or an app on some smartphones) to request that a car service (e.g., a Lincoln Town Car) pick you up and take you to your destination.  So something nicer than a typical taxi service.  Uber itself provides the request scheduling and billing services and contracts with various limo companies for the actual car service.  A couple of weeks ago I was in New York City, one of the cities where Uber is active, and thought I'd give it a try.  New York isn't a city where you "call a taxi"; generally you have to flag one down.  This should give Uber an advantage.  So I ran Uber's app on my iPad and looked at requesting a car: none were available.  I waited ten minutes and tried again…nope, none available.  In fact I kept trying over the course of two hours and cars were never available.  Finally, when it was really time to go to the airport (and Uber still showed no cars available), I asked the hotel doorman to get us a taxi.  He walked over to Avenue of the Americas and had one pulling in front of the hotel in 3 minutes.  Uber failed miserably.  The problem here is that unless you can rely on a service like Uber you will simply stop using it.  The next time I'm in an Uber service area I don't know if I'll try it again.  I might just go with tried and true, if not modern, solutions.

What makes you think they know anything?

I’m constantly fascinated by reports in which someone with an important-sounding title at a company makes a statement that sets the blogosphere and twitterverse ablaze.  For example, a country-level marketing manager or an evangelist.  Obviously it must be true because the Estonian Marketing Manager said it, right?  No offense to Estonia, but for a large U.S.-based multi-national the General Manager of the Estonian subsidiary is corporate middle management at best, and the Marketing Manager working for him/her may not even be that.  The same holds for “evangelists”.

In a modern large multi-national the corporate business units tightly control the information flowing out to their field as well as to other internal businesses and support organizations.  Everything is on a need-to-know basis, and even senior executives without a clear need to know may have the information flow to them restricted.  Organizations will put different details (including false information) into internal disclosures so that they can trace leaks back to the source.  The closer a product gets to going public the more people have a need to know, and so the information coming from field people should get more accurate.  But still, the trend is to keep details as close to the vest as possible for as long as possible.

Thinking about Microsoft specifically, I cast a wary eye on futures statements by anyone who isn't at least a General Manager (or Distinguished Engineer, Technical Fellow, or a Director reporting directly to a CVP) in a Product Group.  Others, such as the Program Manager who owns the feature area being discussed, are also reliable sources.  But if a field person, even one with a fancy title, speaks out of turn I wonder if they have any clue what they are talking about.  They may be saying something they heard through the grapevine, or that is out of date, rather than truly being in the know.

And when partners speak about each other, that is even more suspect.  Again, what are the odds that the Estonian Marketing Manager of Company A can accurately say what Company B is doing when they may not even have accurate information about Company A's plans?  They are low.  If Nokia CEO Stephen Elop makes some comment about Microsoft's Windows Phone 8 I would tend to believe him, but when random other Nokia employees (again, even with important-sounding titles) speak out of turn about Windows Phone 8 I take what they say with a large grain of salt.

You may wonder what prompted this blog entry: it was the conflicting information coming out of Nokia and the BBC about a version of iPlayer for Windows Phone.  Someone from Nokia apparently spoke out of turn, leading the BBC to deny everything.  The BBC will tell us what they want us to know when they are ready for us to know.

Finally, let me apologize to Estonia for using it in my example; I needed someplace tiny yet big enough that a multi-national would actually have a local presence (unlike Monaco, for example).  The capital, Tallinn, was one of my favorite stops on a Baltic cruise last year and I'd definitely recommend a visit.

How will Microsoft keep existing Windows Phones "Fresh"?

When queried about Windows Phone 8 upgrades for existing handsets, both Microsoft and Nokia officials have deflected the question with comments about supporting existing customers and keeping their experience "fresh".  The most direct interpretation of these statements is that Windows Phone 8 is indeed coming to existing handsets, but that contradicts what people with good sources are hearing.  So what is the truth?  Well, we won't know until (at least) next month.  But I've been mulling a different reading of the tea leaves, and I'm not the only one.  Makram Daou mentions a similar option to the one I'm about to describe.  And a commenter on an earlier entry in my blog suggested it as well.  What if there are two editions of Windows Phone 8 (WP8), with two different kernels?

I believe that the development methodology for WP8 included keeping it running on both the CE and NT-based kernels in case delays in Windows 8 itself forced the Windows Phone team to (temporarily) abandon the NT-based platform in order to meet their 2012 deliverable commitments.  And my assumption had been that, both to reduce internal testing requirements and to avoid platform fragmentation, Microsoft would stop work on the CE-based platform once the NT-based platform was green-lighted for shipment.  But what if I'm wrong?  What if Microsoft concluded that the cost of shipping both CE-based and NT-based variants was low enough to justify the benefits?  Then Microsoft could ship a Windows Phone 8 "Standard" based on CE and a Windows Phone 8 "Pro" based on NT.  So let's explore what this world would look like: the benefits of this approach, and the (significant) downsides and their mitigations.

Microsoft is getting a number of benefits by going with the NT kernel, both in terms of new capabilities for Windows Phone and in terms of compatibility with Windows 8.  Take multi-core support as an example of a capability that comes with NT.  This is important going forward, but is completely meaningless for existing phones (and for the low-end phones that have become a big strategic focus of late).  In fact, many of the new capabilities that come with the NT kernel are of no or questionable value for existing devices.  Can you support Windows 8's Secure Boot path on existing WP devices?  Can you support Windows 8's drive encryption on existing devices?  In both cases the answer may be "not exactly", meaning that Windows Phone would need unique implementations rather than taking advantage of the work Windows 8 has done to support ARM.  How about Windows 8 application compatibility?  One of the major expectations is that Windows Phone 8 will bring in Windows 8's Metro application model, in addition to retaining the Silverlight and XNA application models.  What if Metro doesn't support the existing 480×800 screen resolution?  That brings a form of fragmentation that is unrelated to the kernel switch itself.  You can write a Silverlight Windows Phone application that runs on all devices, or a Metro app that runs only on newer devices (and perhaps can support both phones and Windows 8 tablets rather easily).  The bottom line here is that there are only a small number of features tied to the NT kernel that existing customers would care about.  And the main concern is really not about the kernel itself, but about app model fragmentation.

Now let's briefly talk about the business.  Microsoft originally was focused on the high end of the Smartphone market, and specifically on Consumers.  Requirements for Enterprise users were deferred.  Nokia apparently convinced Microsoft that going after the low end was critical to capturing the vast untapped markets in places like China (where Windows Phone reportedly very quickly surpassed the iPhone in sales).  Microsoft tweaked Windows Phone 7.5 ("Tango") to better support low-end devices.  There have been questions about whether the NT kernel would make support for low-end devices more difficult.  I think the answer is maybe, but even if the NT kernel isn't an issue there are business questions at play here.  The low-end devices will eschew hardware configurations that the NT-based Windows Phone 8 is required to support (e.g., multi-core).  The low-end devices can't take advantage of the better multi-tasking that might come along with the NT-based WP8.  The low-end devices are too cost-sensitive for the higher-resolution screens that Metro may require.  The low-end devices are very consumer oriented; they aren't going to be sold to people who need features like VPN and full-drive encryption.  And the manufacturers of low-end devices want Windows Phone to be even cheaper than it is today.  From a business standpoint this screams out for two editions of Windows Phone!

Microsoft could offer OEMs a Standard Edition, which is Windows Phone 8 with the CE kernel, and a Pro Edition with the NT kernel.  Features needed for high-end hardware and enterprise use would be in the Pro Edition.  The CE-based edition would be priced at about what Microsoft is charging OEMs as patent royalties for their Android devices (plus the cost of codecs that Microsoft includes but that Android requires the OEM to license separately), so that there is no cost difference to the OEM between using Android and Windows Phone.  The Pro Edition would be priced higher, but still low enough that the OEM can compete with both the iPhone and high-end Android devices.  Standard would likely have other restrictions, perhaps not supporting more than 512MB of RAM and 16GB of Flash, so that OEMs wouldn't be tempted to use it on higher-end devices.  And I picked those two numbers for a specific reason: Standard Edition would also be the upgrade path for existing Windows Phone 7.5 devices, which largely match those specs.

The biggest problem with this strategy is fragmentation, but I think fragmentation is unavoidable (as technology marches on).  Even Apple has fragmented the IOS world, and that fragmentation is likely to grow over time.  I wouldn't be surprised if IOS 6 drops support for the iPhone 3GS, for example, as they move away from non-Retina Display support.  The key for Microsoft is not to avoid fragmentation completely, it is to control fragmentation.  And the biggest issue they appear to be facing is the possible fragmentation that comes with adding a Metro app model.

There appears to be quite a chasm in the smartphone world.  There are users who install dozens of apps on their smartphone and there are users who install 0-9 (and who probably are concentrated in the 0-3 range).  The first group cares a lot about the number of apps available, and moreover about which specific apps.  The latter group does not.  My own anecdotal evidence is that the latter group is growing much faster than the former.  And guess what: the lower-end the phone, the more likely its owner is to be in the 0-9 apps group.  This makes me think that the application fragmentation problem that Metro could introduce might not be that big of an issue.  Yes, developers who want to specifically target the low-end devices would have to stick with the existing Silverlight app model, but on the other hand the "long tail" would not necessarily want to focus on a user base that doesn't buy or install apps.

Something else to consider: under the scenario I describe here, Microsoft would then shift completely to NT in Windows Phone 9.  Wouldn't it still have the upgrade problem?  Yes and no!  Technically yes, however the evidence from the Android world suggests that low-end Windows Phone users really wouldn't care about missing the upgrade.  And Generation 1/2/2.5 devices will be obsolete; their owners, having seen the handwriting on the wall, will either have moved to a Generation 3+ device or be ready to do so.  Further, by that point Windows Phone is likely to be well established, unlike today's environment where it is quite fragile.  So it just might be that introducing version fragmentation at that point is a tolerable thing to do.  Particularly since it would actually eliminate long-term fragmentation!

I know this is all speculation, but look at how it takes all the conflicting rumors out there and makes sense out of them.  And how it would seem to meet Microsoft’s business need to compete with an almost free Android at the low-end while having a sensible business model overall.  Yes having multiple editions sucks, but it is a controlled form of fragmentation (vs the uncontrolled form in the Android world).

Honestly I’ll be surprised if the scenario I just described turns out to be the real one.  But to me the Windows Phone business is facing a Kobayashi Maru type of problem.  This scenario is one that could address it.

Slow Industry Response to Malware Reports is Killing Us

One of the biggest threats to the world of computing is how slow vendors are to respond to malware threats.  For example, it took Apple 49 days after Oracle fixed a vulnerability in Java to make that fix available to OS X users.  That delay allowed hundreds of thousands of Macs to be infected with the Flashback malware.  Over the last few days I had the opportunity to test how quickly the industry as a whole responds to a report of a website distributing malware.  The results aren't pretty.

Last Friday I discovered a website that tried to install Fake AV software (aka Scareware).  How?  I'm not really sure!  I was hitting "next" through a slide show and suddenly I was redirected to the Scareware site.  Obviously the original site had been compromised.  I took the URL for the Fake AV site and submitted it to virustotal.com (a service that checks a couple of dozen anti-malware services) to see what the malware industry broadly thought of the website, and discovered that no one had yet flagged the site as harmful.  So I went around reporting the URL as malware to everyone I could think of.  A couple of hours later Google Safebrowsing had flagged the URL as containing malware.  That was a great response.  Sadly, it was the only decent response in the entire industry.  Two days later no one else had flagged the site for malware.  Early on the third day Websense ThreatSeeker flagged the URL for malware.  The bad URL is no longer responding, so after almost three full days the domain hosting the scareware has been taken down.  But the scareware distribution site had almost three days from when I reported it, which may or may not have been the first report, to spread its malicious payload.  Thousands of machines may have been infected, a situation that was preventable.  I am not happy with the industry's response, and you shouldn't be either.
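
If you want to repeat the experiment, the virustotal.com check is easy to script.  Here's a minimal sketch against what I understand to be VirusTotal's public v2 URL-report API (verify against their current documentation before relying on it; the API key and URL are placeholders):

```python
import requests  # third-party: pip install requests

API_KEY = "YOUR-VIRUSTOTAL-API-KEY"  # placeholder; register for a free key
REPORT_ENDPOINT = "https://www.virustotal.com/vtapi/v2/url/report"

def check_url(url: str) -> None:
    """Ask VirusTotal how many of its engines currently flag a URL."""
    resp = requests.get(REPORT_ENDPOINT,
                        params={"apikey": API_KEY, "resource": url})
    resp.raise_for_status()
    report = resp.json()
    if report.get("response_code") != 1:
        print(f"{url}: no report on file; nobody has flagged it yet")
    else:
        print(f"{url}: flagged by {report['positives']} "
              f"of {report['total']} engines")

check_url("http://scareware.example/")  # hypothetical URL
```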

It doesn't appear that Microsoft updated its SmartScreen to block the offending website within the three days the site was online.  My report to OpenDNS went nowhere.  Web of Trust (WOT)?  Nowhere.  Malware Patrol?  I got email promising me the site would be checked within an hour and that I'd receive a followup email.  After three days I still haven't received the followup email, and querying their site today doesn't show any results.  Yandex Safebrowsing?  It still shows the URL as not having a problem.  MalwareDomainList?  No activity on my report.  URLQuery?  It actually just hung analyzing the site.  Scumware.org?  It is a little ambiguous, because virustotal tells me that it reports the URL as safe, but direct use of the scumware.org website says it is not safe.  So I guess they deserve a passing grade.

It is one thing for a malware distribution site to continue to dispense its malicious content in the hours, or even days, before it comes to the attention of malware fighters.  But once a site is reported for malware distribution the protection systems need to move to block it within minutes or hours.  Malware Patrol's promise of a one-hour response was music to my ears; unfortunately there was no follow-through.  Google's response was excellent, but I've had other incidents where they were slow to respond so I'm hesitant to praise them too strongly (just yet).

I do understand the challenge here, but that doesn't excuse the industry's tardy response.  The designers of malware distribution sites have been able to trick automated tools into thinking they aren't infected (and URLQuery's hang is probably an example of this).  Commercial malware research teams can only afford so many people to analyze potential malware distributors and must prioritize by the number of reports received, meaning a site may not get attention until it is impacting lots of people.  Community-based systems rely on large numbers of people voting, or on at least one of their designated superusers deciding to investigate and designate a site as a bad actor.  It looks to me like most have a large backlog of sites waiting to be investigated.  These factors mean a malicious website stays online way too long.

Are there solutions to this problem?  Sure, but no easy ones.  The first step towards a solution would be for the major players to set organizational goals that all reports of malware distribution URLs will receive an initial investigation within one hour of being reported.  In some cases, as with drive-by downloads, determining that a site is distributing malware takes some work.  But in the case of the URL I reported this was a 30-second exercise.  Click on the URL, you get the Scareware webpage, press a button to "Add this URL to our Block List".  Overall, setting an aggressive goal may cause staffing increases, but more importantly it will force development of better tools for automatic evaluation.

Another tactic I would employ would be the creation of a Grey List that causes browsers to block automatic redirects while a URL is being investigated.  When a URL was reported for malware distribution, an automated system would check to see if the domain was on a White List (e.g., well-known sites) and, if not, immediately add it to the Grey List until a full investigation could be done.  The White List would prevent malicious users from using bogus reports to disrupt legitimate websites.  Other techniques could automate this further (e.g., domains that have been around for years would be immune from being added to the Grey List, while recently created domains would go on the Grey List immediately on any malware distribution report).
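
Here's a minimal sketch of that triage logic.  The lists and the one-year cutoff are hypothetical placeholders; a real system would pull domain age from WHOIS registration data:

```python
from datetime import timedelta

WHITE_LIST = {"google.com", "microsoft.com"}  # well-known domains, illustrative
GREY_LIST = set()  # browsers would block automatic redirects to these domains

def triage_report(domain: str, domain_age: timedelta) -> str:
    """Triage one malware-distribution report per the scheme described above."""
    if domain in WHITE_LIST:
        # Established sites are immune from auto-blocking, so bogus reports
        # can't be used to disrupt legitimate websites.
        return "queued for manual investigation only"
    if domain_age < timedelta(days=365):
        # Recently created domains go on the Grey List on the very first
        # report, pending a full investigation.
        GREY_LIST.add(domain)
        return "grey-listed immediately"
    # Long-established domains are also immune from the Grey List, but the
    # report still needs a (fast!) human look.
    return "queued for investigation within the one-hour goal"

print(triage_report("freshly-registered.example", timedelta(days=12)))
```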

The industry must respond quickly and aggressively to reports of malware distribution.  The time is now.  Users, demand that whoever you rely on for protection commit to a one-hour turnaround on all malware or malware distribution reports.  Vendors, make your responsiveness commitment to malware reports a competitive weapon!

Samsung and AT&T Send Messages

Today Samsung and AT&T announced that the Samsung Focus 2 would be coming to AT&T in a couple of weeks.  While the phone itself is interesting, what is really important here are the messages that both AT&T and Samsung are trying to send.  First the phone, then the messages.

I carried an original Focus around as my primary phone for nearly 18 months.  It was a great phone.  Great screen.  Fit in my pocket amazingly well for something with a 4″ screen.  Took pictures that blew away my wife's iPhone 3GS, to the extent that she would borrow my phone to snap a shot rather than use her own.  The Focus became the most popular Windows Phone 7 device sold in the U.S. despite only being available on a single carrier.  Now AT&T and Samsung are resurrecting the name, and some of the specs, to create a low-end Generation 2.5 device called the Focus 2.  It is priced at $49, and for a limited time AT&T is offering it for $39.  I'm sure Amazon, Wal-Mart, and many others will offer it for free.  The only reason I gave up my Focus was because I wanted a front-facing camera, which the Focus 2 has.  It also supports LTE, so no compromise on network speed, and has a 4″ Super AMOLED screen, which is more high-end than one would expect on a device at this price point.  So, without ever seeing one, it seems like the Focus 2 is going to be a great option for those entering the smartphone market and others who care more about getting a good device at a low price than about having high-end specs.  Actually, with the exception of limited storage the Focus 2 has mid-range specs.  Those who aren't planning to keep much in the way of music or videos on their phone might even happily choose the Focus 2 over the Lumia 900.  Enough about the phone.

The real surprise here is that Samsung is delivering a Generation 2.5 device at all!  I think most observers had expected that Samsung, like many other OEMs, had decided to wait for Windows Phone 8 before introducing new Windows Phone devices.  Some had even worried that Samsung might just cede the Windows Phone market to Nokia and focus exclusively on Android.  So for me this announcement is really Samsung’s way of confirming that not only are they staying in the Windows Phone business, but that they aren’t going to let Nokia win the Windows Phone mind share battle.  I’m sure we’ll see comparisons between the Lumia 710 (Nokia’s most comparable current device) and Focus 2 soon, and the Focus 2 is very likely to win all of them.  Samsung wants us to know that when Windows Phone 8 hits this fall they will be there with a lineup that can beat Nokia.

AT&T is also doing this as much for the message as for the sales.  No doubt the Lumia 710 is losing its exclusivity at T-Mobile and AT&T could pick up a variant, but that wouldn’t be a device they could trumpet.  By complementing the higher-end Nokia Lumia 900 with the lower-end Samsung Focus 2 they get to send a number of messages.  First, they want to gain as much mind share around Windows Phone as possible before Verizon re-enters this space with Windows Phone 8 devices this fall.  Just as their association with the iPhone lingers despite it now being available on Verizon and Sprint, they want a residual association with Windows Phone as it (presumably) climbs to success.  Second, AT&T wants to keep Samsung in the Windows Phone business so they have at least three (Nokia, HTC, Samsung) good sources for devices.  They also don’t, for example, want to become primarily the channel for Nokia while Samsung aligns with Verizon.  Third, they want to offer a distinctive premium device in each segment (low, medium, high) of the market.  So it appears AT&T is really serious about Windows Phone, as opposed to the lip service I sometimes worried they were giving it.

The bottom line here is that the Focus 2 is really good for Microsoft’s Windows Phone prospects.  It keeps the new-found momentum going, and confirms that two partners who are important to Windows Phone’s eventual success are still on board and committed to the platform.  It should also raise the excitement level about Windows Phone 8 another notch as the number of new or re-established commitments to the platform is expected to grow dramatically.

Media Center, DVD Playback, and Microsoft’s Media Strategy

There is quite a bit of noise in the system after Microsoft's announcement this week that Windows Media Center would be a separate add-on to Windows 8, and even more noise regarding the decision to remove DVD playback from Windows Media Player.  I've talked a bit about this topic before, but now seems like a good time to address it directly.  Microsoft created Windows Media Center in an attempt to move Windows into the living room as an entertainment console.  The effort failed.  There are all kinds of reasons for that, my own perspective being that their refusal to target the custom installer market kept WMC from gaining the support of this very influential community.  But let us not dwell on that.

WMC basically brought three things to the party.  First was an "8 Foot" UI, necessary to make a PC usable by someone sitting on the living room couch with nothing but a TV-style remote control.  Second was DVR functionality taken from Microsoft's brief offering of a TIVO competitor.  Third were the Codecs for consuming media types not part of Windows itself.  Microsoft has to pay per-unit licensing fees for these Codecs, which is why they weren't initially in Windows itself.  Adding them would require Microsoft to raise the price of Windows, and OEM pressure to keep the entry-level pricing of Windows low forced Microsoft to offer Windows Media Center (WMC), and these Codecs, as a separate, more expensive addition.

Initially Microsoft handled this by introducing a separate Windows XP Media Center Edition (MCE), which it assumed would only be purchased to turn PCs into home entertainment consoles.  When MCE failed to catch on Microsoft started pursuing different strategies for offering WMC.  Starting with Vista, and evolving some in Windows 7, Microsoft eliminated MCE and created more editions of Windows!  They split standard Windows into two editions, Home Basic and Home Premium, and put WMC into Home Premium.  Home Basic was, as far as I can tell, the cost-constant version of Windows, while Home Premium took the place of MCE and cost more than Basic.  To keep OEMs from revolting Microsoft reportedly offered them a rebate of some of the cost difference between the editions in the form of co-marketing dollars.  So most Vista and Windows 7 consumer PCs, at least in the developed world, ship with WMC even though few users actually make any use of it.  And yes, everyone pays for WMC because they are buying the higher-priced Home Premium edition rather than the Home Basic edition!

With Windows 7 Microsoft went one step further and added the Codec necessary to play back DVDs (for which Microsoft also has to pay per-unit licensing fees) to Windows Media Player, again only in Home Premium (and above).  Home Basic does not include DVD playback capability!  And no edition of earlier versions of Windows (such as the ever-popular Windows XP) included DVD playback capability; Windows XP systems used third-party software, often included by the OEM, for DVD playback.  So beginning with Windows 7 users were paying for the Codec through their purchase of Home Premium.

Around the time Windows 7 shipped, the media business was going through major changes in direction.  First, the HD-DVD standard that Microsoft was backing as a replacement for DVDs lost out to the Blu-Ray standard.  Microsoft had included HD-DVD support in Windows Media Center but not support for Blu-Ray.  Meanwhile it became apparent that the use of DVDs and Blu-Ray (which never fully escaped niche status) for video distribution was peaking, as user habits started to migrate to streaming services such as Netflix and network download services such as iTunes and Zune.  Not only that, Microsoft's ongoing difficulties gaining support from carriers for WMC (e.g., DirecTV repeatedly claimed to be working on WMC support but it never actually shipped) were hitting home.  Some years ago Microsoft made clear that its focus was shifting to Internet TV over traditional cable/broadcast, and to Internet delivery of content over the use of physical media (i.e., DVDs).  With the cross-over in media consumption from DVDs to the Internet occurring in 2012 (and the extremely limited use of WMC for tuning into broadcast television), Microsoft was going to have to take into account the cost/benefit of including the DVD and other Codecs in Windows as it considered the Windows 8 edition structure and pricing.

Meanwhile, on the other side of the aisle, Microsoft is facing pricing pressure from devices such as the iPad and Android tablets.  OEMs, whose margins have shrunk almost to "why bother being in this business" levels, have long been asking Microsoft to lower prices.  Not only that, the vast number of Windows editions just caused customer confusion, and I think there were pressures from many quarters to simplify them.  The easiest way to clean up the Windows edition structure was to remove the factor that caused it to become so complex in the first place: the added expense of the Codecs associated with Windows Media Center.  With these Codecs removed it becomes possible to merge Home Basic and Home Premium back into a single (standard) Windows 8.  While we haven't seen any pricing yet, it is most likely the case that OEMs will be paying lower prices for the standard Windows 8 than they did for Windows 7 Home Premium.  It is a lot less clear how Microsoft will handle retail pricing, but I do think they need to respond to the pressure from Apple's low upgrade pricing for the last few versions of OS X.

Microsoft could have moved the Codecs into Windows 8 Pro, but that causes pricing pressures on other parts of the customer base (e.g., businesses).  There are a number of reasons for buying Windows 8 Pro, and legacy media consumption is not amongst the most compelling ones.  Leaving the Codecs (and thus WMC) out of Windows 8 Pro allows Microsoft to price it so they will achieve a mix that maintains Windows revenue and profits (although Windows RT is likely to lead to the loss of a few points of margin in exchange for much higher volumes).  I know some people don't really care about Microsoft's financial health, but if they don't stay healthy then there just won't be a Microsoft to kick around.  Simply put, with the number of WMC users fairly small, and the use of PCs for watching legacy media on the decline, the best option for keeping prices down for the majority of customers was to move the Codecs (and the WMC software that uses them) to a separately purchasable package.  And that's what Microsoft has announced, while also making a point that the price should just be enough to cover the Codec licensing fees.  I rather like that option because I believe it will lower the price of Windows for the vast majority of users.  What I don't like is that the Media package will only be available on Windows 8 Pro.  Users who are on standard Windows 8, and require nothing else in Windows 8 Pro, will be required to purchase a Windows 8 Pro + Media upgrade.  And that seems like a hard decision for Microsoft to justify.

Now let's talk about Microsoft's Media Strategy, and then I'll come back to Windows 8 scenarios.  As far as I'm concerned the strategy has always been a mess, with media playback spread over three highly duplicative efforts.  The basic media playback capability in Windows is via Windows Media Player, which is used by just about all PC users since most web audio and video content is targeted at it.  Windows Media Center was all about the "8 Foot" experience.  And both of them suffered from the traditional Microsoft "build it and they will come" approach.  Acquiring media, for example, is very haphazard in both of them.  Microsoft then built the Zune client and service as an integrated, easy way to obtain and play media.  At the same time the Xbox replaced WMC in the "8 Foot Experience" role in Microsoft's strategy.  Some WMC functionality, such as photographic slide shows, has already been subsumed into other products.  Zune, or an upgraded and likely renamed replacement service, along with support for HTML5 Video, are now the core of Microsoft's media strategy.  New Metro apps such as those for Vimeo, Slacker, Netflix, etc. are the supporting players.  WMP and WMC are legacy offerings and their use is already in decline.  During the life-cycle of Windows 8 their use is likely to "fall off the cliff" as the web moves to less Windows-centric data types and the use of legacy media continues its decline.  Microsoft will focus on the Internet and all its wonders.

Now let's get back to Windows 8 and some scenarios that are likely to play out.  The first thing to consider is that very few people, outside real techie power-user types, upgrade existing PCs to new versions of the operating system.  For the last decade most new OS shipments have come with new hardware purchases.  That's simply a fact.  So most users' experience will be quite different from what Microsoft's actions suggest.  For example, OEMs will make sure that any new system with a DVD drive (or any standalone DVD drive) comes with software that enables DVD playback.  Yes, it will most likely be third-party software, but that just takes us back to pre-Windows 7 days.  And given the continuing love for Windows XP (which always uses third-party software for DVD playback) that scenario seems adequate, though not ideal.  The third-party software is usually clunky because it is a "lite" version of a heavyweight product the third party wants to sell you.  But hey, with a shrinking number of systems including DVD drives, and fewer users watching DVDs even when they have one, this will impact only a modest number of users.  Of course on systems that have Windows 8 Pro the OEM could just include the Media pack when a DVD drive is in the configuration, though they may choose a third-party package even in this case.  The bottom line is that no user buying a new Windows 8 system with a DVD drive is going to be unable to watch DVDs.

But the vast majority of new PCs sold are not going to have DVD drives!  Take what are likely to be the highest-volume PC form factors over the next few years, Ultrabooks and Tablets.  They certainly won't have them.  And just like floppies before them, DVD drives will become options on other form factors such as desktops and slowly fade from sight.  The Internet and USB Flash Drives will have replaced them for almost all usage scenarios.

What of those who do want to upgrade an existing PC from Windows 7 to Windows 8 and just want to retain functionality they already have?  You can purchase Windows 8 Pro + Media.  That’s as close as you are going to get.  Will this really be an attractive upgrade?  Well until we know what Microsoft’s upgrade pricing is going to be I really don’t know.  But I assume in most cases users will choose to stick with Windows 7 for this scenario, and as I’ve said in other posts Microsoft will be just fine with that.

Finally, what about those who really want a PC-based home entertainment center?  Well, Microsoft is giving you that option with Windows 8 Pro and the Media add-on.  Just be clear that it is a legacy option and has no strategic future.  Yes, some enthusiasts will want to continue to use a PC as their home entertainment center long into the future.  For them, a third-party offering like XBMC seems like a better long-term choice.  For the majority, though, the Xbox 360 (and future family members) will become the home entertainment center that WMC longed to be.  And it will integrate with your PCs, Zune (or its replacement), and third-party media services to create an overall entertainment environment that will make us all forget about the failed Windows Media Center.  At least that's what Microsoft hopes.

Looking for the Microsoft Nook

Microsoft has always had a thing for electronic books, but turning that into a successful venture?  That's a whole other story.  Microsoft entered the electronic book reader business 12 years ago with the release of Microsoft Reader 1.0 as part of its Pocket PC 2000 software.  I purchased and read a number of books on my Compaq iPaq using Microsoft Reader.  Although not exactly a reading-optimized device, the iPaq's very good (for its era) 3.8″ screen (bigger than the iPhone's!) along with Microsoft's ClearType font technology made it a decent early e-reader.  Of course the Pocket PC operating system and the iPaq were targeted at the PDA market, and particularly at users of Microsoft Office, so the iPaq was never really sold as an e-reader.  The Microsoft Reader software came to Windows a few months later, but was never a significant venture, and Microsoft finally announced last August that it was being discontinued as of August 2012.  On the whole one can view this as another case of Microsoft being a market innovator, but entering too early and failing to follow through when its original strategy didn't pan out.

So what were the problems with Microsoft's e-book strategy?  Well, the first is that it attempted to follow Microsoft's usual game plan: build the software platform and let others do the rest.  E-books need four things: (a) e-reader software, (b) DRM/Sales/Distribution backend software, (c) e-reader hardware, and (d) bookstores.  Microsoft built the first two.  It sold the backend software (and DRM service, which it continues to operate to this day) to major book distributors and a number of electronic bookstores.  In fact they even formed a strategic partnership with Amazon in which the latter would adopt Microsoft Reader for its own e-book efforts (and Amazon did for a time sell books for Microsoft Reader).  But as the promise of e-books failed to materialize Microsoft's enthusiasm waned, many players exited e-books, and Amazon went its own way.  So why did e-books fail to take off in the early part of the 21st century?  I can think of two reasons.  First, the lack of a real e-book reading device meant reading was always a secondary activity being performed on systems that weren't appropriate for it.  Microsoft was likely counting on its Tablet PC efforts to produce better systems for reading, but the Tablet PC never gained much traction either.  Second, the number of books that were available in digital format was extremely small.  You couldn't actually make Microsoft Reader your primary way of reading books; it was more of a novelty.

In 2006 Sony launched the modern e-reader era with the Sony Reader.  Microsoft employees probably represented a significant part of Sony's early sales, and interest within Microsoft in taking another stab at e-books started to rise.  But Sony sold almost no readers in its first year on the market, leading to questions about consumer interest in e-books.  In late 2007 Amazon introduced the more innovative Kindle, along with a push to accelerate the book industry's move to digital content.  The Kindle was an instant success, though in absolute numbers the e-reader market remained small through 2008.  It looked like e-book readers would be a significant but very specialized niche.  The Kindle definitely got e-book fans at Microsoft excited (and again many Microsoft employees were amongst that first batch of Amazon customers).  More than that I can't say, but you can find hints of various activities in this space if you look at the blogs of major "Microsoft Watchers" like Mary Jo Foley.

Now let's talk about what seems to have happened and why Microsoft likely decided to invest in Barnes & Noble's Nook.  At some point Microsoft stopped various one-off tablet efforts like Courier and focused its tablet strategy around Windows 8.  Amazon was clearly dominating the e-book space with Kindle, and part of its strategy was to put the Kindle Reader software on all popular platforms.  Windows, for example, already had a Kindle Reader client, and Microsoft could expect Amazon to write one for Windows 8's Metro environment as well.  Moreover, Apple had entered the e-reader business and was pressuring Amazon with its business policies (e.g., you can't purchase books from inside the Kindle Reader app on the iPad).  So perhaps Microsoft had dreams of Amazon focusing extra effort on Windows 8 as a means of countering Apple.  And not only could Microsoft count on Windows 8 tablets being good Kindle Reader devices; as a generalized platform Windows 8 would attract other e-reader players like Barnes & Noble's Nook.  Microsoft appeared to take a "Build it (Windows 8) and they (e-readers) will come" approach to the e-book market.  Amazon had other plans.

Oh, I’m sure Amazon will do a good job on a Windows 8 Metro client.  They actually provided one for the Consumer Preview.  But Amazon decided the best way to counter the iPad was to produce its own media-consumption oriented tablet, the Kindle Fire.  Whether Microsoft had explicit evidence that Amazon wouldn’t do anything special for Windows 8 (e.g., unique features, co-marketing, etc.) or the Kindle Fire simply made it clear that Amazon was going to put its best efforts into its own tablet almost doesn’t matter.  The success of the iPad and the blazing launch of the Fire, both of which have great media consumption stories, along with the tepid response to Android tablets and their weaker media consumption stories, must have made it quite clear to Microsoft that Windows 8 needed more than a “build it and they will come” approach.  Amazon was now a competitor, and the most striking evidence of this was the removal of the Kindle kiosks from Microsoft’s retail stores shortly after the launch of the Fire.

Rumors of a major revamp or replacement of Microsoft's Zune music and video service have been around for a while, and it is safe to assume Microsoft is working hard on a service that is a great alternative to iTunes, Amazon, etc. for Windows 8, Windows Phone 8, and the Xbox 360.  The missing part of its media story was e-books.  If "build it and they will come" was necessary, but not sufficient, then Microsoft finally needed to get back into the e-book game.  I don't know what options they considered, but in the end they settled on the one that holds their best shot at success.

Microsoft and Barnes and Noble had been partners for a long time prior to the start of the intellectual property spat around the Nook.  BarnesandNoble.com (originally a separate BN.COM subsidiary) was built on Microsoft technologies and was one of Microsoft's biggest success stories during the .com bubble.  BN.COM was also one of the top 5 companies that influenced the development of SQL Server 7.0, and it put SQL Server 7.0 into full production during the beta.  While Microsoft needed a premier e-book solution for Windows 8, Barnes and Noble was planning to split off the Nook operation (to enhance shareholder value) and had numerous challenges of its own.

One of the challenges with Nook was Microsoft's patent lawsuit.  Barnes and Noble seemed to adopt a novel defense, basically arguing that Microsoft shouldn't be allowed to enforce its patent rights because Microsoft was a bully.  I am sure that Barnes and Noble's lawyers were telling them it was a long-shot argument and that at best they'd get some negotiating leverage.  But meanwhile it was a management distraction and I'm sure was sucking up Barnes and Noble's cash.  And it was the least of their worries.  While the Nook business was healthy, the parent company's bookstores were losing ground to digital media.  Nook was essentially a U.S. business, and Barnes and Noble needed resources and help to take it global.  And the U.S. Justice Department price-fixing charges against U.S. publishers (and Apple) would have the perverse side-effect of strengthening Amazon's competitive position.  The Nook business needed a strategic partner.  Microsoft's and Barnes and Noble's interests were suddenly aligned.

Microsoft committed $600 million to a digital media joint venture with Barnes and Noble.  There is the widely reported $300 million to buy into the joint venture, and the less widely reported commitment of $300 million in payments over five years for content.  In return Microsoft gets a share of the revenue from book sales as well as a premier e-reader client for Windows 8.  It might also be getting a Windows 8 (more precisely, likely Windows RT) based Nook, perhaps to be sold under Microsoft's name but more likely to be sold as a member of the Nook family.  Barnes and Noble got cash, a very significant partner (especially for entering global markets), and the end of the uncertainty created by Microsoft's patent lawsuit.

Will Microsoft be able to turn the Nook partnership into a significant winning story for Windows 8?  Will Barnes and Noble find the partnership a significant boost to its ability to compete with Amazon?  Of course the jury is still out on these questions, but in my opinion it was the best shot for both of them.

Windows Phone 7.5 will top out at 100,000 apps

Two phone ecosystems, two very different phone experiences.  That’s what the current Smartphone market really looks like.  Apple’s IOS and Google’s Android (fragmented as it is).  Apple ships out updates to its iPhone user base at regular intervals and has, over a long period of time now, shown that if you buy a phone today it is very likely that you will receive all IOS updates during the two-year contract period that most (in the U.S. at least) sign up for.  More likely your updates will continue for longer.  And most likely they will bring some benefit to your existing device.  Now we switch to the Android world.  Android updates are frequent, but upgrades for existing devices are limited.  Adoption by existing phone users is rare.  So Android users either don’t have an update available to them, don’t see a benefit of updating, or have actually had bad experiences with updates and don’t want to risk it again.  So one world in which customers get, accept, and benefit from updates and another in which they don’t get, don’t accept, and don’t benefit from them.  Now let’s explore the Microsoft Windows Phone ecosystem.

Windows Phone started out with Apple's iPhone firmly in its sights.  Microsoft had been successful with the Windows Mobile business model until the iPhone introduction, which (with its shift from business to consumer focus) just blew Windows Mobile out of the water.  Not wanting to get into the phone hardware business itself, Microsoft crafted a plan that was an OEM-centered variant of what Apple was doing.  Consistency of user experience was key, a closed (and secure) app platform was key, etc.  The promise that all devices would get all updates was key.  For a variety of reasons this has not yet taken off, and apparently Microsoft is willing to break with Apple on the key point of upgrade strategy.  Has Microsoft decided that instead of trying to emulate Apple it will now try to emulate Google (who themselves emulated Windows Mobile)?  Is Windows Phone now supposed to be a better Android instead of a better IOS?  Kowtow to the OEMs in hopes they will produce more leading-edge Windows Phones?  Kowtow even more to the carriers in hopes they will put greater focus on Windows Phone?  Probably.  Still try for less fragmentation than Android?  Almost surely, though Windows Phone too will fragment.  And it is the fragmentation of the application platform that I am most concerned about.

Android has proven it can be successful without updating phones because users tend to split into two groups.  The first is the true enthusiast, and they want not only the latest operating system but also the latest device.  As a result they are, at least grudgingly, OK with waiting to get both together.  The second and larger population of Android phone owners buys something and is happy with what they have.  They buy a Droid, or a Samsung Galaxy II, or whatever, and don't really differentiate it from the underlying OS.  So a Galaxy II is a Galaxy II is a Galaxy II.  "What version of Android am I running?  I don't know, does it matter?" and "I don't even know where to get an update" are responses I've gotten from multiple Android phone owners.  The only time an Android phone owner cares that they don't have the latest version of the OS is when they find there is a new app they really want and it isn't available for their phone.  And that, my friends, is the Windows Phone problem I am worried about.

The Windows Phone 7.x library is at about 80,000 apps and growing fairly fast.  It will be at 100,000 apps by the time Windows Phone 8 ships.  Microsoft has committed that Windows Phone 7.5 apps will run on Windows Phone 8.  But what about the other way around?  What about apps that are written for Windows Phone 8?  They are very unlikely to run on Windows Phone 7.5.  Waiting for “Words with Friends”?  It may come to Windows Phone 8 but never to Windows Phone 7.5.  Pandora?  Ditto.  Once Windows Phone 8 ships the number of new apps being introduced for Windows Phone 7.5 will quickly drop to zero.  If “Words with Friends” is important to you then you need to go to Windows Phone 8, but if there are no upgrades available, you just purchased a new Nokia Lumia, and you have a contract, you may be waiting 18+ months until you can afford to upgrade your device to run “Words with Friends”.

Do you really care that your phone won’t be upgradeable to Windows Phone 8?  Not really.  Do you care that the apps available for your phone, apps that perhaps you’ve been patiently waiting for, will never come to it?  Most likely yes.  And so for the remainder of your contract you are going to be simmering, perhaps boiling over when your long-awaited app candy hits the Windows Phone 8 marketplace but won’t run on Windows Phone 7.5.  And every time it happens you are going to think “I should have just bought an iPhone”.  And when your contract expires, you probably will.

My advice to potential Windows Phone buyers is don’t.  At least don’t buy one now.  Wait for Windows Phone 8 devices to arrive this fall.  If you can’t wait then you have two choices.  Make sure every app you could ever want is already available for Windows Phone 7.5, or buy an iPhone.
