The marketing gaffe that could sink Windows Phone

What if you made all the right decisions from a technology and engineering standpoint, and then with one little marketing gaffe sank an entire product line?  Well, I worry that is exactly what Microsoft has done with Windows Phone.  I have very mixed emotions about Microsoft not bringing my shiny new Nokia Lumia 900 into the full Windows Phone 8 world, but I have no qualms about saying that naming the release it is doing for it and its cousins "Windows Phone 7.8" is a huge mistake.  Microsoft took an opportunity to come out smelling like a rose and instead littered the landscape with the scent of rotting flesh.  History may record that Windows Phone was sacrificed to the gods of "Engineering Political Correctness", who insist that a product's name must somehow be tied to its kernel architecture.

Even Apple drops support for old hardware; they aren't going to support my two-year-old iPad with iOS 6, for example.  And of course very few Android phones get upgrades to the latest version.  The latest Android version, Ice Cream Sandwich, has only made it out to 7% of Android devices in the 7+ months it has been on the market.  And older iPhones, like the 3GS, are getting iOS 6 but reportedly in very restricted form (i.e., without most of the new features).  Even the iPhone 4 won't be getting all the iOS 6 capabilities.  But here is the interesting thing: Apple didn't call the update for the 3GS iOS 5.6, it called it iOS 6!  And so Apple fan-boys, a group that includes most technology writers, will sing Apple's praises even as they trash Microsoft's handling of Windows Phone 8.

Microsoft could have brought the full Windows Phone 8 to older devices; they chose not to.  Why?  If you are going for engineering efficiency then pouring a lot of resources into supporting the older chassis (single core, different boot model, etc.) seems like a poor use of resources.  Maybe they found that they couldn't get the performance on single-core systems to match WP7/7.5, and so it would be a disservice to bring the NT kernel to them.  In the long term, not supporting the older devices will reduce fragmentation and allow more rapid improvement of Windows Phone.  In the short term it is a potential disaster, and given Windows Phone's precarious position it could be fatal.  But good marketing could have gotten around most of this.

In an earlier post I postulated that Microsoft could split Windows Phone 8 into two editions, one for older and low-end devices and one for new and future devices.  With Windows Phone 7.8 and 8 they are effectively doing this, but in the worst possible way.  Let me repeat: Apple did not call it iOS 5.6 and iOS 6, they are just calling it iOS 6 even though older devices will not see most of the benefits of the new OS.  Even though apps will be written for iOS 6 that can't run on the older devices.  Even though it is quite clear that this is the end of the line for those older devices.  Microsoft could have done the same thing.  It could have introduced Windows Phone 8 and had two sets of bits, one for older devices that would offer limited improvements and one for newer devices with all that Windows Phone 8 has to offer.  Calling the latter Windows Phone 8 and the former Windows Phone 7.8 just shows how poor Microsoft is at marketing compared to Apple.

You may be asking, "isn't this just a distinction without a difference?"  Well, no.  For the next few months Nokia and the other phone manufacturers are going to have to sell against a very public message that these phones are not getting Windows Phone 8.  That could turn off customers tremendously.  The message that they are getting something called Windows Phone 7.8 just sounds like "blah blah blah" after the "no Windows Phone 8" message.  If they'd called the neutered version Windows Phone 8 then the details about it being a subset would be what sounds like "blah blah blah".  In other words, for all us techies it may indeed be a distinction without a difference, but for both the typical customer and the device manufacturers the difference is huge.  And it could be the difference between a smooth transition from Windows Phone 7.x to Windows Phone 8 and the "Osborne Effect" sinking Windows Phone completely.  Actually, Microsoft and Windows Phone probably survive this gaffe; it is Nokia who is most at risk.  Having its ability to sell Windows Phones dry up for a quarter or two could be the straw that breaks the camel's back.  That's why Nokia is working hard to spin Windows Phone 7.8 and a bunch of Nokia-specific enhancements as being such a good thing.  But it didn't have to be this way.

Microsoft, and it's not alone in this, has an engineering-driven need to name releases after the architecture they embrace.  If you switch the kernel, or otherwise re-architect the core of a product, then you give it a new major version number.  This is thinking that goes back to the 1960s or 70s, when the industry first started formalizing product names as versions with Major.Minor.Revision (or similar) numbering schemes.  The concept that Windows Phone 8 could have two kernels is just heretical to engineers; they would love the 7.8 vs. 8 naming difference.  And apparently they got what they wanted.

The problem at Microsoft is that they still haven't figured out Marketing.  Every now and then there are signs that they get it, but for the most part they don't.  In this case it may very well turn out that Engineering made all the right decisions on the product, but then Marketing completely botched the messaging.  Or Engineering overruled Marketing on the messaging.  In either case, I think Microsoft "screwed the pooch" on this one.


Skimming the Surface

So, Microsoft took the “let us show what can really be done with Windows Tablets” route. From afar I like what I see. They didn’t say anything about 4G support, and I really want that. And I have some other nits to pick. But overall this was a good announcement. Yes there was innovation…I LUST after one of those keyboard/covers.

There was something subtle and important too. The Surface will only be available from the Microsoft Stores and some online channels. That means Microsoft will not be competing with OEMs in the channels they use. You won't be comparing a Microsoft Surface with a Toshiba, HP, Dell, Lenovo, or Sony Tablet at your local Best Buy or Walmart. And you won't be comparing it to a Samsung or Nokia Tablet at your carrier's store. It is, at once, an iconic product line and one that OEMs can tolerate being in the market. Sure it will challenge OEMs to up their game, but that is a good…no, great…thing. And it will let Microsoft control its image, and ultimately its success. And if the OEMs do let Microsoft down? Microsoft can expand both the channels and the products in the Surface lineup. Carrot and stick.

Please tell me these devices have 4G! If they do, tell me where I can pre-order.


Will Microsoft introduce its own Tablet?

Microsoft has scheduled a major announcement for later today and the press and blogosphere have decided they are going to announce a Microsoft-branded Tablet. Perhaps I missed it, but I haven't noticed anyone citing a leak from an ODM or any other data to substantiate this conclusion. It seems to flow from a combination of wishful thinking and a process of elimination. So I'll provide what little data I have and some analysis, and then we'll wait a few hours and have some answers!

Back in the 1990s Microsoft's OEM model was humming, with Microsoft, component suppliers (particularly Intel), and OEMs all doing very well. The OEMs benefitted from cross-manufacturer economies of scale that let them make good money while undercutting vertically integrated competitors like Apple by substantial amounts. But in the 2000s the transition to notebooks severely undercut those cross-manufacturer economies of scale. At first OEMs could command premium prices for notebooks and thus they improved margins, but eventually price wars broke out. OEMs had to find ways to bring in additional revenue and lower manufacturing costs, and they came up with disturbing solutions in both cases. They raised revenue by accepting payment for putting third-party software on their systems, which we now know as crapware. Since they could no longer rely on multi-vendor economies of scale they focused on their own economies of scale. The best way to do that? Focus only on the highest volume designs. They would make stabs at truly premium offerings, but it seemed neither they nor their customer base were that excited by the result. For the most part the PC ecosystem now focuses on inexpensive, mediocre systems that get the job done but elicit little passion from consumers.

Meanwhile over in the gaming space Microsoft wanted to introduce a system and approached OEMs about it. Since gaming systems are a razor/blades model (lose money on the gaming console, make it back on game sales) the OEMs weren't interested. Microsoft had to get into the console hardware business on its own. In the music player business Microsoft initially followed the OEM model, but when none of the OEMs could stop the iPod juggernaut it chose to enter the market itself with Zune. While unsuccessful in the music player business itself, this let Microsoft build the infrastructure and other expertise it needed to compete with Apple in other endeavors. The Xbox Music and Video store that is appearing across Windows, Windows Phone, and Xbox is a direct descendant of the Zune work. Windows Phone, and now Windows 8/RT, are another outgrowth of the Zune effort. Microsoft had a rough start in the systems hardware business, but has now built up the experience and success (with the Xbox 360) to be a real participant.

Microsoft hasn't brought its own systems into the mainstream markets yet. When evaluating how to respond to the iPhone it debated doing its own device and decided to stick with the OEM model. Before Nokia came on board that was looking like a bad decision. The decision looks better now, but if Nokia fails before completing its own transformation then Microsoft will be back to deciding if it should be in the mobile phone hardware business. Meanwhile I've detected a subtle change in the attitude of Microsoft employees. For years my co-workers and friends would debate the merits of the OEM business model over beer. Recently when I've commented to friends that Microsoft might now be able to get OEMs to produce hardware that is really competitive with Apple they've responded with "we know", not "I know". That's a pretty thin thread to hang a claim that Microsoft will introduce its own Tablet on, but it is all I have in terms of inside information :-). Personally I think I'm just imagining the shift in tone, but maybe there is something to it.

Microsoft is going to have no problem getting OEMs to build great business-oriented Tablets. But consumer-oriented Tablets are a bigger question mark. If an OEM builds a super-premium "iPad killer" it probably can't get the volume to make money versus Apple. Just as with Windows Phone, OEMs will try to use the same designs for Android and Windows devices in order to improve their economies of scale. And Microsoft would have to fear that there won't be any killer Windows-specific designs on the market. Doing its own high-end device, perhaps more to create excitement than to be a high-volume product, becomes attractive. OEMs would still get the volume segment of the market, but Microsoft would set the tone with something really attractive versus the iPad. So there is an argument for Microsoft introducing its own iPad-competitive hardware, but it certainly risks alienating OEMs. There is a more compelling argument for Microsoft addressing the low-end of the market.

The Kindle Fire has created a situation at the low-end of the market that makes it just like the gaming console business: it is a razor/blades model. You lose money on the Tablet and then make it back on all the books, videos, music, and apps that you sell the Tablet owner. Just as with the Xbox, Microsoft would find it difficult to get OEMs to participate in this model, making it likely, perhaps even necessary, for Microsoft to do it itself. Last week Mary Jo Foley speculated that today's announcement would be for a Kindle Fire competitor, and if there is a Microsoft-branded Tablet announced today then I think we'll see she was right.

There definitely is a case for Microsoft to introduce its own Tablets, though I don't know if that is what we'll see today. We could also see B&N introduce a Windows RT-based Nook, or an announcement completely unrelated to Tablet hardware. Given that the announcement is in L.A., we could simply be seeing an announcement of a major relationship with one or more Hollywood studios to provide content for Microsoft properties. I know the process of elimination has tried to rule this out, but I think it is still a possibility. Fortunately we'll know soon.

Personally I hope Microsoft does introduce its own Tablet. Then it won’t be able to make excuses about its success, or lack thereof, in the Tablet market.


More on Windows 8/RT target market, low-end Windows Tablets

Continuing on from my previous post on Windows RT pricing I wanted to give a few other data points that show Microsoft is clearly aiming for the more premium end (iPad and above) of the Tablet market, and then offer another way Microsoft may choose to participate in the low-end.

First let’s talk about screen resolution.  The design center for Windows 8/RT tablets is 10.1″ 1366×768 and above widescreen displays.  Yes Metro apps will run on a 1024×768 display, and perhaps there will be some new tablets introduced at this resolution, but the primary purpose for 1024×768 is so Metro apps run on the majority of existing Windows 7 PCs!  So what is the resolution for a Kindle Fire?  1024×600, below the minimum for Windows 8.  How about other sub-$200 Tablets?  Some match the Kindle Fire, but others are as low as 800×480!  Microsoft is simply not targeting this market with Windows RT or Windows 8.

The second data point I wanted to explore is the Consumption/Creation scale that I've talked about previously.  Tablets, at least ever since the iPad entered the scene and legitimized them, have been Content Consumption optimized devices.  PCs have always been Content Creation optimized devices.  On the extreme end of the Consumption scale are dedicated devices like the Sony Reader, Kindle, and Nook e-Readers.  The iPad is closer to the Consumption/Creation dividing line and has been creeping towards it over time, but it is still firmly in the Consumption camp.  The Kindle Fire is more towards the e-Reader end of the scale, being heavily optimized for media (of various types) consumption.  So what of Windows 8 and Windows RT?  For Windows 8, its ability to run on traditional PC configurations, work well with a keyboard and mouse (or other pointer device), and run traditional PC apps speaks to Microsoft's desire to create the first successful operating system to truly span the Consumption/Creation line.

But what about Windows RT?  Well, the inclusion of Office 2013 tells us that Microsoft expects Windows RT to span the Consumption/Creation line as well.  No, it won't scale up to support desktop workstations the way Windows 8 on x86-64 processors will, but it will support the Content Creation needs of most consumers and many Information Workers!  Let me put this in perspective a little more.  Take an iPad and try to do creation on it (e.g., write a long email or document) and you find it sucks.  Add a Bluetooth keyboard to the iPad and it sucks a little less.  Take a Windows RT Tablet and try to do creation on it and it will suck too.  But add a Bluetooth keyboard and mouse and the Windows RT tablet will let you very productively create Word documents, Excel spreadsheets, and PowerPoint presentations.  And write long emails.  And blog entries (yeah!).

Now when you start thinking that every Windows 8 and Windows RT Tablet is a Content Creation device as well as a Content Consumption device you really don't think "low-end".  The Kindle Fire owners I know are interested in reading books and streaming movies to their device; they aren't interested in creating Excel spreadsheets.  And they'd find it pretty painful on a 7″ screen even if they did.  Microsoft will have a number of value propositions to convince customers why Windows 8/RT Tablets are a superior option to the iPad and Android devices.  But perhaps the most compelling is that you don't have to choose between a Content Consumption and a Content Creation device.  Windows RT is Consumption-optimized/Creation-capable.  And that is a higher-end positioning.

For a third data point, take a look at what little we know about the ARM chipsets that Windows RT will support.  The examples I've seen suggest they are all higher-end chipsets.  The focus is quad-core, high clock rates, etc.  They aren't cost-focused chipsets.  Again this suggests Microsoft isn't trying to target the low-end; it is focused on the high-end.

Ok, so is Microsoft just going to abandon the low-end of the Tablet market?  I doubt it.  In my previous post I suggested some ways that Microsoft could help bring down prices of Windows RT for Tablets.  But there is another way.  Let's say you wanted to build a Windows 8-based Nook that was price competitive with the Kindle Fire.  And you wanted it to have a customized, very media-consumption oriented user experience but still be able to run Metro apps.  Sounds like a Windows 8 Embedded scenario to me!  I don't know what to expect of Windows 8 Embedded pricing, but it certainly seems that it would be far lower than Windows RT (if for no other reason than it doesn't include Office 2013).

In summary Microsoft is targeting the heart of the Tablet market where the greatest success to date (meaning the iPad) has been.  It is also the segment that poses the greatest threat to the Windows PC franchise.  The Consumption-optimized/Creation-capable design center offers the best opportunity to both keep existing PC users within the family and offer something that really differentiates Windows 8/RT Tablets from the competition.  Microsoft will also go after lower end Tablets, but that is a secondary priority right now.


Windows RT Pricing

Rumors are flying that Windows RT (with Office 2013 included) will cost OEMs $85 a copy. This is causing a bit of an uproar because at that price there is no way an OEM can build a competitor to the Amazon Kindle Fire ($200, subsidized by Amazon's services). Indeed it extrapolates to tablets that will be in the $500-700 range, just like the current generation iPad! And, btw, the new Samsung Galaxy Note 10.1, which is priced at $550. So if this pricing is correct then it says to me that Microsoft decided it wanted to target the premium part of the tablet market rather than the bargain part of the market. Good decision? Maybe, maybe not. But it is important to consider that Microsoft can always reduce the price later, but it can't realistically raise it.

Pricing is, of course, far more complicated than a simple "$85 a unit" conveys. For example, Microsoft typically has co-marketing plans in place so that it effectively rebates part of the unit price back in exchange for certain marketing activities. An OEM who advertises a tablet and includes a Microsoft Windows logo and tagline might get $1 back for each $1 they spend, up to some number of dollars per unit sold. That could effectively lower the price per unit by several dollars. Microsoft could also share other revenue streams, such as from app sales, further reducing the effective cost per unit. In fact there is a rumor that they are going to do just that. They could extend this to Xbox Music and Video sales, making for another effective reduction in cost per unit for the OEM. So you really have to look at what the net cost is going to be for the OEM, not the headline price.
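Here is a back-of-the-envelope sketch of that "net cost" arithmetic. Every number other than the rumored $85 headline price is made up purely for illustration; none of them come from Microsoft.

```python
# Hypothetical net cost per unit for an OEM once rebates and revenue
# sharing are factored in. All figures below are placeholders.

headline_price = 85.00        # rumored Windows RT + Office OEM price
co_marketing_rebate = 5.00    # hypothetical matched ad-spend rebate, per unit
app_store_share = 3.50        # hypothetical share of app sales, per device
media_share = 2.50            # hypothetical Xbox Music/Video share, per device

net_cost = headline_price - co_marketing_rebate - app_store_share - media_share
print(f"Headline price:            ${headline_price:.2f}")
print(f"Effective net cost to OEM: ${net_cost:.2f}")
```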

And what if you really did want a competitor to the Kindle Fire? Keep in mind that this is a heavily subsidized device. Amazon is at best breaking even on the device itself, and losing significant money when Sales and Distribution costs are factored in. Microsoft has a number of ways it can enable lower priced tablets to compete with the Fire. They have the same media subsidy possibilities with a combination of Xbox Music and Video and their new Nook e-book joint venture. They could offer a lower price on Windows RT for 7″ screen devices. They could offer a Windows RT variant optimized for media consumption devices (by removing Office 2013, for example). In other words, they have options for competing in this space.

My first reaction to the $85 price was that it was out of line, but now I’m not so sure. A friend pointed out that the OEM price for Windows 7 Starter Edition was $55, and that let Windows successfully compete against (free) Linux in the Netbook market. In fact, Netbooks running Windows 7 Starter Edition were available for as low as $300. Windows 7 Starter Edition didn’t include Microsoft Office, so when you factor that in the $85 price doesn’t seem all that outrageous. Particularly if customers really value having Microsoft Office built-in. And if they don’t? Well then Microsoft could rather easily offer OEMs a lower priced Windows RT without Office!

So don’t get too caught up in the furor over the $85/unit pricing rumors or the $500-700 tablet pricing predictions. They might be outright wrong, and they certainly are operating on incomplete information.


Protecting yourself from malicious websites

With all the press about the Flame malware the last couple of weeks I took yet another look at my own security precautions. This involved a bunch of ad-hoc testing on my part, and I’ve come up with a couple of simple recommendations that could materially improve most people’s information security.

Most modern malware shares two characteristics. First, infected websites (many of which are completely legitimate) are the primary means by which malware is distributed. Second, most malware is aimed at stealing data and thus must communicate that stolen data out to a server somewhere. In the case of so-called Bots, the malware also communicates with Command-and-Control servers. If you can block access to malware-distributing websites, and block communications between malware and servers out on the Internet, then you're better protected. In fact, the biggest problem with both these approaches is that they can be a bit like locking the barn doors after the animals have escaped. What I'll try to do with this post is explain a couple of things that will increase the odds that you get the doors closed in time.

I’m going to talk about URL Filtering and use of an enhanced DNS. Let’s start with URL filtering, although honestly I’m more excited about what you can do with DNS these days. URL filtering (for security) involves checking a URL against a database of sites known to be malicious and blocking access to them. The major browsers have built-in URL Filtering facilities, SmartScreen for Internet Explorer and Google Safe Browsing for the others. While these clearly help, I’ve found that there are other URL filtering tools that seem better. Personally I use Web of Trust (in addition to the built-in browser facilities), which in my testing of URLs contained within SPAM email is far more likely to flag dangerous websites than any other product (except perhaps one). It is free. In doing my research the commercial product that seems to do the best job is Trend Micro. I regularly send URL’s off to Virustotal for testing and get back results that show Trend Micro as the only service to have flagged a (clearly bad) site. So if you want to purchase a security suite for your PCs, Trend Micro has at least one advantage! But for the rest of us it is worthwhile installing Web of Trust’s Browser add-on. Not only will this warn before attempts to access a malicious page, it will augment search results with icons that indicate the trustworthiness of each link. And if you do find a page that you feel is malicious, with a few clicks you can let others know. As an aside, Facebook’s link policy explicitly calls out having a poor Web of Trust rating as their standard for what constitutes a bad link. So they must think it is a pretty good link reputation service!

Of course the problem with Web of Trust is that it is a browser add-on, which means two things. The first is that it only works with browsers that allow add-ons! So it can’t help you with your iPhone/iPad’s Safari browser, or your Windows Phone, or the Metro version of IE in Windows 8, or browser modes (such as Private mode in IE) that disable add-ons, or guest PCs that you allow on your network, or…. The second problem is that it does nothing to block the outgoing communications from malware to data collection or command and control servers. For this you need help from your DNS (Domain Name Service).

First a bit of history. Originally malware would open up outgoing connections from your PC for its communications. This problem was addressed by the addition of firewalls that block generalized outgoing communications. Of course you can't block outgoing communications over "Port 80", the communications port that is used for web browsing. And so most software, legitimate or not, now communicates by creating a tunnel over Port 80. Legitimate software does this because firewall management is just too difficult for consumers and too bureaucratic inside enterprises. Malware does it because, well, what other choice does it have? So these days firewalls are necessary but not sufficient to protect against malicious outgoing communications.

The Domain Name Service (DNS) is the Internet's "phone book", translating names such as www.thecompanyIwant.com into an Internet Protocol (IP) address such as 192.168.2.100 that is needed to actually talk to a computer on the Internet. Purists believe that is all DNS should do, but others recognize that this translation facility opens up a number of opportunities for enhancement. One such enhancement would be to mark malicious domains (hypothetically www.thecompanyIwant.com in this case) in the DNS and refuse to hand out their real IP addresses. This has two benefits. It prevents a user from browsing (or following a link to) the URL/domain, and it prevents malware from successfully communicating outward by blocking its attempt to find the server it wants to talk to. Let's go back to Flame for a moment. As soon as researchers identified the domains that Flame was contacting for Command and Control, the enhanced DNS services like OpenDNS blocked that communication, effectively neutering Flame (unless it has some backdoor communications mechanism that researchers have yet to identify). And so a great way to add another layer of protection to your computer or network is to switch from your ISP's DNS server to one of the enhanced DNS servers.
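To see the mechanism in action you can ask a specific resolver for a domain's address and compare the answer with what your ISP's resolver returns. Here is a small sketch using the third-party dnspython package; the resolver address below is a placeholder, so substitute the addresses published by whichever filtering DNS provider you choose.

```python
# Query a chosen DNS resolver for a domain's A records. A filtering
# resolver will typically return nothing (or a block/sinkhole address)
# for a domain it has flagged as malicious.

import dns.resolver  # pip install dnspython

def lookup(domain, nameserver):
    resolver = dns.resolver.Resolver()
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(domain, "A")  # dnspython >= 2.0
        return [record.to_text() for record in answer]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []  # nothing handed out; the domain may be blocked

FILTERING_DNS = "203.0.113.53"  # placeholder; use your provider's real address
print(lookup("example.com", FILTERING_DNS))
```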

I already mentioned OpenDNS, but it is no longer my first choice. I originally switched to using OpenDNS years ago when my ISP’s DNS servers were having problems. I stuck with them because they have some anti-malware features built-in. In particular they have blocked some of the major botnets, and did quickly block Flame’s Command and Control server access. OpenDNS also has more extensive Malware-blocking features, but only in their Enterprise offering. Neither the free OpenDNS service, nor the paid service that a home user or small business might buy, includes their full malware-blocking features. Fortunately there is a really powerful malware-blocking DNS available for free, and from a surprising source.

Symantec is well-known as a top security company, but they aren't known for free or lightweight offerings. And yet they've created their own free (for home use, and by definition lightweight) DNS offering under the name Norton ConnectSafe. This uses the same database of malicious domains as their enterprise URL/domain filtering products, making it one of the most extensive in the industry. Switch from using your ISP's DNS servers to using Norton ConnectSafe's DNS servers and you've made a major improvement in blocking malicious websites and malware communications. You also get the benefit of multiple levels of URL protection since you'll still be getting URL filtering from SmartScreen/Safe Browsing as well as the Norton ConnectSafe protection. And if you are really paranoid then you can use Web of Trust (where possible) as well!

The main problem with Norton ConnectSafe is that you have to configure your router or individual computers to use it instead of your ISP's DNS servers. There are instructions on the Norton ConnectSafe website, but if you are uncomfortable with this, ask a home-networking-savvy friend for help. It will be well worth the effort.


Anatomy of a startup – PredictableIT Part II

In the first part of this series I alluded to what we put together as an operating environment, and before I flesh that out I want to go through the heart of our "IP".  As I'd mentioned, the major initial development item was a website for marketing, ordering, and perhaps some deployment tools.  You could have imagined such a system involving manual billing/payment, partial or completely manual provisioning, etc.  But as we gained experience handling these things for our two customers, and thought about how we would handle them for dozens (and later thousands), we realized we'd have to automate more processes before we started adding customers.  Our simple website plan now morphed into a system with the ability to fully order, pay by credit card, provision, and self-manage the users for a customer.  Monthly billing and credit card payment was completely automated, so we would never have to invoice, hold Accounts Receivable, etc.  If the credit card charge failed the customer would receive a three-day warning email, and if it failed on the retry we would lock their users out of the system.  All automated (though obviously we got email as well so we could follow up and prevent losing the customer).  We made provisions to add ACH support, and even talked our bank into giving us ACH access despite not meeting their usual criteria, but didn't integrate this capability into our initial launch (had a customer come along who was too big for the credit card solution, we would have added ACH rather quickly).  We thought it likely that either the business' Principal or Accountant would be the one choosing the service (which was targeted at 5-25 employee companies) and designed the system for their use.
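To make that billing flow concrete, here is a minimal sketch of the logic described above: charge the card, warn the customer and schedule a retry in three days on failure, and lock their users out if the retry also fails. The helper callables (charge_card, send_email, lock_out_users, notify_operators, schedule) are hypothetical stand-ins for a real payment gateway, mail system, and provisioning layer; this is my illustration of the flow, not the actual PredictableIT code.

```python
# Sketch of the automated monthly billing flow with a three-day
# warning/retry and lockout on second failure.

from datetime import timedelta

RETRY_DELAY = timedelta(days=3)

def run_monthly_billing(customer, charge_card, send_email,
                        lock_out_users, notify_operators, schedule):
    if charge_card(customer):
        return "paid"

    # First failure: warn the customer and schedule a retry in three days.
    send_email(customer, "Payment failed; we will retry in three days.")
    notify_operators(customer, "first charge attempt failed")

    def retry():
        if charge_card(customer):
            return "paid on retry"
        lock_out_users(customer)  # retry failed: suspend the customer's users
        notify_operators(customer, "locked out after failed retry")
        return "locked out"

    schedule(retry, RETRY_DELAY)
    return "retry scheduled"
```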

When a customer signed up for our service they designated an Administrator who would be responsible for adding and deleting users, choosing (and changing) which software package and optional software/storage/etc. each user had access to, managing passwords, etc.  Since the most likely user of this interface was the company's "Office Manager", that is the persona we targeted.

Automating Provisioning was clearly one of the big challenges as Microsoft had not yet provided tools to do this.  We had to create domain accounts, do weird things to Exchange (because of multi-tenancy), set up storage, work around Roaming Profile limitations, etc.  Around the time we were getting ready to go live, Microsoft introduced hosting tools that addressed much of this, but it was too late for us to incorporate them into Phase 1.  So we put in a lot of time and effort to figure it out, only to realize we'd be throwing it away for Phase 2.

We also invested a lot of energy into exporting all of our configuration and billing information into our accounting system automatically.  So at each monthly billing cycle we would dump the results directly into QuickBooks, and my partner could close the books with just a couple of hours of work.  I’d been a little dismissive of how pedantic my partner was about our accounting system and this automated process, but it turned out to be a lifesaver.  For example, each month we needed to create a report of exactly what software we were using under Microsoft’s Service Provider License Agreement and pay (through our Managed Service Provider) for it.  We also needed real data to keep our business plan updated.  And, of course, we needed to keep a close eye on every penny we spent since every few months we’d have to make a decision on injecting additional capital.  When an engineer sits down and says “I’m going to build this cool new X” they don’t think about General Ledgers, Accounts Receivable, Accounts Payable, Trouble Tickets, etc. and yet those are the real backbone of a business.  We thought of them from the beginning, and while the work involved delayed our launch it also meant we had the infrastructure to support customers and the business when we did launch!

Speaking of trouble ticket systems, this was another area of investment for us.  Our Managed Services Provider was using one of the leading commercial offerings and we hated it.  Not only did it suck, but when we heard what they were paying for it we just about fell off our chairs.  My partner brought up the open source OTRS trouble ticket system with a few days of effort (mostly in customization).  Then he and our contractor put in several more days getting provisioning and single-system sign-on to work.  Why did we invest in single-system sign-on?  Well, our experience with small business users showed that expecting them to have and remember separate accounts and passwords for the trouble ticket system was just asking too much.  The cost of supporting separate credentials was going to exceed the cost of implementing single-system sign-on by orders of magnitude, and fast.  Why did we do trouble ticketing at all initially?  Well, even with two customers we had trouble managing the requests.  Something would come in by phone or email.  One of us would work on it and then hand it off to the other.  Sometimes we'd hand it to the contractor.  He'd hand it back.  Since he billed us for his time we needed to track actual time spent.  If the customer request was for billable work, we had to track time spent so we could bill them.  Often requests had to be directed to other providers.  For example, if you needed a file restored, that really was assigned to the Managed Service Provider.  So we paid a couple of man-weeks to implement OTRS, but it paid for itself almost immediately.

We did make one change to our plans that almost sank us.  Along with the decision to "go big" was my partner's insistence that we build in a Special/Trial offer facility.  This was spec'd out as being rather general purpose.  We could give a customer a unique offer, we could run a radio or direct mail campaign with a shared code, and we could write all kinds of interesting rules for what an offer was.  But this wasn't just an ordering feature; it had its tentacles into provisioning, the administration tools, the monthly billing process, etc.  We knew we'd stretched the website contractor to his limit, but when we went to add the Offer Code capability we learned just how far we'd stretched.  I'll address this in Part III.

Overall we offered Hosted Exchange, Terminal Server with Microsoft Office and a number of other apps, and a way to run apps that didn't work in a normal Terminal Server environment.  Again, this turned out to be one of our biggest challenges.  Most third-party software was not designed to run correctly under Terminal Server.  Some even had licensing language that completely forbade it (e.g., Intuit had language for the multi-user version of QuickBooks that made any kind of hosting impossible).  So I created a VM environment in which a user would get a dedicated VM for running one of these difficult apps (at least where licensing wasn't an issue; for example, we never offered multi-user QuickBooks).  We'd charge a lot for these apps since we could support relatively few per physical server, but fortunately they had limited user bases.  For example, only one or two people in a company typically used accounting software.  I also made provision for a shared VM so that we could support multi-user software, for example Microsoft Dynamics, with a single VM per company rather than one per user.  Whereas we used the Microsoft SPLA (rent software through us) model for most of our offering, the software running inside the VMs followed a "Bring Your Own License" model, much as with today's IaaS Cloud offerings.  This solved the problem that most software companies did not yet offer a service provider licensing model.  Take note that Microsoft, despite some flaws I'll talk about in Part III, was way ahead of the competition in trying to meet the needs of what we now call "the Cloud".
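As a summary of that hosting architecture, here is a tiny decision-function sketch. The rules are reconstructed from this post, not from any actual PredictableIT code: Terminal Server under SPLA for well-behaved apps, a dedicated per-user VM (bring your own license) for apps that won't run on Terminal Server, a shared per-company VM for multi-user apps, and "not offered" when the vendor's license forbids hosting.

```python
# Illustration of the app-hosting decision described above.

def hosting_model(runs_on_terminal_server, multi_user, license_allows_hosting):
    if not license_allows_hosting:
        return "not offered (licensing forbids hosting)"
    if runs_on_terminal_server:
        return "Terminal Server session (SPLA licensing)"
    if multi_user:
        return "shared VM per company (bring your own license)"
    return "dedicated VM per user (bring your own license)"

print(hosting_model(True,  False, True))   # typical desktop app
print(hosting_model(False, False, True))   # single-user accounting package
print(hosting_model(False, True,  True))   # e.g., Microsoft Dynamics
print(hosting_model(False, True,  False))  # e.g., multi-user QuickBooks
```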

Another problem we faced was integration with third-party services.  We first faced that with Paychex, which was used by one of our initial two customers.  Paychex (at least at the time, I haven’t checked recently) downloads a few ActiveX controls when you go to use it from your browser.  We, of course, had locked down our systems so that users could not load arbitrary ActiveX controls.  Addressing this required manual intervention, and it was a doozy.  Paychex only made the ActiveX controls available to logged in users.  Our customer’s user couldn’t install ActiveX controls.  Catch-22.  Paychex technical support was not helpful at all.  The final solution?  The customer gave me their Paychex login information (SHUDDER) so I could figure out what was going on and download the appropriate controls.  After that experience I invested a few days in trying to automate the process of allowing us to pre-authorize specific ActiveX controls for download by users, but had to put the effort on the back burner when it proved a tougher nut to crack than it first appeared.  We decided that for the time being we’d handle these as one-off situations and that I’d get back to solving the problem in an automated fashion after launch.

The next one to come up was salesforce.com.  This one proved far more satisfying, for two reasons.  One, because we couldn't make money running other Contact Management/CRM software in its own VM, and none of it ran under Terminal Server, salesforce.com actually made our overall profitability better.  Second, because salesforce.com was supportive of our efforts.  Their sales rep got us technical support to help, and when we had problems with one part of their integration offering they arranged a teleconference with the developer of the integration code.  (I wasn't impressed with some of their early integration attempts as I didn't think they made the best use of Microsoft's facilities, but I was very impressed with their desire to support and improve the integration.)

On the day we went live a user could come to our website, learn all about the offering, sign up (with or without an Offer Code, though the website itself listed a code for a 30-day free trial), be automatically provisioned and using the system within minutes, add/delete/manage additional users, find KB articles on how to turn their PCs into thin clients, migrate data, set up their phones with Exchange ActiveSync, migrate their domain, submit trouble tickets, have credit cards automatically re-billed on a monthly basis, and all the other things needed to run our service as a business.  Our original two consulting customers were cut over to the volume production system and using it the way we'd intended (actually we'd cut them over early as a short beta).  We were off and running.

Or crawling.  If "build it and they will come" worked, PredictableIT would now be part of some large company and my partner and I would have pocketed many millions of dollars.  So in Part III I'll explore the business proposition, what we did right and wrong, and why we eventually decided to shutter PredictableIT.

Before I go I wanted to address one last thing.  Readers of the story so far may think we overreached and could have cut back and gotten to market earlier.  They are right to an extent, but in the next installment I’ll explain why the customer value proposition forced us to be more complete at launch than would otherwise have been prudent.  But what I really wanted to add here is that I covered what we did, not what we didn’t do.  For example, we knew fairly early that what we were building was not scalable but decided time to market was more important.  If we succeeded it would hold us long enough to create a second generation that was fully scalable and cost optimized.  So many of the things we did sound big, but were actually scoped to avoid superfluous effort.


Keeping Windows 8 “Fresh” after RTM

One of the outstanding questions around Windows 8 is how Microsoft will keep it fresh after RTM.  Most recently Long Zheng raised this question in the context of the relative immaturity of the new Windows Runtime.  If the Windows team retains its current release philosophy we'd have to wait 2 1/2 to 3 years for the next release.  In a world of annual, or more frequent, updates by competitors such a philosophy would be fatal for Microsoft.  I devoted a paragraph to this topic when discussing falling footwear, but it deserves additional analysis.  And as is often the case, I find that examining history gives lots of clues to how the future might evolve.

From its beginnings Windows has primarily been on a 2-3 year release cycle.  This held true from 1.0 to 2.0 to 3.0 to 3.1.  This wasn’t especially surprising as other vendors had similar release cycles, Apple being an example in the PC space.  But with Microsoft’s decision to pull out of the OS/2 partnership with IBM to focus on Windows, it now entered a period of more rapid releases turning “3.1” into a series.  Windows for Workgroups was released 6 months after 3.1, and version 3.11 about a year later.  Not only that, Windows started the practice of independently shipping add-ons such as the Windows for Pen Computing extensions to enable tablet devices.  It even updated the Windows 3.1 API set to include Win32s after Windows NT shipped so developers could write Win32 apps for both platforms.

In 1995 (three years after the initial Windows 3.1 release) Microsoft shipped Windows 95, then in 1998 (there is that three-year thing again) shipped Windows 98 (and two years later ME).  But focusing on these dates can be very misleading, because after Windows 95 shipped Microsoft actually went into a period of very rapid evolution of Windows.  Windows 95 was followed by four updates called OEM Service Releases (OSRs) that added new functionality such as Internet Explorer, the new FAT32 file system, and USB support.  Microsoft also adopted a philosophy of making new Windows functionality available as downloads so that Windows was constantly evolving.  In fact the delivery of a Windows version as a series of updates over time became so ingrained that Microsoft changed its accounting practices to spread the recognition of revenue from a Windows sale over a multi-year period!

I mostly want to leave Windows NT and Windows 2000 out of this discussion because they were both a parallel activity and a precursor to Windows XP, but do keep in mind that from a customer standpoint in the late 1990s through 2001 Microsoft was pushing out new releases at a rapid clip.  So in one three-year period you had four releases (98, ME, 2000, XP).  XP is where things once again get interesting.

After shipping Windows XP in 2001 Microsoft followed up with the Media Center and Tablet PC Editions in 2002.  It later added x64 support, and a Starter Edition for emerging markets.  Microsoft continued to ship new Windows functionality very frequently in separately available downloads as well as in Service Packs.  Microsoft also introduced Windows Update, which made it even easier to deliver these updates to Windows XP systems.  If you compare Windows XP as shipped in 2001 to Windows XP as it looked with Service Pack 3 installed in 2008 you see two massively different operating systems.  From networking to security to user interface details (though not basic philosophy) to hardware support to API set (.NET in particular), Windows XP was in constant motion.

Which brings us to Windows Vista.  There was a five-and-a-half-year gap between the original shipment of Windows XP and the release of Windows Vista.  This was unplanned and obviously totally unacceptable.  Not only that, but it was, by most standards, not a good release.  Both the actual delay and the low adoption rate put into customers' minds that Microsoft takes way too long between releases of Windows.  In other words, it is the Vista experience that people focus on, not the near constant stream of updates to Windows XP.

After the Vista fiasco Microsoft had to completely re-examine its development philosophy for Windows, and the starting point became the philosophy used by the Office team.  This planning/design-heavy philosophy results in very predictable release cycles with generally excellent quality.  The tradeoff is a lack of responsiveness to changing market conditions.  Whereas in the Win95/WinXP worlds an emerging challenge would be met by rapidly spinning up an effort and shipping out-of-band updates or even a specialized edition, this does not fit well in the post-Vista Windows development philosophy.  I think there was another recognition as well: Microsoft was backporting so much of the functionality it worked on for a new version as updates to the old that the new versions offered little to customers.  Take Vista as an example.  If you strip out the quality issues it had, there is very little functionality that isn't available as updates to Windows XP!  If customers had been comparing Windows XP RTM to Windows Vista RTM, rather than Windows XP SP2 + other updates to Vista RTM, then they would have found Vista a far more compelling release.  And so Microsoft not only adopted a development philosophy that would produce more predictable, higher quality releases, it became far more thoughtful about what kinds of functionality should be provided as updates to earlier releases.  Internet Explorer, because it had gotten behind the curve, maintains an independent update capability.  And the consumer-oriented applications such as those for mail or photo editing, which needed frequent updates to be competitive, were actually removed from Windows and put on an independent delivery system.  But otherwise there is far less in the way of updates to Windows 7 than there was to Windows XP.  Let me give a simple example.  Windows XP added system-supported WiFi configuration in Service Pack 1, so while Vista had even further refined support, the difference between XP and Vista was not enough for it to be a real differentiator.  Windows 8 adds system support for 3G/4G connections, something that you need third-party software to do in earlier Windows versions.  Back in the pre-Windows 7 days it is likely that Microsoft would have made (at least a subset of) this new functionality available as an update to earlier releases.  Not these days.  Integrated 3G/4G support will be one of many compelling small improvements that really differentiate Windows 8 from its predecessors.

When I summarize that rather long history lesson it comes down to a few points.  First, Microsoft has used a number of different strategies to keep Windows fresh over time.  Sometimes it goes into accelerated release cycles, but far more often it has used various forms of updates to existing releases to keep them fresh.  It has also shown it is not afraid to make API additions in these updates, with Win32s introduced as an update for Windows 3.1 and the .NET Framework introduced as part of Windows XP SP1. On the other hand, Microsoft has recently moved away from broadly updating Windows between releases making it less clear how it might handle keeping Windows 8 fresh.

Three more points about updates.  First, there has long been tension over what types of changes should be in a Service Pack.  Because you want a Service Pack to increase system stability, and to be widely adopted, there has been pressure to keep new functionality out of SPs and have them focus completely on bug fixes.  However the reality, if you look across product families, is that some SPs are indeed just collections of bug fixes while others introduce small numbers of functional improvements.  Occasionally a Service Pack is used to make major improvements to an existing area of functionality, as was the case with Forefront Unified Access Gateway SP1.  UAG SP1 (like Windows XP SP2, BTW) could easily have been introduced as a new release, but we chose to make it a Service Pack for a number of reasons I won't go into.  Second, Microsoft has occasionally used the notion of a Feature Pack as a way to introduce new functionality.  I consider this nothing more than a feature-oriented Service Pack and thus won't explore it further.  Third, there are two areas where changes are avoided between releases.  The most obvious is architectural changes, which almost by definition require a new release.  The second is user interface.  While tweaks are possible, you don't want users (or IT departments) to avoid installing an update because it causes usage disruption.

Lastly, Microsoft has always been caught between consumers who want the latest and greatest, its own need to address new markets and new initiatives as quickly as possible, OEMs who want support for freshening their hardware offerings fairly frequently, and Enterprise IT who want stability.  So you have some audiences that want rapid change (1-2 releases a year would be fine) and others who want little change (a release every 3 years is just dandy).

Thank you for putting up with that long setup!  Now on to Windows 8.

That iOS and Android have rapid, at least annual, release cycles puts a lot of pressure on Microsoft to keep its tablet OS on a similar release cycle.  Not only that, OS X has also gone to an annual cycle.  And to top it off, Windows Phone is on an annual cycle (with some interim updates) as well.  I can't believe anyone at Microsoft is under the impression that they can get away with a two to three-year cycle for Windows!  So the question isn't will they do more frequent updates, it is how often and how extensive those updates will be.  And it is how they will ship the updates and how they will handle the engineering.

How the engineering is handled is the first big question.  The engineering philosophy that Windows adopted seems to be working very well, so I don't think they want to throw it away.  At the same time a long planning cycle is not conducive to quick release cycles.  And so my first conclusion is that they won't change the basic engineering philosophy, they will adapt it.  There are two ways you could adapt it, what I'll call Forward Looking and Series Planning.  With Forward Looking you would plan the next major (2-3 year cycle) release and choose which things you wanted to make available as updates to the previous release.  You'd then schedule engineering to complete those items early and release them as updates.  With Series Planning you wouldn't think of a release as a single event but rather as a series of released deliverables that included a base release plus updates.  In either case you'd have more concurrent development than Windows has today because you would need to be writing and shipping updates during the planning portion for the next release.  I think Series Planning meshes better with the Windows philosophy, but there is one little hitch.  I assume they weren't doing this when Windows 8 was initially planned, so it's a bit of an unnatural transition.  However, you can take items that were dropped later in the release cycle plus input from the Previews and create a series plan now.  Then for Windows 9 adopt Series Planning right from the start.  The main point here is that Windows will likely want to remain very thoughtful about what goes into a release and very deliberate about how to get there.  Updates to the mainline release will stay within the boundaries of the original planning scenarios/themes.  Architectural changes, significant UI changes, changes that could disrupt stability, and completely new scenarios/themes will wait for the next major release.

Given the above assumptions, the question then becomes how these changes will manifest themselves.  Will we see a continuing stream of updates delivered through Windows Update?  Will we see annual or semi-annual Service Packs that contain these changes?  Will we see these given a new release name, so we get an 8.5 for example?  I'm going to go out on a limb and make a prediction.

When I balance all the competing forces here I come up with a scenario that I think works for Microsoft.  I want something that updates frequently enough to keep the experience fresh for users, fills out Metro/Windows Runtime for developers, and keeps OEMs happy in terms of hardware support.  I also want a consistent “current” platform for everyone to target.  And I want updates infrequently enough that you can keep control over quality and let Enterprises develop good and rapid adoption policies.  To me that means you go with the Service Pack-oriented philosophy.  Frequent small updates are too disruptive.  New releases have lots of baggage, including resistance to adoption as well as all kinds of marketing implications.  Doing an annual or semi-annual Service Pack addresses the real problems without introducing new ones.  It becomes the “Spring Update” or “Fall Update”, or maybe has some amusing internal name like “Knish” that everyone knows even though it is never called that.  You market Windows 8, period.  What Windows 8 is just gets better and better over time, until you finally have such major changes to introduce that you launch Windows 9.

Now to Long Zheng’s point.  Microsoft has already shown it will introduce new API capabilities between major releases.  Further, continuing to fill out Windows Runtime between major releases fits with both my Series Planning idea and within the notion of a Service Pack.  And so I strongly expect Microsoft will continue to improve Windows Runtime in 6-12 month increments even if it retains a 2-3 year overall Windows release cycle.


Windows 8 Release Preview Impressions

I’ve been using the Release Preview for a week now and thought it would be a good time to provide some impressions.  With the previous (Developer, Consumer) Previews I only installed Windows 8 on otherwise “out of service” PCs.  For example, a system I used for testing Viruses, a notebook that had been replaced and otherwise not used for months, and an old tablet that I really only acquired to play around with Windows 8.  There were all clean installs.  With the Release Preview I upgraded one of my production systems, the notebook that serves as our “home PC” for a second home.  Upgrading and using a production system yields a very different experience than installing a fresh system.  The latter makes you focus on all that is new in Windows 8 while the former really lets you focus on how Windows 8 will change your existing experience.

With the exception of the Tablet, which is a Win 7-era device, all of the hardware I've installed Windows 8 on bears a "Windows Vista" sticker.  So three of the four systems are in the 4-5 year old range.  The production system uses a Celeron with integrated Intel graphics, so it is a rather low-end affair.  If I hadn't actually looked to see what this configuration was I would have thought it was much higher-end hardware.  It ran Windows 8 smoothly.

The upgrade of the production system to the Windows 8 Release Preview was quick and relatively uneventful.  I removed Microsoft Security Essentials and updated a few Toshiba-specific items, then installed the RP.  After installation I tested most aspects of the system and the previously installed software.  I found only one problem: apparently the driver for my CD/DVD drive doesn't work.  Everything looks great until you try to access a CD or DVD, and then the driver reports that its database in the Registry is corrupt.  You can uninstall the driver and try again, but you get the same results.

Now that the preliminaries are out of the way, let me get to my impressions.  I’ll start with something a little negative: the duality of the desktop and Metro versions of IE.  You lose a lot with Metro IE, particularly if there are add-ons you like.  I use Web of Trust, for example, and that is out of the question with the Metro browser.  And it isn’t clear which settings are shared between the desktop and Metro IEs.  There is no solid information, for example, that the Tracking Protection Lists (TPLs) you set up on the desktop (there is no way to set them up in Metro IE) are actually enforced by Metro IE.  Certainly you can find other settings on the desktop that aren’t enforced by Metro IE.  In addition, Metro IE just isn’t as easy to use with a mouse (tabs being an example) as desktop IE is.  So I’ve decided to use only desktop IE on a desktop/notebook system.  This is an easy settings change, made (of course) in the desktop IE settings.  Now whenever I click on an IE-related Tile, or some software launches IE, it is always desktop IE.  I’ll reserve my use of Metro IE for Tablets.

Otherwise, on my production system I found a very interesting thing: I often forgot I was using Windows 8!  Sure enough, most of the applications I run have desktop shortcuts.  Alt-Tab remains a good way to cycle through apps and browser tabs.  So I would easily go an hour or two before needing to “Start”.  And most of the times I did go to the Start Screen I would operate just as I did on Windows 7.  That is, I would click on Start and then start typing to search for something.  As I mentioned in an earlier posting, I had already abandoned navigating the Start Menu hierarchy in Windows 7 except as a last resort, so I don’t miss it.  The only time I would really get into navigating the Start Screen is when I wanted to scan through the new Metro apps.

Also, as mentioned in a previous posting, I tend towards running windows maximized, so the full-screen nature of Metro apps is fine with me.  I easily mixed Metro and Desktop apps in my usage, again with Alt-Tab as my friend.  The upgraded production system doesn’t have a screen capable of supporting snapped applications, and I did miss this feature.  On the system I’m typing this blog entry on I have “Bing Daily” snapped to the side so I can see all the latest news.  A snapped Metro app next to a desktop IE page; not bad.  I’ll never buy new hardware that doesn’t support the 1366 x 768 resolution required for snapping to work, and the second home’s “House PC” will be replaced once Windows 8 ships.

I have learned some of the keyboard shortcuts that make use of the desktop more pleasant in Windows 8 (by letting you skip extra round trips through the Start Screen).  I’d never really gotten into keyboard shortcuts (other than Alt-Tab) before, but now that I’m using a few with Windows 8 I wish I’d made heavier use of them in earlier versions of Windows and applications.  So I’m torn between thinking of the new shortcuts in Windows 8 as a cool new feature and as a necessary accommodation.  To be fair, I give the nod to the latter interpretation.

The Start Screen itself, a rather pleasant experience on a Tablet, is just tolerable on a Desktop or Notebook.  The core of the problem is that it is meant to be the real starting point for your interactions with the PC, but that is not the desktop model of the world.  On a Tablet you live in the Start Screen (which is why Live Tiles are useful).  But traditionally on a PC you lived in the Desktop and used Start only to find and launch an application when there was no shortcut for it.  With Windows 8, if you are primarily using Desktop apps then you will prefer to maintain the “live in the desktop, use Start only when needed” model, in which case the switch to the Start Screen is a jarring and unpleasant experience.  I believe this is how most non-Tablet users will experience Windows 8 for the first year or two of its existence.  As time goes on, and the Metro app library becomes enormous and enormously interesting, even non-Tablet users will choose to live in the Start Screen and use the desktop solely as a container for running “legacy” desktop apps.  But I’ve now concluded I won’t get there for a while.

I’ve hit a few bugs besides the driver issue along the way.  The worst of these was when (as I later figured out) my system apparently became disconnected from my Microsoft Account.  Apps that relied on the Microsoft Account started giving me weird errors (essentially “App X isn’t working”) with no clue as to what the problem was.  To force things to reconnect you can switch your account type back to local, then back to Microsoft Account, which forces a new login and wires everything up properly again.  Microsoft must fix this problem (whatever it actually is) before RTM or risk discrediting one of the major advances in Windows 8.

The pre-installed apps are quite immature right now.  I could never use Mail to replace my direct usage of the Hotmail website, or of Outlook; it just lacks too many features.  I love the idea of the Bing-based apps, and of Bing’s strategy of using Search to build apps instead of Search for Search’s sake.  But again, these apps are still primitive.

There are two bottom lines for me.  The first is that I rather like Windows 8 and enjoy it whenever I go to do something new with it.  I am really looking forward to new Windows 8 hardware, particularly a Tablet that can replace my iPad.  The second is that when I use Windows 8 on a Desktop/Notebook system I’m going to continue to optimize my usage around the Desktop model rather than force a Tablet usage model onto those systems.  At least for a while.

 


Anatomy of a startup – PredictableIT Part I

Back in 2004 a friend had told me about moving a small VC firm and one of its portfolio companies to a hosted environment (at a Managed Service Provider, which was another of the portfolio companies).  Not just servers, mind you; they had locked down all their PCs (turning them into Thin Clients) and were using Microsoft’s Terminal Services offering to run all client software in the data center.  We quickly decided that it would be interesting to take this model broader and formed a company, PredictableIT, to do so for Small Businesses.  Initially our thought was that this would be a modest-volume, income-producing business, not your everyday attempt to change the world and create the next Microsoft 🙂  Basically we would take what he had done on a consulting basis and productize it.  We’d self-fund the operation and try a business model that all but eliminated fixed costs.  For example, we’d have no employees and would rely on contractors and other third parties for all services.  We’d use the Managed Service Provider, who would rent us machines, rather than own any hardware (other than some development/test systems).  We’d build a website and some tools to help with marketing, sign-up, and deployment.  Of course, just as few battle plans survive the first engagement with the enemy, so too do few initial business plans survive their engagement with the real world.  And so this is our story.

By early 2005 we’d started PredictableIT and done all the initial things one needs to do.  We were an LLC, had a bank account, had made the first capital infusion, and had found office space in an Executive Office setup.  More importantly, while my partner remained in California during the first couple of months of operation, he soon moved to Colorado so we could spend 60+ hours a week sharing an office and getting on each other’s nerves.  Just kidding about the “getting on each other’s nerves” part; we’d worked together before and knew we could handle so much togetherness.  Most importantly, we’d convinced the two companies already using the consulting-based solution to transfer it to us and become our first customers, so we were up and running pretty much on day one.  We had even arranged it so that they would continue to bear the full cost of the Managed Service Provider’s systems until we started offering the service to other customers.  At that point they would stop paying for the systems and start paying us based on our standard monthly per-user rate, which would greatly reduce their costs.  Not only that, but while we absorbed the overhead of supporting the current systems, any custom work required was done on a Time and Materials basis.  We weren’t paying ourselves for this work, so effectively the T&M work helped fund the company.  This would turn out to be a mixed blessing, as the distraction it caused led to delays in creating our real service offering.  But for the time being we had a very modest cost structure, with our biggest expense being a contractor to build our website.

The offering we were creating seems pretty simple at first glance.  Users would replace their PCs with Thin Clients (or lock down their PCs to operate as thin clients) and use Terminal Services to run client software on our servers.  We would also operate a Hosted Exchange mail system for them.  But as you look at this environment it quickly becomes apparent how complex it is.

Take Exchange.  It did not include an anti-spam system at the time, and the third-party software available was both expensive and not geared to multi-tenant environments.  Moreover, Exchange itself did not fully support multi-tenant environments at the time; it could be made to do so primarily via hacks that were extremely fragile.  So we had to put together and manage a SpamAssassin-based anti-spam system in front of Exchange.  Initially we manually provisioned Exchange for each new customer, but realized immediately that this was so fragile that we needed to automate the process.  The same was true for the directory structure on our Terminal Servers.  Misconfiguration exposed one company’s data to another, and so automation was required to prevent operational error!

Likewise, at the time Anti-Spyware was in its infancy and the Anti-Spyware and Anti-Virus software categories were separate.  Our Managed Service Provider included Symantec Anti-Virus software in their offering, but not any Anti-Spyware.  Even though Symantec added Anti-Spyware support in a newer version, the Managed Service Provider didn’t upgrade.  Since we were the only customer who really cared about this capability they prioritized the upgrade very low, so low that it never happened during the life of PredictableIT.  The result was that I had to build (and operate) our own Anti-Spyware solution using a variety of half-solutions.  Why didn’t we just buy an existing Anti-Spyware solution?  Well, there is a business plan issue there that I’ll get to in Part III.  Not that there were many choices that clearly claimed support for Windows Terminal Services!  Indeed, we spent a lot of time dealing with the fact that much third-party software didn’t work in a Terminal Services environment, something else I’ll cover in a future installment.  And keep in mind that back in the Windows Server 2003 days Terminal Services was an incomplete solution, with most users opting to license Citrix in order to complete it.  Even simple things like handling remote printers didn’t really work in Terminal Services!

And then there was the Managed Service Provider.  I already mentioned that they didn’t provide Anti-Spyware, but that is just where the weaknesses began.  Their patching policy, for example, was unacceptable for our environment.  They would take weeks after Microsoft released a patch to deploy it to their systems, and then they would apply only the most critical fixes.  Because we had lots of client users across multiple companies using our systems, we needed them patched rapidly and completely.  I had to take on the testing and deployment of Microsoft (and other) patches myself rather than leaving it to the Managed Service Provider.
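To give a flavor of what that provisioning automation had to cover, here is a minimal sketch of the per-customer setup on a Terminal Server.  It is in Python purely for readability; our real scripts targeted Windows Server 2003 with the tooling of the day, and every name here (the root path, the subdirectory layout, the domain and group names, the customer) is hypothetical.  The icacls commands are only printed, to illustrate the key idea: strip inherited permissions and grant access solely to that customer’s security group, so a mistyped path can’t expose one company’s data to another.

```python
import os

BASE = r"D:\Tenants"                        # hypothetical root on a Terminal Server
SUBDIRS = ["Home", "Shared", "Profiles"]    # hypothetical per-customer layout


def provision_tenant(tenant: str, domain: str = "HOSTEDDOM") -> list[str]:
    """Create the tenant's directory tree and return the ACL commands an
    operator (or the real automation) would run to isolate it."""
    commands = []
    for sub in SUBDIRS:
        path = os.path.join(BASE, tenant, sub)
        os.makedirs(path, exist_ok=True)
        # Strip inherited permissions and grant access only to the tenant's own
        # security group (plus our admins).  The icacls syntax is illustrative;
        # the 2004-era tooling was different.
        commands.append(
            f'icacls "{path}" /inheritance:r '
            f'/grant "{domain}\\{tenant}_Users:(OI)(CI)M" '
            f'/grant "{domain}\\HostingAdmins:(OI)(CI)F"'
        )
    return commands


if __name__ == "__main__":
    for cmd in provision_tenant("ContosoVC"):   # hypothetical customer name
        print(cmd)
```

The Exchange side needed the same treatment, and as noted above that was the piece whose manual provisioning proved most fragile.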

While the basic idea of the service was quite simple, the details of making it work in a way that could easily be resold to many users, and be supportable in that environment, were actually daunting.  We inherited a Phase 0 operating environment out of the consulting project, then spent a great deal of effort creating the Phase 1 operating environment we could use to take on and support other customers.  In fact, when I think about how I spent my time over the course of the PredictableIT experience, I would say that over 50% was dedicated to creating and actually running the operating environment.  My title was President, but I now think I was more VP of IT Operations!  My partner was CEO, yet he too spent a lot of time on the operations side (e.g., he set up and maintained SpamAssassin and a few other things we ran on Linux while I handled the Windows systems).
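Since the SpamAssassin layer keeps coming up, here is an equally hypothetical sketch of its core scoring step.  Our gateway in front of Exchange was not written in Python and the surrounding relay logic is omitted; this just shows the kind of check such a front end performs by handing each message to SpamAssassin’s spamc client and reading back the score.

```python
import subprocess
import sys


def is_spam(raw_message: bytes) -> tuple[bool, str]:
    """Return (spam?, "score/threshold") for one RFC 822 message."""
    # spamc -c asks the local spamd daemon only for a verdict: stdout is the
    # "score/threshold" string and the exit code is 1 when the message is spam.
    result = subprocess.run(
        ["spamc", "-c"],            # requires spamd running locally
        input=raw_message,
        capture_output=True,
    )
    verdict = result.stdout.decode(errors="replace").strip()   # e.g. "7.2/5.0"
    return result.returncode == 1, verdict


if __name__ == "__main__":
    message = sys.stdin.buffer.read()    # pipe a saved message in to try it out
    spam, score = is_spam(message)
    # A real gateway would tag or quarantine here before relaying to Exchange.
    print(("SPAM " if spam else "HAM  ") + score)
```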

Thinking back on other time commitments is pretty revealing overall.  Keep in mind that these things are bursty, so there were times when both my partner and I were 90% working on the business plan, or 90% working on testing, or 90% working on some marketing experiments, etc.  But on an average basis I was 50% operations, 10% T&M, 20% Program Manager, 10% Test, 5% Dev, and 5% everything else.  My partner took the brunt of the T&M work, and I suspect it turned into 30% of his time.  The rest of his time was probably 10% operations, 30% Program Manager, 10% Test, and 20% everything else.  If you see problems with these percentages then so do I, but I’ll leave that for the end of the series.

Having real customers and real customer problems taught us a lot in those first few months.  We realized that in order to take on more than a very few additional customers we’d have to automate a lot of things.  And as we thought this through we also realized that once automated we’d be able to take on vast numbers of customers.  As we plugged this into our business model and saw the potential economies of scale we got stars, or rather Clouds, in our eyes.  In other words, the work required on our part to handle a few dozen clients was not much different from the work required to handle thousands of clients.  So we changed our focus.  We were going to go big.

In Part II I’ll explore the system and service we actually built, and in Part III I’ll cover the business side and offer a retrospective on why we shuttered the company.

 
