WinFS, Integrated/Unified Storage, and Microsoft – Part 3

Although there are several ways to interpret the phrase “integrated storage” (or “unified storage”), one of the most important to focus on is that it creates a single store for Unstructured, Semi-Structured, and Structured types of storage.  The differences between these storage types, often seemingly small, are at the core of the technical, engineering, and political challenges involved in creating a new store.  So before diving into the history of Microsoft’s efforts it is valuable to discuss these three types of storage.

Unstructured Storage, the classic storage provided by operating system file systems, is something I’ve already discussed quite a bit in the previous parts of this series, but I want to add more clarity here.  File Systems historically treat files as a bag of bits which can only be interpreted by an application.  They concern themselves with making it very fast to open a file, allocate space to it, stream bits to and from the file, and navigate to specific starting points in the file for streaming.  They also pay a lot of attention to maintaining the integrity of the storage device on which the file resides, and to providing certain very specific behaviors upon which an application (which might include a DBMS) can build more robust integrity.

The developers of File Systems tend to rebel against changes that violate the basic Unstructured Storage premises.  They want a very restricted fixed set of metadata about a file so they can make File Open very fast.  They don’t want to introduce concepts that require a lot of processing in the path of moving data between a raw storage device and the application (or network stack in the case of TransmitFile).  They don’t want to introduce complexity into kernel mode that risks the overall reliability of the operating system.  And they pay a huge amount of attention to the overall integrity of a “volume” and what happens when you move it between computer systems.

It isn’t that File System developers haven’t responded to pressures for richer file systems, it is that they have done so in very careful and precise ways that mirror their core mission.  At DEC, for example, they introduced Record Management Services (RMS) to add some measure of structure on top of the core file system.  RMS turns a bag of bits into a collection of records of bits.  In the case of keyed access, a set of bits within each record could be identified as a key, which was then indexed to allow retrieval by key.  But once a record was retrieved the application was responsible for interpreting its contents.  Importantly, RMS existed as a layer on top of the core file system and didn’t run in kernel mode.

At Microsoft you can see numerous ways that the File System team tried to accommodate greater richness in the file system without perverting the core file system concepts.  For example, the need for making metadata dynamic or adding some of the things that the Semi-Structured Storage world needs was met by adding a secondary stream capability to files.  That is, the traditional concept of a file was that you had a single series of allocation units pointed to by a catalog entry.  NTFS gained the ability to have that catalog entry point to more than one stream of allocation units.  The primary stream represented the file as we normally know it.  An application could attach another stream to the file to hold whatever it wanted.  The file system guys really didn’t care, and they didn’t interpret the stream.  So this was a very natural extension.  They also created File System Filters as a means to allow extensions to the file system without modifying the core file system itself.
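
To make the secondary stream idea concrete, here is a minimal sketch.  It assumes Windows with an NTFS volume, where the “filename:streamname” path syntax addresses an alternate stream; the file and property names are purely illustrative.

```python
# A minimal sketch of a secondary (alternate) data stream on NTFS.
# Assumes Windows and an NTFS volume; "file:stream" is NTFS path syntax,
# and the names here are purely illustrative.

# The primary stream -- the file as applications normally see it.
with open("report.doc", "wb") as f:
    f.write(b"...the application's own opaque bytes...")

# A secondary stream attached to the same file; the file system stores it
# but does not interpret it.
with open("report.doc:properties", "w") as f:
    f.write("Author=Jane Doe\nCaseNumber=2013-0142\n")

# Legacy applications reading the primary stream are unaffected.
with open("report.doc:properties") as f:
    print(f.read())
```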

From an engineering and political standpoint you can see what might happen when you start discussing replacing something like NTFS with an Integrated Storage solution like WinFS.  How does it impact the boot path of the operating system?  How does it impact the reliability of the operating system?  What happens to scenarios like Web Servers or Network File Servers, which serve up bags of bits using standardized protocols, and are evaluated by benchmarks, and against competition, that will neither benefit from nor suffer the cost of a richer file system?  How would the new file system impact minimum system requirements?  Does the namespace cross multiple volumes?  How would that impact the portability of volumes?  All very good questions that need to be addressed.

The natural progression would be to talk about semi-structured storage next, but since it is the youngest of the storage types I’ll first focus on Structured Storage.  While the file system guys have always treated files as a bag of bits, applications need some way of interpreting those bits.  That knowledge can be completely encapsulated in the application itself, or parts of it can be shared.  One of the earliest motivators of the library mechanisms we find in programming languages today was as a way to share the definitions of how to interpret the contents of a file.  COBOL’s Copy statement was a prime example.  Data Dictionaries, and their modern evolution into the Repository, carried this concept further.  To commercial data processing applications, as opposed to technical/scientific ones, a file was a collection of records each of which adhered to a specific format.  That format information was shared across any application that desired to process the file.  So you had a customer file with customer records.  Each record was xxx bytes long.  The first two bytes contained an integer Customer ID, the next 30 bytes had a Customer Name, etc.
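
As a purely illustrative sketch of such a shared record definition, here is what decoding that kind of fixed-format customer record might look like; the field sizes (a 2-byte integer ID and a 30-byte name) are just the ones used in the example above.

```python
import struct

# Illustrative fixed-format "customer record": a 2-byte integer Customer ID
# followed by a 30-byte Customer Name (sizes taken from the example above).
CUSTOMER_RECORD = struct.Struct("<H30s")

def read_customers(path):
    """Yield (id, name) from a file of fixed-length customer records."""
    with open(path, "rb") as f:
        while True:
            raw = f.read(CUSTOMER_RECORD.size)
            if len(raw) < CUSTOMER_RECORD.size:
                break
            cust_id, name = CUSTOMER_RECORD.unpack(raw)
            yield cust_id, name.rstrip(b" ").decode("ascii", errors="replace")
```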

Pretty soon this evolved to deal with the fact that apps didn’t process one file with one record type.  You had orders, and order line items, and parts, and the bill of materials for those parts, and inventory information, and the customer, and customer contact information, and so on.  You needed to manage and share these as collections.  Then notions of cross-file integrity entered the picture and transactions, logging/recovery, etc. were added.  And there was recognition that not only didn’t apps care about the physical structure of the “files”, but putting that knowledge in apps made it hard to evolve them.  So separation of logical file and physical file ensued.  And making every app responsible for the integrity of the data led to logical data integrity problems, so the ability to pull some of that responsibility into what is now called a database management system was added.  And application backlogs became a key problem so there was a push for reporting and query tools that allowed non-programmers to make use of the data collection.  And high-productivity “4GL” development tools to allow lower-expertise programmers to write applications.  And this all led to the modern concept of a relational database management system.

So when we talk about Structured Storage we are talking about the classic database management concepts.  We’ve replaced Files/Records with Tables/Rows.  Each table has a well known logical structure that each row in the table conforms to.  There are good mechanisms for making tables extensible, such as adding a new data element (column) that is “null” in rows in which no value has been specified.  And a relational database by its nature transforms tables into other tables so we can actually have virtual table definitions (or views) that applications use.  But basically we are talking about groups of things with well known, externally described, structure.
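
As a small sketch of those ideas, here is what they look like in SQLite (via Python’s built-in sqlite3 module); the table and column names are purely illustrative.

```python
import sqlite3

# A small illustration of tables/rows, nullable columns, and views using sqlite3.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Customer (CustomerID INTEGER PRIMARY KEY, Name TEXT)")
con.execute("INSERT INTO Customer (Name) VALUES ('Aunt Jean')")

# Extensibility: a newly added column is simply NULL in rows where no value
# has been specified.
con.execute("ALTER TABLE Customer ADD COLUMN Region TEXT")

# A view is a virtual table definition that applications can query like a table.
con.execute("CREATE VIEW CustomerNames AS SELECT CustomerID, Name FROM Customer")
print(con.execute("SELECT * FROM CustomerNames").fetchall())
```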

Most of the world of commerce we are used to was made possible by the creation and growth of the concept of Structured Storage.  The modern world of Credit Cards and ATMs is 100% predicated on this work.  Amazon.com was in the realm of science fiction in the 1940s.  By the 1970s the conceptual basis for everything you needed to create it was in place.  It took until the 1990s for those concepts to mature sufficiently to let Amazon happen.  For structured storage we had database management system concepts and (hierarchical and network) implementations appear in the 1960s.  Ted Codd described the relational model in 1969, and during the 1970s the System R and Ingres projects explored how to implement his model.  They also defined most of the integrity concepts we take for granted today such as ACID.  But it wasn’t until the late 1980s that relational database management systems, which found their earliest adoption in “decision support”, became suitable for transaction processing.   And it was the 90s by the time they were the preferred solution for high performance transaction processing.

Moreover, it wasn’t until the late 90s that developers in all application areas embraced relational database management systems.  In fact, in the mid-90s most applications that weren’t clearly in the commercial data processing camp preferred to use unstructured storage even when they were storing structured data.  Today we have smartphone applications using SQLite (and other small relational systems) as a primary means of storage.  My how Structured Storage has evolved.

During the commercialization of relational database management systems (RDBMS) in the 1980s it was recognized that not all data you’d want to store in them was actually structured.  During the development of DEC’s Rdb products Jim Starkey invented the concept of a BLOB (Binary Large Object) as a way to store this data, a concept that was embraced by virtually all RDBMS.  The simple idea here was that you could do something like store an employee’s picture in a blob that was logically inside the employee’s row in the Employee table.  Other ideas quickly developed, such as a document management system with the documents stored in blobs.  But blobs were rather weakly implemented and received minimal attention from RDBMS development groups.  This will play an important role in our later exploration.
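
To make the BLOB idea concrete, here is a tiny hedged sketch (again using sqlite3, with hypothetical file and table names): the picture travels with the row, but to the database it is just uninterpreted bytes.

```python
import sqlite3

# Sketch: an employee photo stored "logically inside" the employee's row as a BLOB.
# The database stores and returns the bytes but attaches no semantics to them.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Employee (EmployeeID INTEGER PRIMARY KEY, Name TEXT, Photo BLOB)")

with open("jane.jpg", "rb") as f:  # hypothetical image file
    con.execute("INSERT INTO Employee (Name, Photo) VALUES (?, ?)", ("Jane Doe", f.read()))

# To edit the photo with a file-based tool you'd have to copy the bytes back out
# to the file system, and then copy them back in afterwards.
photo_bytes, = con.execute("SELECT Photo FROM Employee WHERE Name = 'Jane Doe'").fetchone()
```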

Meanwhile a third category of storage had emerged, primarily out of the Information Worker environment, called Semi-Structured Storage.  I like to think about this as having two periods of evolution.  In the first, files remained a bag of bits whose internal structure was private to an application but that also carried around a set of public metadata.  In the second, the internal structure was exposed to any application, though they might not be able to actually operate on it.  The latter is the world brought about by XML and I’ll discuss that a bit later.

So what are examples of Semi-Structured Storage?  A Microsoft Word document is one.  Forget that today Word documents are stored as XML using the Open XML standard; they used to be a fully proprietary binary format.  But they exposed metadata such as Title, Author, etc. as a group of Properties known as a Property Bag.  In other words, they promoted certain information from their private format to a publicly accessible one.  Email is another example of something in which there is the content of the message and then a set of metadata about the message.  Who sent it, who was it sent to, what is its Read/Unread status, etc.  For something non-IW think about JPEG files.  There is the image and then there is a set of properties about the image.  Things like the camera it was taken with, GPS coordinates, etc.  Applications, including Outlook or the Windows Shell, can make use of these Property Bags without having the ability to interpret the contents of the file itself.

One of the characteristics of a Property Bag is that new properties can be added rather arbitrarily.  A law firm might create a “CaseNumber” property that it requires employees to tag all Word documents with.  Or Nikon could add properties about photos taken with its cameras to a JPEG image that the standard doesn’t define and that no app other than its own could make sense of.  But it’s not just top-level organizations that can define properties, anyone can.  So the PR department can define a property for its documents such as “ApprovedForRelease” with values such as “Draft” or “Pending” or “Approved”.  Or an individual could define a property such as “LookAtLater” for email messages.
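
The underlying idea is simple enough that a tiny sketch captures it: a file’s opaque content travels with a bag of named values, and anyone can add to the bag.  The property names below are just the illustrative ones from this post.

```python
# A property bag is just a set of named values that travels with the file's content.
document = {
    "content": b"...opaque, application-private bytes...",
    "properties": {
        "Title": "Q3 Litigation Summary",
        "Author": "Jane Doe",
        "CaseNumber": "2013-0142",        # a law firm's own property
        "ApprovedForRelease": "Pending",  # a PR department's property
        "LookAtLater": True,              # an individual's property
    },
}

# Any application can read the promoted properties without being able to
# interpret the content itself.
for name, value in document["properties"].items():
    print(name, "=", value)
```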

The notion of a Property Bag seems easy enough and painless enough to understand, but it clashes with the world of Structured Storage.  How does arbitrary definition of metadata clash with a world in which schema evolution is (mostly) tightly controlled?  Do you add a column to a table every time someone specifies a new property?  If two people create properties with the same name are they the same property?  If a table with thousands of columns, all of which are Null 99.99% of the time, seems unwieldy then what is an alternate storage structure?  And can you make it perform?
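
One common answer to the thousands-of-mostly-NULL-columns problem is a name/value (entity-attribute-value) side table.  The sketch below shows the shape of that alternative; it is not a claim about how WinFS or any other Microsoft store actually laid out its properties.

```python
import sqlite3

# An entity-attribute-value (name/value) table as an alternative to a table
# with thousands of mostly-NULL columns.  Purely illustrative schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Item (ItemID INTEGER PRIMARY KEY, FileName TEXT)")
con.execute("""CREATE TABLE ItemProperty (
                 ItemID INTEGER REFERENCES Item(ItemID),
                 Name   TEXT,
                 Value  TEXT,
                 PRIMARY KEY (ItemID, Name))""")

con.execute("INSERT INTO Item (ItemID, FileName) VALUES (1, 'brief.doc')")
con.execute("INSERT INTO ItemProperty VALUES (1, 'CaseNumber', '2013-0142')")
con.execute("INSERT INTO ItemProperty VALUES (1, 'ApprovedForRelease', 'Draft')")

# A structured-storage-style query over arbitrarily defined properties.
rows = con.execute("""SELECT i.FileName
                      FROM Item i JOIN ItemProperty p USING (ItemID)
                      WHERE p.Name = 'CaseNumber' AND p.Value = '2013-0142'""").fetchall()
print(rows)
```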

XML didn’t exist until 1998, so when I start talking about Microsoft’s Integrated Storage history it is important to note that it didn’t play a role in the first two major attempts at a solution (OFS and JAWS).  Prior to XML it was assumed that either a file was explicitly a semi-structured storage type (with a Property Bag, stored in a secondary stream for example) or implicitly one because an application-provided content filter (IFilter) could extract the Property Bag from a proprietary bag of bits.  In either case the application controlled the set of properties that were externalized.  With XML though anyone can examine and process the content of the file, making arbitrary structured storage-like queries possible.  The world of semi-structured storage exploded.

There are numerous ways one can combine these three views of storage.  BLOBs were an early attempt to address use cases where unstructured storage was needed in an application that was based on structured storage.  My “ah ha” moment around the importance of XML came during a customer visit and involved a favorite (from the earliest days of my career) application, Insurance Claims Processing.

During the waning days of SQL Server 7.0 Adam Bosworth approached me about this new industry effort, XML, that he and his team were driving.  XML as an interchange effort made a lot of sense, but as a database guy I was a skeptic on using it to store data.  So I set up a series of customer visits to early adopters of XML.  One customer was using it in an insurance claims processing app to address an age-old problem.  The claims processing guys were evolving their application extremely rapidly, much more rapidly than the Database Administration department could evolve the corporate schema.  So what they would do is store new artifacts as XML in a BLOB they’d gotten the DBAs to give them and have their apps work on the XML.  As soon as the DBAs formalized the storage for an artifact in the corporate schema they would migrate that part out of the XML.  This way they could move as fast as they wanted to meet business needs, but still be good corporate citizens (and share data corporate-wide) when the rest of the organization was ready.
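
A hedged sketch of that pattern, with an invented schema (the customer’s actual schema is obviously not something I’m reproducing here): formalized data lives in real columns, while everything still in flux rides along as XML that the application interprets itself.

```python
import sqlite3
import xml.etree.ElementTree as ET

# Formalized claim data lives in real columns; not-yet-formalized artifacts ride
# along as XML in a single column the application parses itself.  Invented schema.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Claim (ClaimID INTEGER PRIMARY KEY, PolicyNo TEXT, Extras TEXT)")

extras = "<claim><towCharge>125.00</towCharge><rentalDays>4</rentalDays></claim>"
con.execute("INSERT INTO Claim VALUES (1, 'P-100234', ?)", (extras,))

# The claims app works directly against the XML until the DBAs catch up,
# at which point a field like towCharge would migrate to a real column.
xml_text, = con.execute("SELECT Extras FROM Claim WHERE ClaimID = 1").fetchone()
print(ET.fromstring(xml_text).findtext("towCharge"))
```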

I returned from that trip convinced we had to add formal support for XML in SQL Server 2000.  So convinced that I encouraged my boss to bring Adam into the SQL organization and combine his efforts with others to create the Webdata org.  And, in a move that caused some consternation with the rest of the Server team, let the Webdata team make changes to the relational server code base.  And so, independent of (though actually very much in line with) integrated storage thinking, SQL Server was on its way into semi-structured storage.  Something I’ll return to in Part 4.

The existence of three types of storage, three sets of often conflicting requirements, three (or more) shipping product streams with different schedules, three classes of experts who deeply understood their type of storage but not the other two, and three organizational centers of activity for those types of storage would make trying to create an Integrated Storage solution a continuing challenge.  It actually gets worse though, in that various efforts which weren’t specifically under the storage or integrated storage umbrellas had deep overlap with storage.  Hailstorm is one example, and it seemed like everyone in Microsoft had their own sync/replication service.  What was different about WinFS is that most of these barriers, including the organization structure, were addressed.  And the failure to deliver an Integrated Storage File System when the conditions were as close to ideal as they’ll ever be is why the concept will probably never be realized.  Meanwhile the world of storage has moved on in interesting ways.

In the next part of this series I’ll go through the actual history of Microsoft’s efforts.  Depending on its length I’ll either wrap up there with thoughts about the future or finish up with a fifth part.

 


WinFS, Integrated/Unified Storage, and Microsoft – Part 2

Hopefully Part 1 gave you some idea of the scenarios that Integrated Storage was intended to address and why you would want support in the storage system to help address them.  And yes, I barely scratched the surface of what one could imagine being possible if you did have that support.  I know many people just want the dirt…I mean history…behind Microsoft’s Integrated Storage efforts, but you are going to have to wait for Part 4 before I get to that.  In this part I wanted to discuss some of the real challenges in creating Integrated Storage.  Basically I want to explain why it is such a difficult nut to crack.

Which came first, the chicken or the egg?  This classic question drives a lot of the innovation problems in technology, particularly platform technology, and plays a huge role in trying to come up with an Integrated Storage strategy.  Ok, let’s use a couple of different and perhaps even more appropriate sayings:  “Build it and they will come” or  (to paraphrase) “Suppose you built a storage system and nobody used it?”  These questions dominate any discussion about how to bring the concept of Integrated Storage to reality.  Microsoft thought it had the answers, part of which was that you make it a (or rather THE) file system.

Creating Integrated Storage as a file system has both psychological and practical purposes.  It declares that Integrated Storage is the primary store for the platform, which is important for attracting developer interest.  It creates a commitment, to applications that would build on Integrated Storage, that the store will always be present on the platform.  Maybe even more importantly, it allows other platform components to use the new store.  And (as envisioned by Microsoft at least) it creates a means by which applications that don’t explicitly know anything about Integrated Storage can still manipulate the artifacts in the store.

Before I get into talking about file systems in more detail let me tie this back to one of my scenarios.  By the start of the 21st century it was clear that Photos was the next “killer app” for PCs.  It was also clear that traditional file systems were totally not up to the task of being an organizing tool for Photos.  Third party products like ThumbsPlus and ACDSee had appeared to fill the void.  If Photos were going to become such a critical data type then you needed to make them first class citizens in your platform.  So you wanted Windows (and particularly Windows Explorer aka Windows File Explorer) to provide a full out-of-box photo organization and basic manipulation experience.  To do that would require capabilities not present in the traditional file system.  But unless your Integrated Storage solution was part of the platform then components like Windows Explorer couldn’t rely on it and couldn’t provide a great OOBE for photos.

The file systems we use today, across all operating systems, are (externally) no different from the ones I used in the 1970s and that had their origins in the 1960s.  A file is a set of allocation units on a storage medium that externally is just a bag of bits (or blocks) without structure, without a name, and without any real way to navigate to it.  External to the data structures that deal with allocations and the basic concept of a container is a catalog structure that exposes a name and navigation (directory/file a.k.a. folder/file) system to users and applications.  At the leaf nodes of the catalog there are pointers to the allocation system’s container.  So applications (including something like Windows Explorer) use one set of APIs to navigate the catalog and then use another set to manipulate the bag of bits (or stream) they find at the other end.  Internally we’ve made lots of advances in how to organize and maintain the allocation units.  Long gone are the days when files had to be contiguous, for example.  But to an end-user or application, outside the switch to long file names, I’m hard pressed to describe any significant changes in the last 40 years.

File system stability has both up and down sides.  The upside is that every application knows how to deal with a traditional concept of file.  That’s the downside too.  So take our photo example.  You don’t need to implement Integrated Storage as a file system in order for Windows Explorer to be able to provide a great organizing experience for it.  But what happens when the user wants to run Adobe Photoshop to edit the photo?  You could evangelize Adobe to support the new store through a new (non-file oriented) API, but even if successful that doesn’t help until the user buys a new version of Photoshop.  From their perspective if the photos aren’t stored in the file system, and specifically a file system accessed with existing Win32 APIs, you’ve broken their application.  This same scenario applies to Microsoft Word.

New versions of Word might support a new Integrated Storage-based document store, but forcing purchase of a new version of Word in order to access documents in the store meant dramatically slower (if not nonexistent) adoption.  Thinking about a worst case scenario where a customer had a dozen apps, any one app’s failure to support Integrated Storage could have prevented the customer from making any use of Integrated Storage.

So from the earliest discussions I recall Integrated Storage was always a new, Win32-compatible, file system.  Accessing new functionality would be done by a new API, but you always had to be able to expose traditional file artifacts in a way that a legacy Win32 app could manipulate them.  Double-click on a photo in an Integrated Storage-based Windows Explorer and it had to be able to launch a copy of Photoshop that didn’t know about Integrated Storage.  And since that version of Photoshop didn’t know about Integrated Storage it also couldn’t update metadata in the store, it could just make changes to the properties inside the JPEG file.  So when it closed the file Integrated Storage had to look inside the file and promote any JPEG properties that had been changed into the external metadata it maintained about the object.

Much of the complexity of Microsoft’s attempts at delivering Integrated Storage is owed to all this legacy support.  Property promotion and demotion (e.g., if you changed something in the external metadata it might have to be pushed down into the legacy file format) was one nightmare that wasn’t a conceptual requirement of Integrated Storage but was a practical one.  Dealing with Win32 file access details was another.
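
A conceptual sketch of the promotion/demotion problem follows.  It assumes hypothetical helpers read_embedded_properties()/write_embedded_properties() that understand a particular file format’s internal property layout (EXIF in a JPEG, say); this is only the shape of the problem, not WinFS code.

```python
# Conceptual sketch of property promotion and demotion.  The "store" here is just
# a dict mapping file paths to external metadata; the embedded-property reader and
# writer are hypothetical stand-ins for format-specific code (e.g., JPEG EXIF).

def promote(path, store, read_embedded_properties):
    """A legacy app edited the file directly: copy changed embedded properties
    up into the store's external metadata so queries stay accurate."""
    embedded = read_embedded_properties(path)
    external = store.setdefault(path, {})
    for name, value in embedded.items():
        if external.get(name) != value:
            external[name] = value

def demote(path, store, write_embedded_properties):
    """External metadata was changed through the new API: push it back down
    into the legacy file format so old applications see the change too."""
    write_embedded_properties(path, dict(store.get(path, {})))

# Tiny demo with an in-memory store and a fake reader.
store = {}
promote("photo1.jpg", store, lambda p: {"Camera": "D70", "GPS": "28.57,-80.65"})
print(store["photo1.jpg"])
```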

In the early post-OFS days one problem with making Integrated Storage a Win32 file system was the kernel/user mode transition.  An application would make a Win32 call that would end up running in kernel mode.  That would then call down into a user mode process, which itself could make a bunch of kernel mode calls to access the data.  Eventually you’d return the data back through kernel mode and back into the user mode process of the application that made the file system call.  It sounds slow.  And moreover it has the potential for deadlocks.

Another problem had to do with the optimizations Windows had made for dealing with network access to files.  For example, Windows had implemented the TransmitFile function for optimizing transmission of files from a web server by doing all the work in kernel mode.  It understood how to walk the allocation unit structure in NTFS in order to do this.  If one imposed a different or higher-level allocation structure on top of this, such as database blobs, then TransmitFile could no longer work as intended.  Dramatically reducing Windows’ ability to serve up web pages was considered a non-starter, particularly in an era when battles over web server market share were at their peak.

Even perfectly emulating all the file access capabilities of a Win32 file system would prove daunting.  A number of attempts at it demonstrated application compatibility in the high-90-percent range.  Sounds great doesn’t it?  Well, one of the applications that used a highly idiosyncratic feature that was impossible to emulate was Microsoft Word.  It didn’t really matter if you hit 99.5% app compatibility when that 1/2% miss included the single most important application in the entire portfolio!

Just to finish up with describing how difficult this problem is I’ll mention the Windows boot path.  It was clear from the earliest post-OFS days, and after considerable discussion that would be repeated with each attempt at Integrated Storage, that you couldn’t put the new store in the Windows boot path.  Certainly not initially.  Once you accept that, you can focus on when the new store loads and what facilities in Windows can take a dependency on it.  As you work through how a Windows system functions you find many cases where there are things that should be using the new store, but they have to run in environments where the new store can’t yet be run.  I went through a lot of Excedrin in those days.

Of course if everything just uses your Integrated Storage solution as a Win32 File System then you won’t get much benefit out of it.  Better search (or maybe discovery would be a better description) is one of the things you might still get, because part of the Win32 solution was the property promotion/demotion idea that I mentioned previously.  But you really want some clients that will natively use your Integrated Storage solution and take full advantage of it.  While those clients could be internal applications or customer (nee ISV) applications, having internal clients to work with is highly desirable.  Particularly if you want to establish your solution as part of the platform (that is, why would a customer rely on it if you aren’t using it yourself).  You need clients to know what tradeoffs to make in your design and implementation schedule.  Lack of real clients either delays, or completely tanks, adoption of a new service.

Finding appropriate clients to work with you on, and commit to using, a new Integrated Storage solution turns out to be a daunting task.  Their schedules, priorities, risk profiles, etc. do not necessarily match yours.  And yes, even the org structure can get in the way.    One alternative is to take the “Build it and they will come” approach.  We repeatedly considered, and rejected, that approach.  Another approach was to forget about internal clients and just work with a few close ISV partners (e.g., SAP) for the first wave of an Integrated Storage solution.  Again, considered but rejected (largely because this was a Windows platform initiative and not specifically a database product initiative).  When I get to the history you’ll see how this influenced the direction of Integrated Storage.

Also needed is a shipment vehicle.  If you want Integrated Storage to be a platform service then you need a way to ship it as part of the platform.  One can argue the definition of platform, for example Microsoft’s platform is more than just Windows.  However to achieve its vision, including having Windows use Integrated Storage internally and having ISVs be able to count on its presence on every PC and Server, you pretty much have to be part of Windows.  Alternate strategies look good on paper, and might have been acceptable as interim solutions, but in the end the goal was to build an Integrated Storage file system for Windows.

In Part 3 I’m going to talk about the different perspectives of the unstructured (File System), Semi-Structured (Office Document), and Structured (Database) worlds and how difficult it can be to marry these three world-views.  It will serve as a transitional piece that goes from explaining more of the difficulties in building an Integrated Storage solution to the history of Microsoft’s attempts at delivering a solution.


WinFS, Integrated/Unified Storage, and Microsoft – Part 1

People have been bugging me to write about Integrated Storage for some time, and with Bill Gates having just disclosed that failure to ship WinFS was his biggest product regret now seemed like a good time.  In Part 1 I’ll give a little introduction and talk about scenarios and why you’d want an Integrated (also referred to as Unified) Store.  In a future part (or parts) I’ll talk more about Microsoft’s specific history trying to tackle this problem and what I think the future holds.

To position myself in all this, of the five attempts that Microsoft made at directly attacking this problem I had a hand in three of them, as well as helping with a lot of the ancillary strategy.  My last position before leaving Microsoft the first time was as the General Manager of what became known as WinFS, so I have a lot of insight into how it started but only limited second-hand knowledge about how it ended.

I’ve noticed that a lot of people on the periphery have made comments that they never understood what WinFS or, more broadly Integrated/Unified Storage, was about.  The common thread being that anyone listening to a description came away with the impression that it was about “search”.  Now maybe that is to be expected given the simplest scenarios that people presented.  In fact, maybe Bill was most responsible for this.

When trying to express his frustration over the multiple stores situation at Microsoft Bill would use an example of “I know I saw a spreadsheet a couple of weeks ago; when I want to find it again do I look in my file system or do I look in my email?”.  Bill was trying to make multiple points with this simple example, but the primary one was not that there should be a way to search across disparate stores.  His primary frustration was that spreadsheets were stored in many different places each with their own semantics, APIs, “contracts”, management tools, and user experiences.  If you can’t solve the simple problem that Bill expressed of knowing where to look, then how can you hope to solve the problems involved in complex collaborative information worker scenarios or interoperable multi-data type enterprise applications?

So making it easier to find information was a critical goal of any of the integrated storage efforts.  By the way, this should be no surprise as the Integrated Storage efforts grew out of the vision for “Information At Your Fingertips”.  Nor should it be a surprise that Bill was focused very much on end-user scenarios given the IAYF vision and Microsoft’s background.  At the time of the first integrated storage effort, Cairo’s Object File System (OFS), Microsoft had no presence in the enterprise server or apps space.  So many scenarios that drove integrated storage were end-user scenarios.  Often those were Information Worker scenarios, but sometimes they were Consumer scenarios.

A somewhat simple set of consumer scenarios, and one that was a big focus for WinFS, was around the storage of photos.  Let’s say you are on a trip and take a bunch of photos.  You take photos at the wedding you attended, and photos of your kids at Disney World, and photos of a launch from Kennedy Space Flight Center, and some pictures late one night at the hot tub that no one but you and your spouse should see.  Now you transfer them to your computer and store them in the file system, but how can you organize them?  The file system provides very few tools for doing so.  They get stored with a meaningless file name, any given photo can be in only one place (and by default just as a collection from that download), and they have a fixed set of attributes that the file system knows about (e.g., creation date).  But you want photos that live in multiple places.  For example, you might want an album with pictures of Aunt Jean.  But you also want the pictures of Aunt Jean at the wedding to be in the wedding album.  You also want to share about 50 of the 500 photos you took (and make sure you don’t share any of the hot tub pictures).  How do you do that without copying the pictures to a separate shared location?  Maybe you want to organize photos from all visits to Disney World together, but also keep them together by broader trip.

So integrated storage is about creating a rich organizational system.  One that isn’t tied to the rigid structure of file systems but rather to the organizational principles of the domain, application, and/or user preference.  Of course you also want to be able to find photos by far richer information than a file system stores in its metadata.  Perhaps tagged by the camera it was taken with or the person who actually took the shot.  Perhaps you want to query for photos taken within 50 miles of particular GPS coordinates.  And so on.  Thus search is very important and enabling rich searches based on semantics rather than simply pattern matching is important.
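
As a small sketch of what “rich organization” buys you over the file system, here is an illustrative metadata schema (names invented for this example): the same photo lives in any number of albums without being copied, and queries run over metadata the file system never knew about.

```python
import sqlite3

# Illustrative photo metadata: albums are many-to-many, and queries can use
# properties (camera, location, sharing flag) the file system doesn't track.
con = sqlite3.connect(":memory:")
con.executescript("""
  CREATE TABLE Photo (PhotoID INTEGER PRIMARY KEY, FileName TEXT, Camera TEXT,
                      Lat REAL, Lon REAL, SharedOK INTEGER DEFAULT 0);
  CREATE TABLE Album (AlbumID INTEGER PRIMARY KEY, Name TEXT);
  CREATE TABLE AlbumPhoto (AlbumID INTEGER, PhotoID INTEGER,
                           PRIMARY KEY (AlbumID, PhotoID));
""")
con.execute("INSERT INTO Photo VALUES (1, 'dsc_0042.jpg', 'D70', 28.57, -80.65, 1)")
con.execute("INSERT INTO Album (Name) VALUES ('Wedding'), ('Aunt Jean')")

# The same photo appears in both albums without copying the file anywhere.
con.executemany("INSERT INTO AlbumPhoto VALUES (?, 1)", [(1,), (2,)])

# A query the classic file system can't answer: sharable photos taken with a given camera.
print(con.execute(
    "SELECT FileName FROM Photo WHERE Camera = 'D70' AND SharedOK = 1").fetchall())
```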

You can solve many of the problems I described for photos by putting an external metadata layer on top of the file system and using an application or library to interact with the photos instead of interacting directly with the file system.  And that is exactly how it is done without integrated storage.  This causes problems of its own, as applications typically won’t understand the layer and operate just on the filesystem underneath it.  That can make functionality that the layer purports to provide unreliable (e.g., when the application changes something about the photo which is not accurately propagated back into the external metadata store).  And with photos now stored in a data type-specific layer it is ever more difficult to implement scenarios or applications in which photos are but one data type.

Let me cross over into the enterprise app space and talk about an Insurance Claims Processing scenario.  Claims processing is interesting for a number of reasons, the key one being that it was one of the first enterprise applications to really embrace the notion of multiple data types.  When you file a claim, for a car accident for example, it goes into a traditional transactional database system.  But each claim has an associated set of artifacts such as photos of the accident scene, the police report, photos taken by the insurance adjuster, photos taken at the repair shop, witness statements, etc. that don’t neatly fit into the classic transactional database.  Yes you can store these artifacts in a database BLOB, but then they lose all semantics.  Not only that, you have to copy them out of the database into the file system so that applications that only know how to deal with the filesystem (e.g., Photoshop) can work against them.  And copy them back.  That creates enormous workflow difficulties, introduces data integrity problems, and prevents use of functionality that was embedded in the photo storage application.

The claims processing scenario is one that demonstrates where the name integrated storage came from.  What you really want is for the same store that holds your transactional structured data about a claim to hold the non-transactional semi-structured artifacts, and not just as blobs.  You want the semi-structured artifacts to expose their metadata and semantics to the application, or applications, built on that store.  As soon as you do that the ability to create richer apps, and/or use the data in complex information worker scenarios, climbs dramatically.

Rather than just using the photos as part of processing a specific claim they now become usable artifacts for risk analysis, fraud analysis, highway planning, or any number of other applications.  Data mining applications could run against them seeking patterns that weren’t captured in the transactional data.  Indeed all kinds of linkages could be made amongst the photos, police reports, etc. that just aren’t possible from the transactional data alone.

The multi-data type scenarios are huge in the information worker world and we’ve developed numerous application level technologies to deal with them.  OLE, for example, allows you to embed one Office data type within another.  ODBC started out life as a way to bring structured data into Excel.  But these application-layer solutions have significant flaws.  They basically use an import model and you generally aren’t looking at the actual data but rather at a snapshot.  And you’ve probably discovered times where it was impossible to refresh a document with current information because you didn’t have access to the location where it was stored.  Imagine submitting a settlement brief in a legal case to the judge with the numbers being out of date because of the complex series of steps from an ODBC query populating an Excel spreadsheet that is then embedded in a Word document, and somewhere along the line something didn’t update.  This could be a disaster.

Even organizing data for information worker projects is difficult.  Imagine you are building a proposal for a new business.  How do you organize and control all the artifacts amongst a set of people working on the project?  Sharepoint will do this for you, by creating another store on top of underlying stores.  Each application must understand how to work with a Sharepoint-like document management system (DMS), or the end-user must use a checkin/checkout system to copy artifacts from the DMS into the filesystem and then put them back.

How about another simple task, like setting up a video conference between a few people in your company and a few at a customer?  Contact information about your peers is stored in your company’s Exchange Server and the scheduling is done via Outlook, but your customer contacts are stored in a CRM system.  Working with the different sets of contacts can be painful, often involving cut and paste rather than seamless operation.  And this is a case where the CRM vendors actively work to integrate with Outlook.  Imagine you have a CRM system whose vendor hasn’t written a specific Outlook extension.  Where the names of common data elements aren’t the same.  And when they are the same, where the data formats for them differ.  Today we largely treat contacts as an MDM problem, with problem being the operative word.  For example, I recently noticed that one of the email addresses I have for Microsoft’s Dave Campbell is actually the email address of another of our former DEC colleagues.  Another Dave.  Some tool mistakenly merged it into my contact record for Campbell.

Finally let me give a system management scenario.  Many systems that need to combine structured (i.e., typical database data) and semi-structured/unstructured data (e.g., a photo or document) do so by having the database contain a pointer (e.g., a URI) to the unstructured data.  How do you backup and restore this data in a consistent manner?  Imagine going to repair an aircraft and finding that the diagram associated with the area you are working on is out of sync with the database that contains information on the set of changes that have been applied to that specific aircraft.  Without a storage system that can be the primary store for structured, semi-structured, and unstructured data types you always have the situation of being unable to manage the collection of data that makes up an application as a unit.

So what is Integrated Storage?  It is taking the storage concepts necessary to address these kinds of scenarios and moving them from the application layer, where each application addresses them individually, into a storage layer where they are addressed in a common way.  It is a storage system that provides rich and flexible organization, sharing, extensibility, discoverability, control, and manageability across the entire spectrum of data types that need to be stored.

At Microsoft Integrated Storage has repeatedly shown up positioned as a new file system (e.g., WinFS), which many see as a pejorative.  There are hints of why you’d want to do this at the file system level in many of my scenarios.  So I’ll start off Part II by drilling into why this is, and why it has been the pivot point on which all attempts to create an Integrated Storage system have failed.

And for those who found this section to be too much rambling I apologize.  If I were doing this as a formal paper or presentation I’d go through scenarios first in a more pure form and then get into problems with current solutions.  But this is a blog, so you get to live with stream of consciousness and my time constraints on cleaning it up.


Quick note on Surface Pro being sold out

I have no idea how many Microsoft had available yesterday but I did want to make a few observations:

  1. Microsoft Stores had been giving away Surface Pro reservation cards for the week before availability.  My local store ran out of reservations this past Wednesday or Thursday I believe.  Most likely the bulk of their inventory was thus pre-committed and there were few Surface Pros available for walk-ins.
  2. I saw one report that Best Buy had allowed reservations with purchase of a $50 gift card.  I’m not sure if that is the case, but if so it could mean that their inventory was also largely pre-committed and not available for walk-ins.
  3. Reports of dismal supplies yesterday are based on anecdotal conversations with Best Buy and Staples employees.  But it isn’t clear how much inventory Microsoft would have committed to each store (or rather, how much each of those stores would have ordered).  Performance of the Surface, or of other tablet devices, at those chains might have suggested sales rates that dictated a steady stream of a low number of devices over having large initial stocks.
  4. In my discussions with Microsoft Store employees last week I got the impression that they expected to have adequate supply on hand to meet first day walk-ins.  That suggests demand was greater than expected, though of course each store could have had fewer devices on hand than they were expecting.
  5. That microsoftstore.com sold out and doesn’t allow backorders suggests to me that considerable additional supplies are being targeted at retailers.  My thinking here is that Microsoft would be concerned that if they allowed backorders then they would be “soft”, meaning that buyers would keep checking retailers and as soon as they got their hands on the Surface Pro they would cancel the online order.   That would create an inventory imbalance problem for them.  And it is a hint that Microsoft is focused on making the Surface Pro a success in traditional retail in the short-term over moving product through its most profitable channel (and perhaps alienating those retailers).

 


Microsoft is, and deserves to be, judged by a different standard

The Microsoft Surface Pro went on sale yesterday and immediately sold out, leading many pundits and other observers to declare the launch a failure.  Meanwhile the Surface RT has sold an unknown number of units (because Microsoft won’t reveal actual numbers) but let’s use the estimate of 1 million that was popular for a while.  Oh, it isn’t popular right now because any time an analyst, any analyst, speculates on a lower number people love to glom onto that.   Even at 1 million the Surface RT is  considered a dismal failure by pundits.  At the same time Google’s Nexus 4 smartphone, far cheaper (e.g. $50 with a two-year mobile plan commitment) and available at far more retail outlets than the Microsoft Surface, took a few weeks longer than the Surface to hit 1 million units.  And it is considered a runaway success!  You see the Nexus 4 is supply limited.  But wait, so is the Surface Pro and that is a “failure”.  And how about that iPhone 5 introduction?  My wife waited weeks to get her hands on an iPhone 5, because they were sold out from the moment of claimed availability.

Doesn’t it seem like Microsoft is being judged by a higher standard than the rest of the industry?  They are.  And to a surprising extent, as frustrating as it is, it is fair.  Apple has nothing to prove.  Google has nothing to prove.  Amazon has nothing to prove.  Microsoft has a lot to prove.  In the court of public opinion, or at least pundit opinion, Microsoft is expected to have big runaway success stories before it can leave its 20th century legacy behind and deserve to be uttered in the same breath with Apple, Google, and Amazon.

For pundits to have declared the Surface a success it would have needed a blowout introduction on the order of the Kinect.  Recall the Kinect sold 8 million units in 60 days and was the blowout consumer electronics product of the 2010 holiday season.  That is the standard by which all Microsoft product introductions are now measured.  And it is a tough standard to meet.

While Microsoft would love to see products like the Surface, Surface Pro, Windows 8, and Windows RT getting love from pundits and sales that blow away all expectations that isn’t their central focus.  They know they are running a marathon, not a sprint, and that what really matters is where they are in two, three, four, or more years.  They know they could have goosed the short-term results for the Surface RT by using a lower price and making it more broadly available.  The headlines would have been great as many millions of units (assuming they overcame supply constraints) were purchased.  With feverish demand, and a clear “winner”, being constantly sold out would have then been a plus.

Unfortunately any blow-out short-term success would have come at a high price.  It would have irreparably damaged the OEM channel.  It would have set a precedent that Microsoft was a vendor of low-price rather than of high-value devices.  With low price comes the risk of a “race to the bottom” against commodity device manufacturers and the inability for Microsoft to ever make money selling devices.  Microsoft needs to make money, Apple-like money, if it hopes to be in the devices business in the long-term.  Moreover, a key purpose of having its own devices is to bring its latest innovations and viewpoint to market.  Low price means low cost, and low cost is the antithesis of new technology introduction.

I know some are thinking it is silly for Microsoft to have worried about setting a precedent by using a low-price to ensure quick adoption of the Surface and Surface Pro, but Microsoft worries about precedent a lot (in many different areas).  It is far easier to lower prices than to raise them.  The time will come when Microsoft decides it is appropriate to lower prices, either directly or by repackaging (e.g., always include a Touch Cover with the Surface RT at the current tablet-only price; or eliminate the 32GB version and sell the 64GB version for the 32GB version’s price).  Meanwhile it will have established its position as a vendor of premium devices and retained its ability to target the market segments that most interest it.

Precedent also plays a huge role in why Microsoft has avoided giving out numbers for Surface, Surface Pro, and Windows Phone 8 unit sales.  It doesn’t matter if they are happy or unhappy with the numbers, nor how good or bad the numbers are, they are trying to avoid being drawn into the numbers game.  Once they start disclosing weekend, monthly, quarterly, or any other absolute numbers the expectation is that they’ll continue to do so on a regular basis.  And that these numbers would then dominate every discussion they tried to have about products.

Let’s face it, if they confirmed numbers that were low then it would just add to the damage caused by speculating they were low.  If they announced numbers that were at or slightly above expectations it wouldn’t help them (and the headlines would still shout out “mediocre” or “modest”).  The only really helpful number would be something crazy high, and that would show up in so many other metrics that Microsoft wouldn’t need to confirm them.  The frenzied speculation would do the positive PR job for them.  So they have chosen not to play the game.

In the short-run Microsoft’s approach means taking a lot of body-blows in the press and blogosphere and risking slower adoption rates as a result.  In the long-term Microsoft’s success or failure in its approach to the “post-PC era” will become evident and, in the case of success, it will have changed the nature of the conversation.  Perhaps not only meeting the higher standards to which it is held, but setting a new standard by which Apple, Google, Amazon and others are judged.

So for now all we can do is be frustrated by Microsoft being held to different standards than others.  And wait for the day when we can look back and, hopefully, correct the perceived injustice.


Microsoft Office for Linux: Are people asking and answering the wrong question?

Ok, rumors out today that Microsoft is considering releasing Microsoft Office on Linux.  Cue immediate reaction that ranges from skepticism to outright hostility that anyone would repeat such a rumor.  And the most negative reactions are coming from people I respect.  Personally I think that both the assumptions about what an “Office on Linux” would be and all the hostility around the rumor are misplaced.

Now do recall that I considered porting SQL Server to *nix at a couple of points.  So I have first hand experience with taking this kind of idea to Microsoft’s senior leadership, including Steve Ballmer.  It is not the knee-jerk negative reaction that outsiders expect.  It is a rational encouragement to make the case.  Have Steve’s views changed in the many years since I talked to him about porting to a non-Windows OS?  No doubt.  At some points in the intervening years I’m sure he’s been less receptive to the discussion.  But in his efforts to remake Microsoft into a Devices and Services company I would venture he’s become more receptive than ever to such proposals.  Services need clients.  Services can not be allowed to fail because you refuse to support the clients that users actually use, even if they aren’t your clients.

Near the top of the list is transitioning the Information Worker business into the services world with Office 365.  The importance of Office 365 was clear when I was still at Microsoft, and the emergence last year of the Office 365 Home and Student Premium preview confirmed to me that if anything its importance continues to grow.  Don’t think Microsoft would make tradeoffs that weren’t in the best interest of its Windows’ business in order to accelerate Office 365 adoption?  After Steve Ballmer declared “we’re all in” on the cloud I (with senior executive approval of course) pulled committed functionality for Windows Server 2012/Windows 8 so I could shift the resources to things critical for accelerating Office 365 adoption.  This was after WS2012/W8  planning was complete.  In fact, this was after the first development milestone was complete.  And my team wasn’t the only team to make significant (and sometimes painful) shifts to support Office 365 (and Azure).  Steve wasn’t kidding around when he said “we’re all in”.

So when rumors of Microsoft considering bringing Office to Linux surface I don’t discount them so readily.  They’d be stupid to not consider it, which doesn’t mean it will happen.  It is important for Office 365 to support any and all popular clients.  That doesn’t mean each client has to be supported at the same level of depth or breadth.  Not every Office application has to come to every client.  The applications on each client don’t have to necessarily have the full functionality of the version available on Windows.  And the delivery mechanisms don’t have to be the same.

Office 365 supports Linux today via Office Web Apps, the Outlook Web App, POP3 (so you can use a local mail client), and (I believe) the Lync Web App (for meetings).  Is this sufficient?  I doubt it.  Take the simple scenario of an engineer (using Linux) collaborating with product management, marketing, finance, etc. on a business plan for a new product.  The latter are likely all using full Office on Windows.  Can the engineer fully collaborate in document creation using the Word, Excel, and Powerpoint web apps?  Doubtful (unless everyone else reduces their use to things that work with the web apps).

Today’s solution for Linux users more often entails forcing them to use VDI to access a Windows Desktop with full Office, dual-booting into Windows, or running Windows in a VM on their Linux system.  The Linux users I talk to absolutely hate this and try hard to minimize how often they do it.  In a pure packaged product world, particularly with this small a user base, the strategy makes perfect sense for Microsoft and is acceptable to their customers’ senior executives.  But in a services world it doesn’t necessarily fly.

To begin with it turns the Linux users into advocates for corporate-wide adoption of Google Apps at the very time their CIOs are making a Google Apps vs. Office 365 decision.  In any enterprise the Linux user base may be tiny, but it has influence many times what its size would imply.  Generally the Linux user base is likely to include IT employees, placing them close to the decision makers.  In some enterprises the small Linux user base might include their most critical employees, such as the engineers in an aerospace company.    It would be silly to expect this user base to become advocates for Office 365, but that isn’t what is required.   Microsoft needs to be able to prove to the C-level executives that their Linux users are not excluded from an Office 365-based solution.  And they need to be able to do it to an extent that a rational impartial observer would agree with them.  Put another way, they need to be able to neutralize an argument from Google that it has a better solution for companies utilizing a wide array of platforms.

This is mostly the same argument for why Office for iPad (or Android tablets) is needed.  The difference with that argument is that most iPad users are already using Office on their Windows (or Mac) desktops and notebooks.  And they want it on their companion devices as well.

When I hear “Office for Linux” what pops into my head is not yes/no, realistic/not, bad for Windows, etc. but rather “what exactly does that mean?”  Does it mean a port of the full Windows apps to Linux?  Does it mean as a packaged product, or only as part of the Office 365 service?  Or are they some new subset apps, such as what they might be working on for the iPad (aka, have they written a subset of Office that they can adapt to multiple platforms as Office 365 clients)?  How about greatly enhanced Office Web Apps that are only available as part of Office 365 (i.e., and not the free Skydrive offering)?  Other options?

Personally I think everyone translates “Office for x” into full ports of Office for platform x.  But I doubt that is in the cards.  These other platforms, be that the iPad or Linux, are likely to get subset offerings targeted at the Office 365 service and usage scenarios Microsoft prioritizes.  For iPads or Android tablets those are companion device scenarios.  For Linux they are probably collaboration or general corporate citizenship scenarios.  On the upside this makes the rumors far more credible than most people give them credit for.

On the downside it means that user expectations are too high and the reality is bound to disappoint many.  Take Office 2013 for Windows RT as a simple test case.  The lack of Outlook has caused significant outrage.  Lack of support for old macros causes some old-timers to claim it isn’t real Office.  And sure enough the first comment I read about the Office for Linux rumor was from someone saying they weren’t interested unless it included VBA support, which of course it wouldn’t.

Still think it’s all about Windows and that’s why Microsoft will never bring Office to X?  Ok, let me accept your premise and propose why it is wrong.  The move of most enterprise app clients to a web model, the growth in acceptance of non-Windows devices in the corporate environment, and the BYOD trend have greatly weakened Windows’ hold on the enterprise client market.  While having Office support, any Office support, on non-Microsoft clients may result in some slip in Windows market share, an enterprise shift from Microsoft Office to Google Apps puts the entire Windows client population within that enterprise in jeopardy.  In other words, it is in the best interest of the Windows business itself for Office to support other clients if that is what it takes to keep customers from moving to Google Apps.  Risk a 5% market share loss to avoid a 95% share loss?  That is the real question for Windows.  (BTW, for Microsoft overall the net impact would be positive, as the increase in Office revenue and profit likely greatly exceeds any negative impact on Windows.)
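To make that 5%-versus-95% framing concrete, here is a minimal back-of-the-envelope sketch.  Every number in it beyond the 5%/95% shares is purely illustrative, not data; the point is only that even a modest probability of an enterprise standardizing on Google Apps dwarfs a small-but-certain loss from supporting other clients.

```python
# Back-of-the-envelope sketch.  All probabilities are hypothetical illustrations,
# not measurements: compare the expected fraction of an enterprise's Windows
# seats lost under two strategies.

def expected_seat_loss(share_at_risk: float, probability: float) -> float:
    """Expected fraction of Windows seats lost = share at risk * chance it happens."""
    return share_at_risk * probability

# Strategy A: ship Office for Linux/iPad; assume a small slice of users then
# drops Windows for good (treated here as a certainty).
loss_if_supported = expected_seat_loss(share_at_risk=0.05, probability=1.0)

# Strategy B: hold the line; assume even a 1-in-5 chance the CIO standardizes
# on Google Apps, putting essentially every Windows client seat in play.
loss_if_not = expected_seat_loss(share_at_risk=0.95, probability=0.20)

print(f"Support other clients: ~{loss_if_supported:.0%} of seats expected lost")
print(f"Hold the line:         ~{loss_if_not:.0%} of seats expected lost")
```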

So rather than sitting here and saying “it will never happen” or dissing those who are publishing the rumor I’m contemplating what the rumor might mean.  It could indeed be total BS.  But just as likely, maybe more likely, it tells us that Microsoft has something up its sleeve.  But it isn’t likely to be exactly what people think it is.

 


Congratulations Microsoft on the Surface Pro

No this is not a true review.  I’ve had all of 20 minutes playing with a Surface Pro in the Microsoft Store.  I happened to hit the store just before the end of the work day when it was quiet, so I had easy access to various devices and the undivided attention of a staff member (see my granddaughter joke in the previous posting).  So think of this as more general commentary.

My first impression of the Surface Pro was that it is heavy and thick for a tablet.  Of course, the real way to think of the Surface Pro is as an 11″ MacBook Air that can double as an iPad.  Your primary usage pattern is more as a notebook, but you don’t have to carry around a separate tablet.  Or decide between always having a heavy keyboard dock with you or never having it when you need it.  When viewed in that context the size, weight, and even battery life (pretty comparable to the 11″ MacBook Air from articles I’ve seen) are outstanding.  And in a funny twist, at the end of my visit to the Microsoft Store (after handling the HP Envy x2, Dell Duo, etc.) I went back to take a last look at the Surface Pro.  I picked it up and thought I’d grabbed a Surface RT by mistake.  No, it was a Pro.  After handling lots of other devices the Pro’s 2 pounds no longer seemed heavy.

What is important about the Surface Pro (and indeed the Surface RT) is that Microsoft took a specific point of view and created a device that is true to it.  They did not try to create a device that would appeal to everyone.  And while one can debate how big the niche is for the device they created, they appear to have created the best device one could possibly imagine for that niche.  I like that every detail has been thought through carefully.  You can see some of their explanations on various decisions on reddit.  The attention to detail is evident when you examine the device or its specs.  Or use it.

The first thing I did when I picked up the Surface Pro was launch OneNote, grab the pen, and start taking notes and making a sketch.  Ahhhhh.  I also used the pen to manipulate desktop windows, which is difficult with your finger on any device.  And I used handwriting recognition to enter web site addresses and other data.    I’ve owned three Tablet PCs in my life, and the Surface Pro blows them away in every respect.  It might finally be the right form factor and overall set of capabilities to take the original Tablet PC scenarios mainstream.  However I think pen input is still secondary to other input devices, including a finger.  And for those who say they can do as well using a capacitive pen on their iPad, or Surface RT for that matter, I have one thing to say.  ROFLOL.  Ok, two things.  You don’t know what you are talking about.  Comparing a capacitive pen to an active digitizer is like comparing a hang glider to an SR-71.  Oh, was that three?  Sorry.

In this industry only the fruit company usually thinks things through as carefully as the Surface Pro team has.  It’s a successful formula: you can’t be everything to everyone, so make sure you are incredibly good at the thing you are trying to be.  In the case of devices like the iPad this has made Apple very successful even in areas they weren’t designed for (e.g., the use of iPads as the basis of cash registers is exploding).  Microsoft can hope for this kind of success with the Surface Pro, but more importantly they can expect the niche they actually targeted to be absolutely in love with the device.

The Surface Pro reviews have been somewhat harsh, but not because of the device itself.  Most of the criticism derives from a lack of belief in the niche that Microsoft has targeted.  The Surface Pro is neither the best tablet you can get for $899 nor the best Ultrabook you can get for $1000.  So if you apply either of those lenses to it then it looks bad.  What it is trying to be, and what it probably succeeds at being, is the best Ultrabook + Tablet combination for $1000.  Particularly since all its competitors force you to carry a heavy keyboard dock in order to compete with the Surface Pro’s Type or Touch Cover.

If you are considering a Surface Pro pay more attention to your own usage pattern than to what the reviewers are saying.  In particular, if you currently carry a notebook and a tablet around and want to ditch one of those then the Surface Pro should be the starting point for your investigation.  Want to dock the device when you are in the office and use it as your desktop machine?  That makes the Surface Pro even more attractive as a 3-in-1 solution.    Think you’ll always have a traditional tablet with you but are looking for a new Ultrabook or Notebook for work?  The Surface Pro might not be your best option.

There are specific criticisms of the Surface Pro that are almost laughable (valid points, but way overemphasized).  For example, the amount of free storage space out of the box is low compared to the quoted total storage space.  There are a number of reasons for that, but if it is an issue it can easily be addressed.  Microsoft, for instance, chose to include a recovery image on the Surface Pro so that it is very easy to reset the device.  Need several GB more storage on your Surface Pro?  You can copy that image to a bootable USB drive and then delete it from the Surface Pro.  And, of course, you can expand the Surface Pro’s storage with a microSDXC card.  They are running slightly under $1/GB up through the 64GB cards (and 128GB cards are coming on the market, though you’ll pay over $1/GB for those).  You could also try using compression to free a few GB, likely a better option on the Surface Pro with its very fast processor than on the Surface RT.  Finally, when comparing a 64GB Surface Pro to a 64GB iPad it is true that you get more free space on the iPad.  But when you compare the Surface Pro to an Ultrabook the Surface Pro should have as much free space as any Ultrabook with the same size SSD.  (Probably more since the Surface Pro doesn’t come with crapware and the other devices usually do.)  Another example of the Surface Pro not being so easy to pin down.
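For readers who like to see the arithmetic, here is a tiny illustrative sketch.  The specific image size, compression savings, and card prices are assumptions made up for the example (the post itself only cites the roughly $1/GB figure), but they show how quickly the free-space gap can be closed.

```python
# Illustrative only: assumed sizes and street prices, not measured figures.
RECOVERY_IMAGE_GB = 8        # assumed size of the recovery image moved off to USB
COMPRESSION_GAIN_GB = 3      # assumed savings from compressing user data

card_prices_usd = {32: 30, 64: 60}  # hypothetical microSDXC prices (~$1/GB or less)

reclaimed = RECOVERY_IMAGE_GB + COMPRESSION_GAIN_GB
print(f"Space reclaimed on the internal SSD: ~{reclaimed} GB")

for size_gb, price in sorted(card_prices_usd.items()):
    print(f"{size_gb} GB card at ${price}: ${price / size_gb:.2f} per added GB")
```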

There has been some criticism of the lack of a way to store the pen in the Surface Pro’s body.  That’s valid, but less of a deal breaker than reviews make it out to be.  The pen can be carried magnetically attached to the power port, or clipped to the Touch or Type Cover.  My guess is that clipping it to the cover will make more sense and be more secure.  And having lost pens with my Tablet PCs, despite those pens being stored inside the device body, my advice to anyone who really relies on pen input is to get a spare to keep in your briefcase.  Even in-body storage has its limitations, from leaving the pen on a table after use to having something bump the latch and release the pen so it falls out later.

Finally, the Surface Pro team has hinted that more hardware options are coming.  Those hints suggest a desktop dock and/or a (hard) keyboard dock.  The reason for this is that the peripheral port has the ability to take power from the peripheral to power the Surface Pro.  In the case of a desktop dock sitting in an office this would simplify the use of a Surface Pro as a 3-in-1 device.  In the case of a more traditional keyboard dock this would allow for a second battery in the dock to extend overall battery life, as well as further optimize the device for Ultrabook-like usage.

So will I be getting a Surface Pro?  I don’t know.  I love my Surface RT and it has been meeting my needs.  If a major consulting project comes along that requires me to travel with a full Windows 8 PC then I might very well turn in my Toshiba R705 for a Surface Pro and leave the Surface RT at home when on business.  What I really don’t want to do is drag both the Surface RT and another device around all the time.  Right now the R705 comes along only occasionally so I’m ok.   Besides, waiting until I need a new device might open up additional options.  I love what Microsoft has done with the Surface family, but if they’ve challenged their OEMs to do even better then I’m quite happy to benefit from OEM efforts to top the Surface.

 


Another little “shopping” trip to Best Buy and the Microsoft Store

With the release of the Surface Pro imminent, and word on the street that both the Microsoft Store and Best Buy already had units on display, I made another stop at both of my local outlets.  I know I sound like a broken record, but while the Microsoft Store experience was excellent, the Best Buy experience was almost criminally bad.

The Microsoft Store had a Surface Pro right out front (though oddly it was equipped with a Touch Cover rather than Type Cover) and then several more on the first two tables in the center of the store.  They put the Surface Pro on the corners of the tables with Surface RT in the center positions so you could compare the two.  The store was already out of reservation cards with sales people reporting heavy interest in the Surface Pro.  If you desperately want one you just have to show up Saturday and hope they don’t run out of stock.  I do get the impression they are expecting good supplies, but demand still could leave you waiting.

I started playing with all the other tablets and convertibles in the store, and they now have quite the selection.  While playing with the HP Envy x2 a store employee young enough to be my granddaughter swung by and asked if I’d detached the tablet yet.  I pulled it out of the dock and she proceeded to talk about how much she likes that particular tablet design, the weight, feel of materials, etc.  It is a 10.6″ tablet, like the Surface, but with more traditional curved edges.  As I wrote about after I first saw a video of this device last year it seems like a great offering if what you want is primarily a clamshell notebook with the ability to detach a very nice tablet from it.

My “granddaughter” and I then discussed numerous other devices they had at the store.   And she left me sort of embarrassed by pointing out the obvious.  The magnetic connector for the cover on the Surface is symmetric.  So if you have a Type cover, and find it unnerving to hold your Surface as a tablet with your fingers on the keys behind it, you can quickly pull the cover off and click it in place in the reverse direction.  Now your fingers rest on the fabric-like back instead of the keys themselves.  Duh.

My trip to Best Buy was its usual “why bother” experience.  At least they’ve taken the ASUS signage off the little table with the Surface, and put a Surface Pro next to it.  Unfortunately they don’t have a Type or Touch Cover attached to it.  They do have a nice (Microsoft-supplied, I think) poster that helps you decide if you should get a Surface or Surface Pro.

The rest of their tablet and convertible selection hasn’t changed much (meaning mostly MIA).  The ASUS VivoTab RT was gone from the display in the mobile area, though the signage remained.  Actually they really confused matters by having a Windows 8 x86 Ultrabook labeled as a Windows RT device.  Apparently the staff just slid the price card for the Ultrabook into the description they had sitting there for the VivoTab.  Sloppy.

Indeed I realized that 80% of the problem at Best Buy is just being sloppy and lacking attention to detail.  That means they could make a huge improvement with no increase in cost.  I’d even come in and prove that for them at no charge, except that I want 20% of any improvement in monthly sales that results.  If they go chain-wide with it I’ll settle for 5% 🙂

I mean really Best Buy, things like having the right signs on the right devices.  Improving signage overall.  Creating a demo Microsoft Account so that devices are actually usable (e.g., you can’t look at Mail on a Windows 8 system because it wants a Microsoft Account; something addressed at the Microsoft Store).  Keeping Windows and the installed apps updated (again a Microsoft Account problem).  Installing more cool apps from the store so users can get a better idea of the device experience (something the Apple Store is great at).  Having bored employees man demo stations for featured devices when there aren’t customers in need of assistance.  All pretty small stuff actually.

Here is a simple truth.  If what you want is a cheap notebook and you don’t care much about the buying experience or the latest technology, then go to Best Buy.  If you want to see and hopefully buy the latest and greatest, with a good shopping experience as well, go to a Microsoft Store.  Especially now, with the Microsoft Store having a full array of tablets and convertibles and Best Buy having almost nothing.  Sadly, most readers of this will not have a Microsoft Store close enough to check out at lunch hour (and they might even need to get on an airplane to visit one).  Microsoft needs to fix this, or fix its retail partners.

 


Microsoft and Dell, more and less than meets the eye

Just some quick commentary on Dell going private and Microsoft’s participation in that process.

Let me start with something that should be obvious even if it isn’t.  There are personal relationships here that exist just about nowhere else in the PC ecosystem.  Where else are the leaders who created the PC industry still involved with the companies they created?  Michael Dell and Bill Gates (and Steve Ballmer) are the only ones left standing.  Pick your favorite company or individual and you quickly discover that they are gone, often quite literally.  Are any of IBM’s leaders from the era still around?  How about Compaq?  Intel?  But Michael Dell is still head of Dell, Bill Gates is still Chairman of Microsoft, and Steve Ballmer (who was there almost from the beginning as well) is Microsoft’s CEO.

I don’t know if Michael Dell is friends with either Gates or Ballmer, but they sure are more than typical business acquaintances.  And while Dell has always been one of the PC CEOs to (privately) give Microsoft the most critical feedback, he’s also been one of the most active in supporting new Microsoft initiatives.  And when he hasn’t, it has largely been because those initiatives were the antithesis of Dell’s (then) business model.  It isn’t that Dell hasn’t ventured “off the reservation” from time to time, but in the end, of all the OEMs, it remains the one most aligned with Microsoft from top to bottom.

If Michael Dell wanted to take Dell private, presumably to finish its transformation to something more aligned with the PC business of the 2010s than of the 1990s and 2000s, he would find a sympathetic ear with Bill, Steve, and Microsoft’s board.  I think there is a trust level that exists here that can’t be duplicated.  It isn’t that Microsoft really expects to have influence over Dell as part of its investment (not that it wouldn’t have tried to get some commitments out of it), it is that it sees Michael Dell with freedom of action as the last best shot for a revitalization and redefinition of  what constitutes a successful OEM.  If he succeeds it will become the model for others to follow.  If he fails, well at least the OEM model won’t have gone down without a fight.

Microsoft has done things like this before.  Today’s high-speed Internet, particularly cable broadband, was made possible by Microsoft investments in the late 90s.  The cable industry had a plan, but not the capital for building out their broadband networks.  Microsoft provided that capital.  It was somewhat driven by hopes they could sell set-top box and other software to the industry, but as Bill Gates said back in August of 2001, “If I had a wand and I could ask for one more hardware technology miracle, it’d be some way of having $20-a-month broadband to homes and small businesses.”  Microsoft needed ubiquitous broadband to realize its vision, and wasn’t afraid to invest to make it happen.

Today’s Internet infrastructure owes much of its existence to those investments that Microsoft made (and ultimately had to write down) in the 90s.  With the investment in Dell, Microsoft must be hoping it can give another industry important to its future a chance to evolve and thrive.


More Surface Devices (Part 2)

In Part 1 of this topic I mentioned that Microsoft is going after “areas where they can identify scenarios and user requirements that are going unmet by both competitors and the OEM community”.  So what are some of those areas and how might Microsoft address them?

Let’s start small.  I’ve long seen a gap in Microsoft’s strategy around a portable gaming console, which I last wrote about back in November when Xbox Surface rumors appeared.  At the same time it is clear that Microsoft needs to have something in the 7″ tablet space, and an Xbox Surface meets the criteria I described in Part 1.  Moreover, it is a design center that is more amenable to a subsidization strategy than the 10.6″ Surface.  Take a look at my subsidy argument back in August 2012 when rumors that the Surface would be priced at $199 were rampant.  It turns out the Surface positioning wasn’t as entertainment-focused as I’d then expected, which likely explains the lack of a subsidy model for it.  But the proposition for a 7″ device isn’t going to be around a keyboard and MS Office (though both could be offered), it is going to be around entertainment.  Microsoft’s unique asset in this case is Xbox.  And that is a business where subsidizing the hardware has been part of the strategy since 2001.  There isn’t much more to say here as you can read the previous postings for my thoughts.

Going a little smaller is the possibility of a Surface Phone.  I’ve been a skeptic of this idea unless it is part of a larger reset of their mobile efforts.  I covered the latter in discussing a Plan B.  Microsoft will definitely continue with reference design work, as they’ve been doing since WP7, but I’m not holding my breath for a Surface Phone.  First, Microsoft has ramped up direct marketing of Windows Phone, eliminating one of the purposes of having its own device.  Second, Windows Phone sales are indeed accelerating, though it is still a little early to make a call on the success of Windows Phone 8.  And third, if they were going to pursue Plan B I would have expected to see them combine the Windows Phone and Skype divisions into a single business unit by now.

If Microsoft does do its own phone this year then what I expect is a Skype Surface phone from the Skype division, not the Windows Phone division.  Basically Skype would become an OEM (and carrier).  It won’t strategically be the Plan B I talked about, but it will look a lot like what I described in that blog posting.  The important thing to point out here is that it is Skype and video calling that represent the differentiation Microsoft would use in a phone.  OEMs won’t go this route because they build devices to meet carrier requirements and be sold by the carriers.  Oh, they love Skype as a feature, but “Skype as Carrier” doesn’t fit their business models.

Jumping to the other end of the spectrum is a large screen device that uses the Perceptive Pixel technology and is aimed at the telepresence market.  Microsoft has long seen the need and potential for enhancing how people work together from remote locations.  The Lync product has been at the center of its software efforts, and of course they acquired Skype.  They’ve also made attempts at innovative hardware such as the Roundtable (now the Polycom CX5000).  The Roundtable was an attempt to move traditional videoconferencing towards an IP-based telepresence experience at a cost more than two orders of magnitude less than traditional telepresence systems from Cisco and HP.   Even the cameras in the Surface/Surface Pro are optimized for video calling/conferencing and not (as is the case with rear cameras on other tablets) for taking pictures.

Microsoft Research has done tremendous work in telepresence/telecommuting while individuals and groups within Microsoft have also been working on this problem in order to meet their own needs.  When Distinguished Engineer Kim Cameron moved to Paris he created his own telepresence solution between his home office there and his office in Redmond.  Whenever Kim is in his Paris office he is in his Redmond office.  You can walk by, stick your head in, and chat while you sip your morning coffee.  When you meet with Kim it is pretty close to him actually being there; the main thing missing is you can’t shake his hand.  The Xbox team built (with MSR) a customized solution for individuals to have a face to face conversation between two of their facilities.  The SQL Server team put together a solution optimized for discussions between their teams in different geographies.  I made heavy use of Lync/Roundtable/etc. between my Denver office and my teams in Israel and Redmond, and was considering cloning the SQL Server setup at the time I left Microsoft.  Microsoft IT has more recently deployed a corporate telepresence system in various major facilities, using technology from HP (I believe).  Like other corporate telepresence systems this is a limited resource that must be scheduled, requires travel to special telepresence rooms, etc.

Telepresence, and the collaboration it brings, is the next huge advance in the information worker experience.  Microsoft knows this from both its own internal needs and customers.  I expect we are going to see one or more devices from Microsoft aimed at taking telepresence to the next level, both in terms of collaboration capabilities and price point.  Microsoft wants to bring telepresence to every conference room and every Information Worker’s office.  So something like a 60″ Perceptive Pixel-based device seems like a good bet in 2013.

While we’ll very likely see updated Surface and Surface Pro devices in the next twelve months, will we see other devices in the tablet/tablet crossover/Convertible/Ultrabook/Tablet categories?  A 12-13″ Surface Pro makes a lot of sense to me as an addition to the lineup.  It would have the same general scenario positioning as the current Surface Pro.  The Surface Pro has great specs, and its biggest limit is that 10.6″ screen (and keyboard dimensions to match).  Going up one size class would retain much of the Surface Pro’s portability while offering a screen size more appropriate for heavy Information Worker activities.  And it could allow for a true full-sized/full travel keyboard cover as well.

I’m not convinced we’ll see anything else in these portable device categories in the near term.  While a true Ultrabook or Notebook kind of device that combines Kinect-based gesture control and telepresence features would offer sufficient differentiation to justify a Surface device, it might also be too much of a shot at the heart of Microsoft’s OEMs’ business.  So I think they’ll pass on this area unless and until OEMs fail to sufficiently participate in Microsoft initiatives to spread this technology.  I have to say I won’t be shocked if I’m wrong and Microsoft brings a 13-15″ Information Worker-focused clamshell Ultrabook-type device to market in the next twelve months.  But I’m just not seeing the need in the short run.

That leaves one real market segment to consider, the desktop or All-in-1 market.  While it seems like Microsoft needs to put something into this segment in order to fill out its product line I’m not so sure.  Again this is a strong area for OEMs, and actually may represent one of the areas where they are doing their best work.  Microsoft will want to see Kinect-based gesture support in future models, but OEMs are probably more than willing to accommodate them.    So where would a Surface All-In-1 be targeted?  Is it the uber IW telepresence/collaboration desktop?  Or is it an attempt to revitalize the market for home desktops through a combination of video call capabilities, unique photography and videography capabilities, participation in the gaming ecosystem, etc.?  My problem thinking through this one is that I’m not seeing the “ah hah” scenario.  But Microsoft might have one in mind.

Finally, I do expect both Xbox “720” and a related streaming media device to appear this year.  So we probably will see 5-6 new devices out of Microsoft in 2013, with a more speculative possibility for the family to grow to 7 or 8.  That’s a lot of growth, perhaps too much growth in an 18-month overall window, for what was previously a software company with some niche hardware.  And I didn’t even touch on the possibility of someone wanting to do more server appliances.  I have no inkling that such a thing is in the works, though I’m sure somewhere teams who had proposals rejected years ago have reconsidered the option in light of the company’s “Devices and Services” refocus.

Let me close by saying that I feel for Microsoft CFO Peter Klein.  Every few weeks someone must come into his office with a proposal for some new hardware device.  I’m sure he puts his head in his hands for a moment, takes a deep breath, then proceeds to listen to them explain just how they are going to make money in a business known for low margins.  As Dell, HP, HTC, other OEMs, and hundreds of companies who are no longer with us have learned, this is not easy.  Hardware is a much more complex business than software, with grand opportunities for losses that balloon out of control and a far greater probability that anything you do will be commoditized in short order.

To Klein, every hardware proposal carries the risk of a significant and prolonged hit to Microsoft’s margins.  So while we can all speculate on how Microsoft could do a cool Z device or Y device I’m sure Klein (and Ballmer) are trying to keep things focused on opportunities with both unique (often disruptive) strategic value and the potential for eventually achieving good margins.  And that will argue against some of the devices that you or I might otherwise hope to see out of Microsoft.

 

 
