Hopefully Part 1 gave you some idea of the scenarios that Integrated Storage was intended to address and why you would want support in the storage system to help address them. And yes, I barely scratched the surface of what one could imagine being possible if you did have that support. I know many people just want the dirt…I mean history…behind Microsoft’s Integrated Storage efforts, but you are going to have to wait for Part 4 before I get to that. In this part I wanted to discuss some of the real challenges in creating Integrated Storage. Basically I want to explain why it is such a difficult nut to crack.
Which came first, the chicken or the egg? This classic question drives a lot of the innovation problems in technology, particularly platform technology, and plays a huge role in trying to come up with an Integrated Storage strategy. Ok, let’s use a couple of different and perhaps even more appropriate sayings: “Build it and they will come” or (to paraphrase) “Suppose you built a storage system and nobody used it?” These questions dominate any discussion about how to bring the concept of Integrated Storage to reality. Microsoft thought it had the answers, part of which was that you make it a (or rather THE) file system.
Creating Integrated Storage as a file system serves both psychological and practical purposes. It declares that Integrated Storage is the primary store for the platform, which is important for attracting developer interest. This creates a commitment to applications that would build on Integrated Storage that the store will always be present on the platform. Maybe even more importantly, it allows other platform components to use the new store. And (as envisioned by Microsoft at least) it creates a means by which applications that don’t explicitly know anything about Integrated Storage can still manipulate the artifacts in the store.
Before I get into talking about file systems in more detail let me tie this back to one of my scenarios. By the start of the 21st century it was clear that Photos was the next “killer app” for PCs. It was also clear that traditional file systems were totally not up to the task of being an organizing tool for Photos. Third party products like ThumbsPlus and ACDSee had appeared to fill the void. If Photos were going to become such a critical data type then you needed to make them first class citizens in your platform. So you wanted Windows (and particularly Windows Explorer, aka Windows File Explorer) to provide a full out-of-box photo organization and basic manipulation experience. To do that would require capabilities not present in the traditional file system. But unless your Integrated Storage solution was part of the platform, components like Windows Explorer couldn’t rely on it and couldn’t provide a great out-of-box experience (OOBE) for photos.
The file systems we use today, across all operating systems, are (externally) no different from the ones I used in the 1970s, which themselves had their origins in the 1960s. A file is a set of allocation units on a storage medium that externally is just a bag of bits (or blocks) without structure, without a name, and without any real way to navigate to it. External to the data structures that deal with allocations and the basic concept of a container is a catalog structure that exposes a name and navigation (directory/file, a.k.a. folder/file) system to users and applications. At the leaf nodes of the catalog there are pointers to the allocation system’s container. So applications (including something like Windows Explorer) use one set of APIs to navigate the catalog and then use another set to manipulate the bag of bits (or stream) they find at the other end. Internally we’ve made lots of advances in how to organize and maintain the allocation units. Long gone are the days when files had to be contiguous, for example. But to an end-user or application, outside the switch to long file names, I’m hard pressed to describe any significant changes in the last 40 years.
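To make the two-layer split concrete, here is a minimal, hypothetical sketch (all names are illustrative, not any real file system's internals) of a catalog that maps path names to file IDs, sitting on top of a separate allocation layer that only knows about anonymous bags of blocks:

```python
BLOCK_SIZE = 4  # tiny blocks to keep the example readable


class AllocationLayer:
    """Stores each file as a bag of fixed-size blocks; no names live here."""

    def __init__(self):
        self.blocks = {}   # file_id -> list of block buffers
        self.next_id = 0

    def create(self):
        fid = self.next_id
        self.next_id += 1
        self.blocks[fid] = []
        return fid

    def write(self, fid, data: bytes):
        # Split the stream into fixed-size allocation units.
        self.blocks[fid] = [data[i:i + BLOCK_SIZE]
                            for i in range(0, len(data), BLOCK_SIZE)]

    def read(self, fid) -> bytes:
        return b"".join(self.blocks[fid])


class Catalog:
    """Exposes names and navigation; its leaves point into the allocator."""

    def __init__(self, alloc):
        self.alloc = alloc
        self.entries = {}  # "/dir/name" -> file_id

    def create_file(self, path):
        self.entries[path] = self.alloc.create()

    def open(self, path):
        return self.entries[path]  # the "pointer" at the catalog leaf


alloc = AllocationLayer()
cat = Catalog(alloc)
cat.create_file("/photos/cat.jpg")
alloc.write(cat.open("/photos/cat.jpg"), b"JPEGDATA")
```

Note that an application goes through two distinct interfaces, exactly as described above: one call to resolve the name, another to touch the bytes.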
File system stability has both an upside and a downside. The upside is that every application knows how to deal with the traditional concept of a file. That’s the downside too. So take our photo example. You don’t need to implement Integrated Storage as a file system in order for Windows Explorer to be able to provide a great organizing experience for photos. But what happens when the user wants to run Adobe Photoshop to edit the photo? You could evangelize Adobe to support the new store through a new (non-file oriented) API, but even if successful that doesn’t help until the user buys a new version of Photoshop. From the user’s perspective, if the photos aren’t stored in the file system, and specifically a file system accessed with existing Win32 APIs, you’ve broken their application. This same scenario applies to Microsoft Word.
New versions of Word might support a new Integrated Storage-based document store, but forcing purchase of a new version of Word in order to access documents in the store meant dramatically slower (if not nonexistent) adoption. Thinking about a worst case scenario where a customer had a dozen apps, any one app’s failure to support Integrated Storage could have prevented the customer from making any use of Integrated Storage.
So from the earliest discussions I recall, Integrated Storage was always a new, Win32-compatible, file system. Accessing new functionality would be done by a new API, but you always had to be able to expose traditional file artifacts in a way that a legacy Win32 app could manipulate them. Double-click on a photo in an Integrated Storage-based Windows Explorer and it had to be able to launch a copy of Photoshop that didn’t know about Integrated Storage. And since that version of Photoshop didn’t know about Integrated Storage it also couldn’t update metadata in the store; it could only make changes to the properties inside the JPEG file. So when it closed the file Integrated Storage had to look inside the file and promote any JPEG properties that had been changed into the external metadata it maintained about the object.
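The promotion step can be sketched as a simple reconciliation pass. This is a hypothetical illustration of the idea, not the actual store's logic: after a legacy app rewrites a file's embedded properties, the store compares them against its external metadata and copies any changed values up.

```python
def promote_properties(embedded: dict, external: dict) -> dict:
    """Return updated external metadata, promoting changed embedded keys.

    embedded: properties read back out of the file format (e.g. a JPEG)
    external: the metadata the store maintains about the object
    """
    promoted = dict(external)
    for key, value in embedded.items():
        if promoted.get(key) != value:
            # The legacy app just edited the file, so its value wins.
            promoted[key] = value
    return promoted


external = {"Title": "Beach", "Rating": 5}
embedded = {"Title": "Beach sunset"}  # say a legacy editor changed the title
external = promote_properties(embedded, external)
```

Demotion is the mirror image: a change to the external metadata has to be pushed back down into the legacy file format, which is where much of the practical nightmare described below comes from, since writing into arbitrary file formats is far harder than reading from them.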
Much of the complexity of Microsoft’s attempts at delivering Integrated Storage is owed to all this legacy support. Property promotion and demotion (e.g., if you changed something in the external metadata it might have to be pushed down into the legacy file format) was one nightmare that wasn’t a conceptual requirement of Integrated Storage but was a practical one. Dealing with Win32 file access details was another.
In the early post-OFS days, one obstacle to making Integrated Storage a Win32 file system was the kernel/user mode transition problem. An application would make a Win32 call that would end up running in kernel mode. That would then call down into a user mode process, which itself could make a bunch of kernel mode calls to access the data. Eventually you’d return the data back through kernel mode and into the user mode process of the application that made the file system call. It sounds slow. And moreover it has the potential for deadlocks.
Another problem had to do with the optimizations Windows had made for dealing with network access to files. For example, Windows had implemented the TransmitFile function for optimizing transmission of files from a web server by doing all the work in kernel mode. It understood how to walk the allocation unit structure in NTFS in order to do this. If one imposed a different or higher-level allocation structure on top of this, such as database blobs, then TransmitFile could no longer work as intended. Dramatically reducing Windows’ ability to serve up web pages was considered a non-starter, particularly in an era when battles over web server market share were at their peak.
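TransmitFile itself is a Windows-only API, but the POSIX world has a rough analogue in `sendfile`, exposed in Python as `os.sendfile`, which likewise streams file bytes to a socket inside the kernel instead of copying them through user-mode buffers. This sketch shows the shape of the optimization; the key point is that it only works when the kernel can walk the file's real on-disk allocation, which is exactly what a database-blob-backed store would break:

```python
import os
import socket
import tempfile

# A small "web page" sitting in an ordinary file on a real file system.
with tempfile.NamedTemporaryFile() as f:
    f.write(b"<html>hello</html>")
    f.flush()

    # A connected socket pair standing in for a client connection.
    server_sock, client_sock = socket.socketpair()

    # The kernel reads the file's blocks and writes them to the socket
    # directly; the bytes never pass through this process's buffers.
    sent = os.sendfile(server_sock.fileno(), f.fileno(), 0, 18)
    server_sock.close()

    # Drain what the "client" received.
    chunks = []
    while True:
        chunk = client_sock.recv(64)
        if not chunk:
            break
        chunks.append(chunk)
    client_sock.close()
    data = b"".join(chunks)
```

If the file's bytes instead lived inside blobs managed by a user-mode database, the kernel would have no allocation structure to walk and the whole zero-copy path would be lost, which is the performance regression described above.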
Even perfectly emulating all the file access capabilities of a Win32 file system would prove daunting. A number of attempts at it were demonstrated to show full application compatibility in the high-90s percentage range. Sounds great, doesn’t it? Well, one of the applications that used a highly idiosyncratic feature that was impossible to emulate was Microsoft Word. It didn’t really matter if you hit 99.5% app compatibility if that half-percent miss included the single most important application in the entire portfolio!
Just to finish up describing how difficult this problem is I’ll mention the Windows boot path. It was clear from the earliest post-OFS days, and after considerable discussion that would be repeated with each attempt at Integrated Storage, that you couldn’t put the new store in the Windows boot path. Certainly not initially. Once you accept that, you can focus on when the new store loads and what facilities in Windows can take a dependency on it. As you work through how a Windows system functions you find many cases where things should be using the new store, but they have to run in environments where the new store can’t yet be running. I went through a lot of Excedrin in those days.
Of course if everything just uses your Integrated Storage solution as a Win32 file system then you won’t get much benefit out of it. Better search (or perhaps discovery is a better description) is one of the things you might get, because part of the Win32 solution was the property promotion/demotion idea that I mentioned previously. But you really want some clients that will natively use your Integrated Storage solution and take full advantage of it. While those clients could be internal applications or third-party (ISV) applications, having internal clients to work with is highly desirable. Particularly if you want to establish your solution as part of the platform (that is, why would a customer rely on it if you aren’t using it yourself?). You need clients to know what tradeoffs to make in your design and implementation schedule. Lack of real clients either delays, or completely tanks, adoption of a new service.
Finding appropriate clients to work with you on, and commit to using, a new Integrated Storage solution turns out to be a daunting task. Their schedules, priorities, risk profiles, etc. do not necessarily match yours. And yes, even the org structure can get in the way. One alternative is to take the “Build it and they will come” approach. We repeatedly considered, and rejected, that approach. Another approach was to forget about internal clients and just work with a few close ISV partners (e.g., SAP) for the first wave of an Integrated Storage solution. Again, considered but rejected (largely because this was a Windows platform initiative and not specifically a database product initiative). When I get to the history you’ll see how this influenced the direction of Integrated Storage.
Also needed is a shipment vehicle. If you want Integrated Storage to be a platform service then you need a way to ship it as part of the platform. One can argue the definition of platform, for example Microsoft’s platform is more than just Windows. However to achieve its vision, including having Windows use Integrated Storage internally and having ISVs be able to count on its presence on every PC and Server, you pretty much have to be part of Windows. Alternate strategies look good on paper, and might have been acceptable as interim solutions, but in the end the goal was to build an Integrated Storage file system for Windows.
In Part 3 I’m going to talk about the different perspectives of the unstructured (file system), semi-structured (Office document), and structured (database) worlds and how difficult it can be to marry these three world-views. It will serve as a transitional piece that goes from explaining more of the difficulties in building an Integrated Storage solution to the history of Microsoft’s attempts at delivering a solution.