The CDC and Seniors, a disconnect

Because of Covid-19, the CDC is recommending that those over 60 (and anyone with chronic medical conditions) stock up and prepare to stay at home for the foreseeable future. Some articles have gone so far as to equate “the foreseeable future” with staying home through 2021. While I am sure that is sound, conservative medical advice about Covid-19 given current data, it also ignores the reality of being over 60.

By your mid-60s you are looking at a life expectancy of around 20 years. Moreover, at some point in your 70s your physical (and perhaps cognitive) abilities will likely decline significantly. So the CDC guidance is to flush 10% of your remaining life expectancy, and perhaps 20% (or more!) of your remaining fully active life, down the toilet to reduce the chance that you will die from Covid-19. Using some current data (and all data about Covid-19 is, at this point, suspect), that means giving up 20% of your active life trying to avoid a 4.6% chance of dying if you contract the SARS-CoV-2 virus.

To put a slightly different spin on it, if you are in your 60s and contract the SARS-CoV-2 virus you have a 95.4% chance of survival. But while you are busy trying to improve on that 95.4% survival rate by following the CDC’s advice to be a hermit, the ills of aging are racing at you like a train in a tunnel. Let me give one example: the rate of new cancers rises dramatically as you age, roughly doubling from your late 50s to your late 60s. The all-cancer 5-year survival rate is 69.3%. Now imagine trying to improve on that 95.4% Covid-19 survival rate by following the CDC’s advice to be a hermit, only to find, when you are finally free to start living your life again, that you’ve developed cancer and have only a 69.3% chance of survival. And that’s the all-cancer rate; what about stomach (30.9%) or esophageal (18.7%) cancer survival rates? Even if you survive, you will have given up another year or more of your life to fight the cancer. Now do you want to start talking about Alzheimer’s and dementia? Starting at age 65, the risk of developing Alzheimer’s increases by 23% per year of age. I’m going to stop looking at data now, because I’ll make myself horribly depressed.

The truth for those of us in our 60s is that our mortality has become real even if, for the moment, we are healthy and active. Becoming hermits for a year or two means, quite literally, giving up the few remaining good years of our lives. I’m pretty sure this is not the right trade-off. Certainly it isn’t for me.

Although it is not specifically part of my point, I do want to add one thing that others have pointed out to me. The health risks, particularly mental health risks, of long-term isolation are significant. So while I’m making the point that you could be giving up the few remaining good years of your life, their point is that extended social distancing could actually accelerate your decline.

Please don’t think I’m advocating against most measures to avoid and stop the spread of SARS-CoV-2. Wash your hands frequently. Avoid crowds. Try not to touch your face. Stay home if you are sick. Etc. But if the CDC thinks that people in their 60s should hide in their homes for many months or even years then they aren’t looking at the big picture.

Posted in Computer and Internet | 2 Comments

Endings and Beginnings (Part 2 – Gaia Platform)

Shortly after I retired from Amazon my wife and I decided we would need to temporarily relocate to New York to help out with family. Just after we arrived in NY, my semi-retirement plan took a U-turn. Literally. Back to Seattle.

One day I checked my voicemail and on it was the message “Hi Hal, it’s David Vaskevitch. I have an idea I wanted to talk to you about.” David is a friend, was my hiring manager at Microsoft back in the 90s, and is Microsoft’s former CTO. David is perhaps best known for his long-term vision, and he had a hand in creating many of the products (and non-product functions like channel marketing and Microsoft Consulting) that Microsoft is best known for. For example, it was David’s vision for Microsoft to enter the enterprise server market and turn SQL Server (which at the time was a port of Sybase targeted at departmental workloads) into a Microsoft-developed, enterprise-grade database. David put the strategy in place, hired the team leaders, negotiated the deal to split from Sybase, set many of the key requirements, and contributed to the overall design of what became SQL Server 7.0. Mehtap Ozkan is our third founder. Mehtap brings product and technical judgement, and lots of startup experience, to our team. Soon David, Mehtap, and I were in conversations that evolved into a concrete product concept, and fundraising began.

In January of 2019 the first tranche of funding arrived and Gaia Platform LLC was off and hiring. In Bellevue WA, 2550 miles from where I was temporarily living. That was some commute, but fortunately by August I was back in Colorado and commuting to Bellevue on a regular basis. Another failure at retirement for me.

So what was interesting enough to draw me out of retirement this time? Two things: a problem that I’d been interested in since the 70s, and another platform shift that is underway. I can’t resist a good platform shift, so let’s talk about that first.

In computing we’ve had a number of platform generations over the years, and I’ve managed to participate in all of them to date. Mainframes, Minicomputers, PCs/PC Servers, Mobile, and Cloud are the main ones. I don’t think IoT is, by itself, a platform shift, and I see “The Edge” as part of the maturation of the Cloud platform. So what is the next platform? Autonomous Machines. We have some examples today: take iRobot’s Roomba robot vacuums. iRobot has sold over 25 million robots to date, most of which are Roombas. So only a small fraction of the existing ~1.2 Billion households in the world have Roombas. And projections are that another 2 Billion homes need to be built by the end of this century to accommodate population growth. Then what about robot mops, robot lawnmowers, autochefs, etc.? We haven’t even gotten to the potentially far larger industrial side of things, and we are looking at an eventual market for tens of billions of autonomous machines.

So am I talking about robotics as a platform shift? Not exactly; while many robots will be autonomous machines, not all autonomous machines are robots. What is a drone air taxi? Or a medical diagnostic machine you deploy to a natural disaster site? Or an autonomous tractor plowing fields? Overall, each platform shift has resulted in a roughly order-of-magnitude larger number of systems. PCs are order 1 Billion, mobile is order 10 Billion, and we expect there will be 100 Billion Autonomous Machines over some reasonable period of time.

Each new hardware platform shift has also resulted in a software platform shift, though initial systems tried to reuse the prior generation’s technology. Anyone else remember IBM sticking a 370 co-processor in the IBM PC XT (the IBM PC XT/370) so you could run mainframe software on it? Or the DEC Professional series of PDP-11 based PCs? Sticking a PDP-11 processor in a PC form factor and running a variant of the (very successful on minicomputers) RSX-11M OS and software stack on it does not a successful PC make. The winner in each generation of platform has been a new software stack designed to support the new use cases.

The need for a new software stack brings us to the other problem: programming is too hard. There are two dimensions to this, making it easier for the professional programmer and making it easy for the non-programmer to create and maintain applications. In the 70s, 80s, and 90s the application backlog was top of mind in the IT world. In the 70s, even something as simple as changing a report would get thrown on the stack of requests, with a year or more of waiting until the limited set of programmers could get around to it. This is why end-user report-writing tools like RAMIS, FOCUS, Easytrieve, etc. were among the most popular mainframe third-party software. Datatrieve was huge in the minicomputer era. Crystal Reports was an early winner in the PC business software world. Many of these evolved into broader end-user query tools. Other “4GL” tools, like DEC RALLY, were oriented more towards full application generation, while end-user data access tools like DEC Teamdata turned out to be the forerunners of today’s end-user BI tools.

From about the mid-80s to the mid-90s we were in a golden age of addressing the need to create applications without programmers, or to make it easier for application programmers (as opposed to system programmers or today’s software engineers) to build applications. These efforts peaked with Microsoft Access for non-programmers and Microsoft Visual Basic (VB) for application programmers. Before VB, the development of GUI application user interfaces was the realm of a narrow set of systems programmers; VB made it possible for all levels of developer to create state-of-the-art end-user experiences for the GUI environment. There were other efforts of course; Sybase PowerBuilder had a tremendous following as well. And then progress seemed to stop. While the later 90s saw lots of new tools emerge to make website creation easier, Microsoft FrontPage for example, that area too devolved into one of extreme complexity (e.g., the current popularity of the “Full Stack Developer”) unless your needs could be met by prepackaged solutions. Today there is somewhat of a renaissance in Low-Code/No-Code solutions, but not at the level of generality that Microsoft Access and Visual Basic provided.

Going back to the professional programmer for a moment, that world just keeps getting more complex the more tools and systems we introduce. Yet we still haven’t addressed key problems that have been with us since (at least) the 80s, such as the impedance mismatch between databases and programming languages. At some point we pretty much declared Object-Relational Mapping layers (ORMs) the solution and moved on. But this is a terrible solution, perhaps best described by Ted Neward as “The Vietnam of Computer Science”. We think we can do better, not by creating another/different layer but by creating a new database engine that combines relational and graph databases along with a programming model that is more native for professional programmers. It also sets the foundation for us to provide a Declarative Programming environment that will support increased productivity for traditional programmers, low-code development, and no-code development.
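
To make the impedance-mismatch complaint concrete, here is a minimal sketch in Python using only the standard library. The Player class, the table, and the @rule decorator are hypothetical illustrations I made up for this post; they are not Gaia Platform APIs, just a way to contrast the hand-written object/row translation an ORM papers over with the kind of declarative, data-driven style I’m describing.

# Sketch only: Player, the players table, and @rule are invented for illustration.
import sqlite3
from dataclasses import dataclass

@dataclass
class Player:              # the object world
    name: str
    score: int

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE players (name TEXT PRIMARY KEY, score INTEGER)")

def save(p: Player) -> None:
    # The imperative/ORM world: every write is an explicit object<->row translation,
    # and keeping the two in sync is the programmer's job.
    conn.execute("INSERT OR REPLACE INTO players VALUES (?, ?)", (p.name, p.score))

def load(name: str) -> Player:
    row = conn.execute("SELECT name, score FROM players WHERE name = ?", (name,)).fetchone()
    return Player(*row)

save(Player("ada", 10))
p = load("ada")
p.score += 95              # the object changed...
save(p)                    # ...but the database only finds out if you remember to tell it

# A declarative style inverts this: you state what should happen when data changes,
# and the platform decides when to run it.
_rules = []
def rule(fn):
    _rules.append(fn)
    return fn

@rule
def on_score_change(player: Player) -> None:
    if player.score > 100:
        print(f"{player.name} made the leaderboard")

for r in _rules:           # in a real engine this would fire automatically on commit
    r(load("ada"))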

Of course one of the greatest advances in years is that machine learning has now become a useful, nay essential, tool in application building. It is also a form of declarative programming, and we see it as part of our overall platform. But we aren’t going to build new ML/DL technology; we are going to integrate existing technology into our overall platform. We think about a Perceive-Understand-Act application flow, and ML/DL is the technology of choice for the Perception part. We are building technology for Understand (i.e., database) and Act (i.e., declarative programming) and to integrate these three areas (and other technologies such as traditional programming languages or the Robot Operating System) together as a platform for Autonomous Machines.
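
To illustrate what I mean by that flow, here is a toy sketch. Everything in it, the function names, the fake camera frame, the vacuum example, is a made-up stand-in rather than anything from our platform: Perceive would really be an ML model, Understand a database of world state, and Act a set of declarative rules reacting to changes in that state.

# Toy Perceive-Understand-Act loop; all names and data are invented stand-ins.
from typing import Dict

world_state: Dict[str, str] = {}          # stand-in for the "Understand" database

def perceive(frame: bytes) -> Dict[str, str]:
    # Stand-in for ML/DL inference over sensor input.
    return {"object": "person", "zone": "doorway"} if frame else {}

def understand(observation: Dict[str, str]) -> None:
    # Persist the observation so rules can react to state changes.
    world_state.update(observation)

def act() -> None:
    # Stand-in for declarative rules that fire when relevant state changes.
    if world_state.get("object") == "person" and world_state.get("zone") == "doorway":
        print("pause the vacuum until the doorway is clear")

for frame in [b"frame-0", b"frame-1"]:    # stand-in for the sensor loop
    understand(perceive(frame))
    act()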

It’s a tall order for a startup to be addressing this broad of a space, and platforms are not considered the easiest startup plays, but I like hard problems. Of course we can’t deliver everything all at once, and one of the challenges is figuring out how to put product into the hands of customers early without compromising our ability to make key architectural advances that are critical to delivering on the overall vision. We are working our way through that too. As with almost anything in tech the biggest challenge has been hiring the right set of people to pull this off. Our hiring continues, so if any of this interests you please feel free to reach out.

Posted in Autonomous Machines, Computer and Internet, Database, Programming | 2 Comments

ARM in the Cloud

I know I’m long overdue on a “Part 2”, but I wanted to slip this in first. I’ve long been a skeptic on ARM becoming a mainstream processor choice for servers. But today’s announcement by Amazon Web Services of the ARM-architecture Graviton2 processor and the M6g, C6g, and R6g instance families has me rethinking my position. As is often the case I’ll start with some historical perspective and then discuss today’s AWS announcement.

In the late 1980s and early 1990s it was widely believed that RISC (Reduced Instruction Set Computer) architectures would replace CISC (Complex Instruction Set Computer) architectures such as the then-leading VAX, x86, IBM 360/370, and Motorola 68000 instruction set architectures (ISAs). The reasons for this were two-fold. First, it was believed that in any given semiconductor process technology a RISC processor would have 2x the performance of a CISC processor. This was largely because with CISC you were devoting an ever-increasing percentage of the available transistors to overcoming the inherent bottlenecks of the ISA, while with RISC those transistors could be devoted to increasing performance. Second, the complexity of designing CISC ISA processors had reached the point where the semiconductor technology could advance more quickly than you could design a processor for it, so you were always behind the curve in taking advantage of Moore’s Law. RISC ISA processors were easier to design, and thus would better track the semiconductor process improvement timing.

One thing to keep in mind is that the original RISC concept was to create a new ISA every time you made a new processor. So you never really had to waste silicon on legacy; you did a new optimal ISA for each processor and made it a compiler problem to re-target to each processor. Of course software people quickly made it clear to the hardware folks that this was a non-starter, that the cost of supporting new processors effectively (finding and fixing ISA-specific bugs, tuning performance, issuing patches to released software, etc.) would outweigh the performance improvements of not having a fixed ISA. So we moved on to fixed RISC ISAs that would survive through multiple generations of processors. By 1995 RISC was well on its way to world domination. IBM, Apple (Mac), and Motorola had moved to the Power ISA. DEC moved to Alpha and HP to HP/PA. Acorn Computers was a PC manufacturer that created its own RISC processor (the Acorn RISC Machine) and operating system (RISC OS). Acorn would later shift its focus away from PCs to its RISC ISA, dropping “Acorn” in favor of “Advanced”, renaming the company ARM, and licensing its architecture and designs to various companies. Other RISC chips also appeared, including the Intel i960 and the MIPS line. MIPS in particular looked like it would become “the Intel” of RISC processors, though it would eventually falter. And as we now know, ARM would be the only RISC ISA to really thrive, by riding the growth of the market for mobile devices. But at the start of 1995 it looked like we were going to have RISC everywhere.

So what happened in 1995? The Intel Pentium Pro. The Pentium Pro could hold its own on performance with that year’s RISC chips while maintaining full x86 compatibility. How did Intel do it? First off, they clearly had made advances in chip design tools that let them move much faster than other companies working on CISC. They also adopted an approach of compiling CISC instructions into RISC-like micro-ops (ROPs) and then making the rest of the processor work like a RISC processor. But maybe more importantly, they had a generational lead in introducing semiconductor manufacturing processes. So even if the assumption held that in any given semiconductor process technology RISC would be 2x CISC, Intel being a process technology generation ahead negated that RISC vs. CISC advantage.
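
If the idea of a CISC front end feeding a RISC-like back end seems abstract, here is a toy sketch of the “cracking” step. The instruction format and micro-op names are invented for illustration; real x86 decoders are enormously more involved than this.

# Toy illustration of cracking a register-memory CISC instruction into
# RISC-like micro-ops. Mnemonics and micro-op names are made up.
def crack(instruction):
    op, dst, src = instruction.replace(",", "").split()
    if src.startswith("["):                   # memory operand: split into load + ALU op
        return [f"LOAD tmp, {src}", f"{op.upper()} {dst}, tmp"]
    return [f"{op.upper()} {dst}, {src}"]     # register-register: already RISC-like

for uop in crack("add eax, [rbp+16]"):
    print(uop)                                # -> LOAD tmp, [rbp+16]  then  ADD eax, tmp
# The back end then schedules and executes these fixed-format micro-ops much as a
# RISC core would, which is how the Pentium Pro kept x86 compatibility without
# paying the full CISC complexity tax in the execution pipeline.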

Intel’s process technology advantage held for twenty years, allowing the x86 to extend its dominance from the desktop to the data center. With the exception of Power, which IBM continued to advance, RISC ISAs disappeared from the server and desktop world. But RISC had another advantage: its simplicity made it easy to scale down to smaller, low-power microprocessors for embedded and mobile applications. Today pretty much every mobile phone and mainstream tablet uses a processor based on the ARM ISA.

A number of years ago ARM and its semiconductor partners began trying to bring RISC back to the server and PC markets where it was originally expected to dominate. On the PC front ARM has made some limited progress, particularly with ChromeOS PCs and more recently in Windows PCs such as Microsoft’s Surface Pro X. But so far that progress represents a tiny portion of the PC business. In servers we’ve seen several abortive efforts and essentially no adoption. Until now.

Last year Amazon Web Services introduced an instance family, the A1, based on an ARM ISA processor of its own design called Graviton. Side note: in most cases (Apple is the counter-example) semiconductor designers license not the ARM ISA but an actual “core” design from ARM, and that is the case with AWS. The A1 was a pretty niche offering, and (to me at least) signaled what was likely another failed attempt to bring ARM to the mainstream server market. For example, the A1 was not something you could benchmark against their Intel-based instances and end up with a direct comparison; it was targeted at narrower use cases.

Today AWS brought its second-generation ARM processor, the Graviton2, to its three most mainstream instance families. Those are the M (general purpose, or balanced), C (compute intensive), and R (memory intensive) families, and we now have the M6g, C6g, and R6g families. AWS even did some performance comparisons of the M6g against the Intel Skylake SP-powered M5, and they were quite favorable to the M6g. But Skylake SP is an older Intel generation, and a comparison with Cascade Lake SP and AMD’s Rome would be more telling; those have already made their way into some models in the C5 and C5a families. Intel is also accelerating its product cycles, so I expect it to regain a performance lead, though perhaps not enough to deter the growth of Graviton. Graviton is likely to retain a price/performance lead in any case.
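
The comparison I actually care about is price/performance, and that is simple arithmetic once you have your own benchmark numbers. A sketch, with placeholder throughput and hourly prices rather than real benchmark results or current AWS pricing; substitute your own measurements and your region’s on-demand rates.

# Price/performance sketch. The throughput and price numbers are placeholders,
# not real benchmarks or current AWS pricing.
instances = {
    "m5.4xlarge (Intel)":      {"throughput": 100.0, "price_per_hour": 0.80},
    "m6g.4xlarge (Graviton2)": {"throughput": 105.0, "price_per_hour": 0.64},
}
for name, d in instances.items():
    print(f"{name:26} {d['throughput'] / d['price_per_hour']:7.1f} units of work per dollar-hour")

Even if Intel regains the raw performance lead, that denominator is what will keep Graviton interesting.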

So what happened to allow ARM to (apparently) catch up to Intel in the data center? I think there are three factors at play. First, recall the original RISC premise that in any given semiconductor process technology RISC should have 2x CISC performance, and that this turned out not to matter for the x86 because Intel was a generation ahead on semiconductor process. Intel no longer has that generational advantage, and by some measures (e.g., smallest feature size) is behind semiconductor foundries such as the one AWS uses, TSMC. The second factor is that we have a modern, some might say the most important modern, “system” vendor, AWS, leading the charge. Instruction set architectures have tended to thrive when backed by a powerful system vendor, not as pure artifacts of the semiconductor industry. x86 is the dominant PC and server chip today because IBM selected it for the IBM PC. ARM’s success came from DEC adopting it to create the StrongARM family, which was the dominant processor used in PDAs and early smartphones. Even ARM originator Acorn used StrongARM in its systems. Earlier dominant ISAs came from the system vendors themselves, particularly DEC and IBM. Now, just as DEC boosted ARM into dominance in the mobile device market, it looks like AWS will do the same for servers. Third, because AWS can optimize the ARM architecture and licensed core into its own chips for the controlled environment that is the AWS Cloud, it can tune chip designs far more than someone trying to create a general-purpose offering for on-premises servers.

So is it game over for the x86? After decades of watching, and working with, Intel I doubt it. Though it isn’t just AWS that Intel has to worry about. If AWS starts to show real success with Graviton, then Microsoft and Google will be empowered to go full-bore with ARM in the cloud as well. And then there is the persistent rumor that Apple wants to move the Mac to ARM processors of its own design. With failed efforts to break into the mobile device market, pressure from all corners growing in the PC market, and now an apparently competitive mainstream ARM offering in the server market, Intel can’t stay on the modest-improvement-at-high-unit-cost bandwagon much longer.

Posted in AWS, Cloud, Computer and Internet | Tagged , | 2 Comments

Endings and Beginnings (Part 1 – AWS)

Last week’s announcement of Amazon Aurora Multi-Master being generally available marked a kind of ending for me. It also served as a reminder that I haven’t written anything about my new venture, Gaia Platform. So nearly two years after I tried (and once again failed) at retirement, let me wrap up my Amazon Web Services (AWS) adventure and tell you about my new one.

The lure of working on databases for a new computing era, The Cloud, is what drew me out of semi-retirement and to AWS. I was running Amazon Relational Database Service (RDS), parts of Amazon Aurora (it’s complicated in that I had the control plane and product management reporting to me, and then the Aurora PostgreSQL project fully reported to me, but my peer Anurag Gupta owned Aurora MySQL and the Aurora storage system and is the father of Aurora; I get embarrassed when assigned credit that rightfully belongs to Anurag), the Database Migration Service (DMS), Performance Insights, and a few things that aren’t externally visible (e.g., the DBAs for AWS’ control plane databases, an operations team for a bunch of services in AWS under the CIA’s Commercial Cloud Services (C2S) program).

There were a lot of challenges in this new role for me and I relished them, even when I struggled. For example, I’d always forced my hand into the business side of the products I worked on but had never had actual responsibility for the business. At AWS I owned the relational database business. While confidentiality considerations keep me from talking about actual sizes, it was one of the largest AWS businesses and the fastest growing of those larger businesses. In the weeks before I stepped down we passed one of the (even) more household-name services in revenue. I have no real idea of the current business size, but doing some very conservative projections it must be an unbelievably big business today. What amazes me when I look back on the experience is not that they trusted me with engineering and operations for RDS (I had the track record to suggest I could succeed at that), but that they trusted me with such major business responsibility. That turned out to be an incredible career highlight for me, and I thank Andy Jassy, Charlie Bell, and Raju Gulabani for giving me that opportunity. Particularly Raju, because I know the only way it happened is that he committed to have my back.

After three years I announced I was going back into retirement. For those who don’t know, I live in Colorado and commuted every week (early Monday morning out, late Friday return) to Seattle. That I sustained it for three years is only a bit of a mystery to me; that my wife survived three years of it is amazing! But neither of us could sustain it longer, and we didn’t want to move to Seattle. Plus we had some family things to take care of. There is more to this story, and we almost found a way where I would keep working for Amazon part-time. But I realized I’d never contribute to Amazon in a way I found satisfying as a part-timer. So after a few months I pulled the plug on a staged retirement and did a cold-turkey retirement. Or so I thought. Once again, a little credit here: I couldn’t have worked from Colorado without Raju having my back (in 2014/2015 Amazon literally did not allow people to do work while in Colorado, so Raju had to cover for me if there was an operational issue that needed VP involvement over a weekend), and he was the one who proposed a staged retirement.

So why was last week’s launch of Aurora Multi-Master a good end point of the AWS story for me? My one major regret from my days at Microsoft had been that we never shipped a “single system image”, multi-master, SQL Server clustering solution. When we did the original planning for building our own database business (out of the ashes of Sybase SQL Server) we’d put clustering in our three-version plan. Yukon (SQL Server 2005) was supposed to include a single-system-image clustering solution. By single system image I mean that an application can talk to and update a database on any node in the cluster while remaining completely oblivious to the fact that the database is distributed over multiple nodes. In other words, it looks just like you are talking to a single system. That’s what we’d done at DEC with Rdb (conceptually copied by Oracle to become RAC). Others had done variants as well, but after a burst of energy in this space from the 80s to the mid-90s, vendors (except for Oracle) lost interest. The SQL Server team made a number of stabs at it, but they always faltered in the wake of either higher-priority work or technical challenges. Doing single system image is hard. So sharding, or dropping some of the transparency (Spanner is an example), or going to NoSQL models that had far fewer transparency demands, became the alternate answers. I’ve been away from Microsoft for almost 9 years, and the SQL Server team for over 15, and SQL Server (or Azure SQL) still doesn’t have a single-system-image clustering solution. But with AWS, that was once again the vision for Aurora.
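
To make “single system image” concrete, here is a sketch of what it means from the application’s side. The endpoints, credentials, and schema below are placeholders, and I’m using the generic PyMySQL driver rather than anything Aurora-specific; the point is simply that the code neither knows nor cares which node it is talking to.

# Sketch: endpoints, credentials, and the orders table are placeholders.
import pymysql

def record_order(endpoint, order_id, sku):
    # The application just connects and writes; it has no idea the database
    # is spread across multiple nodes.
    conn = pymysql.connect(host=endpoint, user="app", password="secret",
                           database="shop", autocommit=True)
    try:
        with conn.cursor() as cur:
            cur.execute("INSERT INTO orders (id, sku) VALUES (%s, %s)", (order_id, sku))
    finally:
        conn.close()

# With a multi-master, single-system-image cluster either call below is a legal
# write; distribution and conflict handling are the cluster's problem, not the app's.
record_order("node-1.cluster.example.internal", 1001, "widget")
record_order("node-2.cluster.example.internal", 1002, "gadget")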

I didn’t get to be the one that built Aurora Multi-Master, and I’m fine with that. When driving back from an Andy Jassy OP1 offsite in the summer of 2015, Anurag and I talked about single-system-image clustering and how desperately we both wanted to see it done. No matter how we rejiggered the organization structure over time, we would make this happen. Anurag got to drive it, and although he too left AWS before Aurora Multi-Master reached GA, it’s done now. Oh, they have plenty more to do to complete the vision (e.g., multi-region multi-master), but the solution is out now. Take your credit card and go give it a try. From my standpoint there is always a ton more to do in meeting customer database needs in the cloud. But in terms of a feeling of completeness and the ability to move on, having Amazon Aurora Multi-Master available lets me focus on what other interesting problems are out there. I’ll talk about that in Part 2.

Posted in Amazon, Aurora, AWS, Cloud, Computer and Internet, Database, Microsoft, RDS, SQL Server | 4 Comments

Ad-blocker Wars

About a year ago I wrote my Adblockers are the new AntiVirus piece. In the intervening period the war between ad blocking and web sites that depend on advertising has gone exponential. Many sites put up a warning asking you to unblock ads on their site; others block access entirely. And now Google, the tech company almost entirely based on serving ads, is using its control of the dominant web browser, Chrome, to limit ad blockers. Make no mistake, I am OK with the concept of advertising on the web. It is a great way to democratize access to content, whereas paywalls (however appropriate in many situations) limit information flow. But as I wrote in the earlier piece, as long as advertising remains a huge channel for distributing malicious content I will be blocking it. Because I refuse to whitelist them, there are several web sites that I can no longer access, but that is a small price to pay for better security and privacy. On the positive side for some sites, there is one I found valuable enough to pay for access to rather than allow ads. But just one so far, and it was a very small charge.

While I use all three major browsers to some extent, Firefox remains my primary browser. That’s partially because it offers the most options for ad blocking and other content filtering. It even has a built-in content blocker, though you have to know to configure it for use during general browsing. One of my favorite Firefox features is that it allows you to specify a DNS server to use independent of what your system is set to use. So my family notebook computers have Firefox set to use Quad9’s malware-filtering DNS no matter what network they attach to, without having to manually change network settings each time (when on our home network, our router is set to use Quad9). I could use the same mechanism to point to an ad-blocking DNS.
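
For anyone who wants to replicate that setup: the mechanism is Firefox’s DNS-over-HTTPS support, configurable in the network settings UI or via about:config. These are the preferences as I understand them; verify the names and Quad9’s current DoH URL before relying on them.

network.trr.mode = 3   (3 = resolve names only via DNS over HTTPS; 2 = prefer it but fall back to system DNS)
network.trr.uri  = https://dns.quad9.net/dns-query   (Quad9’s DNS-over-HTTPS endpoint)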

Of course ad-blocking extensions for browsers are insufficient, and with Google limiting their capabilities in Chrome, they are becoming the wrong point in the technology stack at which to block ads. There is also the problem of non-browser applications that bypass the extensions, as I talked about in last year’s entry. Fortunately there are other options. Ad-blocking DNS may be the easy and free alternative, with AdGuard DNS currently the leading option. Some routers also offer built-in ad blockers, though they may be part of a paid service. For example, the eero Plus service for eero routers supports ad blocking. That feature has been available for years, but is still labeled as being in beta, so caveat emptor. For those who like to really hack, you can load new firmware such as Tomato or DD-WRT onto your router, or build your own Pi-hole. I keep getting tempted to add a Pi-hole to my network, but it is down a long list of things I may never get time to do. More consumer-friendly hardware solutions such as the little-known eBlocker are also available. I suspect as this category grows the mainstream vendors will increasingly include ad-blocking options on new routers, which is great because my experience with whole-home devices that sit beside the router is decidedly poor.
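
Whatever the packaging, Pi-hole, AdGuard DNS, or a router feature, the underlying idea is the same DNS sinkholing trick: answer queries for known ad and tracker domains with an address that goes nowhere, and resolve everything else normally. A toy sketch (the blocklist entries are made-up examples, and a real resolver obviously speaks the DNS protocol itself rather than wrapping the local stub resolver):

# Toy DNS sinkhole: blocklisted names get an unroutable answer.
import socket

BLOCKLIST = {"ads.example-tracker.com", "telemetry.example-adnetwork.net"}

def resolve(name):
    if name in BLOCKLIST or any(name.endswith("." + b) for b in BLOCKLIST):
        return "0.0.0.0"                   # sinkhole: the ad request goes nowhere
    return socket.gethostbyname(name)      # normal resolution for everything else

print(resolve("ads.example-tracker.com"))  # -> 0.0.0.0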

There are also paid system-wide solutions. I’ve mentioned AdGuard for Windows before, but still haven’t given it a serious try. There is also a version for the Mac. I did pay for AdGuard for iOS Pro, which can perform ad blocking across an iOS device rather than just in Safari. Don’t confuse this with the free AdGuard for iOS, which is a Safari extension. Not that it isn’t a good ad blocker too.

And then there is Microsoft (and Apple, but I don’t follow MacOS developments). It is unclear how Microsoft’s adoption of Chromium as the basis for Edge will be impacted by Google’s latest change to Chrome. Will Microsoft follow Google’s lead, or continue to support a fully featured interface for ad-blocking extensions? Microsoft abdicated its leadership role in this space when they failed to move the Tracking Protection List feature forward from Internet Explorer into Edge. They could return to leadership by adding new features to the Chromium-based Edge (emulating Firefox), add new features to Windows that work across all browsers and applications, continue to leave this to others, or adopt Google’s privacy- and security-unfriendly behavior. While disappointing, I suspect they will take the middle road and leave this to others.

What you should notice is that the one option that would save ad-supported websites, a move by the advertising industry to truly protect security and privacy, is absent. Maybe there is some work going on there, but so far it hasn’t made it to the mainstream. As I said a year ago, they are running out of time to save themselves. The escalating ad-blocking war tells us that it is just about too late.

Posted in Computer and Internet, Microsoft, Privacy, Security, Windows | Tagged , | 1 Comment

Prime 1-Day Delivery Really is Different

At last week’s earnings call Amazon announced it was moving Amazon Prime from its historical 2-day shipping to 1-day shipping. Inevitably there were articles saying how Walmart or Target or whoever already had this. Or, even better than Amazon, had same-day delivery for some common products. All because they delivered from their large network of stores. I’m going to call BS on that, because “delivering from their stores” turns out to be more a symptom of a problem than a means of solving it.

Go back to Amazon’s origins as an on-line bookseller and Jeff Bezos’ recognizing that he could offer access to a vastly larger number of books (basically all those in print) than you would ever find in your local bookstore. Far more even than in the giant bookstores being built by chains like Barnes & Noble and Borders. This observation holds true more than ever in today’s retail environment. A retail outlet, even one as large as a Walmart Supercenter, only stocks a tiny fraction of the products, brands, styles, colors, sizes, etc. that are available. And in one of the most frustrating parts of the shopping experience, they frequently don’t have what you are looking for when you go into the store.

It is a very rare event when I go out shopping at local retailers that I come home with every item I was looking for. Even going to a store I know carries an item I want is often an unsatisfying experience. “Sorry you just drove 30 minutes and dealt with parking issues, crowds, etc.; we are out of stock on that.” ^&%$(. “Oh, you like those shoes? Sorry, we don’t carry that size in store but you can order it on our website.” “We only carry the 2′ version of that cable in the store; if you want the 4′ you’ll have to order it on our website.” A particular brand of nutritional supplement? Let’s roll the dice and see if this store carries it and has it in stock at this very moment. My preferred brand/scent of antiperspirant? The Safeway stores in Denver seem to stock it, but not the ones around Seattle. And so on. As a result, I don’t bother going to stores. When I need something I just order it. Most of the time from Amazon.

While being able to deliver in one day, or the same day, from a local retail outlet can be a very useful part of a fulfillment system, any attempt to make it the center of the experience replicates its bad characteristics in the online world. I don’t really care if Walmart or Target can deliver to my house in 20 minutes if neither carries the antiperspirant I want. Or if they are out of stock on the style, color, and size of jeans I am looking for.

I’ve been living in an area where Amazon already offers free 1-day Prime delivery on many items for orders over $35. On Tuesday I realized I’d lost my Apple Pencil; I had a new one on Wednesday, despite my SUV being in the shop. Amazon also offers various same-day delivery programs in my area, though I haven’t made use of those services. The news in Amazon announcing that it is moving Prime to 1-day delivery as the default is that it is building out its logistics system to support doing so for a very large portion of the items available on Amazon.com. And that is a whole different beast, both in complexity and in customer offering, than adding a delivery service from your local poorly stocked store. It’s the very same advantage that Jeff Bezos had over bricks-and-mortar bookstores on Day One. And not a surprise for a company where “it is always Day One”.

Posted in Amazon, Computer and Internet, Retail | 3 Comments

DMARC or Die

Let me ask a simple question: when are we going to get serious about dealing with unauthenticated email and its associated phishing and malware risks? If you think the industry is already taking this seriously, and that it is simply a hard problem, you are (IMHO) just wrong. Take this little snippet from the Microsoft Office 365 documentation on their handling of inbound mail that fails a Domain-based Message Authentication, Reporting, and Conformance (DMARC) check:

If the DMARC policy of the sending server is p=reject, EOP marks the message as spam instead of rejecting it. In other words, for inbound email, Office 365 treats p=reject and p=quarantine the same way.

In other words, in Microsoft’s infinite wisdom they ignore instructions from the domain owner to shred, incinerate, and bury deep in the earth mail that fails the checks the owner established to prove mail comes from them, and instead put that mail in the Junk folder, where hundreds of millions of naive users will find it and believe it might be legitimate. This may have been a wise step back when DMARC was fresh and new in 2012; today it is simply irresponsible of Microsoft to favor legacy behaviors over a domain owner’s explicit instructions.
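
Honoring the policy is not conceptually hard. Here is a sketch of what receiver-side enforcement boils down to; a real implementation also has to handle alignment modes, subdomain policies, percentage tags, and aggregate reporting, all of which I’m waving away.

# Conceptual sketch of receiver-side DMARC enforcement: do what the domain
# owner's published policy says, instead of second-guessing it.
def handle_inbound(dmarc_policy, spf_aligned, dkim_aligned):
    if spf_aligned or dkim_aligned:      # DMARC passes if either aligned check passes
        return "deliver"
    if dmarc_policy == "reject":
        return "reject"                  # bounce it; do not park it in the Junk folder
    if dmarc_policy == "quarantine":
        return "quarantine"              # spam/junk folder
    return "deliver"                     # p=none: monitor and report only

print(handle_inbound("reject", spf_aligned=False, dkim_aligned=False))  # -> reject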

I don’t really want to pick on Microsoft, other than as a representative of the industry overall. We have the tools (SPF/DKIM/DMARC) to dramatically impact the SPAM problem, but we aren’t driving adoption, and proper usage, at a rate commensurate with the danger that unauthenticated email represents. SPF and DKIM have been with us for about 15 years. After 15 years we should no longer accept excuses such as SPF breaking legacy (pre-)Internet systems like listservers; there has been plenty of time for alternate, compliant systems to be deployed. Unfortunately nearly every SPF record seems to end with a soft-fail indicator, meaning “I don’t know who might legitimately send email on my behalf, so don’t actually reject anything”. DMARC, which really brings SPF and DKIM together into a useful framework, has only been adopted by 50% of F500 companies. And nearly all of them have DMARC policies of NONE, meaning just go ahead and deliver mail that fails authentication to the user’s inbox. WTF? And if they do take DMARC seriously, only to have Microsoft ignore instructions to REJECT mail that fails authentication, it’s enough to make a CISO drink.
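
For anyone who hasn’t stared at these records, here is roughly what they look like as DNS TXT entries for a hypothetical example.com; the selector and reporting address are placeholders.

example.com.                       TXT  "v=spf1 include:_spf.example.com -all"
    ("-all" is a hard fail; the "~all" soft fail I complain about above tells receivers not to actually reject anything)
selector1._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=<base64-encoded public key>"
_dmarc.example.com.                TXT  "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
    (p=none means report only, p=quarantine means send failures to junk, p=reject means refuse them outright)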

Is it going to take legislation to make the industry get serious? Maybe if Microsoft were subject to a lawsuit with treble damages because they delivered a malicious email to people’s junk folders rather than honor the DMARC REJECT policy, we’d see some action. Not just by Microsoft, but by every organization fearful that new legislation had made it clear that failure to adopt well-established anti-SPAM techniques subjected them to unlimited financial exposure.

We need a hard timetable for DMARC adoption, and if industry doesn’t do it then perhaps it will take a legislative push. In either case, we need a date by which all domains either establish a DMARC policy or have their mail rejected by recipient servers. We need a date by which a DMARC policy must be either REJECT or QUARANTINE. We need a date by which servers must enforce the DMARC policy rather than just check it. The latter is actually the first thing to be tackled: if someone has taken the trouble to establish a policy, a server should enforce it! Hear that, Microsoft? And we need a date by which REJECT is the only acceptable policy. Want to install some other milestones? Fine. But let’s stop with the excuses. It really doesn’t matter if this is a problem of the perfect being the enemy of the good, or of competing interests, or just inertia. Throw out the excuses and DMARC or Die.

Posted in Computer and Internet, Microsoft, Phishing, Privacy, Security | Tagged , , , | 5 Comments