Snatching defeat from the jaws of victory

Once again Microsoft appears to have snatched defeat from the jaws of victory, this time repeating a key mistake from the Windows 8 era.  Microsoft was on the path to a coup, launching the seemingly excellent Surface Go well ahead of Apple’s launch of the next generation of iPad Pros.  It also launched the Surface Pro 6 ahead of Apple’s launch, though with a much smaller lead.  So where did Microsoft go wrong?  NO LTE.  Oh, they promise LTE in the future, but futures don’t cut it in this case.  This is exactly where Microsoft (and its ecosystem) screwed up back in 2013, and it has continued to screw up in successive launch cycles.

Back in 2013 the excellent Dell Venue 8 Pro, and other Windows tablets, launched with a promise of LTE that never materialized.  Within the Surface line Microsoft has always either ignored LTE, delayed it well beyond the initial launch, or, when it did arrive, made it hard to buy (i.e., targeted it at the business sales channel) rather than featuring it.  Now we have Microsoft singing the praises of “Always-Connected PCs”, but it doesn’t walk the talk.  For Microsoft, being “always connected” only applies to low-end ARM-based Windows 10 systems.  And so far it hasn’t even offered one of those itself.

With Apple you just select WiFi-Only or WiFi+LTE as part of its normal sales processes, both online and in-store.  And they launch (and generally ship) the LTE models concurrently with the WiFi-Only models.

I was completely ready to spring for a Surface Go the moment I could get one with LTE, and then yesterday Apple launched the new generation of iPad Pros.  There are a few things that the iPad Pro is not good at, like software development, but for my daily on-the-go needs it is near perfect.  And most importantly, I will have one in my hands, WITH LTE, in a couple of weeks.  So the moment has passed, Microsoft, and while you keep talking about being always connected, Apple is doing a much better job of walking the talk.  The Surface Go likely isn’t going anywhere, and I’m not particularly hopeful about the “Always-Connected PC” initiative either.


Google goes to the dark side on JEDI

Every time I read an article on JEDI, the U.S. Department of Defense’s large cloud project, I find myself suppressing an urge to comment.  Google dropping out of the bidding finally made that urge too difficult to suppress.

It is almost certainly true that only Amazon (AWS) and Microsoft have the current breadth of offerings to meet much of the JEDI requirements.  It is equally true that neither of them has all the pieces needed for this contract; they are going to have to build new capabilities as well.  Most articles I’ve read focus on certifications as a differentiator, and while those may represent a minimum bar for selling into this market and a demonstration of a cloud’s maturity, they seem neither a significant differentiator nor a significant hindrance to a vendor’s ability to compete for the RFP.  Put another way, if the rest of the RFP response showed overwhelming leadership, then a roadmap for achieving the needed certifications would be sufficient to overcome the existing AWS and Microsoft certification leads.

The problem for every potential bidder is that they need to partner to meet the full RFP requirements, commit to significant development that could negatively impact (e.g., in opportunity cost) their commercial offering roadmaps, or both.  The whining about JEDI being a single-source contract says more to me about tech industry disdain for partnering amongst major players than it does about the nature of the contract.  DOD is used to many, if not most, major contracts involving partnerships amongst the top suppliers (a.k.a. competitors): Boeing/Lockheed, Lockheed/Boeing, Lockheed/Northrop Grumman, Boeing/Saab, etc.  The right bid from a lead/prime with a lot of DOD experience would have a strong chance to challenge AWS and Microsoft.  For example, IBM has many of the pieces for a bid and decades of experience as a prime contractor for DOD.  It is the latter, not its fragmented commercial cloud offerings, that makes it a serious contender to win JEDI.

The real question about JEDI, and likely the real meaning behind Google’s use of lacking certifications as an excuse to drop out, is how much a vendor is willing to let the JEDI requirements impact its commercial roadmap.  AWS’ Andy Jassy likes to say that there is no compression algorithm for experience.  While that sometimes sounds like a marketing sound bite, there is a lot of truth to it.  When the cloud was new, and enterprise adoption was near non-existent, AWS aggressively went after a number of deals for the experience they would provide.  Those deals were key to getting AWS to its current leadership position, because they prepared an organization with only eCommerce DNA to address industries it otherwise couldn’t understand or relate to.  One of those was the U.S. Intelligence Community’s Commercial Cloud Services (C2S) contract, which many point to as one of AWS’ key strengths in the JEDI bid.  Certainly AWS wouldn’t be in a good position to win the JEDI deal without C2S, because it would face the “no compression algorithm for experience” dilemma.  And while others may not have the direct classified-cloud experience C2S gave AWS, Microsoft, IBM, and Oracle have decades of experience working with DOD and meeting its most demanding IT needs.

C2S is most important in the context of how much a young and small AWS was willing to impact its commercial roadmap to gain experience working in the toughest public sector environments.  Both the learnings, and yes the optics, of being able to support the most demanding security environment have had a huge impact on AWS’ ability to attract large enterprises to its cloud.  This is where Amazon’s focus on the long term comes into play.  C2S was a drop in the bucket of public sector IT spending.  JEDI is still just a toe in the water.  AWS will value JEDI not only for the business it brings, but for the things it forces them to do to meet DOD’s requirements, many of which it will bring back into its commercial offerings.  Oracle will value it for giving their cloud a legitimacy they have yet to achieve; it could actually save their IaaS/PaaS offerings from oblivion.  IBM seems more likely to value the revenue than the other benefits.  Microsoft likely sees it as validation that the direction(s) they’ve taken with Azure (including Azure Stack) have them equal to or ahead of AWS (without having to fall back on winning because the customer is an Amazon-retail competitor, or buying the business with a “strategic investment”).  Sorry, I couldn’t resist taking a little dig at my Microsoft friends.

And Google?  Google’s primary marketing thrust is that you should use Google Cloud because everyone wants to do things just like Google does.  But if Google doesn’t want government to use AI like they do, may in the future not want government to use some of their other technologies, and doesn’t want to disrupt their commercial roadmap to meet DOD requirements, then Google can’t bid on the deal.  The same applications that Google doesn’t want its AI technology used in could make use of technologies like BigQuery and Spanner, so how can Google offer those as part of JEDI?  And how much does Google want to focus its infrastructure work on being able to quickly stand up a new region at a newly established military base vs. continued development of its commercial regions?  How hungry are they for this business?  Apparently not very, as they’ve decided to go dark on the bidding.

The company that wins this business is going to be a company that is hungry for it, and not just for the revenue it brings.  That is always important of course, and being able to make a profit at it is just as important.  But in the end the winner is going to be, or at least should be, someone with a passion for the DOD customer base and for applying the learnings from JEDI to moving the Cloud up another notch in addressing broader customer needs.  I obviously see that from AWS and Microsoft, and Google already made it clear that isn’t the case for them.


The Big Non-Hack?

This week Bloomberg Businessweek (BBW) published “The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies” which claimed that 30 companies, most notably Apple and Amazon Web Services, had servers using hacked Chinese-made motherboards from U.S. manufacturer SuperMicro.  Apple, Amazon, SuperMicro, and even the Chinese government issued strong denials.  Additional denials are coming in as well, and right now BBW seems pretty far out on a limb with the story.  True or not, the article publicized real concerns about the security of the technology supply chain.  Concerns we are not taking seriously enough.

One bit of clarification (which is important, particularly if you don’t read the article carefully) is that the Amazon-related claim is about a company it acquired, Elemental Technologies.  Allegedly the hardware hack in Elemental server products was discovered as part of Amazon’s pre-acquisition due diligence and nearly scuttled the deal.  If there is any truth to the story, and Amazon gave quite a detailed response saying there isn’t, it should give some measure of assurance to AWS customers that AWS’ security processes caught this before the Elemental acquisition.  One weird part of the story vis-à-vis AWS is the claim that some of the hacked motherboards showed up in the AWS Beijing region.  While I won’t say exactly why, that part of the story set off my BS detector.  Otherwise, the AWS servers that run customer virtual machines (EC2 instances) and service control planes were not implicated in the story.

For all three major cloud providers I expect security practices that would either prevent or quickly uncover a hack such as the one discussed in the story.  I have no personal knowledge of Google, but both Amazon and Microsoft are extremely thorough, sophisticated, and usually quite aggressive on the security front.  Particularly when it comes to their own infrastructure.  At AWS security is considered the #1 priority, and failure is treated as the ultimate risk for destroying customer trust.  If the story about Elemental is even remotely true, the discovery of an actual hardware hack would have led AWS to implement numerous additional checks in its hardware acquisition and acceptance processes.

But to the meat of the issue: China is increasingly seen as a bad actor.  When you combine repeated concerns about back doors in Chinese-made technology products with ongoing intellectual property theft concerns, rising wage costs, rising shipping costs, rapidly growing national security concerns, and the nascent trade war, I have to wonder how long until western companies just start removing China from the supply chain.  That doesn’t necessarily mean moving manufacturing “back” to the U.S. (or western Europe); it may mean moving to other low-cost countries.  Countries where, presumably, there is better protection of intellectual property and privacy, and far less national security risk as well.  Basically, how long before western companies decide the risks of having China in the supply chain far exceed the rewards?  For those wanting to sell to the U.S. government, and likely many allies, the day of reckoning is already here.  That noose will just keep getting tighter.

When will we see an accelerated move away from including China in the supply chain of technology products?  If the BBW story turns out to be true, that will certainly accelerate things somewhat.  If the trade war lasts for more than a few months, that will have a major impact.  Few, if any, companies are going to try to figure out how to remove China from the supply chain of existing products or those well along in development.  But probably every (non-startup) western company is looking at products just entering the development cycle and trying to figure out if there is a sensible way to not make them in China or with Chinese-sourced components.  Most will likely conclude there isn’t currently a sensible alternative, or decide to bet that the trade war will be resolved before they go into production.  Many will at least take some initial steps to reduce their China supply chain exposure, such as seeking second sources outside China for key components.  The longer the trade war goes on, the more will conclude tariffs are a long-term part of the cost equation and shift away from China.  And if another, confirmed, story of Chinese hardware hacking comes out during these deliberations?  There will be a mad rush for the exit.

As for BBW, I’m concerned that the story doesn’t seem to have legs.  And if the story is false, or at least got a lot of the facts wrong, then it gives a serious black eye to reporting on the technology business.

The Product Shipping Tax

There is an observation I made back in the 1980s that both holds in today’s Cloud world and remains one of the toughest messages to communicate to senior leadership: when you ship a new product or service, major release, or even a major feature (in the Cloud world), your people resources for new feature development are permanently cut in half.  The short-term message is no more palatable, but perhaps easier to communicate: for the first 6-12 months after a major release nearly 100% of your people will be unavailable (or their efforts severely degraded) for new feature development.  So each budget cycle product teams end up asking for more staffing, even as it seems we are delivering less in the way of features.  It isn’t that senior leaders don’t get that there is a tax on supporting existing products, from bug fixing to operations, but they do have trouble with the magnitude of it.  For example, non-engineers (or those who haven’t done engineering recently) struggle with how costly yet necessary it is to pay down technical debt.
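To make the compounding effect concrete, here is a toy model (the numbers are purely illustrative, not drawn from any real team): every release shipped permanently diverts half of whatever capacity remains into sustaining work.

```python
def feature_capacity(team_size, releases_shipped):
    """Toy model of the 50% rule: each shipped release permanently
    diverts half the remaining capacity into sustaining work
    (customer issues, bug fixes, technical debt, operations)."""
    capacity = float(team_size)
    for _ in range(releases_shipped):
        capacity /= 2
    return capacity

def headcount_needed(feature_people_wanted, releases_shipped):
    """Invert the model: total headcount required to keep a constant
    number of people working on new features."""
    return feature_people_wanted * 2 ** releases_shipped

# A 40-person team has only 10 people left for features after two releases.
print(feature_capacity(40, 2))
# Keeping 20 people on features after two releases takes an 80-person team,
# which is why every budget cycle ends with a request for more staffing.
print(headcount_needed(20, 2))
```

The model is deliberately crude, but it captures why feature output can shrink even as the team grows.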

Two things happened to me in the 80s that led to my 50%/100% rule of thumb.  The first was my experience as a project leader across multiple releases.  After each release, the number of person-months I had to schedule to investigate customer issues, fix bugs, clean up code that had become unmaintainable, deal with dependencies (e.g., a new OS version breaking an existing product), revamp build systems, respond to corporate initiatives (e.g., you must switch to this new setup/installation system), etc. would go up.  And over time I realized it would stabilize at about half the team.

The other thing that happened in the 80s is that I went back and looked at multiple releases, including those I hadn’t been involved in, and plotted the incoming Software Performance Report (SPR) rate by month against a number of other metrics.  SPRs were a means for DEC customers to report bugs, request features, and otherwise communicate with the engineering team about issues.  There was no filter on these; even customers without support contracts could submit SPRs, so a complex feature might generate a lot of SPRs even though those resulted in a low unique bug rate.  There were two interesting data points here.  The first was that the incoming SPR rate started to rise dramatically about 60 days after release, peaking around the six-month mark.  While the incoming rate then dropped off, it plateaued at a higher level after each release.  There were two causes for that: one was simply having more features that needed support; the other was that, thankfully, there was a rapidly growing customer base.  So even if you drove SPRs per customer (one of my favorite overall product quality metrics) down, the growth in customers meant more SPRs.

The second data point was a clear correlation between the number of check-ins for a release and the incoming SPR rate, so major releases not surprisingly resulted in more SPRs than minor releases.  Based on this metric I was actually able to predict that the SPR rate for one new major release would be terrifyingly high, a prediction that sadly proved accurate.  At the peak nearly the entire development team was required to respond to SPRs, and for about 90 days before and after there was a high interrupt load on most developers as SPRs hit for their areas, rendering them unproductive at working on new features.

The Cloud changes none of this, and perhaps makes it even worse.  Before you enter a beta or preview period you have no operational burden, minimal deployment burden, only modest urgency on fixing most bugs, etc.  The preview is as much about making sure you can operate at hyperscale as it is about traditional beta things like verifying that customers can use the service as intended.  Then the day you declare General Availability (GA) you take on a 24×7 operational burden.  Production-impacting bugs become urgent.  It’s the day you start learning where you missed on preparing for hyperscale.  It’s the day customers start trying to do things you never intended, or perhaps never expected.  It’s the day that you start having to plan on paying down technical debt built up during development.  It’s the day you have to start dealing with disruptions like the Meltdown and Spectre security issues with an urgency that distracts from feature work.  Etc.  So just like with a 1980s packaged product, for the first 6-12 months nearly the entire team will be unavailable for feature work, and on an ongoing basis only half the team you had at launch will be available for feature work.

I tried for years to find ways to avoid the 50%/100% tax, but never succeeded.  So each budget cycle I’d look at all we wanted to do, all that our customers wanted us to do, and go ask for a significant headcount increase.  Each year I would face the pain of telling senior leadership how little feature work we could do without that increase.  Each year they would challenge me, and I didn’t blame them.  I never found a way to communicate the magnitude of the situation in the context of the budgeting exercise.  In retrospect I realize what I should have done at Amazon was write a narrative, outside the “OP1” process, that made all this clear.  I could have looked at data for numerous projects that would (likely) have supported my career-long observation.  But that would have been too late to help with the decades at DEC and Microsoft where I failed to fully explain the need for the additional people.  To be clear, I just about always got the people I needed.  It was just more painful than it should have been.

So what prompted me to write this now?  I’m watching as the first signs appear that Aurora PostgreSQL is getting past the 100% stage of its “V1.0” release.  For example, although Aurora PostgreSQL has not yet announced PostgreSQL 10 support, in some regions you can actually find it (10.4, specifically) in the version selector for creating Aurora PostgreSQL instances.  Launch must be fairly imminent, with hopefully many more features coming in the next few months.  Overall though, it reminded me that my 50%/100% rule still applies.

Challenges of Hyperscale Computing (Part 3)

Back in Part 2 I discussed the relationship between failures and the people resources needed to address them, and demonstrated why at hyperscale you can’t use people to handle failures.  In this part I’ll discuss how that impacts a managed service.  If you’ve wondered why it takes time, sometimes a seemingly unreasonable amount of time, for a new version to be supported, why certain permissions are withheld, why features may be disabled, etc., then you are in the right place.

tl;dr At hyperscale you need extreme automation.  That takes more time and effort than those who haven’t done it can imagine.  And you have to make sure the user can’t break your automation.

We probably all have used automation (e.g., scripts) at some point in our careers to accomplish repetitive operations.  In simple cases we do little or no error handling and just “deal with it” when the script fails.  For more complex scripts, perhaps triggered automatically on events or a schedule, we put in some simple error handling.  That might focus on resolving only the most common error conditions, and raising the proper notifications for uncommon or otherwise unhandled errors.  Moreover, the scripts are often written to manage resources that we (or a small cadre of our co-workers) own.  So a DBA might create a backup script that is used to back up all the databases owned by their team.  If the script fails then they, or another member of their team, are responsible for resolving the situation.  If the team makes a change to a database such that the scripts fail, the responsibility for resolving the issue remains with them.  This can be as human-intensive or as automated as your environment supports, because it all rests with the same team.
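A minimal sketch of that kind of team-owned automation (the database names are hypothetical, and `pg_dump` merely stands in for whatever backup tool the team uses): notice that the error handling goes no further than logging and reporting, because anything unhandled simply lands back on the team that owns the databases.

```python
import logging
import subprocess

logging.basicConfig(level=logging.INFO)

DATABASES = ["orders", "billing"]   # hypothetical databases owned by one team

def backup(db):
    """Attempt one backup; True on success.  The pg_dump invocation is illustrative."""
    result = subprocess.run(
        ["pg_dump", "-Fc", "-f", f"/backups/{db}.dump", db],
        capture_output=True,
    )
    return result.returncode == 0

def run_backups():
    """Back up every database; return the ones that need a human to step in."""
    failures = []
    for db in DATABASES:
        try:
            ok = backup(db)
        except FileNotFoundError:
            # Unhandled case (tool not installed): punt to the owning team.
            ok = False
        if not ok:
            failures.append(db)
            logging.error("backup of %s failed; the team must investigate", db)
    return failures
```

Because the same team owns both the databases and the script, every failure path can end in “notify ourselves.”  As the rest of this post argues, at hyperscale that escape hatch doesn’t exist.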

In the case of a managed service the operational administration (“undifferentiated heavy lifting” such as backups, patching, failover configuration and operation, etc.) of the database instance is separated from the application-oriented administration (application security, schema design, stored procedure authoring, etc.).  The managed service provider creates automation around the operational administration, automation that must work against a vast number (i.e., “millions” was where we ended up in Part 2) of databases owned by a similarly large number of different organizations.

In Part 2 I demonstrated that the Escaped Failure Rate (EFR), that is, the rate of failures requiring human intervention, had to be 1 in 100 billion or better in order to avoid the need for a large human shield (and the resulting costs) to address those failures.  Achieving 1 in 100 billion requires an extreme level of automation.  For example, there are failure conditions which occur so infrequently that a DBA or System Engineer might not see them in their entire career.  At hyperscale, that error condition might present itself several times per day, and many times on a particularly bad day.  As an analogy, you are unlikely to be hit by lightning in your lifetime.  But it does happen on a regular basis, and sometimes a single strike can result in multiple casualties (77 in one example).  At hyperscale, on any given day there will be a “lightning strike”, and occasionally there will be one resulting in mass “casualties”.  So you need to automate responses for conditions that are exceedingly rare as well as those that are common.
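Some back-of-envelope arithmetic shows why the bar has to be that high.  The per-instance operation count below is my assumption for illustration, not a number from Part 2:

```python
# Fleet size from Part 2, plus an assumed (illustrative) automation workload.
instances = 1_000_000
ops_per_instance_per_day = 10_000   # health checks, backups, repairs, scaling...
daily_ops = instances * ops_per_instance_per_day   # 10 billion operations/day

def escaped_per_day(escaped_failure_rate):
    """Failures per day that escape automation and land on a human."""
    return daily_ops * escaped_failure_rate

# A rate that sounds superb, 1 in a million, still means roughly 10,000
# human interventions every day.  At 1 in 100 billion it drops to about
# one intervention every ten days.
print(escaped_per_day(1e-6))
print(escaped_per_day(1e-11))
```

Under these assumptions, ordinary “good” reliability would still bury an operations team; only an extreme EFR keeps humans out of the loop.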

As the level of automation increases you have to pay attention to overall system complexity.  For example, if you are a programmer then you know that handling concurrency dramatically increases application complexity.  And DBAs know that a whole bunch of the complex work in database systems (e.g., the I in ACID) is focused on supporting concurrent transactions.  Allowing concurrent automation processes makes automation dramatically more complex in the same way.  In other words, if you allow concurrent automation processes against the same object (e.g., a database instance) then you have to program them to handle any cases where they might interfere with one another.  For any two pre-defined processes of no more than modest complexity, that might be doable.  But as soon as you allow a more general case, ensuring the concurrent processes can successfully complete, and complete without human intervention, becomes impractical.  So when dealing with any one thing, for example a single database instance, you serialize the automation.
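A minimal sketch of that last point (a toy, not a claim about how any real service is built): funneling all automation tasks for one instance through a single worker guarantees they never run concurrently against that instance.

```python
from queue import Queue
from threading import Thread

class InstanceAutomation:
    """Toy sketch: serialize all automation tasks against one database
    instance by funneling them through a single worker thread."""

    def __init__(self, instance_id):
        self.instance_id = instance_id
        self.tasks = Queue()
        self.log = []        # completed task names, in execution order
        Thread(target=self._worker, daemon=True).start()

    def submit(self, name, fn):
        self.tasks.put((name, fn))

    def _worker(self):
        while True:
            name, fn = self.tasks.get()
            fn()             # tasks run one at a time, in submission order
            self.log.append(name)
            self.tasks.task_done()

# A backup and a scale-storage task can never interleave on the same instance.
db = InstanceAutomation("db-1")
db.submit("backup", lambda: None)
db.submit("scale-storage", lambda: None)
db.tasks.join()              # wait for the queue to drain
print(db.log)                # ['backup', 'scale-storage']
```

The design trade-off is exactly the one described above: serialization removes a whole class of interference bugs, at the cost of making long-running tasks (like a slow storage scale) block everything else queued behind them.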

I kicked this series off discussing database size limits.  The general answer for why size limits exist is the interaction between the time it takes to perform a scale-storage operation and how long you are willing to defer execution of other tasks.  Over time it became possible to perform scale storage on larger volumes within an acceptable time window, so the maximum size was increased.  With the advent of EBS Elastic Volumes, the RDS automation for scale storage can (in most cases) complete very quickly.  As a result it doesn’t block other automation tasks, enabling 16TB data volumes for RDS instances.

The broader implications of the requirements for extreme automation are:

  • If you can’t automate it, you can’t ship it
  • If a user can interfere with your automation, then you can’t deliver on your service’s promises, and/or you can’t achieve the desired Escaped Failure Rate, and/or they will cause your automation to actually break their application
  • A developer is able to build a feature in a couple of days that might take weeks or months of effort to sufficiently automate before being exposed in a hyperscale environment

One of the key differences that customers notice about managed database services is that the privileges you have on the database instance are restricted.  Instead of providing the administrative user with the full privileges of the super user role (sysadmin, sysdba, etc.) of the database engine, Amazon RDS provides a Master user with a subset of the privileges those roles usually confer.  Privileges that would allow the DBA to take actions that break RDS’ automation are generally excluded. Likewise, customers are prohibited from SSHing into the RDS database instance because that would allow the customer to take actions that break RDS’ automation.  Other vendors’ managed database services have identical (or near identical) restrictions.

Let’s take a deeper look at the implications of restricted privileges and the lack of SSH, and how they interact with our efforts to limit EFR.  When a new version of software is released it always comes with incompatibilities with earlier versions (and bugs of its own, of course).  A classic example is where a new version fixes a bug in an older version.  Say a newer version of database engine X either fixes a bug where X-1 ignored a structural database corruption, or introduces a bug where X can’t handle some condition that was perfectly valid in X-1.  In either case, the upgrade-in-place process for taking a database from X-1 to X fails when the condition exists, leaving the database inaccessible until the condition is fixed.  Fixing it requires SSHing into the instance and/or accessing resources that are not accessible to you.  Now, let’s say this happens in 1 out of 1000 databases.  Since the customer can’t resolve it themselves, a service provider that doesn’t automate the handling of this condition will need to step in 1000 times in the 1-million-instance example.  Did you read Part 2?  That’s not a reasonable answer in a hyperscale environment.  So the managed service can’t offer upgrade in place until it has both uncovered these issues and created automation for handling them.

Similar issues impact the availability of new versions of database software (even without upgrade in place).  Changes (features or otherwise) that impact automation, be that the creation of new automation or changes to existing automation, have to be analyzed and the work completed to handle them.  Compatibility problems that would break currently supported configurations have to be dealt with.  Performance tuning of configurations has to be re-examined.  Dependencies have to be re-examined.  Etc.  And while some of this can be done prior to a database engine’s General Availability, changes often occur late in the engine’s release cycle.  A recent post in the Amazon RDS Forum complained about RDS’ lack of support for MySQL 8.0, which went GA last April.  So I checked both Google Cloud SQL and Microsoft Azure Database for MySQL, and neither of them supports MySQL 8.0 yet either.  To be supportable at hyperscale, new releases require a lot of work.

Let me digress here a moment.  The runtime vs. management dichotomy goes back decades.  With traditional packaged software the management tools are usually way behind in supporting new runtime features.  With Microsoft SQL Server, for example, we would constantly struggle with questions like “We don’t have time to create DDL for doing this, so should we just expose it via DBCC or an Extended Stored Procedure?” or “This change is coming in too late in the cycle for SSMS support, is it ok to ship without tool support?” or “We don’t have time to make it easy for the DBA, so should we just write a whitepaper on how to roll your own?”  The SQL Server team implemented engineering process changes to improve the situation, basically slowing feature momentum to ensure adequate tools support was in place.  But I still see cases where that doesn’t happen.  With open source software (including database engines), the tooling often comes from parties other than the engine developers (or core community), so the dichotomy remains.

It’s not just that management support can’t fully be done until after the feature is working in the database engine (or runtime or OS or…); it is that for many features the effort to provide proper management exceeds the cost of developing the feature in the first place.  On DEC (now Oracle) Rdb I was personally involved in cases where I implemented a runtime feature in a couple of hours that turned into many person-days of work in tools.  Before I joined AWS I noticed that RDS for SQL Server didn’t support a feature that I would expect to be trivial to support.  After I joined I pressed for its implementation, and while not a huge effort it was still an order of magnitude greater than I would have believed before actually understanding the hyperscale automation requirements.  So while I’m writing this blog in the context of things running at hyperscale, all that has really changed in decades is that at hyperscale you can’t let the management aspects of software slide.

There is a lot more I could talk about in this area, but I’m going to stop now since I think I made the point.  At hyperscale you need ridiculously low Escaped Failure Rates.  You get those via extensive automation.  To keep your automation operating properly you have to lock down the environment so that a user can’t interfere with the automation.  That locked down environment forces you to handle even more situations via additional automation.

When all this works as intended you get benefits like those I described years ago in a blog I wrote about Amazon RDS Multi-AZ.  You also get to have that managed high-availability configuration for as little as $134 a year, which is less than the cost of an hour of DBA time.  And the cloud providers do this for millions of instances, which is just mind-boggling.  Particularly if you recall IBM founder Thomas Watson Sr.’s most famous quote, “I think there is a world market for maybe five computers.”

Keezel – Another Internet Security Device

I’m always on the search for new security tools, and this time my hunt took me to Keezel.  For full disclosure, I liked the concept so much that I made a token investment in Keezel via crowdfunding site StartEngine.  Keezel is a device a little larger than a computer mouse that creates a secure WiFi hotspot sitting between your devices and another WiFi (or wired Ethernet) network.  It uses a VPN to communicate over the public network, so your traffic can’t be compromised.  You connect it to a hotel, coffee shop, or other location that has a public/semi-public network you can’t fully trust, then you connect all your devices to the Keezel’s WiFi.  So: a VPN in a box, or puck if you prefer.

Keezel has a few features beyond giving you a VPN.  It can block access to known phishing sites, and it also provides an ad blocker.  Both features are off by default but are easy to toggle on.  While you may already have software that provides these features, it no doubt has gaps.  For example, iOS only supports ad blocking in Safari itself, and I’ve previously discussed how non-browser apps displaying web pages showed ads that attempted to download malware to a Windows PC.  Multiple layers of checks for phishing websites are also valuable, given that one source of dangerous-URL information may block a site before the others do.

Keezel has a built-in 8000mAh battery so you can use it for a day without plugging in.  You can also use the battery to charge your phone and other devices.  The latter feature is more important than it sounds, because the battery makes the Keezel heavy.  When I travel with the Keezel I can leave one of my Mogix portable chargers behind, making it roughly weight-neutral from a backpack perspective.  It’s perfect for the all-too-frequent cases where the only seats available in an airport lounge or coffee shop are the ones without nearby outlets.

There is one big question mark over the Keezel: why use one instead of VPN software on the device?  There are a number of reasons.  The first is that you may have devices that can’t install VPN software.  The Keezel lets you take your Fire TV stick, Echo, and other “IoT” devices on the road while keeping them off unsafe networks.  The second is Keezel’s anti-phishing and ad-blocking technology.  The third is that VPN services often have a limit on the number of devices they support per subscription.  For example, ExpressVPN limits you to 3 simultaneous connections.  While that is fine most of the time, occasionally you may want to exceed that number.  Fourth, while you may be perfect about turning on your VPN whenever you connect to a public network, most people aren’t.  For example, what about your spouse or kids?  With their devices already set to automatically connect to the Keezel, all you need to do is connect it to the public WiFi and every device your party is using is automatically on a secure network.

The major downside I’ve found to Keezel is performance: it peaks at about 10Mbps for me.  Keezel says the range is 4-20Mbps.  I can do much better than that with ExpressVPN.  For example, on a 1Gbps FiOS connection I saw 400+ Mbps from an iPhone 7 Plus with no VPN, ~60 Mbps with ExpressVPN, and the aforementioned 10 Mbps from Keezel.  Of course public hotspots don’t usually offer high raw speeds, so the Keezel limits may actually be unnoticeable.  I haven’t tested it enough to be sure.

Pricing is also a factor to be considered.  ExpressVPN costs me $99/year.  A Keezel starts at $179 with lifetime Basic service.  Basic has a speed limit of 500Kbps, so it is mostly for email and light browsing.  A device with a year of Premium service, which brings the “HD Streaming Speed”, goes for $229.  Premium service can be extended (or added to a Basic device) for $60/year.  So while Keezel is initially a little expensive, over multiple years (or many devices) it can work out to be quite cost-effective.

There are some things I’d like to see from Keezel that would make it a better security device.  Blocking malware-serving sites, not just phishing sites, is a clear one.  Reports are another feature I’d like to see, since I like to spot-check my networks for potential bad actors.  Additional URL filtering capability (e.g., “family safety” as a filtering category) is also desirable.  Overall, I’d like Keezel to provide security features more comparable to the eero Plus service for eero devices.  And, of course, I would like to see much higher performance than it currently provides.

What is my personal bottom line on Keezel?  For day-to-day use, where I walk into a Starbucks and need to kill an hour between meetings, I will stick with ExpressVPN to protect any device that needs WiFi.  When I’m staying at a hotel, I’ll use the Keezel to create my own secure WiFi network.  For scenarios in-between?  I’m undecided.


Amazon and the ACLU

Google, Microsoft, and Amazon have all been under pressure these last few months over being suppliers of technology to the law enforcement and defense markets.  Pretty much all technology vendors have sold into these markets since, well, the beginning of technology as we know it.  And much of the technology we know and love today grew out of government, particularly military, requirements and projects.  The Internet and GPS are the two most obvious examples, but others are all around us.  Supercomputers, though now used for many commercial applications, exist almost entirely because of the U.S. nuclear weapons labs’ decades-long, insatiable thirst for compute power.  The current ubiquity of microchips owes a lot to U.S. military concerns that American industry would be unable to keep up with the military’s need for advanced semiconductors, which is why the government funded SEMATECH for the first decade of its existence.  While there has always been some public-opinion risk in selling technology into law enforcement and defense markets, the current wave of pressure is based on a new dynamic.  The cloud changes everything: the technology provider doesn’t just (fairly quietly) sell hardware and software into a controversial market, it also operates that technology (rather publicly) for its customers.  Make the service something AI-related, the 21st Century equivalent of 20th Century nuclear weapons and energy concerns, and you have a topic ripe for public discourse.

Before getting more directly into the ACLU taking issue with the Amazon Rekognition service that AWS offers, let me set a little more context.  The current cloud leaders are primarily (or at least, in the case of Microsoft, heavily) focused on consumer-direct offerings.  It’s a lot easier to use public indignation as a weapon when a company sells to the public than when its customers are other industrial companies.  For example, how much public pressure could you put on IBM or Digital Equipment to stop selling for defense use?  You go to Lockheed or Boeing or Northrop Grumman and say “stop buying from these guys because they sell to the CIA, Air Force, Navy, etc.” and they look at you like you have two heads (or none at all, actually) because those are their customers too.  Bad analogy?  OK, you go to Ford and tell them to stop, and they start telling you about this Ford Aerospace subsidiary that sells to the defense market.  Ford Aerospace (which Ford sold to Loral in 1990) was a subcontractor on multiple nuclear missiles, amongst other things.  Now Ford would seem to have been a target for protests against nuclear weapons, but I suspect any such effort in the 50s-80s would have backfired.  How about GE, GM, Goodyear, Chrysler, etc.?  Same story.  And while Digital Equipment Corporation never made military-specific products, its products were used everywhere in the defense and law-enforcement realms.

Until at least Watergate, and really up until the end of the Cold War, being a supplier to the defense of the United States was nearly always a net PR positive.  Let’s not forget that John F. Kennedy was elected President partially by hammering home the message that the U.S. was behind in nuclear missiles, the so-called Missile Gap, and then appointed Ford executive Robert McNamara as Secretary of Defense.  Or that Ronald Reagan later used the industrial might of the U.S. to force an end to the Cold War.  I don’t bring this up to be political, but rather to point out that these issues are often orthogonal to political party or philosophy.

So now to the ACLU.  The ACLU has gone to war against Amazon Web Services for offering facial recognition technology (Amazon Rekognition) to law enforcement agencies.  Note that Rekognition is not specifically about facial recognition, nor does it specifically target law enforcement requirements.  It is a generalized image (and video) recognition technology, and it is this generality that makes it a cost-effective commercial offering.  Facial recognition is, not surprisingly, a popular use case.  The ACLU’s first attack came back in May, when it discovered Rekognition was being used by some law enforcement agencies for facial recognition.  Then this week it launched another barrage by showing that, using default settings, Rekognition falsely identified members of Congress as matching images found in a mugshot database.  I felt really bad for the Rekognition leadership, former co-workers and friends, as I’m sure they never expected to find themselves being attacked by the ACLU.  However, in some ways this was obviously coming.  The ACLU doesn’t appear to have much influence with law enforcement; it is a generally adversarial relationship.  The ACLU doesn’t appear to have much more of a fan base amongst members of the current Congress.  So attacking a technology supplier, particularly one that is part of a consumer-focused company, is one of the few tools at the ACLU’s disposal.  In other words: you can’t get law enforcement to stop using facial recognition, so maybe you can make it harder for them to obtain the technology.

For all the hoopla here, AWS has no exclusivity on facial or general image recognition technology.  Beyond other commercial technology suppliers, the FBI, Homeland Security, and other large law enforcement agencies have privately developed and operated systems for facial recognition.  What AWS has done with Rekognition is democratize the availability of this technology, making it affordable for (amongst many others) smaller law enforcement agencies.  If AWS stops selling Rekognition to law enforcement it will have no impact on, for example, the NYPD’s use of facial recognition.  It may create a country of have and have-not agencies, where the NYPD has the ability to scan a crowd for a kidnapped child but small departments cannot.  Admittedly that’s the positive spin on Rekognition; a more negative spin is that New York becomes an Orwellian nightmare while small cities and towns remain free of the surveillance state.  If you believe preventing small agencies from having access to Rekognition will keep the surveillance state at bay, then I have a bridge to sell you in Brooklyn, surveillance cameras (which you can rip out) and all.  What will really happen is that an alternate service will emerge, from a provider without a consumer business and perhaps privately held (so even shareholder pressure doesn’t work).  Or Congress could even mandate that a Federal Government-developed solution be offered to local law enforcement agencies at subsidized pricing.

This leads back to where this is really going: attacking Rekognition is all about trying to force the Federal government to put in place acceptable (to the ACLU, of course) rules for the use of facial recognition technology.  Microsoft’s Brad Smith argued for this exact end-game a couple of weeks ago.  While I’m not a fan of regulation, even less so of premature or overreaching regulation, some regulation in this space is inevitable.  Without it we will end up with a patchwork of legal rulings that attempt to map 21st Century technology to our Bill of Rights and to century-old laws that are aging badly in the face of new technology.  Brad called out some very good issues that should be addressed.

Today’s blowup is largely a technology stunt by the ACLU.  Let’s say you want to present a picture with an animal in it and ask one of three questions.  Question one is “Is there a dog in this picture?”  Question two is “Is it a Bernese Mountain Dog?”  Question three is “Is it MY Bernese Mountain Dog?”  The use cases for these three questions may be very different, and the confidence level required may be different as well.  The default confidence level for Amazon Rekognition is 80%, which is fine for doing quick scans of photos looking for dogs.  Yes, you will get an occasional false positive in there, such as a coyote, fox, or house cat.  Asking the Bernese Mountain Dog question likely requires more than 80% confidence to avoid an overwhelming number of false positives, because there are enough other breeds with similar coloring.  Or take the Greater Swiss Mountain Dog: the differences (the most obvious to a casual observer being coat length) mean that at 90% you may still see a lot of Swissies in with the Berners.  Trying to pick “my” dog out of the crowd probably requires 95% confidence, and even then will yield occasional false positives, something I know from my own experience looking at a Berner picture and mistakenly thinking it was of my dog.  So when the ACLU’s use of an 80% confidence level to match members of Congress against mugshots yielded a bunch of potential Congressional criminals, that should have come as no surprise.  80% is basically what you’d get from a mediocre criminal sketch artist’s drawing: enough to take a closer look at someone, but not a definitive match.  Had the ACLU used the 95% confidence level it would have seemed like less of a stunt and more of a real warning about use of the technology, but I suspect the press will mostly echo the ACLU’s message.
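The effect of the confidence threshold can be sketched in a few lines.  This is a toy illustration, not a Rekognition call: the match list and names are invented, and the filtering mirrors what the service’s minimum-confidence threshold does server-side.

```python
# Hypothetical face-match results in a Rekognition-like shape.
# Confidence scores are invented for illustration.
matches = [
    {"name": "person_a", "confidence": 81.3},   # weak match
    {"name": "person_b", "confidence": 88.9},   # moderate match
    {"name": "person_c", "confidence": 96.7},   # strong match
]

def filter_matches(results, min_confidence):
    """Return only the matches at or above the given confidence level."""
    return [m["name"] for m in results if m["confidence"] >= min_confidence]

# At the 80% default, all three come back as "matches".
print(filter_matches(matches, 80.0))  # ['person_a', 'person_b', 'person_c']

# At 95%, only the strong match survives.
print(filter_matches(matches, 95.0))  # ['person_c']
```

The ACLU’s experiment amounts to running the first query and reporting its output as if it were the second.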

For me the ACLU’s attack on Amazon Rekognition damages its credibility and, as a sometime contributor/member, probably sends me into another cycle of being negative on the organization.  I just don’t like seeing good, and indeed broadly game-changing, technology used as a whipping boy to get around their (or anyone’s) public-policy impotence.  I guess I’m just not a “the ends justify the means” kind of guy.





Amazon and Sales Tax

This week’s U.S. Supreme Court decision overturning the pre-Internet requirement that a company have a physical presence in a state in order to be compelled to collect sales tax on behalf of that state is the biggest legal gift Amazon has received in a long time.  It is perhaps the biggest legal gift it has ever received.  Of course that can be hard to tell from all the press this week, so I’m going to dive in on it.  One important disclaimer: I had nothing to do with (and have no proprietary information about) the retail side of Amazon.  These are my personal opinions.

In the early days of the Internet the lack of sales tax on most transactions played a role in the growth of eCommerce.  Of course, one can debate just how significant that role was, since the lack of sales tax was only one of the attractions.  The lower price of the purchase itself, the convenience of shopping from your desk at work or at home, and (my continuing favorite) access to a vast set of items, sizes, and styles that you couldn’t easily find in local brick-and-mortar stores arguably had far more influence over the shift to online shopping.  However significant the lack of sales tax on the Internet may have been back in the late 1990s, its role has diminished over time.  I don’t think there are any Millennials who heard about this week’s Supreme Court ruling and went “that’s it, I’m going to have to start going to the mall every Saturday.”  For you Millennials and Centennials, that was the normal shopping experience in the pre-Internet days.  Particularly in the golden age of Blue Laws, when stores did not stay open late enough to shop after work during the week and were then forced by law to close on Sunday.  So Saturday was it.  But I digress.

My overall guess is that the lack of sales tax was at most a minor factor in the growth of eCommerce, and most of that boost came in the first few years.  But it might have played a bigger role in who the winners and losers were in eCommerce.  It’s pretty obvious on the surface that if a web search turns up two suppliers of Product X at the same price, and one collects sales tax and the other doesn’t, you are likely to buy from the one that doesn’t.  So (Barnes and Noble) might have been disadvantaged by having to collect sales tax while Amazon did not, but and other early pure-play online booksellers had the same sales tax advantage as Amazon, yet it was Amazon and Barnes and Noble that were the last two standing.  If you time-travel back to 2000, the Amazon vs. discussion wasn’t about sales tax, it was about Amazon’s recommendation engine.  Both were about equal at letting you buy a book, but Amazon was the far better site for discovering new things to read.  Moreover, it was Amazon’s leadership in e-books that drove the longer-term shift in book-buying and reading habits.  Kindle, not sales taxes, was the ultimate differentiator.

As eCommerce grew, the pressure for retailers to collect (and remit) sales taxes grew with it, and since Amazon was growing the most that put the focus on Amazon.  Until fairly recently Amazon was reluctant to collect sales taxes.  While there are no doubt technical complexities involved (the Supreme Court decision references those; some states have complex rules with multiple taxing authorities), the reluctance was mostly competitive: Amazon collecting sales tax when key competitors do not leaves it at a disadvantage.  Given how data-driven Amazon is, it no doubt knew exactly how much negative impact there would be in any given state when it started collecting taxes in that state.  It could then weigh the business advantages of having a physical presence in a state against the negative impact of collecting sales tax there.  Offering Prime Now (which requires local distribution centers), pop-up kiosks for Echo, Fire TV, and other digital products, having AWS sales offices, etc. outweigh the (likely slight) downward pressure on sales from collecting sales taxes.  So after a few years of gradually expanding the list of states it collected sales taxes for, Amazon moved to collecting sales tax in all 50 states.

Amazon still doesn’t collect sales tax when it is providing a marketplace for third-party sellers.  Legally that is the seller’s responsibility, and this is a case where I think there are likely technical complexities at work too.  Under the physical-presence model Amazon would have needed to be aware of every location where a third-party seller had a physical presence in order to know when to collect the tax.  If the seller failed to notify Amazon that it had established a physical presence in another jurisdiction, that could have left Amazon legally exposed.  At least it seems likely that Amazon’s attorneys would have been making that point.  But with the physical-presence requirement no longer in force, it would be easy for Amazon to collect sales tax for any state based purely on shipping address.
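Mechanically, destination-based collection is just a lookup keyed on the shipping address.  A toy sketch, with invented rates (real tables have thousands of jurisdiction-level entries spanning state, county, and city, which is exactly the complexity the Court’s decision acknowledged):

```python
# Toy destination-based sales tax lookup.  Rates are invented for
# illustration; real rates vary by state and local jurisdiction.
STATE_RATES = {
    "WA": 0.065,
    "NY": 0.04,
    "OR": 0.0,   # no state sales tax
}

def sales_tax(subtotal_cents: int, ship_to_state: str) -> int:
    """Tax owed, in cents, based purely on the shipping address."""
    rate = STATE_RATES.get(ship_to_state, 0.0)
    return round(subtotal_cents * rate)

# A $100.00 order shipped to Washington owes $6.50 in this toy model.
print(sales_tax(10_000, "WA"))  # 650
```

The hard part in practice isn’t this lookup; it’s keeping the rate table current and handling each jurisdiction’s exemptions and filing rules.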

For Amazon, what the Supreme Court ruling does is level the playing field.  No eCommerce competitor will be able to undercut it by not collecting sales tax.  And its own marketplace sellers will not be able to undercut direct Amazon sales by matching Amazon’s price but not collecting sales tax.  Since Amazon already collects sales tax on its own sales, there is no change in its position relative to brick-and-mortar competitors like Walmart.  For Amazon, this Supreme Court ruling looks like a complete win.



Adblockers are the new AntiVirus

Back in November I wrote a blog entry about good browsing habits being insufficient to protect you from malware.  Here is an update.  This week I had three brushes with malware, all three having to do with news aggregators.  One came through Microsoft’s News app (previously called the MSN News app), one through the Flipboard app on Windows, and one through Yahoo’s web portal.  Both the Microsoft and Yahoo cases were attempts to get me to install a Fake AV.  The one that came through Flipboard was worse: it was a drive-by download (meaning it downloaded a file to my computer without my being prompted to allow it).  Fortunately Windows Defender Antivirus caught and quarantined the drive-by file.  And this happened despite my having tightened up my browsing habits further by absolutely refusing to click on sponsored links in news aggregators.

First a word on sponsored links (particularly in news aggregators): JUST SAY NO to clicking on them.  These are enticing, socially engineered stories designed to pull you in to a website that is all about serving ads.  You’ll recognize the format: usually a slide show with a single slide per page and numerous (even dozens of) ads around it.  You have to click through the slide show, with each slide resulting in numerous additional ads being displayed.  The pages are even designed so that elements display late, moving the “Next” button after a few seconds; if you try to click Next before the page has fully rendered you end up clicking on an ad instead.  These are evil pages, but the news aggregators love to include links to them because they are paid to do so.  Actually, calling the sponsored links evil may be a little too harsh.  The content can be useful, or at least entertaining, but it comes at a high price.  That price is enabling malware distribution.

The real culprits here are the ad-serving networks, which appear to have very poor control over customers including malware in the ads they submit for distribution.  Someone wants to pay them to display an ad 5,000 times a day?  No problem!  So amongst the tens or hundreds of thousands, or millions, of legitimate ads they serve up on websites each day, occasionally one with malware shows up as well.  And these ad-serving networks are used everywhere, from little mom-and-pop websites, to large news organization websites, to our lovable sponsored slide shows.  The more ads you see, the higher the odds a malicious ad will be displayed as well.  Some you might have to click to have a malicious result, but just like that (evil!) auto-play video, others pose a threat just by being loaded.  My two brushes with Fake AV are perfect examples: I went to a legitimate mainstream website and the scary Fake AV window displayed with no further action on my part.  Yes, it would have taken overt action to actually download the malware, but the whole point of Fake AV is to scare the user into performing the download.  It works all too often.

What makes the sponsored links pages so dangerous is the sheer number of ads they serve.  One slide show can display hundreds of ads.  Do a couple of slide shows a day and you are seeing many thousands of ads a week.  Under those conditions, hitting an ad distributing malware is going to happen with some regularity.

But I was hit this week without going through a sponsored link.  In fact I looked at the claimed source in the news aggregator, and in all three cases it looked like a news site I was familiar with.  In the Flipboard case I now believe it wasn’t; more on that later.  What is true is that in all three cases I was operating without ad blocking software.

Recall that browsers are really built around web-page rendering engines that turn HTML, CSS, and JavaScript into the pages we view and interact with.  Those engines can be invoked independent of the environment around them that we know as “The Browser.”  It is The Browser (Edge, Chrome, Firefox, etc.) that provides capabilities such as invoking ad blocking extensions when we browse directly.  The engines themselves neither perform (general) ad blocking nor invoke extensions.  So when an app such as Microsoft News renders a web page, it does so without the ad blocking extension you’ve installed in your browser of choice.  Relatedly, when you use the InPrivate (Incognito, etc.) mode of a browser, extensions are disabled.  This explains why most of the ad-carried malware I see comes from news aggregators.  Even my Yahoo example turned out to involve an InPrivate window I’d launched to log in to a website with an ID other than the one I normally use.  I’d just forgotten to kill it when I was done with that one task, and used it for general browsing.  InPrivate disables extensions because they might leak information you are trying to keep private, so I was caught without my usual protection.

That brings me to the main point.  In the beginning there was antivirus software.  Then we discovered software, such as browser toolbars, that was tracking us and stealing information from our machines, so we created anti-spyware.  We created firewalls to block undesirable network access, intrusion detection and prevention systems, and various whitelisting solutions (app stores, SmartScreen, AppLocker, etc.) to limit the running of bad code.  But there is one more tool needed for security: ad blocking software.

Historically, ad blocking has been more about convenience (i.e., ads are annoying) and performance (there’s lots more to download to display the ads).  But since ad personalization is a driver behind many intrusions on our privacy, and ads are a channel for distributing malware, we need to treat them as malicious.  Ad blocking software prevents the ads themselves, as well as other web-page elements used to track us (presumably for ad personalization), from being rendered.  Today you have to use a browser extension, but ad blocking as a default feature in web browsers is just around the corner.  Though I suspect that will still leave us with the problem that it is a browser feature, not a core engine feature, and thus not always available when pages are rendered.

I have not yet gone the route of a paid system-wide ad blocker like Adguard for Windows, but I’ll likely give it a try.  If that will work for blocking ads in applications like Microsoft News, then it would be worth paying to get the extra protection.  In the meantime on Windows I’m using the free Adguard Adblocker extensions for Edge, Chrome, and Firefox.  I use 1Blocker on iOS.  Well, at least that’s what I do as of this writing.  I’ve tried a number of them on iOS and found very little difference in the user experience.

One comment on Flipboard and the drive-by download.  I originally thought this was ad-delivered, but on reflection I may be wrong.  It looked like a story on a mainstream website, but it took a long time to load.  The link may actually have been to an intermediate site that first performed the download and then redirected to the mainstream site.  Looking like, and eventually redirecting to, a mainstream site was the social engineering to get me to click on the story link.  Trusting that Flipboard was being careful to avoid displaying misleading story links was a mistake on my part.  All news aggregators have the problem that in order to give you everything you are looking for they include stories from the long tail as well as from mainstream sites.  If their curation processes can’t identify long-tail websites that are compromised or misleading (or simply not careful about content or ad networks), then they make it that much harder to stay safe on the web.  So Flipboard may have had an ad problem, or it may have been something worse.  At this point all I can say for sure is that my trust in Flipboard has been diminished.

The days of the ad-supported “free” Internet appear to be coming to an end.  Privacy concerns with the tracking needed to do extensive ad personalization have moved blocking ads and trackers from a niche to mainstream desirability.  The abuse of ad networks to distribute malware will make ad blockers pretty much mandatory, and will soon result in ad blocking being built in to browsers.  At that point, how do you make money off advertising?  The ad industry may have a window in which to clean up its act and prevent the industry’s collapse, but that window is small and shrinking fast.





Playing the Amazon Blame Game

Does Macy’s tell Gimbels?  Gimbels, Korvettes, Gertz, Lechmere, Lafayette, Woolworths, Montgomery Ward, Bradlees, and Zayre are amongst the dozens if not hundreds of retailers I recall from my youth that have long since disappeared.  Many others merged into that blob now known as Macy’s, which isn’t Macy’s at all.  Macy’s itself suffered the indignity of being swallowed by arch-competitor Federated Department Stores, along with almost all the other department store chains in the country, which then homogenized them all under the Macy’s banner.  And Sears, once the undisputed king of retail in America, lost its leadership position to Walmart and has spent the last few decades steadily slipping towards oblivion.  More on Sears, its sister Kmart, Korvettes, and the interesting story of Zayre, coming up after this commercial break.

Do you suffer from anxiety?  Is your industry about to fall victim to this irresistible force?  Relax!  With Time Machine in a Bottle you can go back to 1994 and tell your younger self that the millennials and centennials are coming.  Yes, you too can try to convince your younger self that they’ll survive Y2K and Walmart only to have their throats ripped out by generations who never knew a world without universal computation and connectivity.  And you can regale them with stories of how Boomers and Gen X were happy to help the millennials and centennials feed on your entrails.  That’s Time Machine in a Bottle: for when you really want to understand the futility of trying to get non-technologists to grasp the coming impact of technology.

Retail is a tough long-term business.  The winners and losers change with consumer tastes, demographics, and shopping habits.  With a few exceptions, the retailers who dominated the city- and town-center shopping scenes of the 19th and pre-WWII 20th centuries failed to capitalize on the post-WWII move to the suburbs by what we today call the Traditionalists (a.k.a. “The Greatest Generation”).  Many that did failed to hold the attention of the Baby Boomers and Gen X.  Malls died; big-box stores took over.  The headlines are about Amazon now, but for over a decade they were focused on how Walmart was destroying local retail.  Life still isn’t easy for Walmart; for example, it has never managed to open a store inside the New York City limits.

Woolworth defined the “5 and 10” store concept and was joined by S.S. Kresge amongst others.  Woolworth was the largest retailer in the world as recently as 1979, but the “5 and 10” was a dying format, one of the ones that didn’t really translate to the suburbs.  Woolworth tried other formats, eventually selling its Woolco department stores to Walmart.  It closed the U.S. Woolworth stores, but the company still exists: although it was failing overall, Woolworth was succeeding with sporting goods, and today we know it as Foot Locker.  S.S. Kresge also moved beyond its “5 and 10” roots by opening larger general department stores under the name Kmart.  That happened about the same time as Wal-Mart (as it was then styled) was founded and Dayton’s started Target.  This was a really rich category, actually, with chains such as Zayre, Bradlees, and Ames also coming into existence in the late 50s and early 60s.  Too many, apparently, as most disappeared, leaving Walmart and Target to become America’s iconic brick-and-mortar general merchandise retailers.  They were joined by specialty big-box stores like Home Depot and Best Buy, and by membership stores like Costco and BJ’s Wholesale Club, to dominate the late 20th Century/early 21st Century retail scene.  With the exception of Federated Department Stores a.k.a. Macy’s, few pre-50s major retailers are relevant today.

I could write pages on my perspective on retail history but, beyond probably being boring, I really want to focus on the current transition in retail.  I posit that everything we “blame” on Amazon would have happened anyway; it is just happening 3-5, maybe even 10, years faster than if Jeff Bezos and company weren’t in the picture.  Well, you say, if it weren’t for Amazon then Walmart, Target, Best Buy, etc. would be dominating e-commerce.  Really?  That’s not what the history of previous transitions in retail suggests.  It suggests that new leaders emerge from each transition, with most old leaders struggling and either coming out the other side significantly diminished (à la Kmart and Sears) or gone entirely.  If Amazon weren’t there then someone else would have emerged to become “Amazon Light.”  But it likely wouldn’t have been one of the top brick-and-mortar retailers.

Let me illustrate this with a company whose name I actually don’t recall.  A former colleague had come from leading IT at a medium-sized multi-store general merchandise retailer.  He told me that the CEO, a very sharp retail guy, had signed over their website to a third party under a 10-year contract because “the sales from the web didn’t even add up to the sales from one retail store”.  A couple of years later e-commerce had exploded, but this retailer found itself unable to participate.  It’s unclear if they will still be in business by the time they can reclaim their online presence.  That’s a pretty typical story for a legacy player in a transitional environment.  Think back to how tentative the brick and mortar crowd really was at the start of eCommerce.  Barnes & Noble, which was expected to wipe out Amazon when it started selling books online, formed a separate company with other players to go after the new market, before eventually buying back the piece it didn’t own.

Recall what Jeff Bezos said last year when asked what Day 2 looks like: “Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death.”  That is Sears.  Sears was Amazon in oh so many ways.  It was a Day 1 company from the 19th Century through the 1980s.  Its catalog operation was as important, and I believe as loved, for many decades as Amazon is today.  Even as a technology provider: Amazon may have AWS, but Sears had Prodigy.  In other words, it had enough foresight to see the coming importance of online almost a decade before the explosion of the public Internet.  But in 1993, just as online commerce was about to take hold, it closed its general merchandise catalog business.  Sears had become a Day 2 company, and it is closing in on that final step of Day 2.

If Sears, the first A-to-Z national retailer that delivered everything to your doorstep (or at least to the local railroad or stagecoach station) in even the smallest communities, and that made the transition to brick and mortar as Americans first embraced cities and then suburbs, couldn’t lead America into eCommerce, then no existing retailer was going to do it.  How a company that had all the pieces, from the catalog to the online system to a century of “last mile” experience, and that had thrived over the course of 100 years of dramatic changes in retailing, could so thoroughly miss this transition is almost beyond comprehension.  But there it is: Sears left a gap, and Jeff Bezos was happy to fill it.

Which brings us closer to current battles, particularly the battle for customers between Walmart and Amazon.  This is less about “Day 2” than about long-term consumer preferences.  Let me start with two examples.  Korvettes was a large east coast discount department store chain that later in life fancied itself a competitor to Macy’s.  Think of the positioning as similar to Walmart versus Target today: while there was overlap, they largely appealed to different audiences, with Macy’s being the more upscale of the two.  Korvettes wanted to attract the more upscale crowd and upgraded its merchandise (and, correspondingly, its prices) as it tried to change its image.  The move failed as the more upscale shoppers, epitomized by my first girlfriend’s older sister who said “I’d never set foot in Korvettes”, resisted all incentives.  I think Korvettes’ executives finally realized it when they ran a heavily advertised sale on Mr. Coffee machines at something like half of Macy’s prices, then stood at a conference room window (the offices and flagship store were in Herald Square, across the street from Macy’s flagship store) and watched numerous shoppers still walking out of Macy’s with Mr. Coffee machines.  At least my father, an (IT) VP at Korvettes, came home that night knowing the magnitude of the problem.  The attempt ended up alienating Korvettes’ core customer base, and those customers didn’t come back, which became a factor in the chain’s demise.

The second, and shorter, story involves Walmart itself.  It decided to go upscale to win over the shoppers who were frequenting Target.  I heard the exact same quotes from my affluent friends who were big fans of Target: they wouldn’t be caught dead in Walmart.  And while I do shop there, I hate the experience and will always go to Target instead if one is convenient (and I haven’t already ordered online, which has been the more common case the last few years).  Walmart recognized the lack of success and reversed course before incurring much harm.

So as you look at online retail over the next few years, Walmart has a much bigger challenge in front of it than Amazon does.  It both has to defend against losing its core customer base, and it has to attract customers to shop at Walmart online who have a poor overall view of the brand.  This problem would exist even without Amazon in the picture.  Meanwhile Amazon, with a brand that is viewed as offering Nordstrom-like service at Walmart-like prices, can work to attract Walmart’s core customers with little risk of harming its own brand.

One retailer that has been successful in spite of the growth of Amazon and eCommerce is actually Zayre.  The Zayre brand and original stores are long gone, but the company lives on.  Back when I got my first apartment I mentioned to my father that I was largely furnishing it from a store called Zayre.  He proceeded to tell me how respected they were in retail, particularly for their advanced use of technology but also just for being forward thinking.  And indeed Zayre saw the future of retailing and started to shift.  It opened an early competitor to Costco, BJ’s Wholesale, and one comparable to Home Depot called HomeBase (which later closed).  It opened T.J. Maxx.  It then sold the Zayre name and stores and renamed itself TJX Companies.  Eventually it spun out the membership-based stores, bought off-price competitors like Marshalls and Sierra Trading Post, and created the off-price home furnishings store HomeGoods.  TJX continues to thrive even as most retailers struggle, although it too must figure out how to better address Millennials and Centennials or someday face the music.

What Amazon is good at is focusing primarily on customers rather than on raw technology, products, or the competition.  It finds unmet, or poorly met, needs and tries to delight the customer with alternate solutions.  It also tries to skate to where the puck is going, not where it is, and it keeps course correcting until it intercepts the puck.  And if it smells blood in the water, that is, if Amazon has enough success with an initiative to really know it is going to intercept the puck, it goes all in.  If Amazon enters your market the reaction shouldn’t be “Oh S&^( Amazon is going to kill us”, it should be “what can we do to serve our customers better?”  Put another way, if Amazon is entering your market then the problem isn’t them, it is you.



