The Death of Windows Subsystem for Android (WSA)

The recent news of Microsoft discontinuing WSA may come as a surprise, but what surprised me most was that it took me a minute to recognize the TLA. WSL? That’s one I know. I used WSL and know a lot of other people (developers of course) who do as well. WSA? Never used it. I’ve moved on to a Mac personally, but my wife is still a Windows user. It would come as a total surprise to her that she could run Android apps on her Windows machines. And I bet that is true for upwards of 99% of Windows users.

WSA could continue on for one of two reasons. The main reason would be that it was driving a lot of adoption or usage of Windows. I see no evidence that is the case. The secondary reason would be that it provided its own significant revenue stream. I don’t even know how Microsoft monetized it (i.e., did Amazon give them a cut from its app store?), but low usage and most apps of high interest being free does not suggest any non-trivial level of revenue.

One former PM for WSA put this on the lack of revenue and the absence of support for the Google Play store. Amazon has done a great job of getting apps into its app store over the years, but the reality is that developers who rely on Google Play Services don’t want to re-engineer their apps for relatively low-volume third-party app stores. In other words, it isn’t the lack of the Google Play store that is the problem; it is the lack of the proprietary Google services that many apps rely on. This is the same problem that plagued Amazon’s Fire Phone. It is why Amazon’s Fire Tablets are great as entertainment devices, but you probably buy a Samsung Galaxy Tab if you want a more general-purpose Android tablet. It is the same reason Windows Phone struggled. Having 80% of popular apps doesn’t cut it when 80% of users care deeply about something in the 20% of apps that you don’t have. My wife repeatedly asking me if one particular app was available on Windows Phone yet still echoes in my head; a form of PTSD I suppose.

There are two other factors to consider in the lack of interest in WSA. The first is simple: the devices on which it makes the most sense to run Android apps just don’t exist. Thinking about running Android and/or iOS apps on Windows dates back to when phones and tablets were still a priority for Microsoft. Neither has been part of the Windows strategy for many years now. Consider the death of WSA as an indication that Microsoft has no intention of another major push to put Windows up against the iPad or to introduce another Windows-based phone. Windows client is about things mostly used with a keyboard and pointing device.

The second factor is Web Apps. On my MacBook I have a couple of iOS apps installed. Just a couple. They are for things that have neither a web version nor a macOS version. In many other cases I tried the iOS app and then discovered that turning the website into a Web App provided a much better experience. I use Microsoft Edge on my Mac, and if I can’t tell those Web Apps are actually websites then I imagine the Windows experience must be a notch better!

The bottom line is that WSA would have been great on my Dell Venue 8 Pro. But that was 2014. In 2024 WSA is a capability with little real purpose except as a weird checkbox.

Posted in Microsoft, Windows, Windows Phone | 2 Comments

MFA – Still best described as PITA

This morning I read an article on how Multi-Factor Authentication has only been implemented by 22% of Microsoft Azure Active Directory customers. Now I’m a big believer in MFA, and protect as much as possible with it. But I still find it a gigantic Pain-In-The-Ass, and thus struggle to get others to use it. Why? So many things just go wrong at all the wrong times. A simple example? I used to be able to approve Microsoft Authenticator requests on my Apple Watch. They still come through on the watch, but the approvals always fail. So now I have to dig my phone out to do the approval. Want another one? I get approval requests in Microsoft Authenticator that I can’t identify and that I assume are attempts to break into my account, but in reality they could be one of my devices trying to sync, with the service involved deciding that would be a good time to require re-authentication. I’m never presented with enough information to make a good decision.

Need another? Some service (hey Microsoft 365, I’m talking to you) will decide right at a critical time-sensitive moment that every one of my devices needs to re-authenticate. Instead of dealing with the time-sensitive issue I’m spending the morning (or whatever) trying to get my email/OneDrive/etc. access back. Another? My Mac keeps telling me it needs a password for a Microsoft 365 account, but has no way to enter one. And no way to trigger the authentication request to Microsoft Authenticator for me to approve it. One of my co-workers has more accounts and far more devices than I do, and whenever this happens I get a call (because I’ve been our Microsoft 365 admin) asking how we can make this not happen. I have spent years in fear that he will ask me to disable MFA.

Another? I love the concept of hardware tokens, but I absolutely hate the reality. I never have one with me when whichever service (not just Microsoft’s services) decides it’s time to reauthenticate. It’s fine with my work computer, which has one permanently inserted in a USB port. But my tablet? My personal computers (where I might have one, but it isn’t the right one)? Etc. No. If I have it on a keychain I will inevitably have the other car, and its keychain, with me when a service decides to randomly reauthenticate. Another? I have services that allow multiple MFA devices, but if I don’t have the primary one with me the UI doesn’t actually let me pick the alternative one!

And I haven’t even gotten to the problem of what happens when you lose your phone, or it breaks. The recovery process can be the equivalent of fixing identity theft. Not because anything was stolen, but because there are so few ways to prove you are you. You can’t even get human beings on the phone, and then sometimes they can’t help you. I have actually awoken from nightmares in which that happened when I was in a foreign country. My wallet and phone have been lost or stolen, and all the money in the world can’t get me out of Dodge because no one will believe I’m me. They just keep asking for numbers from code generators I don’t have access to, or for me to approve requests that I will never see nor have a way to approve.

Now I suffer through the horrors of the MFA world because I assign a high value to the threat. But in a world where a lot of people still don’t turn on PINs to access their phones, and use passwords like “password”, why does anyone think that world wants to deal with the horrible user experience of MFA? Microsoft is now pushing passwordless access, and it sounds great, except that it is the MFA nightmare on steroids. Lose your phone and you might as well be a contestant on Naked and Afraid. Which is a perfect introduction to why passwords have been temptingly close to the perfect security solution: they work even if you are butt naked. That should be the test for any password replacement: it functions universally even if you are butt naked.

I’m sure some will catch on that the Azure Active Directory data is for businesses, which you would think should be able to force their employees to use MFA. Apparently you haven’t met the business owners/CxOs who want their executive assistant, personal assistant, and certain other employees to have broad access to their resources. Not everything, in fact very few things, is properly set up for sharing, be that within families or within businesses. With just passwords you can share things that aren’t set up to enable cross-account sharing. Or easy cross-account sharing. With MFA you are somewhere between screwed and putting a lot of upfront and recurring effort into making sharing work. So business resistance to MFA starts at the top.

In all too many cases MFA screws up people’s risk/reward calculations, turning them completely upside down. That isn’t going to be fixed by whipping them into a frenzy over the risk side. It can only be fixed by making the cost of the authentication requirement essentially zero. And the ways we are doing that are going in the wrong direction. I know people are going to bring up technologies like Windows Hello, but those are tied to specific hardware. I can’t walk up to a random stranger’s computer and log into my account using its camera or fingerprint reader to prove I’m me. Until we have something besides passwords that meets the “butt naked” test, we are going to have continuing resistance to MFA and password alternatives.

Posted in Computer and Internet, Security | 1 Comment

Twenty years, but seems like yesterday

Writing about September 11th is always hard for me, and I almost skipped it (again). Maybe just a tweet or two? No, too much to say.

Twenty years has some extra significance to me as I think about the twentieth anniversary of the attack on Pearl Harbor. I was 5 on that anniversary, and I don’t recall it being treated as special though I’m sure it was. What I realize now, what I’ve come to realize in general as I’ve aged, is the view my parents and grandparents must have had when I was a child. World War II wasn’t history to them, it was yesterday. The Korean War wasn’t history to them, it wasn’t even over (and technically it still isn’t, there is just an armistice). We can talk about Afghanistan, and it being America’s longest war, but WWII led directly into the Cold War. From Pearl Harbor to the end of the Cold War was 50 years. The U.S. had troops in Germany and Japan for more than 46 of those 50 years. Did my parents and grandparents realize on December 7th, 1961 that they’d been in an endless war, a war that would not end for another 30 years? For completeness, we still have troops in Germany, Japan, and South Korea. Our botched exit from Afghanistan is not the subject of this post, it’s just part of the context. We’ve been fighting against, or helping to defend, Germany for 80 years. Mind boggling. But back to 9/11. When I was a child my parents wanted nothing to do with Germany or Japan. No German or Japanese cars for them, for example. Eventually they forgave Japan for Pearl Harbor, but as Jews they could never forgive Germany. When I was very young I could not understand this at all, as it seemed the transgressions they were reacting to were ancient history. Now, on the 20th anniversary of 9/11, I truly understand: for them the WWII trauma was still fresh.

On 9/11/2001 Microsoft CVP Lewis Levin and I were in Minneapolis on a customer visit to the St. Paul Companies. We’d flown in the night before, our first stop on an investigation tour of what “BI Applications” Microsoft might build. The local Microsoft account rep picked us up and took us to the customer’s offices. An assistant met us in the lobby and escorted us to a conference room. Along the way there was a sign announcing that former President George H.W. Bush was to speak at a company event that afternoon. Everyone was excited about President Bush’s visit. As we entered the conference room the assistant said, “Did you hear a plane hit the World Trade Center?” I was surprised, but not shocked, to hear this news. My head immediately went to the thought of the occasional idiot private pilot who screws up their aerial tour of NYC and manages to crash into a building. Someone flying along the Hudson and getting too close to the WTC was not that hard to conceive. The assistant, who had mentioned it somewhat nonchalantly, said nothing else about it and went off to get us coffee. The next couple of hours are somewhat of a blur to me.

I can’t remember if the assistant delivered the coffee and then returned with the next bit of news, or if she just came back with it and no coffee. The reason I raise this is that normally I recall shocking amounts of detail, so I find these gaps disconcerting. Of course, they deserve to be there. To put it simply, she walked back into the room and said, “A second plane has hit the World Trade Center.” I don’t think I said it out loud, but my brain went “that can’t be a coincidence” and a kind of disorientation overcame me that I have not experienced before or since. She also said they were evacuating the building and escorted us out. Once back in the account rep’s car we could listen to the confusion on the radio, but by the time we were back at the hotel we still didn’t really know anything. We walked up to a TV just in time to see the South Tower fall. Lewis and I stayed glued to the television the rest of the day.

I’m a native New Yorker, so I took the attack really personally. It took about a day to confirm that no family or friends were lost in the attack. There was a close call: a cousin who worked at 22 Cortlandt Street, directly across from the towers. 22 Cortlandt, the Century 21 Department Store building, was severely damaged when the towers collapsed. Other family members were also in lower Manhattan at the time, but not in the WTC, and made their way north to safety. For weeks afterwards I would monitor high school and technology-related distribution lists for news. People from an office I’d worked at in the 70s were at a meeting at Windows on the World and perished when the North Tower collapsed. When their names finally came out I was relieved that no one I personally knew had perished, but it was only two degrees of separation. And that applied across the board: no one I knew personally, but only two degrees of separation.

What I realize on the twentieth anniversary of 9/11 is that it will forever seem like yesterday. It will forever be a little raw. The horrors of the attack itself are prevalent in my thoughts, but the freedoms lost as we sought to prevent another such attack are not far behind. 9/11 altered the trajectory of our society (and of the world) in ways that those who weren’t adults (or at least close) at the time will never understand. To many of my readers, and certainly for their children, 9/11 is ancient history.

Which brings me to an observation that is both humorous and sad. I have seen complaints that 9/11 is not taught in schools. So let me tell you about World War II, the Korean War, and the education of the Baby Boom generation. We were not taught about those wars in school either. I remember an incredible level of frustration that history pretty much ended with World War I. Only now do I understand. Twenty years isn’t enough time for something to become history. It took almost a decade to track down and kill Osama Bin Laden. The trial of 9/11 mastermind Khalid Sheikh Mohammed is still not scheduled. The Taliban are back in power in Afghanistan. There is no closure. The history books aren’t done yet. No wonder 9/11 itself feels fresh and raw.

I want to end with something positive and forward looking, but to do so we need to look to the past. At the end of the Vietnam War the U.S. absorbed large numbers of Vietnamese refugees fleeing the communist government. They added immeasurably to our society, becoming a major contributor to the late 20th Century melting pot. Now we have an opportunity to absorb a large number of refugees from Afghanistan, refugees who were our supporters, friends, and allies in the years after 9/11. 9/11 may be fresh and raw to me, but that also means it is al-Qaeda and their supporters in the Taliban that I see as the enemy. If we can refresh the American Melting Pot at their expense, that will be quite the victory.

Posted in Computer and Internet | 1 Comment

Windows XP’s Twentieth Anniversary

Ian McDonald recently posted a retrospective in honor of the 20th anniversary of the release of Windows XP. Because Windows XP didn’t age well in the era of exploding Internet security threats, we have a tendency to demean it. But if you were looking at it in 2001, it was a sea-change release that strongly met customer needs. It would take over 8 years until a worthy successor emerged in the form of Windows 7. It would still take years for Windows XP to fade from general use, and it is still used in some embedded systems. BTW, Ian remained one of the leaders of Windows, though on the separate Windows Server team that released Windows Server 2008 based on the same core as Windows 7. He was still plugging away at Windows Server when I left Microsoft in 2010.

Since I didn’t start blogging until after leaving Microsoft I never got to write all the good things about Windows XP. My first real blog about Windows XP was decidedly negative. It was a decade after release and XP had become the whipping boy for bad Internet security. I’m not going to try to go back and give Windows XP the glowing review that it deserved at the beginning of the century. But let’s all acknowledge that Windows XP, more than any other software, signaled the transition to 21st Century client software.

To all my friends who were on the NT5/NT5.1 team, congratulations on the 20th anniversary.

Posted in Computer and Internet | 1 Comment

Reading tea leaves on the SEWIP Block III

Cybersecurity is all the rage these days, but decades before we worried about computer viruses, worms, ransomware, keyloggers, etc. (ok, I did write a keylogger in 1973) the military was employing electronic warfare. Now this space, indeed all military tech, has fascinated me since I faced the prospect of being drafted during the Vietnam War. Consider it a hobby. Of course, given how the military works, with varying degrees of (but usually significant) secrecy and even intentional misdirection, a lot of tea leaf reading is required to get a real picture of technical capabilities. So this is a little story of how a potentially huge military technical advance was either accidentally or intentionally revealed without being explicitly called out. Because I must assume my reader base has little background on this topic I’ll set the stage, and try to do it as concisely as I think makes sense. So, with apologies to the experts for oversimplifying, skipping over important details, or introducing (hopefully minor) errors in my background workup, let’s get on with the show.

In 1967 the Israeli destroyer Eilat became the first ship ever sunk by anti-ship missiles: Soviet-made Styx missiles launched from an Egyptian Komar-class missile boat. This set off efforts around the world both to acquire anti-ship missiles and to come up with countermeasures against them. In the Yom Kippur War of 1973, Israel used electronic countermeasures to render both Syrian and Egyptian anti-ship missiles impotent, and thus achieve naval supremacy. They had also developed their own anti-ship missile, the Gabriel, against which the Soviet-supplied Syrian and Egyptian ships had no equivalent electronic defenses. Oops. So while kinetic defensive weapons (i.e., anti-ship missile interceptors) get most of the press, since the dawn of the anti-ship missile era electronic countermeasures have been the more important part of the ship self-defense picture.

In the U.S. the electronic countermeasures system for ship defense is primarily represented by the AN/SLQ-32. Its receivers search for enemy radar signals; it classifies them (i.e., matches them to known signals, much like the signatures in an anti-virus scanner), alerts the crew, and directs the employment of countermeasures. To cut to the chase, one of those countermeasures is to transmit radar signals that the original radar thinks are returns from its own transmissions, thereby creating false targets. You may have seen this “in action” in movies and TV shows: they think they see a target ship or aircraft and suddenly there are 5 or 10 or hundreds of them on their sensors, and they can’t tell the difference between the false radar returns and the actual target. Back to reality: an active radar guided anti-ship missile approaches the area where a target ship is supposed to be and turns on its radar to search for the target ship. The AN/SLQ-32 detects the missile’s radar, matches the radar’s transmission to a library to determine what the missile is, and then calls for various countermeasures that (amongst other things) include transmitting the false radar returns. Hopefully you get the anti-ship missile to believe one of the false returns is the real target and it splashes into the water rather than hitting you.
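
To make the classify-then-respond flow concrete, here is a deliberately toy sketch of the “match the intercepted signal against a library and pick a response” idea. All emitter names, frequency bands, and thresholds below are invented for illustration; real ESM systems classify on far richer parameters with far more sophisticated matching.

```python
# Toy illustration of library-based emitter classification. Everything here
# (names, bands, pulse-repetition intervals, responses) is made up.

EMITTER_LIBRARY = [
    {"name": "missile-seeker-A", "band_ghz": (8.5, 9.5), "pri_us": (90, 110), "response": "jam_and_decoy"},
    {"name": "surface-search-B", "band_ghz": (2.7, 3.1), "pri_us": (900, 1100), "response": "alert_only"},
]

def classify(freq_ghz, pri_us):
    """Match an intercepted signal against the library; return the best entry or None."""
    for entry in EMITTER_LIBRARY:
        lo_f, hi_f = entry["band_ghz"]
        lo_p, hi_p = entry["pri_us"]
        if lo_f <= freq_ghz <= hi_f and lo_p <= pri_us <= hi_p:
            return entry
    return None

# An intercepted X-band signal with a short pulse-repetition interval matches
# the (hypothetical) missile-seeker entry and calls for active countermeasures.
hit = classify(freq_ghz=9.1, pri_us=100)
if hit:
    print(hit["name"], "->", hit["response"])
else:
    print("unknown emitter -> alert operator")
```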

There is a constant cat-and-mouse game of improving missiles’ ability to find the real target despite being jammed and decoyed (called counter-countermeasures) and of improving the countermeasures themselves. The primary effort (there are many others) to improve the U.S. Navy’s electronic countermeasures is called the Surface Electronic Warfare Improvement Program (SEWIP). SEWIP updates the AN/SLQ-32 and has been executed in three “blocks”. Block I replaced obsolete components. Block II replaced the signal receiver and analysis capabilities. Block III replaces the transmit (aka active) side of the AN/SLQ-32, the part that does jamming, decoying, etc. Block III is supposed to be a dramatic improvement in electronic attack capability.

When I first read Tyler Rogoway’s interview with Northrop Grumman’s Mike Meaney over at The War Zone I took little note of one thing Meaney said: “Then finally, we can even use our system for simple versions of radar, and use it for different types of radar functions, as well.” The interview is the first real look at SEWIP Block III that I’ve seen, and Meaney was throwing out things that it could do beyond electronic attack. So my first impression (and Tyler’s too…I asked) was that it provided a backup in case the ship’s primary radar was out of commission. This could be as simple as the radar needing to be taken offline briefly for maintenance, or as serious as battle damage having severely degraded or halted its operation. Yup, that seems useful. But now we can get to the entire point of this blog entry.

Throughout my life I’ve processed things while driving my car. In fact, you’d be surprised how many of my technical and business contributions over the years came to me while driving. In this case, I had a “Holy &^#*” moment about Mike Meaney’s comment on using SEWIP Block III as a radar. The AN/SLQ-32 system is designed to receive and analyze adversaries’ radar signals. The SEWIP Block III (and, for completeness, earlier active components) is designed to transmit what look like the adversary’s radar signals. So if SEWIP Block III is able to function as a radar, it would not need to look at all like a U.S. Navy radar system. It could be made to look like, and by default is more likely to look like, someone else’s (likely an adversary’s) radar! People who follow this space probably also just had a “Holy &^$*” moment, but for everyone else I will explain.

One of the primary ways to locate and attack a ship (or aircraft or land-based air defense system) is to search for its electronic emissions (“signals intelligence”). That’s one of the two major techniques used by ocean surveillance satellites and patrol aircraft (the other being synthetic aperture radar). So the satellite is running around picking up signals, and if it sees the signals generated by a U.S. AN/SPY-1 radar it knows an AEGIS ship is in the general area. What happens today when an adversary’s signals intelligence satellite is overhead is that a ship will go into an emissions control (“EMCON”) state, turning off or limiting transmissions until the satellite has passed out of view. While this may be effective, it means that the ship’s situational awareness is greatly reduced for some period of time. Depending on overall circumstances (e.g., an adversary has already located you and is in the process of launching an attack) that could be disastrous. But what if you had another option, to operate a radar that didn’t identify the ship as a U.S. ship? Now under EMCON you could turn off the AN/SPY-1 but have the SEWIP Block III emulate a Chinese, Russian, or other radar. You get some of your situational awareness back, but retain a lot of the ability to hide from detection by signals intelligence. Not only that, if you are attacked by missiles that are looking for your radar transmissions (an increasingly used technique), they are unlikely to target a ship emitting their own military’s radar transmissions. You are somewhere in the East China Sea and the Chinese ocean surveillance satellite picks up a radar transmission that looks like it is from a Chinese ship, or maybe a North Korean corvette using a Chinese-supplied radar. It just isn’t going to set off alarms, and even if the PLAN decides to investigate the signal it is going to take a long time to verify there was no Chinese or North Korean ship in the area. By that time the U.S. ship is long gone from where it was detected. Meanwhile, if something was happening (e.g., an attack) that the ship needed to know about, being able to use active radar (no matter how limited) increases the chances of detecting it.

I’ve given one example of how the SEWIP Block III’s ability to emulate an adversary’s radar might come in handy. But it does not take much imagination to find other potential uses that really play into the entire “fog of war“, use of deception, getting inside the adversary’s OODA loop, etc. During a battle you take one or more of your ships and have them approach the adversary force from the flank using your emulation of the enemy’s radar. Yes you have far less capability than if you were using your native radar, but for this mission that is an acceptable risk. If the adversary (who is focused on the U.S. forces it is engaged with) fails to notice your true nature for long enough, you might be able to approach within range to surprise them with your anti-ship missiles. Alternatively, you use your ability to make your emissions look like the adversary’s to confuse them about where their own forces are located. And that could be just the edge your forces need to achieve victory.

Of course perhaps I am just letting my imagination run wild. But I have to believe my imagination is running years behind that of naval technologists and strategists. In any case, Mike Meaney seems to have revealed a capability far beyond what has been previously suggested.

Note: Since I’m a mere hobbyist here I would not be at all surprised to find I’ve made more errors in both fact and judgement than I would with technology where I have a first party professional relationship. Please feel free to leave appropriate comments!

Posted in Computer and Internet | 2 Comments

Masking Ignorance

I didn’t really know what to call this post, but it is about the resistance to wearing masks in public to fight Covid-19. Whenever this comes up there is a tremendous amount of name-calling on both sides of the question. So it’s important to remember how we got here.

The message from the professionals, the scientists for all of us who almost religiously follow science, back in late February and early March was “The only masks that will help prevent coronavirus are N95s, you can’t have them because hospitals need every one they can get their hands on, and if you could get them you are too stupid to use them properly”. Seriously that is what they were saying. What about surgical masks? “Nope, won’t help”. What about…. “Nope, the virus is teeny tiny and just goes through anything other than an N95 like a red hot knife through butter. Now go away and hide out in your basement, and send any spare N95 or surgical masks you have to us”. Ok I’m being snarky, but this is the message that was being sent out to the public.

A few weeks go by and suddenly the message changes to “Wear masks in public, but not N95s because we still need all of those for us”. “Cover your nose and mouth with a bandana, or sew one from pillow cases, or….” So a homemade mask or surgical mask will stop the coronavirus? “Not exactly.” Huh? “<insert lots of mumbling about droplet sizes>” So the virus travels on droplets? “Yes.” And surgical masks and homemade masks can stop droplets? “Yes.” So the mask will protect us from the coronavirus? “No.”

I interrupt this frustrating exchange for a “commercial” message. The vast majority of the American Public are not scientists, but they are also not stupid. At this point a huge percentage of the population will see mask wearing as a “go fetch a rock” exercise, while some other large percentage of the population will blindly follow what the experts and leaders say, and some modest percentage will seek to further understand why it makes sense to wear a (non-N95) mask even if the conversation has so far made little sense. Now back to our regularly scheduled scientist/leader gibberish exchange.

So if these homemade masks won’t protect me from the coronavirus, why wear them? “You’ll protect other people.” Huh? “<lots more mumbling about droplet sizes>” WTF?

Sorry, another “commercial” interruption. You’ve just told everyone who isn’t fascinated by the topic of droplet sizes that droplets can go one way through your improvised mask but not the other way, which makes no sense. Those who thought this was a rock fetch have now doubled down on that conviction. The blind followers of guidance now have altruism added to their reasons to wear a mask. And those outside the medical community that really have the patience to understand the whole droplet thing, which I’m convinced is on the same order of magnitude as my Twitter followers, actually get why this might help. I’d go back to the conversation, but I think that’s pointless.

The only hope for mask wearing is to make N95s available, because then you get a redo on all the other nonsense. Yes, they will block the virus, and thus the value in wearing them is unquestionable. Well, except for all the other wishy-washy and contradictory information out there. No, not from random sources (like me!) but from “the experts”. According to the CDC, things like getting the disease by touching a surface and then touching your face come under the heading “The virus does not spread easily in other ways”. And what about the eyes? The CDC says “possibly” when you touch your eyes with hands that have the virus on them, but is silent on what happens when droplets or aerosolized virus get in your eyes. Meanwhile some ophthalmologists think transmission through the eyes is being underrated. If that’s the case it means that an N95 mask won’t protect you sufficiently either. That’s just another trust-busting revelation coming down the pike. What we don’t know about the coronavirus is undermining trust in every piece of guidance provided by scientists and government.

There is a lot of other stuff to consider before you go all negative on those who don’t wear masks, because their experience is not your experience (remember, over half the counties in the U.S. have had zero cases). Now when I run into these people (and I’m always in a mask when I do) I think they are being terribly inconsiderate; it wouldn’t hurt them to wear a mask even if it turns out that has little impact on the spread of Covid-19. But I actually understand their position, at least the general version of it. If you don’t generally trust government (and I don’t, BTW), and then government and (in this case) the scientific guidance it is promoting is confused, contradictory, and nonsensical, you are going to reject it. And over the last few months that’s exactly what the guidance on masks has been: confused, contradictory, and at least seemingly nonsensical.

UPDATE: As of May 13th 2020 the World Health Organization continues to say “If you are healthy, you only need to wear a mask if you are taking care of a person with COVID-19.”

Posted in Computer and Internet | 6 Comments

Covid-19 Impact on AWS

The tl;dr version is that Covid-19 is putting at best mildly upward pressure on AWS growth rate, with dramatic acceleration in some areas and significant drop-offs in others. But to think that any PaaS/IaaS vendor can see dramatic growth while a world-wide recession is getting underway is just silly.

Let’s kick off this discussion of AWS by focusing first on one of the few pieces of data out there: Microsoft published a blog entry saying it had seen a 775% increase in cloud use, implying Azure, and then corrected that number to apply only to the Teams service in Italy.

In that blog entry they provided a priority list should resources be in short supply. There have also been sporadic reports of resource shortages in Azure, but I don’t know how prevalent that has been or how it differs from normal. All Cloud providers will, from time to time, have temporary resource shortages as sudden demand changes for some particular combination of requirements. On AWS for example you could have an instance type which is plentiful in most Availability Zones (AZ) within a Region but not in one particular AZ. If you request that type in that specific AZ you will get an error, while if you don’t specify an AZ or pick another AZ there is no problem. As much as they try to avoid it, that is just business as usual (e.g., one AZ may be out of physical space so just can’t have as much capacity). In any case, the Azure blog and associated rumors set off a minor frenzy that Cloud providers, including AWS, were having trouble dealing with overwhelming demand coming out of the Covid-19 crisis.

Given AWS hasn’t said anything about business trends in light of Covid-19, “Cloud Economist” and overall awesome Cloud commentator Corey Quinn did the very rational thing and checked EC2 Spot prices. For those who don’t know about Spot, AWS makes its spare capacity available for temporary use at very low prices. Basically they auction it off. The downside is that it can be taken away from you with a 2 minute warning if they need the capacity for normal allocations. That lets AWS maintain a lot more spare capacity than would be economically viable if they couldn’t get some revenue for it. They have enough capacity that customers have demonstrated things like getting 1.1 MILLION vCPUs worth of EC2 Spot instances to run a natural language modelling job. What happens with Spot is that as demand goes up, either to dip deeply into spare capacity for normal EC2 use or because of more Spot demand itself, Spot prices rise. So checking Spot prices is one of the most obvious ways to see if EC2 is seeing demand beyond what AWS’ planning and supply chain can deliver. For one instance type in one region Corey found a modest uptick in pricing since the beginning of January. But I looked at other instance types, and in other regions (including two I know have physical expansion challenges) and found no price change since the start of the year. I even found older instance types that were now cheaper than 3 months ago. Corey confirmed that his broader look matched mine. So Spot pricing is telling us that (broadly speaking) AWS capacity is not being overwhelmed by changes in demand from Covid-19.
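
For anyone who wants to do the same sanity check, here is a minimal sketch using boto3 (assuming the AWS SDK is installed and credentials are configured; the instance type, region, and 90-day window are just example choices) that pulls recent Spot price history so you can eyeball the trend yourself:

```python
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Pull roughly the last 90 days of Spot prices for one instance type.
# (A single call returns up to 1000 records; pagination is skipped for brevity.)
resp = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    StartTime=datetime.now(timezone.utc) - timedelta(days=90),
)

# One price point per (AZ, timestamp); sort chronologically and print a sample.
history = sorted(resp["SpotPriceHistory"], key=lambda p: p["Timestamp"])
for point in history[::50]:  # thin the output to a readable sample
    print(point["Timestamp"], point["AvailabilityZone"], point["SpotPrice"])
```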

And why should it? Did Covid-19 cause any company to accelerate its move of SAP from on-premises to the Cloud? Did restaurant chains suddenly up their need for Cloud services as they moved from full service to takeout only, or even ceased operation? What about retail in general? Yes, AWS hosts a lot of retailers (despite the competitive pressure of having Amazon as its parent) and, moreover, hosts a lot of services that restaurants and retailers utilize. How many Square Readers are sitting idle in small retailers because the government has ordered those retailers to stay closed? Would you guess that Square has scaled up or down its use of AWS these last few weeks?

On the other hand, we know that AWS customers like Netflix, Zoom, and Slack are seeing unprecedented demand as people stay at home. Other work tools that run on AWS, such as those from Atlassian, are likely seeing expanded use. Software development is something that is widely done from home on a part-time basis in normal times, so it is continuing during the Covid-19 shutdown with even greater reliance on the Cloud. AWS hosts a lot of Dev/Test workloads. In fact, many companies that were reluctant to move production workloads to the cloud have been using it for Dev/Test for years. I know in my own company the Covid-19 stay-at-home change has led a few developers who were using in-office development machines to move to AWS instances instead. Just as Microsoft has reported increased demand for Azure Virtual Desktops, I’m sure AWS has seen increased demand for Workspaces. Chime must be growing tremendously as well, and the other productivity services are likely seeing some acceleration. But the entire suite of productivity services is, unlike O365 for Microsoft or G Suite for Google, likely immaterial to AWS’ results and thus at best is offsetting some shrinkage elsewhere.

AWS has other positives going for it; for example, gaming is one of its biggest vertical segments. Over the years most gaming companies have built their backends on AWS, so with all the usage growth in gaming that a stay-at-home lifestyle brings AWS is probably seeing incredible demand growth. Basically, anything home entertainment related is growing, and that is a real positive for AWS. Consumption of online news? Good for AWS. Web use in general? Excellent for AWS. Healthcare usage? Likely a tremendous increase. Government usage? Increase. Companies that still do most of their brick and mortar IT on-premises, but use cloud services for their online presence? An area where AWS would see acceleration. Delivery services? Probably up, but with some perhaps surprising headwinds. DoorDash is probably way up, but Grubhub has a stronger focus on deliveries to businesses and that has evaporated. So are they up overall? Uber Eats up, Uber overall down. Basically B2C delivery(-like) services up, B2B down. I’m not sure how it nets out in terms of AWS usage.

Let’s cycle back to the negative. We are seeing a sudden and deep dip in economic activity, and that can’t be good for AWS in the short term. The bigger AWS has become, and the more it is the norm for organizations of all sizes and types to make use of AWS, the more exposed AWS is to the economic cycle. So it is very hard to imagine that overall its growth accelerates while the majority of customers are under such duress or even going out of business. My best guess is that for Q1 AWS benefited more from acceleration in areas like gaming and home entertainment than it was harmed by a general business slowdown. But for Q2 I think AWS can’t avoid having the overall global economic slowdown be reflected in its own numbers.

Now in the long term the Covid-19 crisis will lead to a net business positive for AWS. Companies will put in place long-term strategies that support more distributed, at-home work, both as a normal part of business and as a contingency for future disruption. The move to online business models will accelerate and new services will emerge. Companies will accelerate movement away from their own data centers and away from services that are more fragile than the cloud providers. They will accelerate the replacement of legacy applications with cloud-native applications. Some companies will not survive, but the demand will return for the services they provided. The companies that emerge to provide those services will be “born in the cloud”, benefiting AWS in particular and Cloud providers in general. Areas that require tremendous computing resources, like the modeling for drug discovery, will see great expansion and drive a lot of AWS demand. Etc.

The bottom line here is don’t expect AWS to have significant resource shortages during Covid-19, in part because of the great work AWS does in managing capacity, but also because demand from many customers will drop to offset the growth from others. And from a business perspective, AWS growth will at best experience a modest uptick, and the longer this goes on the more likely it is to experience an overall slowdown from its broad exposure to the economy.

Posted in Amazon, AWS, Azure, Cloud, Computer and Internet, Microsoft | Comments Off on Covid-19 Impact on AWS

The CDC and Seniors, a disconnect

Because of Covid-19 the CDC is recommending that those over 60 (and anyone with chronic medical conditions) stock up and prepare to stay at home for the foreseeable future. Some articles have gone so far as to equate foreseeable future with being through 2021. While I am sure that is sound conservative medical advice about Covid-19 given current data, it also ignores the reality of being over 60.

By your mid-60s you are looking at a life expectancy of around 20 years. Moreover, at some point in your 70s your physical (and perhaps cognitive abilities) will likely decline significantly. So the CDC guidance is to flush 10% of your remaining life expectancy, and perhaps 20% (or more!) of your remaining fully active life, down the toilet to reduce the chance that you will die from Covid-19. Using some current data (and all data about Covid-19 is, at this point, suspect) that means giving up 20% of your active life trying to avoid a 4.6% chance of dying if you contract the SARS-CoV-2 virus.
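
The percentages above are just arithmetic. Here is a back-of-the-envelope sketch using the post’s own rough numbers plus an assumed ten or so remaining fully active years (my assumption, not a published figure):

```python
# Back-of-the-envelope arithmetic behind the 10% / 20% figures above.
# All inputs are rough approximations; the fully-active-years figure is an assumption.
remaining_years = 20      # approximate life expectancy in your mid-60s
fully_active_years = 10   # assumed years before significant physical/cognitive decline
isolation_years = 2       # "foreseeable future" read as roughly through 2021

print(f"{isolation_years / remaining_years:.0%} of remaining life expectancy")       # 10%
print(f"{isolation_years / fully_active_years:.0%} of remaining fully active years")  # 20%
```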

To put a slightly different spin on it, if you are in your 60s and contract the SARS-CoV-2 virus you have a 95.4% chance of survival. But while you are busy trying to improve on that 95.4% survival rate by following the CDC’s advice to be a hermit, the ills of aging are racing at you like a train in a tunnel. Let me give one example, the rate of new cancers rises dramatically as you age. It roughly doubles from your late 50s to late 60s. The all cancer 5 year survival rate is 69.3%. Now imagine trying to improve on that Covid-19 95.4% survival rate by following the CDC’s advice to be a hermit only to find, when you are finally free to start living your life again, that you’ve developed cancer and only have a 69.3% chance of survival. And that’s the all cancer rate, what about stomach (30.9%) or esophageal (18.7%) cancer survival rates? And even if you survive, you will have given up another year or more of your life to fight the cancer. Now do you want to start talking about Alzheimer’s and Dementia? Starting at age 65 the risk of contracting Alzheimer’s increases at 23% per year of age. I’m going to stop looking at data now, because I’ll make myself horribly depressed.

The truth for those of us in our 60s is that our mortality has become real even if, for the moment, we are healthy and active. Becoming hermits for a year or two means, quite literally, giving up the few remaining good years of our lives. I’m pretty sure this is not the right trade-off. Certainly it isn’t for me.

Although not specifically part of my point I do want to add one thing that has been pointed out to me by others. The health risks, particularly mental health risks, of long-term isolation are significant. So while I’m making the point that you could be giving up the few remaining good years of your life, they were making the point that extended social distancing could actually accelerate your decline.

Please don’t think I’m advocating against most measures to avoid and stop the spread of SARS-CoV-2. Wash your hands frequently. Avoid crowds. Try not to touch your face. Stay home if you are sick. Etc. But if the CDC thinks that people in their 60s should hide in their homes for many months or even years then they aren’t looking at the big picture.

Posted in Computer and Internet | Comments Off on The CDC and Seniors, a disconnect

Endings and Beginnings (Part 2 – Gaia Platform)

Shortly after I retired from Amazon my wife and I decided we would need to temporarily relocate to New York to help out with family. Just after we arrived in NY my semi-retirement plan would take a u-turn. Literally. Back to Seattle.

One day I checked my voicemail and on it was the message “Hi Hal, it’s David Vaskevitch. I have an idea I wanted to talk to you about.” David is a friend, was my hiring manager at Microsoft back in the 90s, and is Microsoft’s former CTO. David is perhaps best known for his long-term vision, and had a hand in creating many of the products (and non-product functions like channel marketing and Microsoft Consulting) that Microsoft is best known for. For example, it was David’s vision for Microsoft to enter the enterprise server market and turn SQL Server (which at the time was a port of Sybase targeted at departmental-level workloads) into a Microsoft-developed, enterprise-grade database. David put the strategy in place, hired the team leaders, negotiated the deal to split from Sybase, set many of the key requirements, and contributed to the overall design of what became SQL Server 7.0. Mehtap Ozkan is our third founder. Mehtap brings product and technical judgement, and lots of startup experience, to our team. Soon David, Mehtap, and I were in conversations that evolved into a concrete product concept, and fundraising began.

In January of 2019 the first tranche of funding arrived and Gaia Platform LLC was off and hiring. In Bellevue WA, 2550 miles from where I was temporarily living. That was some commute, but fortunately by August I was back in Colorado and commuting to Bellevue on a regular basis. Another failure at retirement for me.

So what was interesting enough to draw me out of retirement this time? One was addressing a problem that I’d been interested in since the 70s; the other was addressing another platform shift that is underway. I can’t resist a good platform shift, so let’s talk about that first.

In computing we’ve had a number of platform generations over the years, and I’ve managed to participate in all of them to date. Mainframes, minicomputers, PCs/PC servers, mobile, and Cloud are the main ones. I don’t think IoT is, by itself, a platform shift, and I see “The Edge” as part of the maturation of the Cloud platform. So what is the next platform? Autonomous Machines. We have some examples today; take iRobot’s Roomba robot vacuums. iRobot has sold over 25 million robots to date, most of which are Roombas. So only a small fraction of the existing ~1.2 billion households in the world have Roombas. And projections are that another 2 billion homes need to be built by the end of this century to accommodate population growth. Then what about robot mops, robot lawnmowers, autochefs, etc.? We haven’t even gotten to the potentially far larger industrial side of things, and we are already looking at an eventual market for tens of billions of autonomous machines.

So am I talking about robotics as a platform shift? Not exactly: while many robots will be autonomous machines, not all autonomous machines are robots. What is a drone air taxi? Or a medical diagnostic machine you deploy to a natural disaster site? Or an autonomous tractor plowing fields? Each platform shift has resulted in a roughly order-of-magnitude larger number of systems. PCs are order 1 billion, mobile is order 10 billion, and we expect there will be 100 billion Autonomous Machines over some reasonable period of time.

Each new hardware platform shift has also resulted in a software platform shift, though initial systems tried to reuse the prior generation’s technology. Anyone else remember IBM sticking a 370 co-processor in the IBM PC XT (the IBM PC XT/370) so you could run mainframe software on it? Or the DEC Professional series of PDP-11 based PCs? Sticking a PDP-11 processor in a PC form factor and running a variant of the (very successful on minicomputers) RSX-11M OS and software stack on it does not a successful PC make. The winner in each platform generation has been a new software stack designed to support the new use cases.

The need for a new software stack brings us to the other problem: programming is too hard. There are two dimensions to this, making it easier for the professional programmer and making it easy for the non-programmer to create and maintain applications. In the 70s, 80s, and 90s the application backlog was top of mind in the IT world. In the 70s even something as simple as changing a report would get thrown on the stack of requests with a year or more waiting period until the limited set of programmers could get around to it. This is why end-user report-writing tools like RAMIS, FOCUS, Easytrieve, etc. were among the most popular third-party mainframe software. Datatrieve was huge in the minicomputer era. Crystal Reports was an early winner in the PC business software world. Many of these evolved into more general end-user query tools. Other “4GL” tools were more oriented towards full application generation, like DEC RALLY, while other end-user data access tools like DEC Teamdata turned out to be the forerunners of today’s end-user BI tools.

From about the mid-80s to the mid-90s we were in a golden age of addressing the need to create applications without programmers, or to make it easier for application programmers (as opposed to system programmers or today’s software engineers) to build applications. These efforts peaked with Microsoft Access for non-programmers and Microsoft Visual Basic (VB) for application programmers. Before VB the development of GUI application user interfaces was the realm of a narrow set of systems programmers; VB made it possible for all levels of developer to create state-of-the-art end-user experiences for the GUI environment. There were other efforts of course, and Sybase PowerBuilder had a tremendous following as well. And then progress seemed to stop. While the later 90s saw lots of new tools emerge to make website creation easier, Microsoft FrontPage for example, that area too devolved into one of extreme complexity (e.g., the current popularity of the “Full Stack Developer”) unless your needs could be met by prepackaged solutions. Today there is somewhat of a renaissance in Low-Code/No-Code solutions, but not at the level of generality that Microsoft Access and Visual Basic provided.

Going back to the professional programmer for a moment, that world just keeps getting more complex the more tools and systems we introduce. Yet we still haven’t addressed key problems that have been with us since (at least) the 80s, such as the impedance mismatch between databases and programming languages. At some point we pretty much declared Object-Relational Mapping layers (ORMs) the solution and moved on. But this is a terrible solution, perhaps best described by Ted Neward as “The Vietnam of Computer Science”. We think we can do better, not by creating another/different layer but by creating a new database engine that combines relational and graph databases along with a programming model that is more native for professional programmers. It also sets the foundation for us to provide a declarative programming environment that will support increased productivity for traditional programmers, low-code development, and no-code development.

Of course one of the greatest advances in recent years is that machine learning has become a useful, nay essential, tool in application building. It is also a form of declarative programming, and we see it as part of our overall platform. But we aren’t going to build new ML/DL technology; we are going to integrate existing technology into our overall platform. We think about a Perceive-Understand-Act application flow, and ML/DL is the technology of choice for the Perception part. We are building technology for Understand (i.e., database) and Act (i.e., declarative programming) and to integrate these three areas (and other technologies such as traditional programming languages or the Robot Operating System) together as a platform for Autonomous Machines.
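
To make the Perceive-Understand-Act idea a bit more concrete, here is a purely hypothetical toy sketch (this is not Gaia’s actual API or design, just my illustration of the general shape) in which perception updates a shared data model and declarative rules fire on changes to that data rather than being called explicitly:

```python
# Hypothetical Perceive-Understand-Act toy, not any real product's API.
from dataclasses import dataclass, field

@dataclass
class World:
    """'Understand': the machine's current view of its environment."""
    obstacles: list = field(default_factory=list)  # distances in meters
    heading: float = 0.0

RULES = []

def rule(predicate):
    """Register a declarative rule: when predicate(world) holds, run the action."""
    def wrap(action):
        RULES.append((predicate, action))
        return action
    return wrap

@rule(lambda w: any(d < 0.5 for d in w.obstacles))
def avoid(world):
    # 'Act': the rule reacts to the data rather than being invoked directly.
    world.heading += 90.0
    print("obstacle close -> turning, new heading", world.heading)

def perceive(world, sensor_readings):
    """'Perceive': in a real system this is where ML/DL classification would sit."""
    world.obstacles = sensor_readings
    # Re-evaluate rules whenever the world changes.
    for predicate, action in RULES:
        if predicate(world):
            action(world)

w = World()
perceive(w, [2.0, 0.3])  # a close obstacle triggers the avoid rule
```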

It’s a tall order for a startup to be addressing this broad of a space, and platforms are not considered the easiest startup plays, but I like hard problems. Of course we can’t deliver everything all at once, and one of the challenges is figuring out how to put product into the hands of customers early without compromising our ability to make key architectural advances that are critical to delivering on the overall vision. We are working our way through that too. As with almost anything in tech the biggest challenge has been hiring the right set of people to pull this off. Our hiring continues, so if any of this interests you please feel free to reach out.

Posted in Autonomous Machines, Computer and Internet, Database, Programming | 2 Comments

ARM in the Cloud

I know I’m long overdue on a “Part 2”, but wanted to slip this in first. I’ve long been a skeptic on ARM becoming a mainstream processor choice for servers. But today’s announcement by Amazon Web Services of the ARM architecture Graviton2 processor and the M6g, C6g, and R6g instance families has me rethinking my position. As is often the case I’ll start with some historical perspective and then discuss today’s AWS announcement.

In the late 1980s and early 1990s it was widely believed that RISC (Reduced Instruction Set Computer) architectures would replace CISC (Complex Instruction Set Computer) architectures such as the then-leading VAX, x86, IBM 360/370, Motorola 68000, etc. instruction set architectures (ISAs). The reasons for this were two-fold. First, it was believed that in any given semiconductor process technology a RISC processor would have 2x the performance of a CISC processor. This was largely because with CISC you were devoting an ever-increasing percentage of the available transistors to overcoming the inherent bottlenecks of the ISA, while with RISC those transistors could be devoted to increasing performance. Second, the complexity of designing CISC ISA processors had reached the point where the semiconductor technology could advance more quickly than you could design a processor for it, so you were always behind the curve in taking advantage of Moore’s Law. RISC ISA processors were easier to design, and thus would better track the semiconductor process improvement timing.

One thing to keep in mind is that the original RISC concept was to create a new ISA every time you made a new processor. So you never really had to waste silicon on legacy; you did a new optimal ISA for each processor and made it a compiler problem to re-target to each processor. Of course software people quickly made it clear to the hardware folks that this was a non-starter, that the cost of supporting new processors effectively (finding and fixing ISA-specific bugs, tuning performance, issuing patches to released software, etc.) would outweigh the performance improvements of not having a fixed ISA. So we moved on to fixed RISC ISAs that would survive through multiple generations of processors. By 1995 RISC was well on its way to world domination. IBM, Apple (Mac), and Motorola had moved to the Power ISA. DEC moved to Alpha and HP to PA-RISC. Acorn Computers was a PC manufacturer that created its own RISC processor (the Acorn RISC Machine) and operating system (RISC OS). Acorn would later shift its focus away from PCs to its RISC ISA, dropping “Acorn” in favor of “Advanced”, renaming the company ARM, and licensing its architecture and designs to various companies. Other RISC chips also appeared, including the Intel i960 and the MIPS line. MIPS in particular looked like it would become “the Intel” of RISC processors, though it would eventually falter. And as we now know, ARM would be the only RISC ISA to really thrive, by riding the growth of the market for mobile devices. But at the start of 1995 it looked like we were going to have RISC everywhere.

So what happened in 1995? The Intel Pentium Pro. The Pentium Pro could hold its own on performance against that year’s RISC chips while maintaining full x86 compatibility. How did Intel do it? First off, they clearly had made advances in chip design tools that let them move much faster than other companies working on CISC. And they adopted an approach of decoding CISC instructions into RISC-like micro-operations and then making the rest of the processor work like a RISC processor. But maybe more importantly, they had a generational lead in introducing semiconductor manufacturing processes. So even if the assumption held that in any given semiconductor process technology RISC would be 2x CISC, Intel being a process technology generation ahead negated that RISC vs CISC advantage.

Intel’s process technology advantage held for twenty years, allowing the x86 to extend its dominance from the desktop to the data center. With the exception of Power, which IBM continued to advance, RISC ISAs disappeared from the server and desktop world. But RISC had another advantage: its simplicity made it easy to scale down to smaller low-power microprocessors for embedded and mobile applications. Today pretty much every mobile phone and mainstream tablet uses a processor based on the ARM ISA.

A number of years ago ARM and its semiconductor partners began trying to bring RISC back to the server and PC markets where it was originally expected to dominate. On the PC front ARM has made some limited progress, particularly with ChromeOS PCs and more recently in Windows PCs such as Microsoft’s Surface Pro X. But so far that progress represents a tiny portion of the PC business. In servers we’ve seen several abortive efforts and essentially no adoption. Until now.

Last year Amazon Web Services introduced an instance family, the A1, based on an ARM ISA processor of its own design called Graviton. Side note: in most cases (Apple is the counter-example) semiconductor designers license not the ARM ISA but an actual “core” design from ARM. That is the case with AWS. This was a pretty niche offering, and (to me at least) signaled what was likely another failed attempt to bring ARM to the mainstream server market. For example, the A1 was not something you could benchmark against the Intel-based instances and end up with a direct comparison. It was targeted at more niche workloads.

Today AWS brought its second-generation ARM processor, the Graviton2, to its three most mainstream instance families. Those are the M (general purpose, or balanced), C (compute intensive), and R (memory intensive) families, and we now have the M6g, C6g, and R6g families. They even did some performance comparisons of the M6g against the Intel Skylake SP-powered M5, and they were quite favorable to the M6g. But Skylake SP is an older Intel generation, and a comparison with Cascade Lake SP and AMD’s Rome would be more telling. Those have already made their way into some models in the C5 and C5a families. Intel is also accelerating its product cycles, so I expect it to regain a performance lead, though perhaps not enough to deter the growth of Graviton. Graviton is likely to retain a price/performance lead in any case.
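
As a side note, it is easy to see for yourself how broad the arm64 lineup in a region has become. Here is a small sketch using boto3 (assuming the SDK is installed and credentials are configured; the region is just an example) that lists the arm64 instance types EC2 reports:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Page through all instance types that support the arm64 architecture.
paginator = ec2.get_paginator("describe_instance_types")
pages = paginator.paginate(
    Filters=[{"Name": "processor-info.supported-architecture", "Values": ["arm64"]}]
)

arm_types = []
for page in pages:
    for it in page["InstanceTypes"]:
        arm_types.append(
            (it["InstanceType"], it["VCpuInfo"]["DefaultVCpus"], it["MemoryInfo"]["SizeInMiB"] // 1024)
        )

# Print a sorted summary: type, vCPUs, memory in GiB.
for name, vcpus, mem_gib in sorted(arm_types):
    print(f"{name}: {vcpus} vCPU, {mem_gib} GiB")
```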

So what happened to allow ARM to (apparently) catch up to Intel in the data center? I think there are three factors at play. First, recall the original RISC premise that in any given semiconductor process technology RISC should be 2x CISC performance, and that this turned out not to matter with the x86 because Intel was a generation ahead on semiconductor process. Intel no longer has that generational advantage, and by some measures (e.g., smallest feature size) is behind semiconductor foundries such as the one AWS uses, TSMC. The second factor is that we have a modern, some might say the most important modern, “system” vendor, AWS, leading the charge. Instruction set architectures have tended to thrive when backed by a powerful system vendor, not as pure artifacts of the semiconductor industry. x86 is the dominant PC and server chip today because IBM selected it for the IBM PC. ARM’s success came from DEC adopting it to create the StrongARM family, which was the dominant processor used in PDAs and early smartphones. Even ARM originator Acorn used StrongARM in its systems. Earlier dominant ISAs came from the system vendors themselves, particularly DEC and IBM. Now, just as DEC boosted ARM into dominance in the mobile device market, it looks like AWS will do the same for servers. Third, because AWS can optimize the ARM architecture and licensed core into its own chips for the controlled environment that is the AWS Cloud, it can tune chip designs far more than someone trying to create a general-purpose offering for on-premises servers.

So is it game over for the x86? After decades of watching, and working with, Intel I doubt it. Though it isn’t just AWS that Intel has to worry about. If AWS starts to show real success with Graviton, then Microsoft and Google will be empowered to go full-bore with ARM in the cloud as well. And then there is the persistent rumor that Apple wants to move the Mac to ARM processors of its own design. With failed efforts to break into the mobile device market, pressure from all corners growing in the PC market, and now an apparently competitive mainstream ARM offering in the server market, Intel can’t keep riding the modest-improvements-at-high-unit-cost bandwagon much longer.

Posted in AWS, Cloud, Computer and Internet | 2 Comments