16TB Cloud Databases (Part 1)

I could claim the purpose of this blog post is to talk about Amazon RDS increasing the storage per database to 16TB, and to some extent it is.  It’s also an opportunity to talk about the challenges of a hyperscale environment, not just for AWS, but for Microsoft Azure, Google Cloud, and others as well.  I’ll start with the news, and since there is so much ground to cover I’ll break this into multiple parts.

As part of the (pre-) AWS re:Invent 2017 announcements Amazon RDS launched support that increased the maximum database instance storage size from 6TB to 16TB for PostgreSQL, MySQL, MariaDB, and Oracle.  RDS for Microsoft SQL Server had launched 16TB database instances back in August, but with the usual RDS SQL Server restriction that storage wasn’t scalable.  That is, you had to pick 16TB at instance create time.  You couldn’t take a 4TB database instance and scale its storage up to 16TB.  Instead you would need to dump and load, or use the Native Backup/Restore functionality, to move databases to a new instance.  If the overall storage increase for RDS was lost in the noise of all the re:Invent announcements, the fact that you could now scale RDS SQL Server database instance storage was truly buried.  The increase to 16TB databases benefits a small number of databases for a (relatively speaking) small number of customers; the scalability of SQL Server database instance storage benefits nearly all current and future RDS SQL Server customers.
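For anyone who wants to take advantage of the newly scalable storage, the change is a single parameter on the instance.  Here is a minimal sketch; the instance identifier is hypothetical, and the script prints the AWS CLI command for review rather than executing it against a live account:

```shell
# Hypothetical instance name; 16384 GB is the new 16TB ceiling.
DB_ID="my-sqlserver-db"
NEW_SIZE_GB=16384

# Print the modify call for review; drop the echo to actually run it.
echo aws rds modify-db-instance \
  --db-instance-identifier "$DB_ID" \
  --allocated-storage "$NEW_SIZE_GB" \
  --apply-immediately
```

Note that RDS allocated storage can be scaled up but not back down, so it pays to double-check the target size before applying the change.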

While RDS instances have been storage limited, Amazon Aurora MySQL has offered 64TB for years (and Aurora PostgreSQL also launched with 64TB support).  That is because Aurora was all about re-inventing database storage for the cloud, so it addressed the problems I’m going to talk about in its base architecture.  Non-Aurora RDS databases, Google’s Cloud SQL, Azure Database for MySQL (or PostgreSQL), and even Azure SQL Database (which, despite multiple name changes over the years, traces its lineage to the CloudDB effort that originated over a decade ago in the SQL Server group) have all lived with the decades-old file and volume-oriented storage architectures of on-premises databases.

Ignoring Aurora, cloud relational database storage sizes have always been significantly limited compared to their on-premises counterparts.  I’ll dive into more detail on that in Part 2, but let’s come up to speed on some history first.

Both Amazon RDS and Microsoft’s Azure SQL Database (then called SQL Azure) publicly debuted in 2009, but they had considerably different origins.  Amazon RDS started life as a project by Amazon’s internal DBA/DBE community to capture their learnings and create an internal service that made it easy for Amazon teams to stand up and run highly available databases.  The effort was moved to the fledgling AWS organization and re-targeted at helping external customers benefit from Amazon’s learnings on running large, highly available databases.  Since MySQL had become the most popular database engine (by unit volume), it was chosen as the first engine supported by the new Amazon Relational Database Service.  RDS initially had a database instance storage size limit of 1TB.  Now I’m not particularly familiar with MySQL usage in 2009, but based on MySQL’s history and capabilities in version 5.1 (the first supported by RDS), I imagine that 1TB covered 99.99% of MySQL usage.  RDS didn’t try to change the application model; indeed, the idea was that the application had no idea it was running against a managed database instance in the cloud.  It targeted lowering costs while increasing the robustness (reliability of backups, reliability of patching, democratization of high availability, etc.) of databases by automating what AWS likes to call the “undifferentiated heavy lifting” aspects of the DBA’s job.

As I mentioned, Azure SQL started life as a project called CloudDB (or Cloud DB).  The SQL Server team, or more precisely remnants of the previous WinFS team, wanted to understand how to operate a database in the cloud.  Keep in mind that Microsoft, outside of MSN, had almost no experience in operations.  They brought to the table the learnings and innovations from SQL Server and WinFS, and decided to take a forward-looking approach.  Dave Campbell and I had spent a lot of effort since the late 90s talking to customers about their web-scale application architectures, and were strong believers that application systems were being partitioned into separate services/microservices with separate databases.  And then those databases were being sharded for additional scalability.  So while in DW/Analytics multi-TB (or, in the Big Data era, PB) databases would be common, most OLTP databases would be measured in GB.  Dave took that belief into the CloudDB effort.  On the technology front, WinFS had shown it was much easier to build on top of SQL Server than to make deep internal changes.  Object-relational mapping (ORM) layers were becoming popular at the time, and Microsoft had done the Entity Framework as an ORM for SQL Server.  Another “research” project in the SQL Server team had been exploring how to charge by units of work rather than traditional licensing.  Putting this all together, the CloudDB effort didn’t go down the path of creating an environment for running existing SQL Server databases in the cloud.  It went down the path of creating a cloud-native database offering for a new generation of database applications.  Unfortunately customers weren’t ready for that, and proved resistant to some of the design decisions Microsoft made (e.g., Entity Framework was the only API offered initially).

That background is a little off topic, but hopefully useful.  The piece that is right on topic is Azure SQL storage.  With a philosophy that apps would use lots of modest-sized databases (or shards) and understand sharding, charging by the unit of work (which enabled multi-tenancy as a way to reduce costs), routing built above a largely unchanged SQL Server engine, and no support for generic SQL (with its potential for cross-shard requests), Azure SQL launched with a maximum database size of 50GB.  This limit would prove a substantial pain point for customers, and a few years later it was increased to 150GB.  When I asked friends why the limit was still only 150GB they responded with “Backup.  It’s a backup problem.”  And therein lies the topic that will drive the discussion in Part 2.

I’ll close out by saying that relatively low cloud storage sizes are not unique to 2009, or to Amazon RDS and Azure SQL.  Google Cloud SQL Generation 1 (aka their original MySQL offering) was limited to 500GB databases.  The second generation, released this year for MySQL and in preview for PostgreSQL, allows 10TB (depending on machine type).  Azure SQL Database has struggled to increase storage size, but now maxes out at 4TB (depending on tier).  Microsoft’s Azure Database for MySQL and PostgreSQL offerings are limited to 1TB in preview, though Microsoft says they will support more at GA.  RDS has increased its storage size in increments: in 2013 it was increased to 3TB, and in 2015 to 6TB.  It is now 16TB or 64TB depending on engine.  Why?  Part 2 is going to be fun.







Posted in Amazon, AWS, Cloud, Computer and Internet, Database, Microsoft, SQL Server

Amazon Seattle Hiring Slowing?

An article in today’s Seattle Times discusses how Amazon’s open positions in Seattle are down by half from last summer and at a recent low. I don’t know what is going on, but I will speculate on what could be one of the major factors. Let me start by covering a similar situation at Microsoft about 20 years ago. Microsoft had been in its hyper-growth phase, and teams would go in with headcount requests that were outrageous. Paul Maritz would look at a team’s hiring history and point out that to meet their new request they’d need to hire x people per month, but they’d never hired more than x/2 per month. So he’d give them headcount that equated to x/2+<a little>, and then he’d maintain a buffer in case teams exceeded their headcount allocation. Most teams would fail to meet their headcount goals, a few would exceed them, but Microsoft (at least Paul’s part) would always end up hiring less (usually way less) than the total headcount it had budgeted for. It worked for years, until one year came along where most teams were at or near their hiring goals and a couple of teams hired way over their allocated headcount. Microsoft had over-hired in total, and some teams were ahead of where they might have been even with the following year’s budget allocation. From then on there was pressure on teams to stay within their allocated headcount, both for the year overall and for the ramp-up that was built into the quarterly spending plans.

Could something similar be happening at Amazon? Could this be as simple as Amazon telling teams “no, when we said you can hire X in 2017 we meant X”, and enforcing that by not letting them post 2018 positions until 2018 actually starts? Amazon is always looking for mechanisms to use, rather than counting on good intentions, and having recruiting refuse to open positions that exceed a team’s current fiscal year’s headcount allocation would be a very solid mechanism for enforcing hiring limits.

It will be interesting to see if job postings start to grow again when the new fiscal year starts. That would be the most direct confirmation that this is nothing more than Amazon applying better hiring discipline on teams.

Posted in Amazon, Computer and Internet

Sonos One

I’ve been a Sonos fan for years, and recently talked about my first experience using Alexa to control older Sonos equipment.  I have one room, my exercise room, that isn’t covered by the Niles multi-zone system.  I was using an Echo in there, but with the launch of Sonos One decided I had an option for better sound.  I swapped out a first generation Echo for the Sonos One.

Starting with the good news, the Sonos One sounds great, as expected.  While I find the Echo good for casual music play, the Sonos One is better when you really want to fill the room.  If you wanted to go another step Sonos supports pairing two like devices for stereo (separate left and right channels) playback.  If you have a Play:1, there are reports of it being possible to pair the One and Play:1 for stereo playback.  I have a Play:1 at a second home that isn’t being used, so it might just be coming to the ranch.  Also, I wouldn’t be surprised to see a “Sonos Three” and/or “Sonos Five” next year, but that is overkill for my current use case.

As an Alexa device the Sonos One is a little weak, at least currently.  It supports most, but not all, of the functionality of the Echo.  Since the device is so music-centric that may not be a problem, but caveat emptor.  For example, last time I tried Flash Briefing (News) it wasn’t supported though Sonos said it was coming soon.  Getting the news during a morning workout is something I want.  Alexa Calling and Messaging isn’t supported, and that may not show up for a long time.  If you want my personal speculation on that one, Amazon may be reluctant to share your contacts with a third-party device.  So a design that worked on first-party devices like the Echo wouldn’t easily adapt to those using Alexa Voice Services (AVS).  Of course, in time Amazon could find a solution.  Sonos emphasizes that the Sonos One will continue to be updated in the future, going so far as to say “Both Sonos and Alexa keep adding new features, sound enhancements, services and skills to make sure both your music and voice control options keep getting better. For free. For life.” For one of the most obvious examples, Spotify wasn’t available at launch but support was added just weeks later.

My big beef with the Sonos One is that the far-field microphone array doesn’t seem to be working very well.  Now this confuses me, because when I tested it on arrival it seemed just fine (though not up to the Echo).  That is, I could give a command while music was playing and it would hear and respond appropriately.  This morning I was listening to music and the Sonos One was almost uninterruptible.  I finally tried talking so loudly that the Echo Show in my office started to respond, while the Sonos One just a few feet away continued to ignore me.  After my workout I applied the latest update from Sonos, which claimed to fix a bug introduced in the previous update.  The next time I work out I’ll see if that made the One’s microphone array more responsive, and update this posting.

Update (12/13): I’ve had more success with the Sonos One’s microphones on subsequent occasions, so either my Sonos One needed a reboot or it needed the software update I mentioned.    They still aren’t as responsive when playing music as the Echo, but work well enough that I had no real problems switching what we were listening to across an entire afternoon of cleaning our basement.

Bottom line:  If you are buying primarily for music playback then the Sonos One is a good alternative to the Echo.  But if music is more of a secondary use, then you are probably better off with one of the Echo devices.

For another take on the Sonos One, I found the Tom’s Guide review useful.


Posted in Amazon, Computer and Internet, Home Entertainment

Good browsing habits gone bad

We always think that the best protection against web-distributed malware is to exercise caution while browsing.  But what if you aren’t even browsing in the classic sense, and an application renders a malware-infested page?  I found out this morning.

I grabbed my first cup of coffee this morning and launched the Windows 10 MSN News app.  I’d been reading stories for about 30 minutes when a story in my “Microsoft” search tab caught my eye:  “Microsoft Issues Black Friday Malware Warning”.  It showed as being from the International Business Times, not one of the obscure sites that MSN News sometimes picks up.  I clicked on the tile and started reading.  Suddenly my Surface Book 2 started talking.  The coffee wasn’t yet working so I couldn’t quite make out what was being said, but I thought “%^*%” auto-play video, so I clicked the back arrow to get rid of the page.  The woman with the English accent didn’t stop talking.  I killed MSN News, still she droned on.  I clicked on Edge and there it was, the MSN News article had somehow launched a browser tab with some kind of phishing/ransomware/malware site.

What the woman was saying was something about my computer having been found to have “pornographic malware” and that I had to contact them.  I saw that the web page had a phone number on it, but darn, I was too busy trying to kill this to write it down.  On top was a modal dialog box (screenshot dated 2017-11-24).  You’ll notice there is no checkbox for “prevent web page from launching dialog boxes”, or whatever Edge says.  I killed the dialog box and saw that underneath was another dialog box with that checkbox.  But before I could check it, the first dialog box was back.  At one point I did check it in time, only to have the web page try to go to full screen mode.  Fortunately Edge let me block that.  So the second dialog was apparently a fake as well.

Unable to do anything to kill this from within Edge, I launched Task Manager.  I really wanted to keep my other tabs, so I tried killing just the process for the malicious one.  It didn’t work; the tab just kept re-launching.  I killed the top-level process, re-launched Edge, and killed the malicious tab without opening it.  Nope, that wasn’t enough.  The malicious page came back to life.  I went through the whole thing again and this time clicked on the tab to start fresh.  Then I went into settings and cleared everything.  That finally seemed to stop it.

Next came a scan, then an offline scan, with Defender.  I followed that up with a Malwarebytes scan.  Nothing.  It looks like Edge managed to keep this beast from breaking through and making system changes, but I’m not confident about that yet.  I’m going to take a deeper look before declaring victory.

Maybe the worst part of this is I have no way to report it to Microsoft, or anyone else.  I couldn’t copy the offending URL from the address bar because of the modal dialog.  And I discovered that when you go into Edge’s browser history you can either re-launch the page or delete the history item, but you can’t Copy the link.  I spent some time looking around to see if Edge stored history in human readable format, but eventually gave up.  I don’t see a way to report the bad story in MSN News, but now I’ll go try to find it elsewhere.

Bottom line: Don’t think that good browsing habits will save you.  I’ve been using the MSN News app since it was first released with Windows 8, with this being the first malicious story I’ve found.  And it was an infected web page on a mainstream site.

Update (11AM Eastern): I scanned the IBT web page for this story using several tools, such as Virustotal, and came up blank on any malware.  So I viewed the story directly.  Nothing bad happened.  So while the problem occurred while I was viewing the IBT story in MSN News, it isn’t clear what really caused the malicious page to launch.  Also went and checked the family member’s WiFi router I’m on and discovered it wasn’t up to my standards for security settings.  I hardened that up.

Posted in Computer and Internet, Microsoft, Security, Windows

Product Launches and Vendor Conferences

My favorite tech conference, AWS’ re:Invent is coming up next week.  I attended the last three as an Amazon VP, and was hoping to attend this year as a customer, but unfortunately can’t make it.  I’ll be streaming the keynotes, but really would have loved to experience the dynamics from the other side.  I’ve had that (both employee and customer) experience with some of the Microsoft conferences. So hopefully next year with re:Invent.  If you are going to re:Invent for the first time you are in for a treat.

If we go back 40 years there really weren’t vendor conferences.  There were industry conferences such as the Joint Computer Conferences (JCC), and user group conferences such as DECUS and SHARE.  DECUS was technically owned by Digital Equipment Corporation (DEC), but other than DEC providing administrative support, it was run by volunteers.  I attended many DECUS conferences, first as a customer and later as an employee.  Thirty years ago COMDEX replaced the JCC as the big industry conference.  At the same time vendors were beginning events of their own.  While DECUS continued on as a user group, DEC held its first DECworld.  DECworld ’87 set the standard by which today’s vendor conferences are (or should be) measured.  COMDEX peaked nearly 20 years ago, and by the early 2000s there was a clear bifurcation in approach between IT and Consumer focused conferences.  The IT audience would be addressed primarily by vendor conferences, while the Consumer ecosystem would be addressed by industry conferences such as the Consumer Electronics Show and the Mobile World Congress.

Conferences have always served as major venues for product and technology unveilings.  Most people don’t realize, or don’t know, how profoundly the Internet has changed how products are launched.  Forty years ago you held a press conference, or just sent out a press release.  It would take a week or two for it to appear in Computerworld or Electronic News, which were weekly newspapers, and a couple of months later it would appear in magazines such as Datamation.  If you were an existing customer you’d probably hear about a new product from your sales rep, since all sales were direct in those days.  Industry conferences became increasingly important for product launches for one major reason: press coverage.  Only in rare circumstances could you get a large number of members of the press to travel to your press conference, but they all attended COMDEX.  Although the Internet now allows for virtual attendance at press conferences, the depth of engagement is still better in person.  And if you are a small vendor, no reporter is going to attend your press conference.  But if they are already at an industry conference looking for interesting things to write about, well then you have a shot.

Big vendors realized a couple of things.  First, they had little trouble gaining press attention for their announcements, particularly with the advent of the Internet, without the high costs of participating in industry conferences.  Second, a show of their own allowed for much better direct engagement with their customer base.  Consider that for IT executives, developers, etc., tracking and learning deeply about new vendor products and services is a side job.  Few are going to have that in their annual goals.  Normally you are trying to get a few minutes of their time, between their worrying about whether the website changes are going to be ready for Black Friday and how they are doing on their budget, to pay attention to your new products or services.  But at your own conference you get a day, or two, or three where you are their day job.  They can pay attention to news about new offerings, and gain deep knowledge in both those and existing offerings.  Now they can engage with other customers on how your products and services are being used.  Now they can engage with your partner ecosystem to find solutions for their business problems.  Now they can take the time to dig deep on areas of interest.  Now you can get an amazing amount of direct feedback from customers in a very efficient way.  Vendor conferences are primarily about deeper customer engagement, but while you have customers’ attention it is the ideal time to introduce new offerings.  And of course, most customers attend to hear about new things.  You need both the meat (e.g., deep dive sessions) and the sizzle (e.g., keynote launches) for your conference to succeed.

I don’t think most vendors target their product cycles specifically to their conferences, but it is a somewhat natural part of their annual life-cycle.  For example, notice how Microsoft’s three major conferences bracket the end of their fiscal year.  Build is 6 weeks before the end of the fiscal year, Inspire is a week into the new fiscal year, and Ignite is late in the first quarter.  Given that planning, budgeting, and goal setting in any company tend to happen around fiscal year boundaries, you just naturally tend to have more to say at the end of a fiscal year than in the middle.  If in the spring you have planning discussions and make product decisions, and the headcount to deliver doesn’t formally materialize until July 1, and then you have to ramp up, your new deliverables are going to be more heavily weighted toward late spring or early summer of the following year.  Moreover, the conferences do act as a forcing function in that teams really want to be able to launch at one of those conferences.  So they will go the extra mile to be ready.

Amazon’s fiscal year is the same as the calendar year, so it ends December 31st.  re:Invent, which covers the purposes of all three of Microsoft’s major conferences, comes a few weeks before the end of the year.  You have the same dynamics of the planning/budgeting cycle and teams wanting to be ready by re:Invent.  You have the additional dynamic of the goal-setting process that Jeff Bezos has talked about in shareholder letters.  You are coming down to the wire on meeting year-end goals, so you are making the extra push to finish things up.  The result is an explosive set of announcements ready to go.  There are so many that AWS has gone to doing many of them just before re:Invent.  This year there were 85 launches in the week and a half leading up to Thanksgiving, up from 56 in 2016.  If you saved those launches for re:Invent itself many, if not most, would be lost in the noise, despite often being the most customer-impactful of the year’s launches.  Adding storage scaling to RDS SQL Server is transformational for customers using, or interested in using, RDS for their SQL Server workloads.  But how does it compete for attention with rolling out a tractor trailer (Snowmobile) during Andy Jassy’s keynote last year?  Better it was launched just before Thanksgiving than at re:Invent.

While there are likely more of the modest sized launches coming next week, what we are all waiting to see are the big launches that re:Invent is known for.  For those, and the rest of the re:Invent 2017 keynotes, you can live stream them here.


Posted in Amazon, Computer and Internet, Microsoft

Surface Book 2

I picked up my Surface Book 2 (SB2) from the UPS Store last Friday and wanted to provide my impressions.

Setup was in the Wow category.  I turned on the SB2, connected to WiFi, answered a couple of questions, logged into my Microsoft account, and a few minutes later I was looking at the same lock screen as on my other PCs.  This is really a testament to Windows 10 improvements, but when combined with the speed of the SB2 the experience is almost scary good.  Next I signed the SB2 up for Windows Insider builds, then went back to tuning up the system.  I’d expected it would take a day for the Windows Insider build to come through, but it started downloading immediately.  That too was an unusually good experience.

One of the most impressive things about the setup process, but also the most confusing, is what to do about Microsoft Office.  The SB2 comes with the Microsoft Office apps preinstalled.  It also has the Get Office app prominently displayed.  As an Office 365 Home subscriber I was left confused about what my next step should be.  Do I go to office.com and add the machine, which would trigger an Office 365 install?  Do I click on Get Office?  That is really counterintuitive given Office is already there.  Then it occurred to me that the Office apps already have the ability to log in to your Microsoft Account, and I wondered what would happen if I just did that.  I ran Microsoft Word, logged into my Microsoft Account, and it automatically configured the SB2 as one of my Office 365 machines.  Basically there is no setup needed for Office 365, just log in.  IF you know that’s all you need to do.  Next up for Microsoft: find a way to make this clearer.

I’d picked up the SB2 mid-afternoon and, despite having a lot of other things to do that day, by my normal bedtime it was completely ready for use and customized like I’d been using it for months.

Although it has only been 4 days, the SB2 has been rock solid.  When I got my original Surface Book a couple of years ago there were signs of flakiness (crashes, failure to sleep, etc.) right out of the gate.  I’ll know better in a few weeks, but initial impression is that Microsoft took the reported quality issues in the Surface line to heart before releasing the SB2.

On the battery front I did no formal measurement.  What I did was charge the SB2 overnight Saturday and then see how long I could go between charges with just normal use.  It was a lighter usage period than normal for me; still, it was Tuesday night, with 32% battery remaining, when I decided to charge again.  Even with the original SB I would find colleagues with MacBooks bringing chargers to day-long meetings while I would squeeze by without one.  With the SB2 I’d never feel the need to carry a charger with me for the business day, and I could easily see myself going a couple of my normal usage days before needing a charge.

One of the original SB complaints was that the screen gave a little too much when you touched it.  The SB2 screen seems stiffer, much more comparable to regular notebooks with touch screens.  Another complaint was lapability.  Since the SB/SB2 “screen” is a full tablet with its own battery it is top heavy.  With the SB this meant that if you placed it on a surface with even a slight backward slant, such as your legs when sitting, when you lifted your hands from the keyboard it would fall over backwards.  The SB2 seems a little more balanced, but not a lot.  If I kept my legs square while sitting then it was great on my lap.  But if there was a slant towards the back you could see the front edge of the base unit start to lift up.  So you need to be a little careful when using the SB2 on your lap.  My lapability rating is “acceptable”, but if you are a heavy lap user there are better options.

Overall, I’m extremely happy with my SB2.  Maybe after a few weeks of real use I’ll find something negative to say, but right now it gets a 5/5 star rating from me.

Update (11/23): I realized I used the SB2 on my lap for a few hours last night without experiencing, or even thinking about, it tumbling off.  Just want to be clear that lapability really is acceptable.  



Posted in Computer and Internet, Microsoft, Windows

Where is my Surface Book 2?

This is not a complaint, it is me anxiously waiting for a new device in a way that I haven’t for years.  You see, I have been without a PC for about 6 weeks.  Until then I had my work-owned Surface Book.  I never used it for personal things, but then I was too busy to do much anyway.  An iPad Pro, or the occasional use of a shared PC, handled my personal needs.

I’m writing this on my wife’s HP Envy all-in-one.  It’s very nice, but I wouldn’t dare sign it up for Windows Insider builds.  Down in my office I have her previous all-in-one, but it is super flakey and almost unusable.  A person-week of trying to fix it got me back to “super flakey and almost unusable”, so it is abandoned.  I need something I can make mine: a machine where I can take Windows Insider builds, set it up as a developer workstation, use WSL to get a nice Ubuntu Linux environment running, and play with all the other neat stuff Microsoft has done in the last few years.

My most pressing need was for a new notebook.  I’ve always been a fan of the Surface Pro, so that was the default choice.  Then Microsoft introduced the Surface Laptop.  I still plan to carry a tablet with me, so didn’t need the tablet aspect of the Pro to the extent of the old days, when I carried just one device with me.  If you don’t care about having a tablet, then why not benefit from having the better keyboard?  The lapability of the Surface Laptop is much better than the Pro, so that was attractive. I started to lean towards that.  Or a Dell XPS 13.  My wife has a 3 year old XPS 13 and it is a wonderful thin and light notebook.  It is by far the notebook she has been the most satisfied with over the years.  The new ones seem even better.  Concerns about the quality of Surface devices kept the Dell in contention.

I knew another Surface-related announcement was coming, but wondered if I could really hold off.  Fortunately I did, because the Surface Book 2 solved another dilemma for me.  Should I buy a notebook and a desktop?  I really didn’t want another desktop as the Cloud has taken over many of the needs for one.  But I still wanted one high-powered PC.  The Surface Book 2 tips the balance.  Add a Surface Dock when I’m in my office and I’ll never know I don’t have a desktop.  How can I say that since I don’t have the Surface Book 2 yet?  Remember I spent a couple of years living with a Surface Book (and the AWS Cloud of course) as my only work PC.

So hopefully the Surface Book 2 will arrive tomorrow as scheduled.  Alexa will let me know the moment it hits the UPS Store, and I’ll be in my car 5 minutes later!

Posted in Computer and Internet, Microsoft, Windows

Microsoft Phone (Part 2)

Does anyone think it was a coincidence that Joe Belfiore’s admission that Microsoft’s journey to make Windows succeed on phones was dead came just days after he had announced that Microsoft was bringing the Edge browser to Android and iOS, and had taken its Garage-born Arrow Launcher for Android and rechristened it as the Microsoft Launcher?  Microsoft may not have made a grand public strategic vision statement, but it sure telegraphed one.

Let’s step back for a moment to the topic of first-party Microsoft apps on iOS and Android.  Bringing the Office apps to non-Microsoft devices wasn’t part of a grand new mobile strategy; it was driven by the Office 365 effort.  If you want someone to pay you $99/year forever, versus $150-250 every 5-10 years, then it has to work on all the devices they use.  I use a consumer example, but this holds for enterprises as well.  And so the effort to bring Office clients to iOS and Android began under then-CEO Steve Ballmer.  Want confirmation that bringing Office clients to iOS and Android was about Office 365 and not a broader change in strategy?  Work on Office for iOS began long before Microsoft acquired Nokia’s mobile business.  Windows Phone was still the big bet.

After Satya Nadella became CEO and freed non-Windows teams to pursue strategies that were not necessarily linked to Windows, the floodgates really opened on non-Office Microsoft apps coming to iOS and Android.  MSR experiments and Garage efforts generally targeted one or the other.  Any team that had a mobile component to its strategy created clients for both.  After all, in a Cloud world client ubiquity is important.  Ballmer set Microsoft’s transition to a Cloud company in motion; Satya poured gasoline on the fire by unshackling teams from Microsoft’s “mutually reinforcing businesses based around Windows” business model.

Separately, the Windows Phone effort continued to wither.  Microsoft’s purchase of Nokia’s mobile business was written off.  Rumors of new devices, such as a Surface Phone, the birth of the Universal Windows Platform (UWP), and bridges for bringing iOS and Android apps to the UWP kept alive the hope that there was a new strategy for making Windows succeed on phones.  But that strategy soon unraveled.  Cancellation of the Android bridge really telegraphed the death of Windows on phones.  The iOS bridge made it easier to port apps from the iPhone, and Xamarin made it easier to develop new multi-platform apps, but both relied on developers investing effort to target Windows on phones.  Running Android apps, unchanged, on Windows would have been a poor user experience.  But it would have closed the app gap.  Once Microsoft cried uncle on the app gap, Windows on phones was dead.  It took a frustrating 18 months for Microsoft to publicly acknowledge it.

I wonder how it went down.  Was there a grand discussion of a new mobile phone strategy?  Did Satya tell Terry “Everyone else in the company has a strategy for dealing with iOS and Android, what’s yours?”.  Did the Windows team just come to the realization that they were building a bunch of new features that wouldn’t really be compelling unless they could be used in conjunction with the phones people were actually carrying?  Working on something on my laptop then deciding to continue on my desktop is just “nice”.  Trying to do something on my phone that is so painful you want to scream, and being able to stop and continue on a PC where it is child’s play, now that is compelling.  And so we have Edge on both iOS and Android, and the Microsoft Launcher on Android to make that happen.

Top-down or bottom-up, these moves lead to the question: is there a bigger strategy here?  I sure hope so.  Microsoft now has all the pieces it needs to offer a preconfigured, out-of-the-box Microsoft-experience Android phone.  You can configure one yourself right now, but it is a painful and tedious process more suitable for enthusiasts than for general users.  When I trashed Android a few years back, one of the key reasons was the amount of time it would take to get it working the way I wanted.  Manually creating a Microsoft Phone with all the pieces Microsoft has created just magnifies that complaint.

Last year I bought one of those Amazon Android phones with Offers and Ads to play with.  It cost me $59, I think, and I played with it for a couple of weeks to see what Google had done with a newer version of Android.  It never even had a SIM in it.  After the Microsoft Launcher announcement I found it in my collection of abandoned toys and decided to try configuring it as a Microsoft Phone.  It took me a while, but I got close.  I stopped when I realized cramming all that Microsoft goodness into a phone with only 8GB of storage was a challenge.  I had to remove the Amazon apps.  I had to start removing parts of the Google ecosystem as well, which strategically is more of a concern.  And a phone that was already sluggish became even more so.  But then, I was configuring an older entry-level phone the way a power user might configure a new mid-range or flagship device.  Negatives aside, it was entirely possible to create a Microsoft-centric experience on an Android phone.

Having a bunch of pieces that users can take and use to configure varying levels of a Microsoft experience on their phones is good.  On iOS it is the only option.  I wouldn’t be surprised if Microsoft approached Apple about how to offer more of a Microsoft-experience iOS phone.  Given Microsoft’s enthusiasm for Android lately, we can guess that Apple wasn’t very receptive.  But the bunch-of-pieces approach is quite limited, and on Android Microsoft could go much further.  It could work with OEMs to create Microsoft-experience versions of devices that have pre-built Android images configured with Microsoft Launcher, Edge, etc.  Or it could find a way to do a one-button setup of the Microsoft experience.  I can think of one way to make that possible, but perhaps there are multiple.  In any case, it would be a step well beyond the Microsoft Apps app or what they’ve done with the Samsung Galaxy S8 Microsoft Edition.

Is there anything Microsoft should NOT do?  Yes, they shouldn’t try to hide or bury the other Android goodness.  Block installation of arbitrary applications from Google Play?  Death.  Ship an out-of-box experience that doesn’t embrace the Google ecosystem in parallel with the Microsoft ecosystem?  Death.  Lock the device to Bing?  Death.  In other words, a Microsoft Phone is a true Android phone with a Microsoft-centric experience on it, not something that has Android underneath but a completely different user experience and app ecosystem à la the Amazon Fire Phone’s Fire OS.

So what will 2018 and 2019 bring?  So far Microsoft has been pretty cautious about its approach to Microsoft Phone, and I expect that to continue in 2018.  They need to bring Edge to GA.  Microsoft Launcher needs to mature some more.  Keep improving the Cortana experience on Android.  They probably have other apps and experiences in the works that will be released individually.  Maybe they can find a way to make multi-app installation easier, or even have a one-button “make it a Microsoft Phone” answer sometime during the year.

While Samsung and other large OEMs want devices that have their own out-of-box experience, small players like BLU seem willing to try multiple things.  BLU makes Amazon offers-and-ads phones, and it also offered Windows Phones.  It would probably be easy to get them to make a preconfigured Microsoft-experience Android phone.  Still, I’d be mildly surprised if Microsoft pushes hard on preconfigured Microsoft-experience devices in 2018.  Given Microsoft’s failure with Windows Phone, they would be better off with a humble approach: keep building the base of users who are willing and able to self-configure a Microsoft experience until it reaches the point where there is strong market pull for preconfigured devices.  Maybe that happens a year from now, but it is a safer bet that it happens in 2019.

Whatever the details of how it plays out, a replacement strategy for Windows Phone is finally apparent.  It is Android, brought to you by the Windows team.

Posted in Computer and Internet, Microsoft, Mobile, Windows Phone | 14 Comments

Microsoft Phone (Part 1)

There is a lot of focus on the death of Windows 10 Mobile, bringing an end to Microsoft’s dream of a world running Windows on every client device.  But I don’t believe Microsoft feels Mobile is any less important now than it was a decade ago, when the rise of the iPhone caused it to try the Hail Mary pass that resulted in Windows Phone 7.  Just because the Windows kernel is out of mobile doesn’t mean Microsoft is.  Any effort in Mobile has to address three areas: (1) Application Platform, (2) First Party Applications, and (3) User Experience.

A decade ago Microsoft had trouble seeing a path to success without Mobile, and a path to success in Mobile other than by making Windows succeed on Mobile devices.  I’m not going to revisit that entire discussion (of which I was involved in only small parts), but it took Microsoft down the road to an eventual dead-end.  The good side of that failure is that Microsoft was forced to revisit alternate strategies.

For example, I was in the process of transferring to DevDiv at the time the Windows Mobile 7 Reset (the start of the Windows Phone 7 project) occurred.  My job in DevDiv was going to be to drive cross-platform development tools to allow Microsoft-oriented (e.g., C#/.NET) developers to write applications for multiple mobile platforms.  By the time I started in the position the strategy had changed to put all the wood behind the Windows Phone 7 arrow, with cross-platform development abandoned.  Now, a decade later, we have Microsoft fully back on that original DevDiv thinking, with Xamarin under its wing and most .NET technologies available as open source.  There is even evidence this strategy is working.

The rise of cross-platform .NET also represents the realization of .NET as the Microsoft platform.  That was always DevDiv’s dream, but it proved controversial across Microsoft.  The .NET CLR’s contribution to the Longhorn/Vista disaster strengthened the case against .NET as a client application platform.  Windows Phone 7’s bet on the .NET Compact Framework for all third-party apps was controversial, and Terry Myerson took a lot of flak for it.  Then Windows 8 tried to back away from .NET, alienating much of the developer community.  Some twists and turns later, .NET has become the universal application platform.

Another strategy discussion from a decade ago was about putting Microsoft applications on the iPhone.  Without revisiting it all, it wasn’t clear how to make that strategy succeed in the mindset of the day.  For example, would Apple’s control of the platform translate into control of the productivity apps?  It was also a financial problem, given that selling client software was the only way to monetize most client applications at the time.  And the Office client applications were the source of much of the company’s profits.  Once they could be monetized in the cloud (or via enterprise servers) this problem went away.  The failure of Windows Phone to take off forced Microsoft to figure out a financially viable strategy for having its apps on iOS and Android years earlier than would have happened if Windows Phone had succeeded.  And cloud monetization, along with the new mindset established by Satya Nadella, has led to an explosion of Microsoft applications for Android and iOS.

So we have a Microsoft Phone application platform, and we have the first-party Microsoft Phone apps; what about user experience?  Well, this is the area I’d classify as nascent.  Right now the applications are just standalone things you install from Google Play or Apple’s App Store.  And you painfully (especially if you are using 2FA) have to log into each one.  Given that Android is more flexible than iOS, Microsoft is focusing more attention on Android.  There is a Microsoft Apps app to help you find the available Microsoft applications.  But all it does is launch the Google Play store, one by one, for the apps you want.  The Microsoft Store is selling the Samsung Galaxy S8/S8+ Microsoft Edition, which gets provisioned with some of the apps as part of the out-of-the-box experience, but I can’t find a review of what that is like.  And then there is the Microsoft Launcher, which starts to bring an overall Microsoft flavor to the Android experience.  Microsoft Edge is in preview, an important part of linking iOS and Android to the overall Microsoft experience.  Microsoft Launcher also offers to install some first-party Microsoft apps for you, but it is a subset of the list in the Microsoft Apps app.  And it too just launches Google Play once per app to get and install.  When viewed as “I want a couple of Microsoft apps on my phone” this is all fine.  But when viewed as “give me a Microsoft Phone experience” it sucks.  In Part 2 I’ll delve into that and speculate on what 2018/19 will bring.



Posted in Computer and Internet, Microsoft, Mobile, Windows Phone | Tagged , | 3 Comments

Using DNS for Security – Comodo Dome Shield

I’ve written a number of times about DNS offerings that increase security: blocking you from going to a website that does a drive-by download of malware, blocking phishing sites, blocking your IoT devices from talking to a botnet command-and-control domain, etc.  While I think we are moving to a more comprehensive alternative of whole-home internet security devices, a malware-blocking DNS service remains useful for many of us, particularly when you want to increase security without changing hardware, or you can’t get the new hardware solutions to work.  I wrote about my disaster with CUJO, and I found one case where EERO’s new EERO Plus offering won’t work.  I’ll write about my EERO experience later, but for those considering buying one to use EERO Plus, beware that it isn’t an option if your Internet provider uses PPPoE.  CenturyLink customers, this means you.  You can use EERO in Bridge Mode, but that precludes the use of EERO Plus.  So in my CenturyLink-connected house I’m trying a new option, Comodo Dome Shield.
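For anyone unfamiliar with how these services work under the covers: a malware-blocking DNS service simply answers queries for known-bad domains with a harmless "sinkhole" address instead of the real one, so the connection never reaches the malicious host.  Here is a conceptual sketch in Python; the blocklist, domains, and addresses are entirely made up for illustration, and real services do this inside the resolver itself:

```python
# Conceptual sketch of a filtering DNS resolver: queries for domains on a
# blocklist get a sinkhole address instead of the real answer.
# All domains and addresses below are illustrative, not real data.

BLOCKLIST = {"malware.example", "phishing.example"}
SINKHOLE = "0.0.0.0"                       # client connection goes nowhere
REAL_RECORDS = {"news.example": "198.51.100.7"}

def resolve(domain: str) -> str:
    """Return the address a filtering resolver would hand back."""
    if domain in BLOCKLIST:
        return SINKHOLE                    # blocked: never reach the bad host
    # Otherwise answer as a normal resolver would (stand-in for upstream DNS).
    return REAL_RECORDS.get(domain, "203.0.113.1")

print(resolve("phishing.example"))  # → 0.0.0.0
print(resolve("news.example"))      # → 198.51.100.7
```

The whole trick of services like Comodo Dome Shield is that the blocklist (and which subcategories feed it) lives on their side, updated continuously, while your network just points at their resolvers.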

Back in 2012 I first wrote about using DNS to block malware, using OpenDNS and Norton ConnectSafe.  I’d already moved away from OpenDNS since they reserved most of their malware blocking for their enterprise offering.  Norton ConnectSafe remains an option for a DNS service that blocks domains based on Norton Safe Web’s scanning for malicious sites.  Comodo also has its free Secure DNS, which is an alternative to Norton ConnectSafe.  Recently I discovered that Comodo had introduced an enterprise-oriented DNS service that includes full URL filtering capability.  And they were offering it for free.

There are two advantages to using Comodo Dome Shield.  The first is transparency.  Norton ConnectSafe and Comodo Secure DNS are pretty much black boxes, where you can’t tell what subcategories of malicious domains they are protecting you from.  While Comodo Dome Shield’s default security rule for blocking phishing/malware/spyware is also somewhat of a black box, you can create your own rule and choose the subcategories you want protection against.  For example, it isn’t clear the default rule blocks DDoS sources, but by creating your own rule you can make sure those are blocked.  You can also block the addresses of known spammers.

The second advantage of Comodo Dome Shield is that it gives you complete control over blocking access to non-security-related domains.  Want to block access to gambling sites?  You can do that with a content rule.  I don’t have a reason to block access other than for security, but I did use a content rule to block access to so-called “Parked Domains”.  These are domains that have fallen into disuse and usually are just landing pages with links to other pages.  In my experience the links on those parked domains all too often lead to malicious sites.  And since the control of a parked domain is often questionable, the odds that it is taken over and used to distribute malware seem much higher than with actively maintained domains.

The disadvantage of blocking more domains is that the odds of blocking a safe and useful domain go up.  For example, there will be a lag between a parked domain being claimed and being put to legitimate use, and during that lag (if you are blocking parked domains) you won’t be able to access the site.  Or, it is pretty common for a domain to be flagged as sending spam even though that was a temporary situation.  It can be very difficult to get removed from a spam blacklist once added, so if you block domains classified as spammers, be prepared to lose access to some pretty mainstream sites from time to time.  That’s the reason the default security rule, as well as Norton ConnectSafe, doesn’t block those domains.  So use your power to block broader categories with caution.

Because Comodo Dome Shield is aimed at enterprises, it is a bit difficult to set up and manage.  You have to sign up and (if your network has a dynamic external IP address) run an agent on an always-on machine on your network.  The agent keeps Comodo informed of your network’s current external IP address so it can map requests to your filters instead of just using the defaults.  You create rules for security and content blocking, and associate them with a policy for your network.  You then point your router at Comodo’s DNS servers and your policy is enforced.  Comodo Dome Shield also provides comprehensive reporting, so you can see what all the devices on your network are accessing or being blocked from accessing; it can be an eye-opener.  As an enterprise product, Comodo Dome Shield has other capabilities that I haven’t explored, such as using agents on roaming devices to enforce domain access rules.
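If you are curious what that agent actually has to do, its core job is just change detection: periodically check the network’s external address and notify the service only when it changes.  Here is a rough sketch of that logic in Python; the sample addresses are made up, and the actual notification call is Comodo-specific (indicated only by a comment), so this is a sketch of the idea rather than the real agent:

```python
# Sketch of the change-detection logic at the heart of a dynamic-IP agent.
# In the real agent, each "update" would be a call to the service's API
# (not shown here); this version just records which samples would trigger one.

def detect_ip_changes(ip_samples):
    """Return the IPs that would trigger an update: the first sample seen,
    plus any sample that differs from the previous one."""
    updates = []
    last = None
    for ip in ip_samples:
        if ip != last:
            updates.append(ip)  # real agent: report the new external IP here
            last = ip
    return updates

# Example: the external address changes once partway through the sequence,
# so only two updates are sent, not four.
samples = ["203.0.113.5", "203.0.113.5", "198.51.100.9", "198.51.100.9"]
print(detect_ip_changes(samples))  # → ['203.0.113.5', '198.51.100.9']
```

The point of this design is to keep traffic to the service minimal: Comodo only needs to hear from you when your ISP hands you a new address, since that address is how your DNS queries get matched to your policy.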

Many of you are thinking: well, that’s a lot of work when I just want fire-and-forget protection against malicious domains.  It was the more powerful capabilities of Comodo Dome Shield that attracted me, but Comodo Secure DNS or Norton ConnectSafe are more appropriate for most people.  Certainly if I didn’t like playing around with security offerings as a hobby I’d just point my DNS at one of the secure DNS offerings and be done with it.

In most cases you should be using a secure DNS service to protect your home network.  While that is built into the new generation of security-centric networking devices, you can easily set up your router to use one.  And in this era of IoT devices, where you can’t run security software on the device itself, the extra layer of protection from a secure DNS service is one of the few things you can do to protect your home.

Posted in Computer and Internet, Security | Tagged , , , | Comments Off on Using DNS for Security – Comodo Dome Shield