Microsoft “can’t win for losing”

When it comes to the consumer, Microsoft’s history can best be described as “I got it. I got it. I got it. <THUMP> I ain’t got it.”  Today is the 4th anniversary of my Xbox: Fail blog post, and this week Microsoft put the final nail in the coffin of Kinect.  So it really is an appropriate point to talk about Microsoft and the consumer.  Microsoft is not a consumer-focused company, and never will be despite many attempts over the decades.  Recognition of this reality, and an end to tilting at windmills, is one of the things that Satya Nadella seems to have brought to the table.

First, let’s get something out of the way: we need to refine what we mean by the label “consumer”.  It isn’t simply the opposite of business/organizational users.  Microsoft has always done just fine in providing individuals with personal productivity and content creation tools.  The Windows-based PC remains at the center of any complex activity.  Sure, I book some flights on my iPhone or iPad.  But when I start putting together a complex multi-leg trip the PC becomes my main tool.  Office has done well with consumers, and continues to do so in spite of popular free tools from Google.  And over the last few years Microsoft has gained traction with the artistic/design crowd that had always gravitated towards the Mac.  So when we talk about the consumer we really are talking about experiences that sit left of center on the content consumption to content creation spectrum.  Microsoft will always be a strong player on the right-of-center, content creation side of that spectrum, be it for individuals, families, or organizations.  But, other than console gaming, it isn’t going to be a significant player in the left-of-center experiences.  And Microsoft fans are going to go crazy over that.

The end of life for Kinect is the perfect illustration of Microsoft’s inability to be a consumer player.  The Xbox One with (then mandatory) Kinect was introduced a year before the Amazon Fire TV and a year and a half before the Amazon Echo.  It was originally tasked with becoming the center of home entertainment, and offered a voice interface.  Go read my Xbox: Fail piece for how it wasn’t ready to live up to that design center.  It’s pretty typical Microsoft V1 stuff.  Unfortunately the Xbox One was also V1 from a console gaming perspective, so Microsoft focused on making it more competitive in that niche and abandoned pushing forward on the home entertainment side.  Imagine that: Microsoft had a beachhead of tens of millions of voice-enabled devices in place before Amazon even hinted at the Echo, and failed to capitalize on it.  You can repeat that story many times over the last 25 years.

It isn’t that the Xbox One was the perfect device for the coming voice assistant, or streaming TV, revolutions.  The need to be a great gaming console gave it much too high a price point for non-gamers.  But Microsoft could have continued to evolve the experience and produced lower-priced, non-gaming-focused hardware.  Contrast what Microsoft did with what Amazon did around the Echo.  When the Echo was introduced it was considered a curiosity, a niche voice-operated speaker for playing music.  When Amazon started to gain traction with the Echo and Alexa, it went all in, and as a result has a strong lead in today’s hottest segment of the consumer technology space.  It reminded me a lot of Microsoft’s pivot to the Internet back in 1995.  But in the Xbox One case, Microsoft had the vision (at least in general direction) and failed to capitalize on it.  Failed to even make a serious attempt.  Now, at best, it could fight it out for a distant 4th or 5th place in voice assistants and home entertainment.  This consumer stuff just isn’t in Microsoft’s DNA.

The death of the Groove Music Service is another example, and maybe a more telling one on why Microsoft hasn’t been able to crack the code on the consumer.  Groove is just the latest name for Zune’s music service.  When MP3 players became popular Microsoft jumped on the bandwagon in the way its DNA dictated: it relied on 3rd parties that it supplied with technology (e.g., DRM).  When that didn’t even prove to be a speedbump on the road to iPod dominance, Microsoft finally introduced the Zune as a first-party device.  To have as good an experience as an iPod, the Zune needed an iTunes equivalent, and what we now know as the Groove Music Service was born.  Despite the jokes its failure inspired, the Zune was a quite nice device.  But since it couldn’t play the music you’d acquired with iTunes, there really was no iPod to Zune migration path.  By the time the Zune came on the market the game was already over.  As the Zune died other consumer-focused device efforts came to the fore (Kin, Windows Phone 7, Xbox One) and the music service lived on.  But since the devices never gained traction, neither did the music service.  And for Microsoft the music service was never a player on its own; it was just a necessary evil to support its consumer device experience.  With that mindset, the failure to gain traction with consumer devices meant Groove was superfluous.  Sure, Groove could have owned the segments that Spotify and Pandora now dominate, but that was never what Microsoft was going for.  And now it is too late.

Being a content creator or distributor is not in Microsoft’s DNA.  It has an immune system that rejects it time and time again.  Microsoft made a big play on consumer titles in the early to mid 90s; remember Microsoft Dogs and Encarta?  Offerings like these are very manpower intensive, requiring a lot of content production, editing, and frequent updating, yet they sell for very little, are expensive to localize, and often don’t even make sense globally.  So Microsoft concluded they didn’t fit well with its business model and backed away from all but a few major titles such as Encarta.  While Encarta was great for its time, the Internet left it competing with Wikipedia.  That destroyed what little economic value Encarta had.  Other content-oriented efforts, such as Slate, were disposed of to save costs when the Internet Bubble burst.  The MSNBC joint venture was allowed to dissolve when its contract came up for renewal.  And so on.

I could even say that great end user experiences are not in Microsoft’s DNA, though that one is more debatable.  Microsoft is usually thought of as running a consistent second to Apple here.  So rather than saying great experiences aren’t in Microsoft’s DNA, I’d say that Microsoft user experiences are almost always compromised by more dominant aspects of its DNA.  And that keeps it from being a great consumer experience company.

What is Microsoft good at?  Creating platforms that others build on.  Doing work that is technically hard, and takes a lot of engineering effort, that it can sell over and over again.  High fixed cost, very low variable cost, very high volume, globally scalable has been its business model all along.  Consumer businesses usually have moderate to high variable costs, so there is problem number one.  Usually only the top two players in a segment can achieve very high volume, so unless Microsoft achieves leadership early in a segment it can never get high enough volume to have a successful business model.  A head-on charge against the established leaders rarely works, and when it does it is a financial bloodbath.  So you may not need to be first in a market, but you need to be in early enough for the main land grab (or wait for the next paradigm shift to try again).  And global scaling of consumer offerings is way more difficult than for platforms or business-focused offerings.

Microsoft seems to have resolved to focus on its DNA.  It will be supportive, even encouraging, of third parties who want to use its platforms to offer consumer services, but it will avoid going after the consumer directly.  So you get a Cortana-enabled smart speaker from Harman Kardon, a high-end Cortana-enabled thermostat from Johnson Controls, a set of smart fixtures from Kohler that use Amazon’s Alexa for voice control but Microsoft Azure for the rest of their backend, and an agreement with Amazon for Cortana/Alexa integration.

Will Microsoft introduce consumer devices or services in the future?  Possibly, but they will suffer the same fate as its earlier attempts.  And I won’t be throwing good money after bad (and I did throw a lot at every consumer thing Microsoft ever did).  I recognize that these attempts are at best trial balloons, and at worst ill-advised ventures by those intoxicated by the potential size of the market.  Microsoft is an arms supplier.  It should supply arms to companies going after the consumer, but avoid future attempts to fight consumer product wars itself.

Amazon moving off Oracle? #DBfreedom

A bunch of news stories, apparently coming off an article in The Information, are talking about Amazon and Salesforce attempting to move away from the use of Oracle.  I’m not going to comment specifically on Amazon, or Salesforce, and any attempt they may be making to move away from Oracle’s database.  But I will comment on that general topic, and a little on Amazon (Web Services) in databases.

tl;dr It might not be possible to completely migrate off of the Oracle database, but lots of companies are capping their long-term Oracle cost exposure.

There are a ton of efforts out there to make it easier for customers to move off of the Oracle database.  The entire PostgreSQL community has had making that possible as a key priority for many years.  There are PostgreSQL derivatives, like EnterpriseDB’s Postgres Advanced Server, that go much further than just providing an Oracle equivalent.  They target direct execution of ported applications by adding PL/SQL compatibility via their SPL language, support for popular Oracle pre-supplied packages, an OCI connector, and other compatibility features.  Microsoft started a major push on migrating Oracle applications to SQL Server back in the mid-2000s with the SQL Server Migration Assistant, and re-invigorated that effort last year.  IBM has a similar effort for DB2, which includes its own PL/SQL implementation.  And, of course, the most talked about effort of the last few years is the one by AWS.  The AWS Database Migration Service (DMS) and Schema Conversion Tool (SCT) have allowed many applications to be moved off of Oracle to other databases, including Aurora MySQL, Aurora PostgreSQL, and Redshift, which take advantage of the cloud to provide enterprise-level scalability and availability without the Oracle licensing tax.
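
To make the DMS piece a bit more concrete, here is a minimal sketch of kicking off a migration task with boto3.  It assumes you have already created source and target endpoints and a replication instance (the ARNs and names below are placeholders), and it ignores the schema conversion step that SCT handles separately:

```python
import json
import boto3

# Hypothetical ARNs; in practice these come from the endpoints and the
# replication instance you have already created in DMS.
SOURCE_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:oracle-src"
TARGET_ENDPOINT_ARN = "arn:aws:dms:us-east-1:123456789012:endpoint:aurora-pg-tgt"
REPLICATION_INSTANCE_ARN = "arn:aws:dms:us-east-1:123456789012:rep:migration-box"

dms = boto3.client("dms", region_name="us-east-1")

# Migrate everything in the HR schema; "full-load-and-cdc" does a bulk copy
# and then keeps replicating changes so you can cut over with minimal downtime.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-hr-schema",
            "object-locator": {"schema-name": "HR", "table-name": "%"},
            "rule-action": "include",
        }
    ]
}

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-pg-hr",
    SourceEndpointArn=SOURCE_ENDPOINT_ARN,
    TargetEndpointArn=TARGET_ENDPOINT_ARN,
    ReplicationInstanceArn=REPLICATION_INSTANCE_ARN,
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps(table_mappings),
)
```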

Note that Andy isn’t specifically saying there have been 50K migrations off of Oracle; that’s the total number across all sources and destinations.  But a bunch of them clearly have Oracle as the source, and something non-Oracle as the destination.

On the surface the move away from the Oracle database is purely a balance between the cost of switching technologies and the cost of sticking with Oracle.  Or, maybe in rare cases, the difficulty of achieving the right level of technological parity.  But that isn’t the real story of what it takes to move away from Oracle.

Sure, many apps can be manually moved over with a few hours or days of work.  Others can be moved pretty easily with the tooling provided by AWS or others, with days to weeks of work.  The occasional really complex app might take many person-months or person-years to move.  But if you have the source code, and you have (or can hire/contract) the expertise, you can move the applications.  And people do.  A CIO could look at spending, say, $5 million or $25 million or $100 million to port their bespoke apps and think they can’t afford it.  Or they could look at that amount and say “ah, but then I don’t have to write that big check to Oracle every year”.  So if you think long-term, and hate dealing with Oracle’s licensing practices (e.g., audits, reinterpreting terms when it suits them, inviting non-compliance and then using it to force cloud adoption, etc.), then the cost to move your bespoke applications is readily justified.  So what are the real barriers to moving off the Oracle database?
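
A back-of-the-envelope version of that CIO math, with entirely made-up numbers, shows why the long-term view changes the answer:

```python
# Toy payback calculation with made-up numbers.
port_cost = 25_000_000                # one-time cost to port the bespoke apps
annual_oracle_spend = 8_000_000       # hypothetical yearly Oracle license + support bill avoided
annual_replacement_cost = 1_500_000   # hypothetical yearly cost of the PostgreSQL/Aurora replacement

annual_savings = annual_oracle_spend - annual_replacement_cost
payback_years = port_cost / annual_savings

print(f"Annual savings: ${annual_savings:,.0f}")      # $6,500,000
print(f"Payback period: {payback_years:.1f} years")   # ~3.8 years; pure savings after that
```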

Barrier number one is 3rd party applications.  Sometimes these aren’t a barrier at all.  Using Tableau?  It works with multiple database engines, including Amazon Redshift, PostgreSQL, etc.  Using ArcGIS?  It just so happens that PostgreSQL with the PostGIS extension is one of the many engines it supports.  Using PeopleSoft?  Things just got a bit more difficult.  Because PeopleSoft supported other database systems when Oracle acquired it there are options, but they are all commercial engines (e.g., Informix, Sybase, and of course Microsoft SQL Server) and I don’t know how well Oracle is supporting them for new (re-)installations.  You can’t move to an open source, or open source compatible, engine.  Using Oracle E-Business Suite?  You’re screwed; you can’t use any database other than the Oracle database.  Given that Oracle has acquired so many applications over the years, there is a good chance your company is running on some Oracle-controlled application.  And Oracle is taking no steps to have its applications support any new databases, not even the Oracle-owned MySQL.

Oracle’s ownership of both the database and key applications has created a near lock-in to the Oracle database.  I say “near” because you can in theory move to a non-Oracle application, and may do so over time.  But when you’ve lived through stories of companies spending hundreds of millions of dollars to implement ERP and CRM solutions, the cost of swapping out E-Business Suite or Siebel is hard to even consider.  And without that, complete elimination of your Oracle database footprint is off the table.

Now on to the second issue, Oracle’s licensing practices.  I’m not an Oracle licensing expert, so I apologize in advance for the lack of detail and any misstatements.  But generally speaking, many (if not most) customers have licensed the Oracle database on terms that don’t really allow for a reduction in costs.  Let’s say you purchased licenses and support for 10,000 cores but are now only using 1,000 cores.  Oracle won’t allow you to purchase support for just those 1,000 cores; if you want support you have to keep purchasing it for the total number of core licenses you own.  And since Oracle only makes security patches available under a support contract, it is very hard to run Oracle without purchasing support.  If you have an “all you can eat” type of agreement, to get out of it you end up counting all the core licenses you are currently using.  You can then stop paying the annual “all you can eat” price, but you still have to pay for support for all the licenses you had when you terminated the “all you can eat” arrangement.  Even if you are now only using 1 core of Oracle.
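
Here’s a toy model of that dynamic.  The per-core price and the 22% support rate are illustrative placeholders, not actual Oracle terms:

```python
# Toy model of paying support on every core license you own, not what you use.
license_price_per_core = 10_000   # hypothetical one-time license price per core
annual_support_rate = 0.22        # hypothetical support rate as a fraction of license price

owned_cores = 10_000              # core licenses accumulated over the years
cores_in_use = 1_000              # what you actually still run on Oracle

annual_support_per_core = license_price_per_core * annual_support_rate
support_bill = owned_cores * annual_support_per_core
usage_based_bill = cores_in_use * annual_support_per_core

print(f"Annual support bill: ${support_bill:,.0f}")            # $22,000,000
print(f"If billed on actual usage: ${usage_based_bill:,.0f}")  # $2,200,000
```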

To top it off, you can see how these two interact.  Even if just one third-party application keeps you using the Oracle database, you will be paying them support for every Oracle license you ever owned. Completely getting off Oracle requires a real belief that the short to mid-term pain is worth the long-term gain.

So does this “get off Oracle” thing sound hopeless?  NO.  For any healthy company, the number of cores being used grows year after year.  It doesn’t matter if you have an “all you can eat” agreement; all you are doing is committing yourself to an infinite life of high support costs.  Moving the moveable existing apps, and implementing new apps on open source or open source-compatible engines, allows you to stop growing the number of Oracle cores you license.  You move existing applications to PostgreSQL (or something else) to free up Oracle core licenses for applications that can’t easily be moved.  You use PostgreSQL for new applications, so they never need an Oracle core license.  You can’t eliminate Oracle, but you can cap your future cost exposure.  And at some point you’ll find the Oracle core licenses represent a small enough part of your IT footprint that you’ll be able to make the final push to eliminate them.

Switching topics a little, one of the most annoying things about this is the claim in some of the articles that Amazon needs to build a new database.  Hello?  AWS has created DynamoDB, Redshift, Aurora MySQL, Aurora PostgreSQL, Neptune, and a host of other database technologies.  DynamoDB has roots in the NoSQL-defining Dynamo work, which predates any of this.  Amazon has a strong belief in NoSQL for certain kinds of systems, and that is reflected in the stats from the last Amazon Prime Day: DynamoDB handled 3.4 trillion requests, peaking at 12.9 million per second.  For applications that want relational, Aurora is a great target for OLTP, and Redshift (plus Redshift Spectrum, when you want to divorce compute from storage) for data warehousing.  You think the non-AWS parts of Amazon aren’t taking advantage of those technologies as well?  Plus Athena, ElastiCache, RDS in general, etc.?  Puhleeze.


Service Level Agreements (SLA)

I wanted to make some comments on Service Level Agreements (SLAs), so we interrupt our scheduled Part 2 on 16TB Cloud Databases.  A Service Level Agreement establishes an expectation between a service provider and a customer of the level of service to be provided, and often a contractual commitment as well.  There are three ways to establish an SLA.  First, you can just pull it out of your a**.  Basically the customer says “I want an availability SLA of 99.9999999%” and you say “Yes, Sir!”, even though that is impossible to deliver.  Maybe when it comes to contractual commitments you include so many exclusions that it becomes possible (e.g., “outages don’t count against availability calculations for SLA purposes” would be a good start).  Second, you can figure out what is theoretically possible based on your design.  But I’d prefer my SLAs be based on actual data, not just on what the math says should be possible.  So the third way is math plus data.  Even that turns out to be nuanced.  You can influence it both by the exclusions (e.g., customer-caused outages don’t count is a pretty obvious, and valid, one) and by what penalties you are willing to accept when you miss the SLA.
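
As an illustration of the “what the math says” approach, for a simple serial design the composite availability is just the product of the component availabilities.  The component numbers below are made up:

```python
# A request must traverse a load balancer, an app tier, and a database, so the
# composite availability is the product of the pieces (hypothetical numbers).
components = {
    "load balancer": 0.9999,
    "app tier": 0.9995,
    "database": 0.9995,
}

composite = 1.0
for name, availability in components.items():
    composite *= availability

minutes_per_year = 365 * 24 * 60
downtime_minutes = (1 - composite) * minutes_per_year

print(f"Composite availability: {composite:.4%}")           # about 99.89%
print(f"Expected downtime: {downtime_minutes:.0f} minutes/year")
```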

When you miss an SLA you are penalized in two ways.  The first is contractual: there may be financial penalties, such as a 10% reduction in your bill, for missing the SLA.  Any SLA will eventually be breached.  When you establish the SLA based on data and math, you know what the financial cost of those breaches will be, and you can pick the SLA based on the level of financial cost you are willing to accept.  In other words, SLA breaches just become a cost of doing business.  What’s the difference between an SLA calling for 99.9%, 99.95%, 99.99%, or 99.999% uptime?  Just an increase in your cost of goods sold.
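
A sketch of that “cost of doing business” calculation, with hypothetical revenue, credit, and breach-probability numbers (the breach probabilities are the kind of thing you would estimate from your own operational data):

```python
# Treating SLA breaches as a cost of goods sold: pick the SLA, estimate how
# often you'll miss it from your data, and price the credits in (toy numbers).
monthly_revenue = 1_000_000   # hypothetical service revenue per month
credit_rate = 0.10            # 10% bill credit when the SLA is missed
breach_probability = {        # estimated P(miss the SLA in a given month)
    "99.9%": 0.01,
    "99.95%": 0.03,
    "99.99%": 0.10,
    "99.999%": 0.40,
}

for sla, p_breach in breach_probability.items():
    expected_credits = monthly_revenue * credit_rate * p_breach * 12
    print(f"{sla}: ~${expected_credits:,.0f}/year in expected SLA credits")
```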

The second penalty is reputation risk.  When you breach an SLA it causes harm to your reputation.  If a customer runs for years before experiencing an SLA breach, that breach does little to damage your customer relationship, as long as you don’t breach the SLA again for a long time.  If you breach SLAs frequently, customers learn they can’t trust your service.  They may even seek alternatives.

Customers don’t really care about the financial penalties of an SLA breach.  Those are trivial compared to the cost of the breach to their business.  Meeting the SLA is what they really want; they see the financial penalty as an incentive for you to meet your SLA.  The service provider’s accountants and lawyers will certainly want to make sure the business plan accommodates SLA breaches, but as long as it does they will accept them.

A service provider willing to absorb a higher financial penalty from SLA breaches, and with a low concern for reputational risk, can set an SLA that it can’t consistently meet.  A service provider with great concern for reputational risk will set an SLA it can consistently meet, even if that SLA is lower than its competitors’.  The former favors the marketing advantage of a high SLA; the latter favors actual customer experience.

Which would you rather have, a service that claims 99.999% availability but only delivers it 99.9% of the time, or one that claims 99.99% availability and delivers it 99.99% of the time? The 5 9s SLA sounds great, but it has 10x the breaches of the 4 9s SLA!  Do you want an SLA that your service provider almost always meets or one that sounds, and is, too good to be true?
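
For reference, here are the downtime budgets behind the various sets of nines.  This is simple arithmetic, nothing provider-specific:

```python
# Downtime budgets per year for each availability level.
minutes_per_year = 365 * 24 * 60

for availability in (0.999, 0.9995, 0.9999, 0.99999):
    budget = (1 - availability) * minutes_per_year
    print(f"{availability:.3%}: {budget:,.1f} minutes of allowed downtime/year")
```

Five 9s leaves barely five minutes of room per year; a provider that can’t consistently deliver that will be breaching its SLA all the time.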

Personally I’ll take the consistent SLA, for two reasons.  First, because I can and will design around an SLA I can trust.  But one that is fictional will cause me to make bad decisions.  Second, because the service provider giving me an SLA that will reflect my actual experience is a service provider I can trust.

Bottom line: take SLAs with a large grain of salt.  Particularly when you can’t tell how often the SLA is breached.  More so if a service provider offers an SLA before having gained a significant amount of operational experience.  And if you can get a service provider to tell you how often they breach their SLA, more power to you.

16TB Cloud Databases (Part 1)

I could claim the purpose of this blog post is to talk about Amazon RDS increasing the storage per database to 16TB, and to some extent it is.  It’s also an opportunity to talk about the challenges of a hyperscale environment.  Not just the world of AWS, but for Microsoft Azure, Google Cloud, and others as well.  I’ll start with the news, and since there is so much ground to cover I’ll break this into multiple parts.

As part of the (pre-) AWS re:Invent 2017 announcements, Amazon RDS launched support that increased the maximum database instance storage from 6TB to 16TB for PostgreSQL, MySQL, MariaDB, and Oracle.  RDS for Microsoft SQL Server had launched 16TB database instances back in August, but with the usual RDS SQL Server restriction that storage wasn’t scalable.  That is, you had to pick 16TB at instance create time.  You couldn’t take a 4TB database instance and scale its storage up to 16TB.  Instead you would need to dump and load, or use the Native Backup/Restore functionality, to move databases to a new instance.  If the overall storage increase for RDS was lost in the noise of all the re:Invent announcements, the fact that you could now scale RDS SQL Server database instance storage was truly buried.  The increase to 16TB benefits a small number of databases for a (relatively speaking) small number of customers, while the scalability of SQL Server database instance storage benefits nearly all current and future RDS SQL Server customers.
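
For a sense of what that scalability means in practice, growing an existing instance’s storage is just a modify call.  A minimal boto3 sketch; the instance identifier is a placeholder, and the change still takes time to apply behind the scenes:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Scale an existing instance's storage in place.  Before this capability, an
# RDS SQL Server instance's storage was fixed at create time; growing meant
# standing up a new instance and moving the databases over.
rds.modify_db_instance(
    DBInstanceIdentifier="my-sqlserver-instance",  # placeholder identifier
    AllocatedStorage=16384,                        # in GiB, i.e. 16TB
    ApplyImmediately=True,
)
```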

While RDS instances have been storage limited, Amazon Aurora MySQL has offered 64TB for years (and Aurora PostgreSQL also launched with 64TB support).  That is because Aurora was all about re-inventing database storage for the cloud, so it addressed the problems I’m going to talk about in its base architecture.  Non-Aurora RDS databases, Google’s Cloud SQL, Azure Database for MySQL (and PostgreSQL), and even Azure SQL Database (which, despite multiple name changes over the years, traces its lineage to the CloudDB effort that originated over a decade ago in the SQL Server group) have all lived with the decades-old file- and volume-oriented storage architectures of on-premises databases.

Ignoring Aurora, cloud relational database storage sizes have always been significantly limited compared to their on-premises instantiation.  I’ll dive into more detail on that in part 2, but let’s come up to speed on some history first.

Both Amazon RDS and Microsoft’s Azure SQL Database (then called SQL Azure) publicly debuted in 2009, but they had considerably different origins.  Amazon RDS started life as a project by Amazon’s internal DBA/DBE community to capture their learnings and create an internal service that made it easy for Amazon teams to stand up and run highly available databases.  The effort was moved to the fledgling AWS organization, and re-targeted at helping external customers benefit from Amazon’s learnings on running large, highly available databases.  Since MySQL had become the most popular database engine (by unit volume), it was chosen to be the first engine supported by the new Amazon Relational Database Service.  RDS initially had a database instance storage size limit of 1TB.  Now I’m not particularly familiar with MySQL usage in 2009, but based on MySQL’s history and capabilities in version 5.1 (the first supported by RDS), I imagine that 1TB covered 99.99% of MySQL usage.  RDS didn’t try to change the application model; indeed, the idea was that the application had no idea it was running against a managed database instance in the cloud.  It targeted lowering costs while increasing the robustness (reliability of backups, reliability of patching, democratization of high availability, etc.) of databases by automating what AWS likes to call the “undifferentiated heavy lifting” aspects of the DBA’s job.

As I mentioned, Azure SQL started life as a project called CloudDB (or Cloud DB).  The SQL Server team, or more precisely remnants of the previous WinFS team, wanted to understand how to operate a database in the cloud.  Keep in mind that Microsoft, outside of MSN, had almost no experience in operations.  They brought to the table the learnings and innovations from SQL Server and WinFS, and decided to take a forward-looking approach.  Dave Campbell and I had spent a lot of effort since the late 90s talking to customers about their web-scale application architectures, and were strong believers that application systems were being partitioned into separate services/microservices with separate databases, and that those databases were then being sharded for additional scalability.  So while multi-TB (or, in the Big Data era, PB) databases would be common in DW/Analytics, most OLTP databases would be measured in GB.  Dave took that belief into the CloudDB effort.  On the technology front, WinFS had shown it was much easier to build on top of SQL Server than to make deep internal changes.  Object-relational mapping (ORM) layers were becoming popular at the time, and Microsoft had done the Entity Framework as an ORM for SQL Server.  Another “research” project in the SQL Server team had been exploring how to charge by units of work rather than traditional licensing.  Putting this all together, the CloudDB effort didn’t go down the path of creating an environment for running existing SQL Server databases in the cloud.  It went down the path of creating a cloud-native database offering for a new generation of database applications.  Unfortunately customers weren’t ready for that, and proved resistant to some of the design decisions (e.g., Entity Framework was initially the only API offered) that Microsoft made.

That background is a little off topic, but hopefully useful.  The piece that is right on topic is Azure SQL storage.  With a philosophy that apps would use lots of modest-sized databases (or shards) and understand sharding, charging by the unit of work (which enabled multi-tenancy as a way to reduce costs), routing built above a largely unchanged SQL Server engine, and no support for generic SQL (with its potential for cross-shard requests), Azure SQL launched with a maximum database size of 50GB.  This limit would prove a substantial pain point for customers, and a few years later it was increased to 150GB.  When I asked friends why the limit was still only 150GB they responded with “Backup.  It’s a backup problem.”  And therein lies the topic that will drive the discussion in Part 2.
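
For readers who haven’t lived with that model, here is a toy sketch of the sharded design the CloudDB philosophy assumed: the application routes each key to one of many modest-sized databases, and anything that spans shards is the application’s problem.  The hostnames and shard count are, of course, hypothetical:

```python
import hashlib

# Hypothetical shard map: many modest-sized databases instead of one big one.
SHARD_CONNECTION_STRINGS = [
    "host=shard0.example.com dbname=orders",
    "host=shard1.example.com dbname=orders",
    "host=shard2.example.com dbname=orders",
    "host=shard3.example.com dbname=orders",
]

def shard_for(customer_id: str) -> str:
    """Route a customer's data to one shard by hashing the key."""
    digest = hashlib.sha1(customer_id.encode("utf-8")).hexdigest()
    index = int(digest, 16) % len(SHARD_CONNECTION_STRINGS)
    return SHARD_CONNECTION_STRINGS[index]

# All of this customer's rows live in one modest (50GB-class) database; queries
# that span customers have to fan out across shards in the application tier.
print(shard_for("customer-42"))
```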

I’ll close out by saying that relatively low cloud storage sizes are not unique to 2009, or to Amazon RDS and Azure SQL.  Google Cloud SQL Generation 1 (aka their original MySQL offering) was limited to 500GB databases.  The second generation, released this year for MySQL and in preview for PostgreSQL, allows 10TB (depending on machine type).  Azure SQL Database has struggled to increase storage size, but now maxes out at 4TB (depending on tier).  Microsoft Azure Database for MySQL and PostgreSQL is limited to 1TB in preview, though Microsoft says it will support more at GA.  RDS has increased its storage size in increments: in 2013 it was increased to 3TB, and in 2015 to 6TB.  It is now 16TB or 64TB depending on engine.  Why?  Part 2 is going to be fun.

Amazon Seattle Hiring Slowing?

An article in today’s Seattle Times discusses how Amazon’s open positions in Seattle are down by half from last summer and at a recent low. I don’t know what is going on, but I will speculate on what could be one of the major factors. Let me start by covering a similar situation at Microsoft about 20 years ago. Microsoft had been in its hyper-growth phase, and teams would go in with headcount requests that were outrageous. Paul Maritz would look at a team’s hiring history and point out that to meet their new request they’d need to hire x people per month, but they’d never managed to hire more than x/2 per month. So he’d give them headcount that equated to x/2+<a little>, and then he’d maintain a buffer in case teams exceeded their headcount allocation. Most teams would fail to meet their headcount goals, a few would exceed them, but Microsoft (at least Paul’s part) would always end up hiring less (usually way less) than the total headcount it had budgeted for. It worked for years, until one year came along where most teams were at or near their hiring goals and a couple of teams hired way over their allocated headcount. Microsoft had over-hired in total, and some teams were ahead of where they might have been even with the following year’s budget allocation. From then on there was pressure on teams to stay within their allocated headcount, both for the year overall and for the ramp-up that was built into the quarterly spending plans.

Could something similar be happening at Amazon? Could this be as simple as Amazon telling teams “no, when we said you can hire X in 2017 we meant X”, and enforcing that by not letting them post 2018 positions until 2018 actually starts? Amazon is always looking for mechanisms to use, rather than counting on good intentions, and having recruiting refuse to open positions that exceed a team’s current fiscal year’s headcount allocation would be a very solid mechanism for enforcing hiring limits.

It will be interesting to see if job postings start to grow again when the new fiscal year starts. That would be the most direct confirmation that this is nothing more than Amazon applying better hiring discipline on teams.


Sonos One

I’ve been a Sonos fan for years, and recently talked about my first experience using Alexa to control older Sonos equipment.  I have one room, my exercise room, that isn’t covered by the Niles multi-zone system.  I was using an Echo in there, but with the launch of the Sonos One I decided I had an option for better sound.  So I swapped out a first generation Echo for the Sonos One.

Starting with the good news: the Sonos One sounds great, as expected.  While I find the Echo good for casual music play, the Sonos One is better when you really want to fill the room.  If you want to go another step, Sonos supports pairing two like devices for stereo (separate left and right channels) playback.  If you have a Play:1, there are reports that it is possible to pair the One and Play:1 for stereo playback.  I have a Play:1 at a second home that isn’t being used, so it might just be coming to the ranch.  Also, I wouldn’t be surprised to see a “Sonos Three” and/or “Sonos Five” next year, but that is overkill for my current use case.

As an Alexa device the Sonos One is a little weak, at least currently.  It supports most, but not all, of the functionality of the Echo.  Since the device is so music-centric that may not be a problem, but caveat emptor.  For example, the last time I tried Flash Briefing (News) it wasn’t supported, though Sonos said it was coming soon.  Getting the news during a morning workout is something I want.  Alexa Calling and Messaging isn’t supported, and that may not show up for a long time.  If you want my personal speculation on that one, Amazon may be reluctant to share your contacts with a third-party device.  So a design that worked on first-party devices like the Echo wouldn’t easily adapt to those using Alexa Voice Services (AVS).  Of course, in time Amazon could find a solution.  Sonos emphasizes that the Sonos One will continue to be updated in the future, going so far as to say “Both Sonos and Alexa keep adding new features, sound enhancements, services and skills to make sure both your music and voice control options keep getting better. For free. For life.”  For one of the most obvious examples, Spotify wasn’t available at launch but support was added just weeks later.

My big beef with the Sonos One is that the far-field microphone array doesn’t seem to be working very well.  This confuses me, because when I tested it on arrival it seemed just fine (though not up to the Echo).  That is, I could give a command while music was playing and it would hear and respond appropriately.  This morning I was listening to music and the Sonos One was almost uninterruptible.  I finally tried talking so loudly that the Echo Show in my office started to respond, while the Sonos One just a few feet away continued to ignore me.  After my workout I applied the latest update from Sonos, which claimed to fix a bug introduced in the previous update.  The next time I work out I’ll see if that made the One’s microphone array more responsive, and update this posting.

Update (12/13): I’ve had more success with the Sonos One’s microphones on subsequent occasions, so either my Sonos One needed a reboot or it needed the software update I mentioned.  They still aren’t as responsive when playing music as the Echo’s, but they work well enough that I had no real problems switching what we were listening to across an entire afternoon of cleaning our basement.

Bottom line:  If you are buying primarily for music playback then the Sonos One is a good alternative to the Echo.  But if music is more of a secondary use, then you are probably better off with one of the Echo devices.

For another take on the Sonos One, I found the Tom’s Guide review useful.

 


Good browsing habits gone bad

We always think that the best protection against web-distributed malware is to exercise caution while browsing.  But what if you aren’t even browsing in the classic sense, and an application renders a malware-infested page?  I found out this morning.

I grabbed my first cup of coffee this morning and launched the Windows 10 MSN News app.  I’d been reading stories for about 30 minutes when a story in my “Microsoft” search tab caught my eye:  “Microsoft Issues Black Friday Malware Warning”.  It showed as being from the International Business Times, not one of the obscure sites that MSN News sometimes picks up.  I clicked on the tile and started reading.  Suddenly my Surface Book 2 started talking.  The coffee wasn’t yet working so I couldn’t quite make out what was being said, but I thought “%^*%” auto-play video, so I clicked the back arrow to get rid of the page.  The woman with the English accent didn’t stop talking.  I killed MSN News, still she droned on.  I clicked on Edge and there it was, the MSN News article had somehow launched a browser tab with some kind of phishing/ransomware/malware site.

What the woman was saying was something about my computer having been found to contain “pornographic malware” and that I had to contact them.  I saw that the web page had a phone number on it, but darn, I was too busy trying to kill this thing to write it down.  On top was a modal dialog box [screenshot: 2017-11-24].  You’ll notice there is no checkbox for “prevent web page from launching dialog boxes”, or whatever Edge says.  I killed the dialog box and saw that underneath was another dialog box with that checkbox.  But before I could check it the above dialog box was back.  At one point I did check it in time, only to have the web page try to go to full screen mode.  Fortunately Edge let me block that.  So this second dialog was apparently a fake as well.

Unable to do anything to kill this from within Edge, I launched Task Manager.  I really wanted to keep my other tabs, so I tried killing just the process for the malicious one.  It didn’t work; it just kept re-launching.  I then killed the top-level process, re-launched Edge, and killed the malicious tab without opening it.  Nope, that wasn’t enough; the malicious page came back to life.  I went through the whole thing again and this time clicked on the tab to start fresh.  Then I went into settings and cleared everything.  That finally seemed to stop it.

Next came a scan, then an offline scan, with Defender.  I followed that up with a Malwarebytes scan.  Nothing.  It looks like Edge managed to keep this beast from breaking through and making system changes, but I’m not confident about that yet.  I’m going to take a deeper look before declaring victory.

Maybe the worst part of this is I have no way to report it to Microsoft, or anyone else.  I couldn’t copy the offending URL from the address bar because of the modal dialog.  And I discovered that when you go into Edge’s browser history you can either re-launch the page or delete the history item, but you can’t Copy the link.  I spent some time looking around to see if Edge stored history in human readable format, but eventually gave up.  I don’t see a way to report the bad story in MSN News, but now I’ll go try to find it elsewhere.

Bottom line: Don’t think that good browsing habits will save you.  I’ve been using the MSN News app since it was first released with Windows 8, with this being the first malicious story I’ve found.  And it was an infected web page on a mainstream site.

Update (11AM Eastern): I scanned the IBT web page for this story using several tools, such as VirusTotal, and came up blank on any malware.  So I viewed the story directly, and nothing bad happened.  So while the problem occurred while I was viewing the IBT story in MSN News, it isn’t clear what really caused the malicious page to launch.  I also went and checked the family member’s WiFi router I’m on and discovered it wasn’t up to my standards for security settings.  I hardened that up.
