Amazon HQ2 – The Uber Test

Let me start off by saying I have no non-public information about Amazon’s search for a second headquarters, aka HQ2.  And I have no idea how Amazon’s senior leaders will ‘score’ proposals and make a decision.  But I have a couple of observations I’d like to make based on how Amazonians think.

Yesterday's big news on the HQ2 front was Newark offering $7B in incentives if Amazon chose it for HQ2.  The $7B was composed of $5B over 10 years from the State of New Jersey and $2B over 20 years from the city.  Nice.  But cities that think Amazon will make its decision based on short-term incentives (and 10 years is short term) are sadly mistaken.  Incentives are going to be a tie-breaker, not the primary driver of the decision.  Amazon thinks long-term, so senior leadership is going to be thinking about picking a location that they believe will be the right one for 2030, 2040, and 2050.  Incentives that mostly run out in the 2020s won't mean much if accepting them means having 50,000 people in the wrong place in 2035.  So they will narrow things down based on other criteria, get to a few finalists, then start weighing the value of the incentives.  At least IMHO.  I'll get back to Newark later.

On Facebook a friend, who grew up in Memphis, suggested that city for Amazon HQ2.  This was in response to the New York Times' analysis saying HQ2 would go to Denver.  Memphis isn't an obvious choice, and I don't know how it lines up with the official criteria.  So how can one quickly weigh whether Memphis, or any other location, would be a good choice for Amazon?  Amazon is a company where it is always Day 1.  Among other things, that means it tends to pursue disruptive ideas, such as using drones for deliveries.  Amazon would want HQ2 to be in a location that is supportive of a Day 1 company, if not outright still in Day 1 itself.  What I needed was a simple test for Day 1 type disruptions, and the last decade presented us with a perfect one: ride sharing, and more specifically Uber.  Another interesting test would be Airbnb, but I haven't pursued that one.

Uber has been disruptive in applying technology to an urban transportation system (taxis) that hasn't changed significantly in almost a century, and has deeply entrenched interests.  Moreover, that system had become highly regulated, with substantial public bureaucracies and government revenue streams linked to it.  So wherever Uber (or Lyft or…) went it was going against "the system" and friction was to be expected.  What you could measure, very quickly thanks to Google and Bing, was just how much friction (or support) Uber ran into with local government.

So I did a search on Memphis and Uber and discovered that in 2013 the city of Memphis had sent its police force to arrest Uber drivers.  That seemed like a pretty extreme case of being unfriendly to Day 1 type disruptions, and I played that back to my friend.  He pointed out that was 2013, so I did a search on Denver and Uber and discovered that in 2013 the Colorado Legislature became the first in the country to explicitly legalize ride sharing.  What Colorado Governor John Hickenlooper said at the time was "Colorado is once again in the vanguard in promoting innovation and competition while protecting consumers and public safety."  And when there was some friction in the city of Denver after that, the Police Chief acted quickly to resolve it.  Which makes more sense as the home for a Day 1 company's HQ: a place that sent the police to arrest Uber drivers, or one that acted quickly to accept and encourage the disruption?

So let's get back to Newark.  There are many reasons I could think of for picking Newark as HQ2.  In particular, it offers close proximity to New York City and all its benefits, with much better housing costs.  Making the reverse commute from Manhattan to nearby New Jersey cities has even become reasonably common in the last 20 years.  The incentives being offered are nice, but again they are a short-term benefit in a long-term play.  So I applied the Uber test to Newark, and it failed miserably.  Newark was still trying to force Uber into its taxi regulatory scheme as recently as 2016, banning it from Newark Airport and the train station in February and planning to enact further regulation in April.  Uber was planning to abandon the city when a last-minute deal was reached.  Compare what Newark Mayor Ras Baraka said about Uber, "Just because they have a great idea, doesn't mean they have an exemption from rules and regulations that have been in existence for decades," with the earlier quote from Colorado Governor Hickenlooper.  I was particularly struck by the "rules and regulations that have been in existence for decades" part.  That doesn't read like a Day 1 supportive location to me.  It is quite possible that those words will come back to haunt Mayor Baraka when Amazon is evaluating the HQ2 proposals.

The Uber test isn't perfect, for a couple of reasons.  One is that pretty much every locality that regulates taxis and limousines will try to put some regulatory regime around ride sharing.  The other is that Uber itself has valued confrontation with local governments and agencies over reaching an accommodation, and that in some cases has raised the heat significantly.  For example, I know of airports where Lyft was able to pick up months before Uber, because Lyft negotiated a deal while Uber was still trying to fight the local airport authority.  In most cases Uber fights the regulatory fight, and gets painted as the bad guy, and then Lyft and other competitors benefit.  So the Uber test isn't black and white; it is varying shades of grey.  But compare the light grey of Denver to the very dark grey of Newark and it, at the very least, gives an indication of which city would be more accommodating to a company where it is always Day 1.


Sonos and Alexa, now for Niles…

It seems like I’m on a continuous journey of how to play music at home.  It all started when we built our ranch back in 2003-2004 and I had the bright idea of doing a whole-house A/V system.  Our timing couldn’t have been worse.  Components were not network aware.  IR, Coax, and RS-232 were the technologies of the day.  There were no smartphones or wireless controllers, let alone something like an Amazon Echo, to control the system.  It was so primitive!  And expensive.  Better options started to appear almost before we moved in, but it was too late.

Over the years I've ripped most of the old system and wiring out.  Gone are the Escient Fireball music server, the Sony CD jukebox, and the video distribution system that let you control the Escient remotely (basically, IR repeaters would send the commands to the Escient, but you needed to display its UI on a TV).  What remains is the Niles built-in speakers, wall panels, and a Niles ZR-4630 Multi-Zone Receiver.  Years ago I hooked a Sonos ZonePlayer (now called the Connect) to the ZR-4630, so we could play all that Sonos goodness throughout the house.  Of course, controlling this is still a bit of a pain.  You have to use an old Niles panel to select the Sonos as the source for that zone, then control the Sonos with your smartphone.  And you may have to play with the Niles volume controls as well as those on the Sonos.  So not quite that simple, but better than the previous setup by a mile.

Today I linked my Sonos account with Alexa.  The Alexa app discovered the Connect and I said "Alexa, play the Billy Joel channel from Pandora on the Media Room" and lo and behold…nothing happened.  Well, it did happen, but I had to jump up and press the button (still labeled CD for the old Fireball) to route the Sonos to my office.  And then beautiful music was coming through the ceiling speakers.

Like others, I found the announcement of Alexa support for Sonos very confusing.  The focus of the announcement was the new Sonos One, which is basically a Sonos Play:1 enhanced with far-field microphones and Alexa support.  It looks like a really nice device, and if I had a room calling for a really music-focused "Echo" I would go for the Sonos One instead.  The confusion was that the announcement made it sound like you needed a Sonos One to use Alexa.  Either that Alexa could only control the Sonos One, or that one Sonos device in the house had to be a Sonos One to control the others.  Fortunately that isn't true; you can control existing Sonos devices on your network with Alexa today.

Despite our whole-house system we have been using Echos as standalone music devices at the ranch.  My wife grew tired of the complexity of using the Niles panel (which in the kitchen/family room has some weird legacy dynamics) and the Sonos app.  She liked the simplicity of an Echo and put one in the kitchen.  Now she can just walk in and say "Alexa, play The Beatles" and she has music.  There is also a weird legacy dynamic in the exercise area, because we'd run out of zones on the ZR-4630.  So there is a separate legacy amplifier that itself had two zones, and well….  I will spare you the details, but it is very hard to get music into the exercise area.  I finally just put an Echo in there too, and now I can tell Alexa what to play from the elliptical.  As well as do all the other great things Alexa supports.

If only I could control the ZR-4630 with Alexa!  Ok, that's not going to happen given it is a 14-year-old device.  So I checked the Niles website and couldn't find any indication they have an Alexa-compatible replacement.  The only Alexa-compatible multi-zone receiver I know of is the Denon HEOS Drive, and it handles only 4 zones.  I need at least 6, preferably 7.  Given the rapid adoption of Alexa I probably won't have to wait too long for additional multi-zone options.  Either that or I finish ripping out the whole-house system and go for an entirely wireless (Sonos or Denon, for the moment) system.

Talk about first world problems.


Finally a Brand New WINDOWS PHONE!

Ok, I lied.  There is no Windows Phone; this is about the Apple iPhone X.  It isn't even about the iPhone X, it is about clairvoyance.  One of the signature features of the iPhone X is Windows Hello.  Oh, sorry, I mean "Face ID".  And Qi-based wireless charging.  I think I still have some Qi wireless charging plates lying around from a 4-year-old Lumia 1020 Windows Phone.  Ok, so the iPhone X is full of technology pioneered by Microsoft and its ecosystem years earlier.  The only reason to be snarky is the long tradition of Apple and its fanboys claiming Apple is innovative and Microsoft is not.  I should be excited that these technologies are now available in the phone ecosystem I use.  Very excited.  So why do I feel let down?

I think about the iPhone X and picture how it will change my life 6, 12, or 24 months from now, and draw a blank.  The first 10 years of the iPhone marked a profound change in how I live my life.  In how nearly all of us live our lives.  Even those without smartphones have experienced the change.  My mother experienced her first Uber ride a few months ago.  She didn't have a smartphone, but my cousin did.  And she is only marginally aware of how smartphones have contributed to the restaurants we've chosen, her healthcare, her travel, or dozens of other things we've done on her behalf.

The technological advancements in smartphones over the last decade are breathtaking, even when we take them completely for granted.  Take a current dilemma for the military: the potential that they will be denied access to GPS signals during a conflict.  For 30+ years militaries, particularly those of the U.S.A., have used GPS at the core of navigation.  It allows them to feed an accurate current position into Inertial Navigation Systems.  But GPS signal access can be denied, or potentially spoofed, and DARPA (the guys who brought you everything from the Internet to Stealth) has a project to provide accurate positioning information in GPS-denied environments.  One element of that program is to triangulate on existing radio signals.  Well, how does "GPS" work in today's smartphones?

To reduce battery drain, smartphones run with their satellite GPS receivers turned off.  However, they are always looking for, and connected to, a nearby cell tower with the strongest signal.  So they can see one or more cell towers, and they know where those towers are located.  When an application asks for location, the smartphone very quickly gives it a location that is triangulated from the cell towers it is already looking at.  That isn't as accurate as GPS, but it is good enough for many uses.  Then it looks at all the available WiFi signals.  It can access a database of known WiFi hotspot locations and triangulate on those.  Finally, if the app has asked for maximum location accuracy, the phone fires up its GPS receiver and tries for a satellite-based location (GPS signals don't penetrate buildings, or the "canyons" created by skyscrapers, well).  It fuses all this information to determine a very accurate location for your phone.

You can actually see this in action in many location-based applications.  You fire up Uber and watch as the pin moves from a location tens or hundreds of feet from where you are standing to almost exactly where you are.  Map apps may also show a blue circle around the location, indicating accuracy.  The circle shrinks as a more accurate location is determined.  Many applications go one step further by having a database of known points of interest, such as hotels, restaurants, popular office buildings, etc.  So when you ask for an Uber and the cell tower triangulation says you are 50 feet from an Italian restaurant, it assumes you are at the Italian restaurant.  I can request an Uber from inside my condo.  Location services always starts with a location around the corner from my building entrance, but the Uber app is smart enough to know I'm likely requesting a car from the Home address I've saved.  As a result, if the GPS satellites were all to suddenly stop working, most location-based apps would continue to work much as they do today.
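
To make that tiered fallback concrete, here is a minimal sketch of the logic in Python.  Everything in it is a hypothetical stand-in, the function names, coordinates, and accuracy numbers alike; real mobile OS location APIs are far richer, but the cheap-first, expensive-last ordering is the point.

```python
# Minimal sketch of tiered smartphone location lookup. All names and
# numbers are hypothetical stand-ins, not any real mobile OS API.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Fix:
    lat: float
    lon: float
    accuracy_m: float  # radius of the "blue circle" shown in map apps

def cell_tower_fix() -> Fix:
    # Cheapest: triangulate from the towers the radio is already watching.
    return Fix(47.6105, -122.2015, accuracy_m=300.0)

def wifi_fix() -> Optional[Fix]:
    # Better: match visible WiFi hotspots against a known-location database.
    return Fix(47.6101, -122.2011, accuracy_m=40.0)

def gps_fix() -> Optional[Fix]:
    # Best, but slowest and most power-hungry; may fail indoors or in
    # skyscraper "canyons", in which case this would return None.
    return Fix(47.61005, -122.20108, accuracy_m=5.0)

def locate(want_max_accuracy: bool) -> Fix:
    fix = cell_tower_fix()              # instant, coarse answer first
    wifi = wifi_fix()
    if wifi and wifi.accuracy_m < fix.accuracy_m:
        fix = wifi                      # the pin jumps closer
    if want_max_accuracy:
        gps = gps_fix()                 # fire up the GPS receiver last
        if gps and gps.accuracy_m < fix.accuracy_m:
            fix = gps
    return fix

print(locate(want_max_accuracy=True))
```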

The maturity of location services in smartphones, and of course the apps built on them, has changed our lives.  Does the iPhone X offer anything with similar potential?  I don’t think so.

What about Augmented Reality?  That isn't something unique to the iPhone X, but it does have long-term potential.  I may not care much about it, except as a curiosity right now, as I'm not a gamer.  But denying AR's potential would be like claiming a decade ago that GPS was only useful for back country hikers.  Learning, before we bought it, that the sleeper sofa couldn't make it into a guest room no matter how the delivery guys twisted and turned it would have been a game changer.

So why would I part with $1000 for an iPhone X?  There are many times I'd like a screen the size of my iPhone 7 Plus in a smaller package.  But I'm having trouble convincing myself that is worth $1000 when the 7 Plus is so new.  I would normally wait for the Xs to make a move.  And there is a potential reason not to buy an X: the lack of a fingerprint reader.

The only reason I really care about a fingerprint reader is that Windows Hello, I mean Face ID, seems really inconvenient for Apple Pay.  What's the sequence?  I do the usual dance trying to find the right spot on the payment terminal for Apple Pay to launch, then I have to hold the phone up so it can see my face, then I have to do the dance again to find the right position to register the payment?  I've gotten good at the current sequence, where I find the right position while my thumb hovers over the home button, then just give it a touch when Apple Pay launches.  Forcing me to use Face ID (or a PIN) instead probably means it's more convenient to go back to pulling out a credit card.  I could wait and see if the Xs introduces a screen-based fingerprint reader, which was something Apple reportedly dropped from the X because they were having trouble getting it to work in time for launch.

So I'm not too excited by the iPhone X, and emotionally I lean towards waiting for an Xs.  But that isn't the end of the story, since my interest may grow over time.  While I have often pre-ordered devices, that hasn't been the case with iPhones; my excitement builds over time and I tend to buy them a few months after introduction.  So if February comes around and I'm carrying an iPhone X, don't be too surprised.

Meanwhile, I was talking to a friend about what would really get me excited about mobile phones again, and I gave him a one-word answer: clairvoyance.  He thought I was joking.  Fortunately we both believe in Clarke's Laws, so be prepared for magic, I mean sufficiently advanced science.  Why do I have to use a fingerprint or Face ID at all when making a payment?  Or to access the phone at all?  Clairvoyance or bust.

Behavior approaching clairvoyance is already something we are familiar with.  Android and iOS will already figure out your home and work addresses based on behavior.  Waze knows I'm going to or from work and home based on the time of day and the location where I start a journey.  That used to happen only with my pre-designated Home and Work.  But lately I've noticed that it handles going from my Colorado home to my Colorado work location, even though what is programmed in are my Seattle home and work locations.  Figuring things out from information we provide (e.g., a calendar entry with a meeting location in it) is just good programming.  Deriving facts and projecting behaviors seemingly out of thin air?  Clairvoyance.

Of course clairvoyance is a marriage of sensor data, historical behaviors, and cloud-based AI models.  Mobile phones play a very small part, providing a portion (large today, smaller tomorrow) of the sensory input and serving as a human-network communications interface.  So it is entirely possible to deliver clairvoyance without requiring a new mobile phone.  But, like most things, having the right design center yields a superior experience.  A mobile phone with delivery of clairvoyant experiences as the design center?  That I could get excited about.  It ain’t the iPhone X.


CUJO Firewall: Approach with Caution

In my previous post I did an introduction to whole-home Internet security.  For the last few months I've been trying to get one of the early devices in this category, the CUJO, working.  I have now tried CUJO in two totally different networking setups and failed to get it working properly in either: a total failure in one, and a partial failure in the other.

What sets CUJO apart from other devices currently hitting the market is that it is a separate security firewall, not a new router or router feature.  That suggests you can upgrade, rather than replace, your existing network.  Given that some of us have unusual networking needs, the CUJO approach would seem to offer the flexibility we need.  It also would seem to address the problem of how to add security to the modem/router/access point that your internet provider supplies.

I have three home network environments that I can use to try out new products.  The first is as simple as it gets, and likely represents the environment of most Americans.  It is a 2-bedroom condo with Comcast Xfinity as the Internet provider.  Since the condo is small, the Xfinity combination cable modem/router/access point is completely adequate, and thus its base networking setup is extremely vanilla.  The second environment has CenturyLink as its Internet provider.  It uses a CenturyLink-supplied Zyxel modem/router/access point, except I turn off the AP in favor of using a Netgear Orbi for WiFi.  Though not purely an Internet provider default environment, it doesn't stray too far either.  The third environment is very complex due to the lack of any landline Internet provider.  As I didn't try the CUJO there, I'll leave out the details.  All three environments contain a number of IoT devices, which leads to complexity on the LAN side.  For example, there are definitely too many ZigBee/Z-Wave/BLE bridges, because despite using standardized protocols many vendors require a bridge of their own.  But again, that is another story.

My first attempt to get the CUJO working was in the condo, which I often use as a test bed because…my wife is infrequently there so she will never know how badly I mess things up!  There, I said it.  So I get my CUJO, watch the videos, read everything I can find, and get ready to set it up.  I discovered that with Xfinity you need a real hack to get CUJO to work.  I’m not above hacks, so I go ahead and follow the instructions to get it working.

Before we get into that, let me summarize how I understand CUJO gains access to your network traffic.  You disable the DHCP server (the thing that hands out addresses to each device in your network, like 192.168.0.23) in your router and let CUJO serve up DHCP addresses instead.  Along with the DHCP address, CUJO provides the LAN-side address of the gateway that the device should talk to in order to send data out over the Internet.  CUJO tells every device to use it, rather than the router, as the gateway.  That way every network packet to and from the device goes through the CUJO.  CUJO can then sniff packets for malicious content, block accesses to bad URLs, and monitor for unusual communications patterns that might indicate a device has been compromised.  For those who really care, CUJO also sets itself up in a sort-of double-NAT environment.  It sets itself up as the gateway at 192.168.0.1, with DHCP handing out 192.168.0.x addresses, and changes your router to sit at 10.0.0.1.
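
To make the mechanism concrete, here is a rough sketch, using the scapy packet library, of the kind of DHCP offer such an appliance would hand out.  The router (gateway) option points at the appliance itself rather than the real router, which is the whole trick.  The addresses and MAC address are made-up examples, and this is my reading of the technique, not CUJO's actual implementation.

```python
# Illustrative only: the kind of DHCP offer a CUJO-style appliance would
# send, with the router (gateway) option pointing at itself rather than
# the real router. Addresses and the MAC below are made-up examples.
from scapy.all import Ether, IP, UDP, BOOTP, DHCP

offer = (
    Ether(dst="ff:ff:ff:ff:ff:ff") /
    IP(src="192.168.0.1", dst="255.255.255.255") /
    UDP(sport=67, dport=68) /
    BOOTP(op=2,                             # boot reply
          yiaddr="192.168.0.23",            # address leased to the client
          siaddr="192.168.0.1",             # the appliance itself
          chaddr=bytes.fromhex("001122334455")) /
    DHCP(options=[
        ("message-type", "offer"),
        ("server_id", "192.168.0.1"),
        ("router", "192.168.0.1"),          # gateway = appliance, NOT 10.0.0.1
        ("subnet_mask", "255.255.255.0"),
        ("lease_time", 86400),
        "end",
    ])
)
offer.show()   # dump the crafted packet; sendp(offer) would transmit it
```

Once clients accept a lease like this, every outbound packet crosses the appliance first, which is what enables the sniffing and blocking described above.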

The problem with Xfinity-supplied routers is that you can't turn off their internal DHCP server.  So instead CUJO came up with the hack I mentioned above.  I tried the hack on my router and could never get CUJO to work (it would always end up with its LEDs making the frowning face).  CUJO makes it easy to reach support.  I talked to support, tried a few things, then let them connect into my router to try to get it configured.  We never did get it to work, and the technician suggested I try doing a factory reset of the router and then install CUJO again.  At this point I'd spent most of a day on the problem, and decided I couldn't face all the things that could go wrong with a factory reset.  So I managed to undo the CUJO hack and get my home network working properly again.

The CUJO sat in my cabinet for months, until I realized it was there and that Xfinity had since sent me a newer generation modem/router/AP.  By definition, it had been "reset"!  Of course they still didn't make it possible to disable the DHCP server, so CUJO's hack was still necessary.  Having a couple of hours before I needed to leave for the airport, I reset and again installed the CUJO.  The results were no better: once set up according to CUJO's instructions, my home network became completely inaccessible.  And without a functioning DHCP server, even after removing the CUJO I struggled to regain access to the Xfinity router and return it to its proper configuration.  I had to delay my departure for the airport, and nearly missed the flight, to get my home network working properly before leaving.  I could have tried calling CUJO support again, but I'd already spent way too much time on this device and was running up against that deadline.

Instead I've concluded that if you have Comcast Xfinity, don't go anywhere near the CUJO.  I'm not saying you can never get it to work, just that it is not worth the likely aggravation of trying.  Xfinity subverts the basic mechanism that CUJO uses, and the hack leaves you caught in the middle.  You could also replace the Xfinity-supplied modem/router/AP with separate non-Comcast components and add CUJO to that, but that isn't a normal consumer thing to do.

With CUJO and Xfinity not playing together, I decided to try the CUJO on my CenturyLink network.  I knew the Zyxel modem/router/AP allowed you to turn off DHCP, so it should work the way CUJO was designed to.  I followed the instructions in the CUJO app, which automatically configured the Zyxel.  Over a few hours all the devices in my house were recognized by CUJO and were working correctly.  All except a couple of WiFi security cameras.  I waited 24 hours for their DHCP leases to expire, to be sure they reached out to DHCP for a new address, but that didn't help.  They just wouldn't join the network.

I looked at the instructions for my cameras, and they mentioned a number of cases where the cameras could lose WiFi connectivity (e.g., switching network gear).  I took one of them and went through its process for connecting to a new network.  To make a long story short, it was unable to get an IP address from DHCP.  I did a hardware reset on the camera and tried the reconnect; no luck.  I tried unplugging and re-plugging the Zyxel, CUJO, and Orbi.  Well, first my entire network got screwed up.  Nothing could connect to WiFi for the longest time.  My wired-Ethernet-connected PC picked up completely bogus information that I couldn't clear with any imaginable combination of IPCONFIG commands, or the network troubleshooter.  It appeared that it had gotten some information from the router's DHCP and some from the CUJO's DHCP, which would make sense later.  It took a reboot to regain a useful Internet configuration.  I then tried to connect the WiFi camera again, with the same result.  It couldn't get a valid IP address.

At this point I'd wasted hours and was no closer to getting CUJO working correctly, so I decided to remove it from my network.  When I went in to turn the DHCP server in the Zyxel back on, I was surprised to find it was already on.  I don't know if the CUJO app had failed to turn it off, or if the Zyxel had somehow turned its DHCP server back on.  I thought about just turning it off and manually verifying the configuration was proper for the CUJO, but realized it would take days to be confident that the change would "stick" and to be comfortable the CUJO was working properly.  So instead I changed the Zyxel back to sitting at the gateway address, removed the CUJO, and rebooted the Zyxel and Orbi.  My network came back with all devices working.  I had to finish the setup of the camera that I'd done a factory reset on, but this time it went smoothly.  Now I had failures with CUJO on both Xfinity and CenturyLink.

So conclusion number two is that even though CUJO has tried to make setup consumer friendly, it just doesn't work reliably enough.  I'm more confident I could have eventually gotten the CenturyLink setup to work properly, had I wanted to spend another few hours on setup and testing, than I ever was with the Comcast Xfinity setup.  But I worry it is all too fragile.  Because of that, I just don't think I'll be giving the CUJO another shot.


Whole-home Internet Security

Over the next several months I’ll be returning to blogging about one of my favorite topic areas, Internet Security and Privacy.  For this post I’ll do some background on the new generation of whole-home internet security devices.  Then I’ll do another post about the first new device I’ve used, the CUJO.

I've been seeking ways to enhance Internet security for myself, my family, and my home for many years.  For example, back in 2011 I wrote about how you may need multiple anti-malware products to adequately protect yourself.  And in 2012 I wrote about the use of enhanced DNS offerings as an added layer of security for web browsing.  Please note that both postings are dated and contain suggestions I wouldn't make today.  Web of Trust went through a rough patch over privacy issues; I still use it to check out suspicious sites, but usually don't run with it always monitoring my website activity.  Both OpenDNS and Immunet were acquired by Cisco and, as a result of Cisco's business focus, have questionable futures for consumer use.  As a warning sign, an increasing number of links on the consumer OpenDNS website are broken.  In the case of OpenDNS, I'd already recommended Norton ConnectSafe instead.  Immunet was unique in its support for running concurrently with other anti-malware, but fortunately there is a better approach now.
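
For those who haven't used an enhanced DNS service, the mechanism is simple: you point your lookups at a filtering resolver instead of your ISP's, and known-bad domains fail to resolve (or resolve to a block page).  Here's a minimal sketch, assuming the dnspython library; the resolver address is a documentation-range placeholder, so substitute whichever filtering service you actually use.

```python
# Minimal sketch of DNS-layer filtering, assuming the dnspython library.
# The nameserver IP below is an illustrative placeholder, not a recommendation.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["203.0.113.53"]  # your filtering DNS service here

try:
    answer = resolver.resolve("example.com", "A")
    print([r.address for r in answer])   # resolves normally if not blocked
except dns.resolver.NXDOMAIN:
    print("domain blocked (or nonexistent)")
```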

It looks like we are entering a new Golden Age of home internet security offerings, and I hope they actually prove to be as golden in the protection they offer.  These new devices range from add-ons such as CUJO, to mesh routers with optional add-on security services such as Eero, to routers from security companies (Norton Core, Bitdefender Box 2).  They are finally bringing enterprise-like network edge security to the home.  Why now?  We have four trends coming together.

On the demand side, the Internet of Things (IoT) is placing large numbers of devices in our homes.  These devices can't run a full suite of security software, they may not be updateable (i.e., to fix vulnerabilities), and their market lifetimes are short (so they may not receive security patching support even if technically updateable) even though their usage lifetimes may be long.  In other words, the WiFi lightbulb I buy today may be replaced by a new and incompatible model next year, but I'll still be using it 5 years from now.  Last year we saw how these IoT devices could be compromised, and in one case hijacked to mount a massive DDoS attack.

The second trend is the cloud.  As cloud capabilities grow, the ability to use them to enhance security grows as well.  For devices to be applicable for home use they have to stay in a consumer-friendly price range, say under $300 for early adopters and under $100 at full adoption.  By moving more resource-intensive processing to the cloud, vendors are able to offer capabilities at these price points that would otherwise cost thousands of dollars.  Of course, cost efficiency is just one way to look at the cloud.  The cloud enables computations, on large data sets, that just aren't possible in other environments.

The third trend, also enabled by the cloud, is the maturity of machine learning.  Put (over)simply, with machine learning you feed a model a set of known malicious examples and a set of known benign examples.  It learns how to tell the difference, so when you give it a sample that is completely unknown it can tell if it looks malicious.  The more examples you feed it, the better it can distinguish between good and bad.  The training of the model is hugely expensive and is done in the cloud.  The resulting model is relatively small and fast, and can live on a modest device like a home router (or in the anti-malware suite running on a PC).  The router also reaches out to the cloud when it encounters something suspicious, but not clearly malicious.  And this isn't just about analyzing executable software; you can do the same thing with network traffic.  So if the model learns what the normal network traffic to and from company A's lightbulb looks like, then it can block suspicious traffic to or from an A lightbulb.
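
To make the train-in-the-cloud, score-on-the-device split concrete, here's a toy sketch using scikit-learn.  The random feature vectors are stand-ins for flow features (packet sizes, destinations, timing) a real system would extract, and a random forest is just one of many possible model choices; this is not any vendor's actual pipeline.

```python
# Toy sketch of the cloud/device split in ML-based traffic filtering.
# Feature vectors are random stand-ins for real network-flow features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# "Cloud" side: expensive training on a large labeled corpus of flows
# (1 = malicious, 0 = benign).
rng = np.random.default_rng(0)
X_train = rng.random((1000, 16))
y_train = rng.integers(0, 2, 1000)
model = RandomForestClassifier(n_estimators=50).fit(X_train, y_train)

# "Device" side: the trained model is small and fast enough to run on a
# home router, scoring each new flow locally and deferring anything
# suspicious to the cloud for deeper analysis.
new_flow = rng.random((1, 16))
if model.predict(new_flow)[0] == 1:
    print("block flow; report to cloud for deeper analysis")
else:
    print("allow flow")
```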

The fourth trend is simply Moore's Law.  The computational power available in a device for under $300 has grown to the point of being able to fully inspect network traffic, maintain and process expanded sets of rules, run the models output by machine learning, process automatic updates, etc.

Of course this all works for more general purpose computing devices (PCs, tablets, and phones) as well as "things".  So while I wouldn't suggest removing all anti-malware software from your general purpose devices, the added layer of protection from placing security at the edge of your home network is worthwhile even if you have no IoT devices.  If there is a zero-day attack circulating for your PC, you have at least two chances (the network edge and the anti-malware running on your PC) to block it while waiting for the vulnerability to be patched.  Another example: your carrier may not update your Android phone quickly enough to protect against a known vulnerability, but that vulnerability could be blocked from ever reaching your phone.  At least while you are connected in your own home.  Therein lies the weakness of edge protection, which falls into the category of useful, but certainly not sufficient, for mobile devices.

Despite my enthusiasm for a network-edge solution for home Internet, I see two major roadblocks ahead.  The first is that whole-home security solutions typically require an annual subscription for their cloud-based services.  What is the price point, or combination of price points, that will appeal to a broad spectrum of consumers?  How long is the included subscription?  A month, a few months, a year?  Is there a basic level of free service and then paid enhanced services?  Etc.  This is one way vendors will differentiate themselves, and we run a significant risk that these network edge devices will become like PC anti-malware software: the subscription runs out and the device is never updated to deal with new threats.

The second roadblock is that most people simply obtain a modem/router/access point single-box solution from their internet provider.  Until those providers start including these next generation whole-home security features, adoption will not spread far beyond early adopters and those they directly influence.  In fact, even for an early adopter the internet providers may put roadblocks in the way of incorporating the latest security devices.  You need a major hack to use CUJO with Comcast Xfinity, which I'll talk more about in my next post.


Maybe Consumer Reports is right about the Microsoft Surface family

I've been watching all the uproar, and denial, over Consumer Reports (CR) dropping its recommendation of Microsoft's Surface line, but I have to tell you I think CR might be on to something.  Sure, there are plenty of power users who report they've never had a problem with the Surface.  Even when they have, which I'll get to in a moment.  They, and the press, just seem to want to ignore that CR operates off broad survey responses rather than the experiences of a select few.  Of course CR's data is backward looking, and won't reflect improvements that Microsoft has made over the last 6-12 months.  And I've always believed that CR's data is biased by who they attract as subscribers.  Nevertheless, it is a valid data set.

So why do I think that CR might be right about the Surface family?  Well, let's go with my last three experiences.  The first one was when I purchased a Surface 3 (not the Pro) to use as a consumer tablet (i.e., an alternative to an iPad).  Subjectively I'd say that the Surface 3 did not even perform as well as my original Surface RT.  It was sluggish and, I thought, kind of flaky.  Then, several months in, the touchscreen stopped working reliably.  A few weeks later I happened to catch it in a funny light and found a hairline fracture in the screen.  I didn't remember dropping it, though I likely bumped it at some point, so this was almost certainly my fault.  I looked at the cost of repair, and it was so close to the replacement cost that I did replace it.  With an iPad Pro.  I've dropped that a few times, and it still is working just fine.  Now take an average consumer.  The Surface 3 never was a satisfying experience.  It was not a physically robust device.  It was not affordable to repair.  How would that consumer respond to CR's survey?  Never mind an average consumer, how do you think I would have responded to CR's survey?

Next let's take the Surface Book.  Like just about all owners of more recent Surface devices, I had the experience of my Surface Book failing to go to Sleep, or coming out of Sleep all on its own.  Even power users who have tweeted that they don't agree with CR admit they had that experience with Sleep in the past.  If we're talking about an average consumer, don't you think that experience might lead them to be a little negative on the device and respond to CR that they'd had problems?  Now my personal experience is even worse.  I twice had the experience of putting my Surface Book in my backpack, heading to the airport, and having it come out of Sleep on its own.  In both cases it cooked itself for hours.  How hot did it get?  Well, when I reached into the backpack I burned my fingers!  After the second time, the Surface Book's screen was permanently damaged, with brownish-yellow streaks along the right side and bottom.

By the way, I didn't blindly keep using the Sleep feature after the first incident (although an average consumer probably would have).  I switched to Hibernate for months.  Then, after Microsoft claimed to have fixed the problem, I went back to using Sleep.  And it happened a second time.  Microsoft issued more fixes and it hasn't happened in a long time now, but how do you think I would have answered the CR survey?

Lest one think that this is all backward looking, I had another experience just this past week.  A friend bought his daughter a Surface Studio.  A month later she said something about having lost her drawings and being unable to re-install the drawing app she'd been using from the Store.  In fact, they couldn't install any app.  Or run any Store app.  So he asked me to take a look.  The system was completely messed up.  Attempts to fix the Store failed.  Pretty much nothing Store-related would work, and a lot of things in Windows 10 have a connection to the Store.  I had to advise them to use Windows 10 Reset to get the system back to a usable state.  How do you think they would answer the CR survey?

I’ll contrast this with my Surface Pro 2, which has worked flawlessly from day one.  Or my original Surface RT, which was fantastically reliable as well.  From my narrow perspective, if you’d surveyed me on early members of the Surface family I’d be able to say they were very reliable.  But I haven’t had that experience with a Surface device in the last 3 years.  Does that mean I’ll avoid Surface devices in the future?  No, I’m very likely to pick up a Surface Laptop.  But I’m going to be brutal on Microsoft if that experience isn’t near perfect.

So before dismissing the CR downgrade consider that the Surface devices have had problems, and personal experience suggests those are not completely behind them.  Microsoft, rather than being dismissive of Consumer Reports’ findings, needs to double down on the quality of Surface devices.  They might also want to take another look at how they collect reliability and customer satisfaction data, because they seem to have missed that the customer experience isn’t nearly as good as their own data shows.


Does Intentional finally have clear intent?

Reminder: This represents personal views, not those of my employer

Almost 16 years ago, while at Microsoft, I was asked to take a look at the work Charles Simonyi was doing on Intentional Programming (IP) to see if I could find a place to apply it.  After spending some time with Charles I came to the conclusion that, while interesting, I couldn't find a place to apply IP in the near term.  I wasn't the first to come to that conclusion; the investment in IP was on the chopping block as it hadn't found a productization home.  Not long after I failed to come up with a way for Microsoft to exploit IP, Charles got Bill and Steve's blessing to take IP outside and form his own company around it.

For 15 years Charles' Intentional Software pursued IP, though until today I didn't know to what end.  It seemed like little more than a pet project of Charles' that would never bear fruit.  But apparently Intentional Software eventually found a clear intent for IP: a new platform for creating team collaboration applications.  Microsoft, which has shown a strong focus on team collaboration lately (*), was so excited by where Intentional Software was going that it decided to acquire them.

Congratulations to Charles and the Intentional Software team.  Charles has been working on IP for at least 22 years, and it's nice to see his faith rewarded.  Assuming Microsoft proves out the value of IP by turning Intentional's collaboration platform into a successful commercial offering, it will be interesting to see where else Microsoft can apply IP.

(*) Microsoft has been obsessed with team collaboration since at least the early 90s, so it is a little disingenuous to call this a recent focus.  Exchange's Public Folders and Microsoft Access' multi-master replication were 1990s features intended to support collaboration and collaboration applications.
