Finishing turning Windows Blue

People seem a little shocked at how much change is coming in both the Windows 8.1 Update (a.k.a. Update 1, Feature Pack, etc.) and in Windows Phone 8.1.  You probably shouldn’t be, as both offerings have far more effort behind them than their names suggest.  Recall that all of this is part of a product “wave” Microsoft referred to internally as Blue, which entered planning as Windows 8/Windows Phone 8 were being finalized.

Back before Windows 8 shipped I wrote a blog entry on how Microsoft might adapt the Windows engineering system to deal with quick-turnaround releases.  One alternative I proposed was that they would do a single planning phase covering multiple releases rather than a separate planning phase for each release.  That’s exactly what I think we are seeing with the Windows 8.1 Update.

Microsoft did a planning exercise for Windows 8.1, saw how many development milestones they could fit in and still make the 8.1 RTM target, allocated work to those development milestones, then created a post-8.1 development milestone and allocated the remaining 8.1 work to it.  That added development milestone is what we now see as the 8.1 Update.  But the plan and most of the decisions and designs were done more than a year ago as part of the original 8.1 effort.  Even a lot of the development work behind the Update may have been done as part of an earlier 8.1 milestone, but it couldn’t be completed until the post-RTM milestone.  At least I believe that’s the case.

So whereas most of us see Windows 8.1 as what Windows 8.0 should have been, the Windows 8.1 Update is what Microsoft envisioned Windows 8.1 being!

Meanwhile Windows Phone 8.1 is an even bigger deal, and would no doubt have a more impressive name (8.5 or 9) if it weren’t for the attempt to line up Windows and Windows Phone version naming.  Well, it isn’t just naming; it is an attempt to coordinate the releases much more closely, something that really kicks into gear for the post-Blue product wave after the One Microsoft reorg.

Consider that Windows Phone 8.1 is the first release of Windows Phone where the development team has gotten to place its primary focus on features and functions for customers and OEMs.  It is also the first release since the original Windows Phone 7 effort to have an 18-month (rather than 12-month) release cycle.  There is some arbitrariness in deciding how long the Windows Phone 7 cycle really was, but it was basically 18-24 months depending on your perspective.

A year after Windows Phone 7 we had 7.5, which matured the platform a bit.  But even as Microsoft worked on that release it was ramping up the activities to move to a completely different kernel.  And then the entire team was focused on that kernel move for Windows Phone 8.  That meant for Windows Phone 7.5 and Windows Phone 8 we had modest (but important) improvements for users and developers, but most of Microsoft’s effort was going on under the covers.

Windows Phone 8.1 represents the first release of Windows Phone where the majority of the team doesn’t have to be focused on a new kernel.  Sure there is kernel-related work going on, like support for lower-end chipsets, but the teams working on the higher-level layers finally had enough stability in the lower-level layers to focus most of their resources on user-visible improvements.  Combine that with the longer 18-month development cycle and Windows Phone 8.1 had better be a major step forward in user-visible functionality!

Let me state this more strongly.  I would expect Windows Phone 8.1 to be the release where Windows Phone stops being an OS that is clearly less mature than iOS and Android and instead enters a phase where it is playing the same leapfrog game that those two have been engaged in.  From the leaks it seems that’s about where 8.1 is going to land.

Do the Windows 8.1 Update and Windows Phone 8.1 represent the end of the rollout of the Blue wave of OS changes?  Probably, with the caveat that Microsoft might release some minor to modest updates (similar to the Windows Phone 8 GDRs) before what is probably a Windows/Windows Phone 9 release.  Hopefully we’ll get some idea of their strategy, if not a full (PDC-style) developer-oriented reveal, at Build 2014 in less than a month.


SQL Server 2014 Delayed Durability/Lazy Commit

I am having a lot of fun watching everyone get excited over SQL Server 2014’s Delayed Durability feature, mostly because I invented it back in 1986.  At the time no one was particularly excited by the idea.  It’s possible someone invented it before me, but I never found any evidence of that.

Not long after taking over as Project Leader for DEC’s Rdb product I was looking at ways to address one of its performance bottlenecks: the wait to flush the log on a commit.  For those not schooled in database technology, a key part of ensuring no data is lost on a system failure (a.k.a. durability) is to require that changes be written (forcibly if necessary) to the log file before you acknowledge that a transaction has been committed.  The log file is sequential, so writing to it is enormously faster than writing changes back to the database itself.  But you still have to wait for the write to complete.  The architecture of Rdb 1.x and 2.x did not allow for what is now known as Group Commit or a number of other techniques for speeding up commit.  Further, each database connection had its own log, so even log writing typically required a seek (i.e., it was still random rather than serial), limiting throughput and typically imposing a 100+ms delay on commit.  On heavily loaded systems I remember this climbing to 250ms or more.  Since we couldn’t implement Group Commit in a minor release, I was thinking about other answers and had a revelation.

For many applications a business transaction (e.g., add a customer) is actually a series of database transactions.  From the application perspective, the customer add is not complete until the final commit of the series of database transactions, and thus they already have (or could easily be written to have) recovery logic that deals with failures (e.g., system crashes) between those individual database transactions.  In effect, the durability of those individual database transactions was optional, until the final one that completed the business transaction.
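
To make the idea concrete, here is a minimal Python sketch of the concept (my illustration only, not how Rdb, Exchange, or SQL Server actually implement it): a commit marked as non-durable is acknowledged without waiting for the log flush, and durability arrives later via a lazy flush or via the durable commit that ends the business transaction.

```python
import os

class ToyLog:
    """A toy write-ahead log illustrating delayed durability (sketch only)."""

    def __init__(self, path):
        self._file = open(path, "ab")
        self._unflushed_commits = 0  # commits acknowledged but not yet durable

    def commit(self, records, durable=True):
        """Write a transaction's log records and acknowledge the commit.

        durable=True  -> classic commit: force the log before returning.
        durable=False -> delayed durability: acknowledge immediately; the
                         flush happens later (or with a later durable commit).
        """
        for record in records:
            self._file.write(record.encode() + b"\n")
        self._file.write(b"COMMIT\n")
        if durable:
            self._file.flush()
            os.fsync(self._file.fileno())  # the expensive synchronous wait
            self._unflushed_commits = 0
        else:
            self._unflushed_commits += 1   # durability is owed, not yet done

    def lazy_flush(self):
        """Harden all previously acknowledged commits in one I/O."""
        self._file.flush()
        os.fsync(self._file.fileno())
        self._unflushed_commits = 0


# A "business transaction" made up of several database transactions: only the
# final commit needs to be durable for the application's recovery logic to work.
log = ToyLog("toy.log")
log.commit(["INSERT customer"], durable=False)       # intermediate step
log.commit(["INSERT address"], durable=False)        # intermediate step
log.commit(["INSERT credit record"], durable=True)   # final step: one flush
```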

With this in mind I went and prototyped Delayed Durability as an option for Rdb transactions.  On Rdb it was quite simple to implement and I literally had it working in one evening.  But these were short turn-around releases, I was treading into another team’s area (the KODA storage engine), and there just wasn’t time to finish productizing it.  So a couple of weeks later I pulled out the change and in Rdb 3.x we started working on other (app transparent) solutions to the synchronous commit performance problem.

Now jump forward to 1994, after I’ve joined Microsoft.  There is something of a battle going on between the team working on the first version of Microsoft Exchange (née 4.0) and the JET Blue storage engine team over performance issues Exchange was having.  Because I was new to Microsoft and had no biases I was asked to look into Exchange’s performance problems.  That was quite the experience but I’ll limit this to just the relevant story.  I learned that to send an email the client did a series (30-40 pops to mind as typical) of MAPI property set calls.  And each one of those calls was turned into an individual database transaction.  Which of course meant 30-40 synchronous log flushes per email message!  No wonder they were having significant performance problems.  While my major recommendation was that they find a way to group those property sets into a single transaction, I had another trick up my sleeve.
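
A rough back-of-the-envelope calculation shows why either batching or delayed durability was such a big win.  The per-flush cost below is an assumption on my part (a disk-era log latency), not a measurement of Exchange:

```python
# Rough arithmetic, not a measurement.  The per-flush cost is an assumed
# disk-era figure; 35 property sets matches the "30-40" typical case above.
property_sets_per_message = 35
flush_ms = 20                     # assumed cost of one synchronous log flush

one_txn_per_property_set = property_sets_per_message * flush_ms
one_batched_txn = 1 * flush_ms

print(f"one transaction per property set: ~{one_txn_per_property_set} ms of log waits per email")
print(f"one batched transaction per email: ~{one_batched_txn} ms of log waits per email")
# Delayed durability gets most of the same win without restructuring the client:
# the intermediate commits simply stop waiting for the flush.
```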

After confirming that Exchange was fully designed to handle system failures between those MAPI Property Set “transactions” I suggested to Ian Jose, the Development Lead for JET Blue, that he implement Delayed (I think I called it Deferred at the time) Durability.  The next day he told me they’d added it, and so to the best of my knowledge the first product to ship with Delayed Durability was Exchange 4.0 in April 1996.  A full decade after I first proposed the idea.  Of course that wasn’t visible to anyone except through the greatly improved performance it provided.  But still I was quite proud to see my almost trivial little child born.

With SQL Server 2014 shipping Delayed Durability as a user visible feature my little child is finally reaching maturity.  It only took 28 years.

Update: My friend Yuri pointed out that Oracle implemented Asynchronous Commit in Oracle 10gR2 in 2006.  So it only took 20 years, not 28,  from my invention until the feature appeared in a commercial product.


Satya shuffles his leadership

Now that Microsoft has formally announced a bunch of Senior Leadership Team changes I thought it appropriate to comment.

The easiest one is Tony Bates.  As with many CEOs who join a large company as part of an acquisition, it can be hard to find an appropriate role in said large company.  I assumed that Tony took the Business Development job as a landing spot until something more appropriate came along (which could have been Microsoft CEO or ownership of a future business).  With Satya’s ascension to CEO Tony probably concluded there was nothing appropriate for him in the reasonable future.  In fact the most appropriate roles probably would have required that he, like Julie Larson-Green, give up his EVP position for a CVP position.

Speaking of Julie Larson-Green, with the closing of the Nokia deal she wasn’t going to be heading the Devices business at Microsoft.  So, if you have to play second fiddle, you might as well choose a role that you have real passion for and unique skills in.  Julie’s new role working for Qi Lu seems to fit the bill.  Good move for Julie and Microsoft.

Tami Reller’s departure is a bit of a surprise, but not a huge one.  I did not expect Satya’s ascension to CEO to necessarily impact Tami because they’ve worked together before.  They certainly were peers in Microsoft Business Solutions, and Tami may have briefly reported to Satya (though I can’t recall all the timing).  As far as I know they had a good working relationship.  But Tami’s background was finance, and her marketing chops were relatively limited.  So owning all of marketing for Microsoft was clearly a stretch.  Whatever discussions happened over the last month, it must have been clear to Tami, Satya, or both that Tami wasn’t the person for the job the way Satya saw that job.  And as much as I respect Tami, I don’t see her leaving as an earth-shattering departure.

Meanwhile, combining Tami’s responsibilities with the advertising responsibilities that Mark Penn had and giving them to Chris Capossela as Chief Marketing Officer is a great move.  Chris is probably the most respected marketing leader inside Microsoft.  Recall that Chris actually was the Chief Marketing Officer, taking over Central Marketing (as well as creating the Consumer Channels Group) from Mich Mathews 3 years ago after a long spell as the head of marketing for all Information Worker-oriented products.  Along the way his CMG responsibilities moved to others and he was left with CCG.  This reorg not only reinstates his CMO title and CMG responsibilities (which included corporate advertising before Mark Penn took them over) but gives him leadership of all Microsoft product marketing as well.  Chris actually was the most logical choice to have been given this role as part of last summer’s One Microsoft reorg, but lost out to Tami in the game of musical chairs.  Now balance has been restored.

Which brings us to Mark Penn.  I don’t know him and I really don’t know how insiders think of him.  Outsiders seem to base their opinions on their like or dislike of the Scroogled advertising campaign.  Of course I expect that campaign represented about 1% of his efforts since joining Microsoft and is almost irrelevant in the greater scheme of things.  In his revised role it looks like he lost direct operating responsibilities but retained his advisory role on corporate strategy.  He even had his title altered slightly to reflect the new focus (and likely to soothe his ego over losing control of advertising).  He is still EVP of Strategy, but is also called out as Chief Strategy Officer.  Microsoft has been throwing “Chief” around a lot the last few years and I don’t associate any incremental influence or power with those titles.  What is important is what the actual job entails.  Read Satya’s mail I linked to above for a good description of that.

That just leaves Eric Rudder taking on Tony Bates’ responsibilities on an interim basis.  I think the interim part is real and this was just a move to avoid overloading Satya with one or more additional, and perhaps junior, direct reports while he searches for a new leader for this function.  Or decides to organize it differently.

I don’t think any of these moves are earth-shattering in the short run.  They do represent incremental changes that will lead to a more cohesive Senior Leadership Team.  They also mean that the SLT is more heavily weighted with people who grew up with the company and were (often as junior individual contributors) part of its glory days.  They no doubt are highly motivated to be known as the ones who returned the company to unquestioned leadership status.  I can’t give an unbiased opinion on whether this is the right direction to take (my biased one is yes), but there is one thing I’m sure of.  Steve Ballmer spent a lot of time flailing about looking for a formula, including hiring many outside executives at the Senior Leadership Team level, to propel Microsoft forward.  With Satya’s appointment as CEO, and his management moves so far, it is clear that the primary bet is on home-grown leadership.  That could reignite a lot of passion within the company.


Another stupid Anti-Malware test?

PC Mag has an article on the latest anti-malware tests by NSS Labs.  For this test NSS Labs turned off Microsoft’s SmartScreen because it was too effective, blocking 98-100% of malware.  SmartScreen, being in the OS now, works independently of whatever browser or anti-malware software is installed, so NSS Labs disabled it in order to test the anti-malware products alone.

Ok, but there is a BIG BUT here.  Third-party anti-malware products come with their own URL filtering and whitelisting capabilities.  Microsoft considers SmartScreen part of its overall protection suite and has no reason to duplicate its functionality in MSE/Defender.  So turning off SmartScreen is the same as going into a third-party product and disabling its similar features.

Does NSS Labs disable the SmartScreen-equivalent features in the other anti-malware products it tests?  I don’t think so.  The only anti-malware tested with its URL filtering and whitelisting capabilities disabled is Microsoft Security Essentials/Windows Defender.  And they try to call this a fair comparison of anti-malware products?

At least we now know the true protection level of using the Microsoft Protection Suite.  Combine SmartScreen and MSE (or Windows Defender) and you get about 100% protection.  They should just test and report that instead of artificially making the Microsoft offering look bad.


Microsoft Goes Low

Lower-end devices are where both Windows Phone and Windows 8 tablets are showing the most traction.  On the phone side the (Nokia) Lumia 520/521 is where a lot of the volume growth has come from.  On the tablet side the first 8″ Windows 8.1 tablet to broadly hit the market, the Dell Venue 8 Pro, has garnered quite a fan base.  I wish we knew its volumes, but by any subjective measure it’s been a hit for Dell.  One regularly sees the Lumia 520/521 for under $99, sometimes way under. The Dell Venue 8 Pro has a list price of $299, but on any given day it can usually be found for $279, $259, $229, or on occasion (e.g., Microcenter’s in-store only special) for $199.  Wow.

If you combine today’s Mobile World Congress announcements from Microsoft with rumors about a big price cut for OEMs putting Windows on sub-$250 devices, you start to get a picture of a strategy focused on winning at the low-end.  First let’s talk about the pricing rumor, then review the announced hardware requirements, then talk strategy.

I’m going to take as a given the oft-repeated OEM price of Windows as $50, since absolute accuracy isn’t required to explain what is happening.  With the exception of SKU expansion (e.g., Windows Pro), Microsoft has held the price of Windows pretty constant over the years.  When you were an OEM making primarily PCs that sold for over $1000 the cost of Windows was not your biggest issue.  At $50 it would not have made your top 3 component costs.

Components that are driven by Moore’s Law, or manufacturing scale, have been dropping in price over the years while the price of Windows has remained constant.  Let’s assume that Bill of Materials (BOM) cost represents 50% of the list price of a device shipping in volume.  A $300 tablet thus costs around $150 to manufacture.  From an OEM perspective the cost of Windows as a component has mushroomed from about 5% to about 30%, leaving OEMs little room to maneuver.  On a $200 tablet the BOM would be around $100, with Windows representing 50% of that cost.  It is unclear whether you could create a tablet capable of running Windows with only $50 for hardware components and system integration.  And you certainly couldn’t do it profitably.
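
Running the round numbers above through a quick sketch (a $50 license and a BOM of roughly half the retail price; these are illustrative figures, not real OEM pricing) makes the squeeze obvious:

```python
def windows_cost_share(retail_price, windows_license=50, bom_ratio=0.5):
    """Return Windows' share of retail price and of an estimated BOM.

    All inputs are the post's rough illustrative figures, not real OEM pricing.
    """
    bom = retail_price * bom_ratio
    return windows_license / retail_price, windows_license / bom

for price in (1000, 300, 200):
    retail_share, bom_share = windows_cost_share(price)
    print(f"${price} device: Windows is ~{retail_share:.0%} of retail, "
          f"~{bom_share:.0%} of a ~${price * 0.5:.0f} BOM")

# $1000 device: ~5% of retail, ~10% of a ~$500 BOM
# $300 device:  ~17% of retail, ~33% of a ~$150 BOM
# $200 device:  ~25% of retail, ~50% of a ~$100 BOM
```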

Meanwhile over on the Android/ChromeOS side of the world they have a lower cost structure for the software.  Yes, they have to pay patent royalties to a number of parties, including Microsoft.  Yes, they may have to license some CODECs or other software that is included in Windows but not in the Android distribution.  But the bottom-line BOM impact is still much lower.  Let’s guess that it is in the $15 per device range.

Now $15 seems to be a pretty magic number, and not just because that might be the effective cost of using Android.  Lots of the major component assemblies used in phones and tablets hover in the $15 price range.  The processor.  The memory.  The sensor module.  The battery.  The camera.  Etc.  These will vary by a dollar or two depending on capabilities, but you get the idea of the ballpark for the major components of a mobile device.  So demanding a price for software that is in the same ballpark as the other major device components is defensible.

And that’s exactly what Microsoft is rumored to have now done.  By pricing Windows for devices that sell for $250 or less at $15 it has (a) equaled the effective cost of using Android, (b) addressed OEM concerns that OS pricing sucks all margin out of low price devices, and (c) helped lower the BOM to the point where sub-$250 (and really sub-$199) devices become feasible.

Is there a financial downside to Microsoft’s action?  I don’t think there is much of one.  To begin with, most devices sold in this price category are additive; they do not replace existing devices using higher-priced Windows licenses.  For example, back around Christmas an acquaintance was telling me she saw a Best Buy deal for $250 notebooks and bought a dozen of them as gifts for her kids and their cousins.  Previously the kids were sharing a single hand-me-down PC.  My brother’s mother-in-law bought each of his kids Kindle Fires, so they ended up with individual computing devices at least 5 years before they would have ended up with individual PCs at higher price points.  And few of those Dell Venue 8 Pros are going to replace any other device; they are tablets for people who carry around a notebook a lot but wanted an additional content consumption device.  You see this kind of example all over: low-cost devices are expanding the market.  Microsoft is counting on market expansion to far outweigh any cannibalization of its higher-priced offerings.

For Windows tablets and low-end notebooks the other change that will allow for extremely low-cost devices is support for devices with only 16GB of storage and 1GB of main memory.  We’ll probably have to wait for Build in order to understand how they’ve done that, but I can imagine that they’ve given OEMs the option of removing the recovery image from shipping devices.  While this has always been available for a user to do once they received their device, having the recovery image ship on the device itself was previously a requirement for the OEMs.  And that made 32GB the minimum device size.  I don’t know how the recovery environment works in this new world.  Will OEMs ship a recovery image USB stick?  Will the device come with some minimal bootable image that can download the recovery image over the Internet?  We’ll just have to wait and see.

One important note about the reduction from 2GB/32GB to 1GB/16GB is that it should take about $15 (there is that magic number) out of the BOM for a low-cost device.  With the pricing change that’s $45 (30%) off the previous minimum BOM of about $150!  On top of that these lower priced devices will tend to use older components that are further down the price curve.  The cost of the 1280×800 displays now found in most 8″ tablets is bound to drop quite a bit as 1920×1080 and above becomes more common.  So we are certainly on our way to entry-level Windows 8 tablets with a BOM of under $100 and perhaps eventually as low as $75, enabling Windows tablets to retail in the $150-199 range.  Entry-level notebooks will follow these same pricing dynamics.

Similar changes are coming on the Windows Phone side, with support for lower-performance chipsets targeted at low-cost devices and for on-screen soft buttons rather than the dedicated hardware buttons previously required by the Windows Phone specs.  Imagine devices selling for half the price of the very successful Lumia 520/521 and you get what Microsoft seems to be enabling.  This is both a defensive move against low-cost Android phones and an offensive move to increase Windows Phone volumes to reach critical mass (to attract app developers, create mind share, etc.).

As I look at Microsoft at this point I think they are following a two-tier strategy.  The first is a high-end push aimed at their traditional PC customer base.  With Windows 8.1 Update 1 they pay significant attention to desktop-focused users, something that will continue with “Windows 9”.  Their 2-in-1 efforts are aimed at bringing their vision of the next generation of notebooks to market, driving a refresh cycle and pushing the core of the PC business back into a healthy space.  And their premium tablet and phone efforts are aimed at providing both consumers and businesses committed to the Microsoft ecosystem with companion devices that bring the best of these consumption-optimized experiences without having to give up the Microsoft ecosystem benefits.

And then there is the low-end push aimed at market expansion.  Capture the billions of phone users who have yet to move off of feature/basic phones.  Capture the billions of people who have not yet made any commitment to a tablet, for reasons ranging from pure cost constraints to a particularly low price threshold for a secondary device.  Capture the low-end notebook market, where people are sharing devices or living with ancient Windows XP desktop and notebook systems, with devices priced so they can afford to upgrade to a modern device per user rather than sharing.  And win big in the developing world, where these trends are particularly strong.

Of course winning at the low-end is also critical because the current vacuum there plays to Android and ChromeOS’s advantage.  And whoever owns the low-end is setting themselves up for future upgrades into higher-end devices, where the real money will eventually be made.  By capturing a significant share of the low-end, and bringing those users into the Microsoft ecosystem, Microsoft won’t have to fight to take them away from Google (or someone else) later.  It will have the upper hand on retaining them for many years, and device generations, to come.

My major intent here was to show how Microsoft’s latest moves can really change the game at the low-end of the market.  The pricing and spec moves shave 30% off the BOM for a low-end Windows 8 tablet or notebook, and I suspect the lowered Windows Phone spec requirements amount to a similar improvement.  Besides growing Microsoft’s own customer base, the kind of volume growth that these low-cost devices generate makes Windows Phone a more interesting target for developers, and that should help close the App Gap.  So even if you are mostly a fan of flagship devices, this low-end focus is likely to benefit you greatly.


How much of a broadband monopoly is there?

One of the issues that comes up in the Net Neutrality discussion is people’s perception that there is a monopoly or duopoly among broadband providers.  Ok, for folks outside the U.S. this is going to be a very U.S.-centric discussion because I know so little about how it works elsewhere.  Just be forewarned.  But is there a monopoly/duopoly, and if so how did we get here?

In the urban and suburban U.S. just about everyone has access to two wired broadband offerings: one from the old telephone companies and one from a cable TV company.  Why only two?  Because government granted a monopoly to one company from each category to operate within its jurisdiction.  The creation of the telephone system predates me of course, but I lived through the evolution of cable from pretty early on.  A city, town, or county would open up a competition to grant a charter to one cable company, which would be the exclusive cable company allowed in that jurisdiction.  The cable companies would then bid, offering things like community access channels, to win the charter.  Thus when broadband came around there were two types of companies with already built-out networks: phone and cable.

ADSL let the phone companies offer broadband over existing “last mile” copper cables into homes.  Cable companies responded with a number of efforts, with DOCSIS being the most notable, giving them the same capability to leverage their existing plant.  Over time various parts of this infrastructure have been updated or replaced, but this is mostly an incremental effort.  With so much existing plant in place, no new player could (with limited exceptions) create an alternative wired broadband offering.  And so most of the U.S. lives with a wired broadband duopoly.

It doesn’t really matter that many small cable companies have been swallowed up into a few giants; you had ONE before and you have ONE now.  You may have liked the little local guy better than you like Comcast, but it’s not like you had a choice before.  Years ago when I lived with a little local cable company I was thrilled that a bigger company bought them.  The little local cable company was not investing in digital TV or broadband.  The big company came in and immediately started upgrading to its latest service capabilities.  I lost on customer service, but won out big time on capabilities I desperately desired.  This was in fact my first broadband service.

But having a wired broadband duopoly is not the same thing as having an actual duopoly, because we also now have wireless and (finally decent) satellite broadband offerings.  I live out in the sticks with no cable service.  I live at the maximum distance for a DSL line, with a maximum of 1.5 Mb/s available and frequent issues cutting that in half.  Neighbors north or east of me don’t even have the DSL option.  Yet we do have a wealth of broadband options; they all just have one tradeoff or another.  We have at least two fourth-generation satellite broadband options (Hughesnet Gen4 and Exede) and a fixed WiMax wireless offering from a local ISP (which is actually the preferred option for most of us).  Then there is Verizon’s HomeFusion LTE offering, the most serious modern option I’ve seen from a wireless company.  It uses an antenna mounted outside your house connected to a router inside.

Sprint, which long ago pioneered Fixed Wireless, is in the process of getting back into this business as well.  It currently offers the Netgear LTE Gateway 6100D.  This won’t work for me because I need an external antenna to get sufficient signal strength in the house, but it would work for many of my neighbors.  Or in my barn (which has an office and a lounge for people, in case you were wondering what horses would do with Internet access), which doesn’t have a bluff interfering with wireless signals.  In addition Sprint is working with DISH to build out a Fixed Wireless network in Corpus Christi, Texas.  This is the real replacement for its old Broadband Direct service, and if successful could see Sprint and DISH blanket the country with a Fixed Wireless offering.

Neither AT&T nor T-Mobile has a specific home wireless offering that I know of, though of course you could (as with Verizon and Sprint) use their mobile hotspot offerings to cover an apartment.  For a while I used a Cradlepoint router with an AT&T 4G USB modem to bring broadband into our barn.

Jumping back to the satellite options, I currently use Hughesnet Gen4 as a backup connection to our local WiMax ISP.  The performance of Gen4 is on par with the ISP, with better peak data transfer rates, though very long latencies impact some applications.  I use a load balancing router (with a couple of latency-sensitive things locked to the WiMax connection) and can just about never tell whether my traffic is going over WiMax or Gen4.  Satellite used to be a poor connection of last resort, but the latest generation elevates its standing.  Peak throughput is better than Verizon’s HomeFusion, at least until Verizon moves to LTE Advanced.  But Verizon is half the price and doesn’t have satellite’s latency problem (bouncing a signal off a geosynchronous satellite takes time).
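
How much time?  The speed-of-light delay alone is easy to work out; real-world satellite latency adds processing and terrestrial backhaul on top of this physical floor:

```python
# Speed-of-light delay to a geostationary satellite; real links add processing
# and terrestrial backhaul on top of this physical floor.
GEO_ALTITUDE_KM = 35_786          # geostationary orbit altitude above the equator
SPEED_OF_LIGHT_KM_S = 299_792

one_way_s = 2 * GEO_ALTITUDE_KM / SPEED_OF_LIGHT_KM_S   # ground -> satellite -> ground
round_trip_s = 2 * one_way_s                            # request up/down plus response up/down

print(f"one-way (ground to satellite to ground): ~{one_way_s * 1000:.0f} ms")     # ~240 ms
print(f"minimum request/response round trip:     ~{round_trip_s * 1000:.0f} ms")  # ~480 ms
# Compare with the tens of milliseconds typical of terrestrial links, and it is
# clear why latency-sensitive traffic gets pinned to the WiMax connection.
```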

The problem with all of the wireless technologies, except for my local WiMax ISP, is that they are metered connections.  Essentially you pay by the gigabyte.  But as we’ve seen with the T-Mobile-initiated price wars, higher capacities at lower prices are becoming the norm among the mobile carriers.  How long this trend continues depends on efforts to free up additional spectrum and to make more effective use of existing spectrum.  Hughesnet is rumored to be considering a (probably very expensive) unlimited plan, which would be a very interesting option for many more people when latency isn’t an issue.

So how much of a monopoly on broadband is there really?  Not much of one.  Here in my rural area my neighbors and I have 8 choices, although not every choice is available to every home.  In a typical suburban neighborhood anywhere in the U.S. you have as many as 10 choices when you throw in the 2 wired providers, 4 nationwide mobile providers, 2 satellite providers, and 1 or more fixed wireless or regional mobile providers.  Urban dwellers probably can’t do satellite, but are more likely to have additional wireless options.

All of us would probably like 1 Gb/s to the home with no capacity limits, 20ms latency, and low, low prices.  Ok, we aren’t there and may never get there (especially the low, low price part).  But it’s a fallacy to claim that broadband is a monopoly or duopoly.


“Net Neutrality” gives me a headache

Many a night, usually during what would be considered “Prime Time” for TV, my wife complains that the Internet is slow.  She isn’t trying to watch video, and most of the time when she makes this complaint no one is watching video in our house.  When I investigate these complaints what I find is that our 10 Mb/s rated primary Internet connection is only delivering 2-3 Mb/s.  During off hours that connection delivers close to its 10Mb/s rating, and sometimes speedtest.net even shows it exceeding that.  So why do we drop by 5x during the evening?  Our neighbors are all watching Netflix or other video services.

The situation isn’t specific to our primary Internet connection.  We have a secondary connection, mostly for backup but also so that if someone is streaming video it doesn’t bring other Internet usage to a grinding halt.  It displays the same pattern of achieving close to its rated speed at off-peak hours but shows a 5x degradation during prime video watching times.

In our condo up on Seattle’s Eastside, where we have outstanding Internet connectivity, things are only marginally better.  Although the connection is rated to have plenty of bandwidth for showing even HDX movies, you can’t actually get through an HDX movie during prime video-watching hours without major buffering pauses, or without the connection even being dropped.  I’ve tried both FiOS and Xfinity with the same results.

And this isn’t just us, this is the state of the Internet.  Just about no one sees the rated speed of their connection for more than brief periods of time, usually during off-peak hours.  The infrastructure of Comcast, Verizon, and their cohorts just wasn’t designed to deliver peak performance to every user over long periods of time.  And if you take video off the table, it doesn’t really need to be.

But video is growing and the networks underlying the Internet need major upgrades to keep up.  How do you pay for the upgrades?

Now let me get out of the way that I’m a major free-market kind of guy, so I don’t think the government should be interfering in this.  Who pays for communications services, and how, should be between customers and the companies they buy services from.  When the government gets involved it picks winners and losers.  And it won’t be the communications companies; they will end up OK no matter what the answer is.  The government is picking which customers win and lose.

Net Neutrality turns the question of whether senders or receivers pay telecommunications providers for the traffic on the Internet into a simple answer: the receivers pay the telecom companies.  It really is that simple.  In a strict Net Neutrality world home (and business) Internet prices have to rise a lot, or cell-phone-style pay-by-the-GB plans have to replace unlimited service, to support building out the infrastructure to handle all the new traffic.  Without Net Neutrality the telecommunications providers have more freedom to get the sender to pay, so that the cost of streaming video is directly tied to those services and not forced on infrequent users of those services.

As I said, if government is involved then they’ll pick winners and losers.  And the reality of the second decade of the 21st century is that Net Neutrality means people who make heavy use of video streaming services, as well as the companies that operate those services, win.  Those who make little, light, or even moderate use of video streaming lose.  They’ll either continue to suffer degraded performance, or pay substantially higher prices, to subsidize their neighbors’ video-watching habits.

Now it might turn out that a proposal AT&T made a while ago, if I understood it correctly, is the right balance in all of this.  AT&T, coming at this from the standpoint of a company that does charge (wireless) customers by GB consumed, proposed to optionally allow the sender to pay for data that would not count against the receiving customer’s data limit.  Net Neutrality fans went nuts of course, but AT&T’s proposal makes tremendous sense.  Particularly if we are all forced into capacity-based pricing at home because of Net Neutrality.  Imagine Netflix continuing to offer their $7.99 plan unchanged from today.  That would work great for light users of the service.  But heavy users who found that their base Internet service at home was now limited to 25GB per month, with overage charges beyond that, might be more interested in paying for an $11.99 plan from Netflix that didn’t use up any of that 25GB.  The AT&T proposal is the kind of innovative solution that I’d expect if the government doesn’t get in the way.  Sadly they seem intent on getting in the way.
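
To put that hypothetical 25GB cap in perspective, here is the rough arithmetic.  The cap is the hypothetical above, and the per-hour figures are commonly cited approximations for streaming video quality tiers, not measurements of any particular service:

```python
# Rough illustration only: the 25GB cap is hypothetical, and the per-hour
# figures are commonly cited approximations for streaming video quality tiers.
cap_gb = 25
hd_gb_per_hour = 3.0   # roughly high-definition streaming
sd_gb_per_hour = 0.7   # roughly standard-definition streaming

print(f"HD viewing under a {cap_gb}GB cap: ~{cap_gb / hd_gb_per_hour:.0f} hours/month")  # ~8
print(f"SD viewing under a {cap_gb}GB cap: ~{cap_gb / sd_gb_per_hour:.0f} hours/month")  # ~36
# A couple of HD movies a week would blow through the cap, which is exactly why
# heavy streamers would find a sender-pays option attractive.
```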

The reason Net Neutrality gives me a headache is because it isn’t neutral at all.  Any way you slice it we need more money put into the infrastructure of the Internet.  And having the government pick who is going to pay is going to end badly for me, and I believe for the vast majority of users of the Internet.
