When are we going to get serious about computer/network security (Part 3 – WiFi)?

I recently updated my Samsung Focus with the Windows Phone 7 NoDo update and received a feature I am not sure I wanted: support for the WISPr protocol.  WISPr support, long present on the iPhone, allows your AT&T smartphone to automatically and transparently switch from the 3G/4G cellular network to an AT&T WiFi Hotspot when one is located nearby.  There is only one problem, one that you may not be aware of and that AT&T seems to be keeping quiet about: while your 3G/4G communications are encrypted, your communications over an AT&T WiFi Hotspot are not.  And so, without your explicit consent or even notification, you are automatically switched from a reasonably secure network to one on which nearly anyone can monitor or even hijack your communications.  Most people have some inkling that public WiFi Hotspots are dangerous and thus get to choose whether or not to take precautions (from VPNs to being extra careful to specify https when they connect to a website), though I suspect people generally just ignore the danger.  But when your phone automagically switches you over to a non-secure WiFi network, you have no opportunity to protect yourself.

AT&T has multiplied this problem 100-fold by its strategy of deploying WiFi as an alternative to new 3G cells in areas where it has data problems but not voice problems.  So, for example, when you walk through Times Square in New York your phone may switch to using WiFi rather than the 3G network for data transmission.  This is certainly not a situation you expect, nor one you are prepared to protect yourself against.

Public WiFi networks are open and non-secure because current security mechanisms are both inadequate to the task and not user-friendly.  T-Mobile once tried to secure its WiFi offering by adding support for WPA encryption and 802.1x authentication to its T-Mobile Hotspot offering.  This never caught on (perhaps because it was inadequate, or perhaps because most WiFi adapters were not WPA compatible at the time of introduction) and has now apparently been dropped.  While I could simply blame the key management problem I’ve mentioned in previous postings, in this case we are mostly dealing with a fundamental conceptual flaw: WiFi only tries to make the network as secure as an equivalent wired network.  Well, wired networks are only secure because of physical security.  That is, unless I can enter your building and tap into the wire I can’t intercept wired communications.  But once I tap into the wire I can see all communications flowing over it.  Solutions like WPA/WPA2 try to mimic wired security by locking unauthorized people out of the wireless network.  But once someone is inside (because they have the key) they can still intercept communications just as in the wired network case.  So for public WiFi Hotspots to be secure it isn’t enough to use WPA/WPA2, since anyone could easily obtain the keys (e.g., by paying for access).  What is needed is for each individual WiFi user to have a separately encrypted channel between their client (PC, phone, etc.) and the Wireless Access Point.  Sadly, there is no provision for this in current wireless networking standards.

Even your home or small business wireless network, supposedly protected from outsiders by WPA/WPA2 using Pre-Shared Keys, may be highly vulnerable.    A Pre-Shared Key is basically just a password, and of course you can have passwords that are weak or passwords that are strong.  When you think of websites, or your computer’s local logon, guessing even a weak password is difficult because of features that have been developed over the last four decades.  When you put in the wrong password there is a delay before you are allowed to try again, and then if you enter too many wrong passwords you are locked out for a longer period (10-30 minutes, or more).  This prevents one computer from breaking into another by trying password after password.  But with WiFi you don’t need to keep retrying the password and running into defensive measures.  Anyone can listen in and record the traffic going over your network in a log file, even if what is recorded is encrypted.   So what if you could take a log of the encrypted traffic from a network and process it elsewhere to figure out what Pre-Shared Key was being used to encrypt it?  Yes, that can be done using one of two techniques.
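The online-guessing defenses described above (a delay after each failure, then a lockout) can be sketched in a few lines.  This is a minimal illustration, not any particular system's implementation, and the threshold and lockout values are arbitrary assumptions:

```python
import time

class LoginThrottle:
    """Sketch of online password-guessing defenses: a failure counter per
    user and a lockout window once too many wrong passwords come in.
    An offline attack on captured WiFi traffic never hits this code path."""

    def __init__(self, max_failures=5, lockout_seconds=600):
        self.max_failures = max_failures
        self.lockout_seconds = lockout_seconds
        self.failures = {}  # user -> (failure count, time of last failure)

    def allowed(self, user, now=None):
        now = time.time() if now is None else now
        count, last = self.failures.get(user, (0, 0.0))
        # Locked out until lockout_seconds have passed since the last failure.
        if count >= self.max_failures and now - last < self.lockout_seconds:
            return False
        return True

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count, _ = self.failures.get(user, (0, 0.0))
        self.failures[user] = (count + 1, now)

    def record_success(self, user):
        self.failures.pop(user, None)  # reset on a correct password
```

The point of the sketch is the asymmetry: an online guesser is forced through `allowed()` on every attempt, while someone who recorded your encrypted WiFi traffic can test keys offline as fast as their hardware permits.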

The first technique is a Dictionary Attack.  This is pretty simple, just imagine trying a list of words/phrases that people might use as their Pre-Shared Key.  For example, what if you use the name of your dog as your Pre-Shared Key?  Like Rover or Spot.  Or the name of one of your children?  Even a longish name like Christopher is going to fall rather quickly to a dictionary attack.  For $17 you can perform this dictionary attack at WPA Cracker.  Of course once the tool figures out what your Pre-Shared Key is then someone can use it to connect to your network and manipulate the network traffic.  Oops.
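The mechanics of such an attack are simple to sketch.  WPA2-PSK derives its Pairwise Master Key from the passphrase and the network's SSID using 4096 rounds of PBKDF2-HMAC-SHA1; a real cracker verifies candidates against a captured four-way handshake, but comparing derived keys directly shows the idea.  The SSID and word list here are made up:

```python
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    # WPA2-PSK derives a 256-bit Pairwise Master Key from the passphrase
    # and SSID with 4096 iterations of PBKDF2-HMAC-SHA1 (per IEEE 802.11i).
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)

def dictionary_attack(candidates, ssid, target_pmk):
    # Try each candidate word; in a real attack the derived key is checked
    # against a recorded four-way handshake rather than the PMK itself.
    for word in candidates:
        if wpa2_pmk(word, ssid) == target_pmk:
            return word
    return None

# Hypothetical example: the network owner used their dog's name.
ssid = "HomeNetwork"
target = wpa2_pmk("Rover", ssid)
print(dictionary_attack(["Spot", "Christopher", "Rover"], ssid, target))  # Rover
```

Note that the 4096 PBKDF2 rounds slow each guess down, but they are no obstacle when the key is a word anyone would put in a word list.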

The second technique, a Brute Force attack, was until recently considered practical only for spy agencies like the U.S. National Security Agency because it requires massive amounts of computing horsepower.  The computing power to perform a brute force attack on WPA/WPA2 is now within the purchasing power of a criminal entity, and even more importantly can now be rented from cloud providers such as Amazon.  The cost of a brute force attack using Amazon’s cloud?  $1.68.
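A back-of-the-envelope model shows why renting makes this feasible.  Every number below is an assumption for illustration only, not a measured figure, but the shape of the arithmetic is what matters: short keys fall cheaply, and each added character multiplies the bill:

```python
# Illustrative cost model for a rented brute-force run.
# All three inputs are assumed, not measured, figures.
keys_per_second = 400_000    # assumed aggregate guess rate of rented hardware
price_per_hour = 2.10        # assumed hourly rental price
keyspace = 26 ** 6           # exhaustively trying every 6-lowercase-letter key

hours = keyspace / keys_per_second / 3600
cost = hours * price_per_hour
print(f"{hours:.2f} hours, ${cost:.2f}")   # under an hour, under a dollar
```

Rerun the same arithmetic with a 16-character alphanumeric key (36**16 possibilities) and the cost climbs past anything a criminal would pay, which is exactly the mitigation discussed below.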

There are ways to mitigate the weaknesses in WiFi security, but few of us are putting them to use.  And vendors aren’t doing a very good job of helping.  For example, using SSL (https) for as much web communication as possible is one mitigation.  Google made SSL the default for accessing Gmail over a year ago.  Both Facebook and Hotmail have made this an option.  For example, in Hotmail you can either go to https://www.hotmail.com or you can change a setting in Hotmail to always access it this way.  If you use Firefox as your browser you can automate the use of SSL with many websites using HTTPS Everywhere, while KB SSL Enforcer adds this capability to Chrome.  As far as I know Internet Explorer users are out of luck.
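What tools like HTTPS Everywhere do can be sketched as a URL rewrite against a list of sites known to support SSL.  The list below is a tiny stand-in for the curated rulesets such extensions actually ship, and the function is an illustration, not their real implementation:

```python
from urllib.parse import urlsplit, urlunsplit

# Stand-in for a curated ruleset of sites known to serve HTTPS correctly.
HTTPS_CAPABLE = {"www.hotmail.com", "mail.google.com", "www.facebook.com"}

def upgrade_to_https(url: str) -> str:
    """Rewrite an http:// URL to https:// when the host is known to support it."""
    parts = urlsplit(url)
    if parts.scheme == "http" and parts.hostname in HTTPS_CAPABLE:
        parts = parts._replace(scheme="https")
    return urlunsplit(parts)

print(upgrade_to_https("http://www.hotmail.com/inbox"))  # https://www.hotmail.com/inbox
```

The reason a curated list is needed at all is that blindly rewriting every URL breaks sites that don't serve HTTPS, which is why this ended up as a browser extension rather than a browser default at the time.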

Another mitigation is to use a VPN to encrypt all traffic over a WiFi network.  Of course, why the major hotspot providers like AT&T and T-Mobile don’t offer this as part of their service is beyond me (well, not really, VPNs are expensive to install and operate).  Many people use their work VPN.  But if you don’t have one, or your company prohibits or limits personal use, there are a number of low-cost consumer VPN services out there.  I use WiTopia whenever using a public hotspot with my notebook or iPad (though sadly it doesn’t help with my Samsung Focus since Windows Phone 7 doesn’t yet offer a VPN client).

As for those dictionary and brute force attacks, a very reasonable mitigation is to use a long random string of characters for the Pre-Shared Key.  Random prevents dictionary attacks.  Long makes brute force attacks more costly.  Rover is horrible.  JohnPaulGeorgeRingo is only slightly better.  Try something like dpe3kd6aq39lyash (i.e., 16 random characters), which is likely safe for a number of years.
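Generating such a key is trivial to automate.  Here is a minimal sketch using Python's `secrets` module, drawing from lowercase letters and digits as in the example above; 16 characters from that 36-symbol alphabet gives roughly 83 bits of entropy:

```python
import math
import secrets
import string

def random_psk(length: int = 16) -> str:
    # Draw each character independently from a cryptographically secure
    # source; lowercase letters and digits, as in the example above.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

key = random_psk(16)
bits = 16 * math.log2(36)
print(key, f"~{bits:.0f} bits of entropy")
```

Because the key is random, no dictionary contains it; because it is long, the brute-force keyspace is 36^16, many orders of magnitude beyond a rentable attack.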

Are there solutions to these problems?  Certainly, but it will be many years before they are introduced and achieve widespread deployment.  They will require the deployment of a new generation of WiFi Access Points, routers, adapters, and other technology.  And the industry has just barely begun to think about a new generation of solutions.  In the meantime we’ll see a move to wider use of SSL on web sites and browsers that automatically try https before http when accessing web sites.  That will surely make things better, but it will still leave plenty of gaps in internet security.  And given that we are looking at the next generation (at least) of browsers before SSL becomes the default, we’ll be living with the current situation for 12-18 months or more.

As for AT&T’s automatic switching of your phone’s data connection to their WiFi Hotspots, until they automatically encrypt the data flowing over WiFi I’d disable the ability to connect to their Hotspots.  That is easy to do on a Windows PC where you can specify not to automatically connect.  But on smartphones like the iPhone you have to delete its knowledge of the AT&T WiFi hotspot and remember to re-delete it anytime you intentionally use one.  Otherwise you are putting all your data communications out in the open for all to see.

Posted in Computer and Internet, Privacy, Security

Is Bing Putting the Squeeze on Google?

There were two interesting pieces of news out this week.  One was news from Experian Hitwise adding to other evidence that by some measures Bing has grown to about 30% of the search market in the U.S.   The second interesting piece of news is that Google reported disappointing earnings.  Are these two pieces of news related?  I doubt it, but they do leave Google between a rock and a hard place.  I’ll go a little into Bing’s rise and why Google now has a real problem on its hands.

If you go back four years, Microsoft had put version one of Bing (then known as Live Search, having recently been renamed from MSN Search) into the market and elevated that effort from having a supporting role in MSN to being a first-class initiative of its own.  At the time Live Search had a very small market share, mostly driven by users of its MSN portal.  Microsoft faced a huge set of hurdles in its attempt to go from a distant 3rd in the search market to a strong 2nd (and perhaps someday a real challenger for the #1 position).

Hurdle #1 was the poor performance of its core search engine.  Live Search just produced horrid results compared to Google and that was immediately obvious to any user.  The experience with Google was that you asked a question and links to the answer were at the top of the page.  Sometimes it seemed like Google read your mind.  The experience with Live Search was that you got an answer related to the words you typed but not necessarily the links that you were looking for.  So Microsoft threw huge resources into improving its core search engine performance, which is now in the same league as Google.  Both companies will continue to invest in improving their core search engines, but it is unlikely that either will be able to get sustainably ahead of the other in the future (unless someone screws up and dis-invests in core search).

Hurdle #2 was finding a way to change the search experience.  Just creating a search engine that was “as good as” Google wasn’t likely to bring users to Bing.  Microsoft needed to change the definition of search and make the experience very different.  It couldn’t be yet another search page that provided “10 blue links”.  You can see this evolution accelerate over the last four years, starting with the amazing background image that Bing displays on its homepage each day.  People go to Bing just to see that image.  People use screen savers that grab those images.  Then Bing focused on being a “Decision Engine”, helping people get the answer they were looking for rather than just a set of links.  And if you want to see the latest step take a look at the new Bing for iPad which is almost a shell you want to live in all day rather than just someplace you go to search.  I don’t think Microsoft is done redefining the search experience to be a Bing experience rather than a Google experience, but they’ve made good progress.

Hurdle #3 was addressing the marketing and (particularly) distribution problem.  Let’s just start with the name “Live Search”.  Google had become a verb.  How do you tell people to “Live Search it” instead of “Google it”?  You don’t.  So Microsoft went looking for a rebrand that would be catchy, could serve as a verb, etc. and landed on Bing (which I think ended up being perfect).  But there was a more immediate problem.  Google (as well as Yahoo) was paying computer manufacturers, printer manufacturers, high volume software providers, and everyone else they could think of to set the search default in the browser on PCs to be theirs and to distribute their toolbar.  And they were actually making it difficult for another piece of software to reset the default.  So a very large portion of the searches that Google was getting were from these default settings rather than from the user explicitly going to Google.  And then the user gets used to Google, so even when they go to a search engine home page it is Google’s.  Microsoft had to get its own search defaults and toolbar out there.  While it could do some of that via its existing mechanisms (e.g., when someone downloads Windows Live  or MSN you could include the toolbar and offer to set the search default to Bing) it needed to start to compete for search default/toolbar deals with third parties.  Eventually Microsoft won distribution deals with HP, Sun (for the Java installer), Dell, and others.  Microsoft effectively was able to declare victory here.  These deals are expensive, and no doubt represent a significant portion of the losses that Bing contributes to Microsoft’s bottom line.  And that is where the next item comes in.

Hurdle #4 was critical mass.  This is the crucial one because it is really all about search economics.  Search Advertising is an auction system.  Advertisers bid to have their links displayed at the top of the search results whenever results for a particular keyword appear.  As in any auction system, the more bidders there are the higher the price you have to pay for your ad.  Google has such high search market share that everyone wants to advertise there, while Live Search had such low market share that there were few advertisers.  A keyword on Google might command $10 per click to be the first paid link displayed.  That same keyword on Live Search might have commanded only $2.  Microsoft had to, and I’m not sure they’ve done it yet, attract enough advertisers to get the price per click up to levels comparable with Google’s.  This is important for two reasons.  The first is just the obvious business point that if your revenue = clicks * price-per-click, then financially you want to grow both clicks and price-per-click.  But more specifically, think of things like those search distribution deals.  Take an extreme example and say that Google was offering to give a PC maker 50% of its click revenue from the PCs they sell, or $5 per click.  Now Microsoft has to match that to get the deal, but Microsoft is only taking in $2 per click.  Microsoft would lose $3 every time someone clicked on an ad!  So Microsoft had to get more advertisers, which would drive the price-per-click up and make Bing financially viable.  But with 5-10% market share that just wasn’t going to happen.  I can tell you that when I was doing PredictableIT I stopped our advertising campaign on Live Search because although the price-per-click was low I was getting just about no clicks.  So I just concentrated my effort, and budget, on Google and Yahoo.  Microsoft went looking for an answer to this problem and settled on a search deal with Yahoo.
With Bing powering Yahoo-search, and a joint effort to sell search advertising, Microsoft’s market share would be high enough to attract advertisers and drive the price-per-click up.  Yahoo using Bing for search, and the joint advertising effort, have been in operation for only about 6 months so it is too early to know if it is working on the financial front.  But as market share rises the economics should look increasingly good for Microsoft.  At a minimum they should do well enough to continue the fight, and that’s where things get interesting given Google’s latest financial results.
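The distribution-deal arithmetic from the extreme example above, written out (all figures are the hypothetical ones from the example, not real prices):

```python
# Hypothetical prices from the example: the same keyword clears at $10 per
# click on Google but only $2 on Live Search, and Google offers a PC maker
# half of its click revenue.
google_price, bing_price = 10.00, 2.00
payout = 0.50 * google_price            # $5.00 paid to the PC maker per click

google_margin = google_price - payout   # Google keeps $5.00 per click
bing_margin = bing_price - payout       # Microsoft, matching the deal, loses $3.00
print(google_margin, bing_margin)       # 5.0 -3.0
```

The only variable Microsoft can move in this equation is its own price-per-click, and the only way to move that is more advertisers bidding in the auction, hence the Yahoo deal.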

The big issue in Google’s latest financial results is that they are experiencing margin pressure because of a spike in expenses.  Fundamentally, Google has just one significant revenue stream (Search Advertising) but is investing wildly in other areas.  It is investing in a suite of applications (Google Apps) to compete with Microsoft Office.  It is investing in two operating systems (Android for phones/tablets and Chrome for PCs and perhaps tablets).  It is investing in and running a free VoIP phone service.  And it is trying hard to find a way to compete with Facebook in the social networking space.  And none of these things directly results in significant revenue.  Google doesn’t want to drop these investments because in the long term that would leave them even more dependent on the classic PC web search advertising business.  But at the same time, with Bing’s market share growth, and its apparent success in starting to redefine Search, Google has to put more focus (and likely spending) on its key business.  Not only that, but as advertisers find Bing a place to put a larger percentage of their ad budget, Google will see its price-per-click drop.  By keeping up the pressure Microsoft can make it increasingly difficult for Google to maneuver without going into a financial tailspin.

The real question here is what can Google do to directly defend itself in search?  Well, more innovation, including redefining Search itself, is one answer.  The problem is that Google would then risk alienating its customers (think: New Coke) and giving them more reason to try, and perhaps stick with, Bing.  Another is to compete to win back search deals (and deals for its other offerings, such as the Chrome web browser) as the default on various PCs.  The problem here is that Bing’s increased market share means Microsoft can force the bidding even higher, causing Google to pay so much that it achieves a pyrrhic victory.  The bottom line could be that Google’s days of ever-increasing market share and “obscene” profit margins may be coming to an end.

But I do want to get real here.  Google’s worldwide market share is as dominating as ever, they have a loyal customer base, and “Google” is still a verb.  Google needs to keep in mind that if they stumble Microsoft is right there and well positioned to take advantage of Google’s woes.  And that is exactly how Microsoft has defeated market leaders in the past.

Posted in Computer and Internet, Google, Microsoft, Search

Quick Take on Windows Phone at MIX11

I’m not going to try to do a deep dive here, just a few comments on what I’ve seen so far in reading about and watching talks as compared to what I’d predicted.

My biggest miss was in predicting Silverlight 5 (or a hybrid) hitting Windows Phone.  It was wishful thinking on my part.  A couple of years back it certainly did look like Windows Phone would stay about a version behind the PC for the foreseeable future, but I hoped that would change.  Maybe it will eventually, but not this round.  BTW, I think the biggest issue here comes up if they put Silverlight support in the browser.  That’s where the version consistency and Silverlight’s historic update strategy really come in (so that a website doesn’t have to do downlevel version support).

There was very little explicitly discussed about Enterprise-oriented features, but then most of these aren’t developer issues.  Whole-drive encryption and more complete support for the management capabilities of Exchange ActiveSync (EAS), or indeed EAS-provided end-user features, as well as VPN capabilities, are not developer oriented.  So they may yet be in Mango, this just wasn’t the place to discuss them.  On the other hand, some of the lower level features that Enterprises might require were.  For example, I’m guessing that the lack of Socket access was one of the things keeping a Lync client away.  With Mango that is solved, though it appears you’d still have to keep the Lync client in the foreground during a call/conference.  And of course the inclusion of SQL Server CE makes the creation of more enterprise-oriented apps (e.g., CRM clients) far easier.

Full exposure of the platform is one I think I nailed pretty well given the exposure of 1500 new APIs.  This one is key for many apps that currently either can’t run on Windows Phone 7 or are clumsy.  For example, the great bar-code reading apps on the iPhone have no good equivalents on Windows Phone 7.  The ones that do exist require you to take a picture with the camera and then they try to analyze the picture after the fact.  This rarely works because you never get the barcode aligned properly.  With Mango an app can directly access the camera and let you align the bar code inside guides (as is done with iPhone apps).

Interestingly there was very little on a next-generation chassis.  Well, they did say three important things.  They added a gyroscope to the chassis spec.  They added support for another processor (really another entire SoC).  And they dropped the idea of a 480×320 (HVGA) screen resolution, so they are back to just having 800×480 (which all existing devices had implemented).  And although they said the chassis spec was more flexible, the only example they are currently willing to talk about is that the gyroscope is optional.  Now the interesting thing here is that they dropped HVGA.  I believe this was originally done to allow for two things: a lower-cost device (since the WVGA screen is expensive in screen cost, memory usage, and CPU usage) and to make a Blackberry-style physical keyboard slab phone possible.  Conceptually this seemed important a couple of years ago, but a lot has changed over those couple of years.  When the Windows Phone 7 project started you had only a single successful consumer smartphone, the iPhone 3/3G.  Android had just shipped on the T-Mobile G1 and had essentially zero market share.  These devices were expensive compared to things like the old Samsung Blackjack, Motorola Q9, various Symbian devices, and of course RIM was at its peak with the Blackberry.  So OEMs would have wanted an alternative to building what was then considered an extremely high-end smartphone.  However, over the last two years the iPhone has continued to grow rapidly, with Android not only becoming successful but actually eclipsing the iPhone’s market share.  In addition, specs that looked too high-end two years ago, or even a year ago, are now mid-range.  A lower cost slab with fixed keyboard would appear to be a niche product, so perhaps the OEMs didn’t care if Microsoft simplified things by dropping the notion of supporting HVGA.  Certainly developers should be happy about this.
Now the only questions are, will there be another Nokia-inspired screen resolution added in the future or has Nokia agreed to go with WVGA?  And what additional details about its “flexible chassis” will Microsoft reveal in the future?

Microsoft did allow XNA and Silverlight in the same app, which is great.  And they didn’t talk about native apps, which is not a surprise.

On the platform completion front obviously I’ll just point out a couple of developer-related things.  For example, the Generational Garbage Collector was part of the original design for the Windows Phone 7 .NET Compact Framework but was a risky thing to squeeze into that schedule.  Nice that they were able to finish it up for Mango.  And I’m pretty sure Agents were always in the App Model that Istvan’s team had developed for Windows Phone 7, they just couldn’t make the release last October.  They are a huge addition for Mango.

That’s about it for now.  The release looks great so far, and there are no doubt a number of great surprises yet to be revealed.

Posted in Computer and Internet, Microsoft, Windows Phone

The Continuing Microsoft and AT&T Windows Phone “NoDo” Debacle

Let’s see now, the “early 2011” Windows Phone 7 update (now known as NoDo) from Microsoft turned into a March update which in the case of AT&T customers is now starting to look like a late April or even May update.  Microsoft, sadly, is still being kind to its operator partners by downplaying the delay.  At first I thought that maybe this was a Samsung Focus specific problem, particularly since we know there was a problem with a small number of Focuses when they tried to deliver a pre-NoDo update.  But this delay is also impacting the LG Quantum and HTC Surround.  So, what’s going on here?

I have no idea, but the fact that it is impacting NoDo delivery for all AT&T phones suggests that AT&T is really screwing this up.  In fact, even if it is a Microsoft problem I think AT&T is screwing this up.  Ok, they both are.  Why not come out with a statement that either says “hey, we didn’t start testing soon enough and we are sorry for the delay” or “we found some issues in our testing and we are working through them”?  Or why not provide a more precise target date (which yes, they could embarrassingly miss yet again)?  Instead we get near silence, and enough consumer displeasure to make one want to write a letter to the FCC asking them to block AT&T’s acquisition of T-Mobile (which shipped NoDo for the HTC HD7 a while back).  The current status of an indeterminate testing period (with a completion period that has already passed… it still says early April although it is now mid-April) just makes both AT&T and Microsoft look incompetent.

The standard by which Microsoft is being judged is Apple’s iPhone.  Now certainly with multiple devices and many carriers Microsoft’s task is far more complex than Apple’s.  But you would think that the “Premier Launch Partner” for Windows Phone 7 would have worked with Microsoft to make sure the leading Windows Phone 7 device (the Focus) would have been first in line to get NoDo.  Instead this situation is tarnishing everyone: Microsoft, AT&T, and even Samsung.

Right now if I could get an HTC HD7-like world phone on Verizon I’d dump AT&T in a heartbeat (and not just because of the update debacle, that is just the last straw).  And while my patience with Windows Phone is likely to last until the end of the year, it isn’t going to last forever.  Apple is going to have the opportunity to woo me back to their fold with the iPhone 5.  And though I’m not particularly a fan of Android, there is such rapid innovation going on in that space that there is an outside chance a device will come along that I just can’t resist.

So Microsoft, you and AT&T need to get the situation around updates fixed asap.  You are still screwing up the communications as well as the delivery.  And customer patience is wearing thin.

Posted in Computer and Internet, Microsoft, Windows Phone

What I expect for Windows Phone at MIX ’11

At first I was amused by articles suggesting that most of what is coming in the Windows Phone 7 “Mango” update had already been revealed at Mobile World Congress (MWC).  Now I’m amused by some of the leaks of much more that may be revealed at Microsoft’s MIX ’11 conference starting tomorrow morning.  I’m going to give a quick overview of what I expect, and then sit back to watch (well read, via other blogs, since I’m not there to actually watch) the fun.

The reason I was so amused by people who thought we’d already heard the bulk of what to expect from Mango is that what has already been announced was only enough to keep maybe 5% of the Windows Phone development team busy.  An observer needs to keep three things in mind.  One, quite obviously, is the competitive and customer pressures.  Second, the unique innovations Microsoft could want to bring to the table.  And third, how big the development team is.  The Windows Phone development team is quite large, and while I don’t know its exact size (and wouldn’t reveal it if I did), it is fair to assume that there are several hundred developer, test, and program management people working on Mango.  You also can’t have several hundred people dancing on the head of a pin, so it is fair to assume that those people are spread across all aspects of Windows Phone and thus we should see advances in every area of the platform.

MIX is a designer/developer conference, so what we hear about this week are going to be those aspects of “Mango” that it is important developers start working with.  Microsoft will likely retain a few key surprises for nearer Mango’s RTM or GA, but those would have little or no developer impact.  There also could be a few developer-oriented things that surface after MIX, but I’d expect those to be modest.  This would happen if, for example, Microsoft wasn’t quite sure that it could complete the work and thus wanted to wait for more certainty before making them public.

I think we’ll see the following general themes for Mango:

WP7 Completion – WP7 was a schedule-driven release and Microsoft made many tradeoffs to ensure it would ship last fall.  Now they have to go back and pick up a lot of the higher priority items that were dropped to make the schedule.  For example, we all know that Landscape orientation support is incomplete.  I think they will fix that.  There will be dozens of things they polish up, from major items to little annoyances, to make the WP7 experience complete.  Only a few of these will be revealed at MIX, because most don’t have developer impact.

Fully Expose the Platform – There is a great deal that already exists in WP7 but there are no (or incomplete) APIs for accessing it.  This was mostly a time to market decision, but also a philosophical decision to focus on doing a great job exposing the most important aspects of the platform over doing a mediocre job of exposing everything.  Over the year since MIX ’10 Microsoft has received a huge amount of feedback from developers on what they really want to be able to do with the platform that they currently can’t, and will undoubtedly expose far more of it as a result.

Track Key Microsoft Technologies – We’ve already seen that Mango will be picking up IE9 and there have been plenty of rumors about Silverlight 5.  Mango almost certainly will have Silverlight 5 (or some variation of it, perhaps something that is a 4/5 hybrid).  It is a pain in the neck for Microsoft’s Developer Division to have separate forks of the Silverlight tree going on simultaneously.  Not only because the Silverlight team wants to keep the versions in sync on all platforms, but because teams like Expression Blend and Visual Studio don’t want to have too many targets to deal with.  Another example, the Visual Studio Lightswitch team probably would love to target Silverlight 5 (and later) as the only version of Silverlight they support in their final product.  They would then want Windows Phone to include Silverlight 5 so there is one less thing they need to deal with when they support it.  Also no surprise is HTML5 taking a bigger role via having IE9 on the device.  But will Microsoft start to expose HTML5 in a broader way with Mango?  I think it depends on how much Microsoft is going to just be talking about HTML5 vs how much they’ll be handing out HTML5-related tool developer previews.  This goes way beyond Windows Phone, though we could be in for some Windows Phone surprises.

Enterprise Focus – For Windows Phone 7 there was an explicit decision to put enterprise-related features on the back burner and really drive to create a good consumer product.  This leaves the Blackberry as the Gold standard for enterprise-oriented smartphones.  It used to be that Windows Mobile took the Silver, but that position is now held by the iPhone.  And Google is working to make sure Android can medal as well.  Windows Phone 7 isn’t even on the podium.  That means Microsoft isn’t fully exploiting some of its key strengths, such as its large enterprise sales force and all of the other products that could be doing innovative things with Windows Phones.  For example, where is a Lync (unified communications) client for Windows Phone?  It requires features not in WP7 that I predict will be in Mango.  What about a database on the phone and the ability to sync with SQL Server?  I suspect we will see some variation of SQL Server CE provided in Mango.  There may be dozens of enterprise-oriented projects at Microsoft backed up waiting for Windows Phone features that hopefully Mango will bring to the table.  And then there is VPN support.  I think it would be a stretch for Microsoft to add DirectAccess support in Mango, but hopefully there will be a VPN client.  Of course, the alternative would be to add enough platform support so third-parties (e.g., Cisco or Juniper) could build their own VPN client.  In any case, I think Mango will redefine where Windows Phone 7 sits in the enterprise smartphone category, garnering Microsoft a position on the medal stand.

Next-Generation Chassis – OK, I don’t think we will get all the details of a next-generation chassis because not all of it is important to developers.  But if there are going to be new screen resolutions, new mandatory sensors, etc., then we are likely to see them revealed at MIX ’11 and supported in a version of the tools (including emulator) that Microsoft will likely hand out at MIX.

More Lead, Less Follow – One of the problems with Windows Phone 7 was that the crisis of getting into the consumer smartphone arena left Microsoft as a follower rather than a leader.  There were places where Microsoft certainly showed leadership, such as in gaming with Xbox Live.  And Microsoft certainly has more catch-up left to do with iOS and Android.  But at the same time Microsoft Research, Bing, and others at Microsoft have been doing a lot of innovative work, and Mango is the first opportunity for that work to start making its way into Windows Phone.  Hopefully we’ll see a few “hey, we’re past playing catch-up” things revealed this week.

There are many other things that might come out at this MIX.  I saw one rumor of a merged Silverlight and XNA.  This is of interest, but not simple given the incompatible programming models they present (event-driven in Silverlight vs. polling in XNA).  It would be great if Microsoft has really resolved this.

Another long shot is allowing native code applications on Windows Phone.  The downside of native code apps is that they are more likely to reduce phone reliability, battery life, etc. than managed code apps are.  Also, they may rely on quirks of Windows CE that Microsoft would want to change in the future.  Microsoft doesn’t want to create a lot of legacy that would keep it from aggressively moving Windows Phone forward.  To put another spin on it: because managed code Windows Phone apps are written completely in technology (Silverlight, .NET) that runs on mainstream Windows, those apps probably will be able to run on “Windows 8” slates (assuming Microsoft ports over some Windows Phone-unique libraries).  The same likely wouldn’t be true for native code apps.  A compromise position would be for Microsoft to allow special classes of developers to use native code while not enabling it for the broad developer community.  Top-tier game companies are an example.  Another example might be the case I previously mentioned of allowing Cisco and Juniper to create VPN clients.  Of course, even if Microsoft does this they might not reveal it at MIX, since they could approach this small set of developers directly.

So that’s my take on what we might see starting tomorrow at MIX ’11.  I know I focused mostly on broad categories rather than specifics, but that is because I’m going on strategic thinking rather than trying to play back specific leaks or rumors I’ve heard.  By the end of this week we should have a pretty good idea of just how much Microsoft has been able to accomplish in the short time since Windows Phone 7 RTM.  If they’ve done enough, it certainly will re-ignite the excitement around Windows Phone 7.

Posted in Computer and Internet, Microsoft, Windows Phone | 1 Comment

TRUSTe gets its Tracking Protection List act together

When Microsoft added the Tracking Protection List (TPL) feature to the Internet Explorer 9 beta, TRUSTe made available a beta TPL that was widely criticized.  The problem with the TRUSTe beta TPL is that TRUSTe apparently just took everyone who had a TRUSTe seal and added them to the list as parties allowed to perform third-party tracking.  Many, including the Electronic Frontier Foundation, accused TRUSTe of having created an anti-privacy list!  Well, TRUSTe has just (April 4th) released its production TPL, and it has taken a completely different approach.

What TRUSTe has now done is create a set of third-party data collection principles designed to protect privacy on the web.  Only those third parties who agree to the principles are on the allow side of TRUSTe’s TPL, and it is currently a very short list.  Only 17 web addresses are on the allow side of the list, a far cry from the nearly 4000 that were on the beta list.  TRUSTe also appears to be adding the web addresses of trackers known not to adhere to the third-party data collection principles to the block side of the TPL (there are currently 88).  This puts TRUSTe well on the way to creating a list in which the “Good Guys” can track you but others can’t.

Right now it looks like this story hasn’t been picked up by many sources, so articles about the TRUSTe list are based on its beta TPL and are extremely critical.  Don’t necessarily believe what you read.

Posted in Computer and Internet, Microsoft, Privacy | 1 Comment

It’s time for Windows XP to die

In a world where computer security and internet privacy are critical issues, one of the biggest problems we have is the continued use of Windows XP.  Fortunately there is progress in replacing Windows XP, which is now used on less than a third of PCs here in the United States.  The story is not as good worldwide, where Windows XP is still used on almost half of all PCs.  So let’s explore why Windows XP should be aggressively abandoned.

Windows XP was a well-received operating system, and it seems that both users and organizations love it.  Not with passion, but with a reverence for doing exactly what they needed it to do.  Unfortunately, Windows XP was not designed for the modern Internet threat environment.  The design for a merged desktop operating system (combining the full consumer experience and compatibility of Windows 95 with the modern underpinnings of Windows NT) started in 1996 under the banner NT 5.  Microsoft couldn’t quite get all the compatibility work done in a single release, so NT 5 was released as Windows 2000, with the Professional edition targeting business desktops.  Microsoft then did a quick turnaround release, internally NT 5.1, to finish the compatibility work and bring NT technology to the mainstream.  NT 5.1 was released as Windows XP.

It is hard to imagine now, but 1996 was just one year after the Internet fully transitioned to a commercial endeavor.  In 1996 the Internet was a fairly safe place, and it wasn’t until 1999 that we started to see the use of the Internet to spread malware.  And it wasn’t until 2001, after Windows XP was completed, that we started to see the explosion in malware that exploits flaws in the operating system.  And so Windows XP was unprepared for the threat environment into which it was introduced.  As the attacks mounted, Microsoft was forced to pause development of the next version of Windows (“Longhorn”, which would eventually ship as Vista) while it revised its development processes to focus on security (what is now known as the Security Development Lifecycle) and undertook a complete security review of Windows XP.  This resulted in Microsoft fixing hundreds of potential Windows XP security issues with Windows XP Service Pack 2 (SP2).  But what Microsoft couldn’t do in SP2 was alter the basic design of Windows XP, so those security changes were deferred to Windows Vista.

The single biggest problem with security in Windows XP is the result of its work to maintain compatibility with the Windows 95 family and earlier operating systems.  Because those operating systems were built as single-user systems, there was never a notion of separating normal use from administrative use, and many applications took advantage of this.  Windows NT did have the notion of giving users different permissions, but if you tried to use this feature you found many applications wouldn’t run.  For example, from personal experience I know that some Intuit products still needed administrative permission as late as 2006.  So when Windows XP was released in 2001, it was necessary for Microsoft to give all users full administrative permissions by default in order to retain the highest level of compatibility with Windows 95/98/98SE/ME.  This means any application you run on Windows XP can modify the system as though it were the system administrator.  Malware takes advantage of this to modify system files, the registry, etc.

In Windows Vista Microsoft came up with a solution to the problem of always running as an administrator.  It would run all applications without administrative privileges, and if an application needed to run as an administrator it would ask the user for the OK.  The idea behind this design was that if the user was running a well-known application they would OK the use of administrative privileges, whereas if they didn’t really trust the application (such as something they had just downloaded over the Internet, or that arrived as an attachment in an email) they would refuse to grant the privileges.  This is the feature known as User Account Control (UAC).  Sadly, in its first incarnation UAC resulted in far too many prompts and was one of the reasons that Windows Vista was poorly accepted.  However, in Windows 7 UAC was refined so that users no longer find it a burden.  (And if the rumors about Windows 8’s use of reputation are true, then a UAC prompt would only occur when the safety of the application is suspect.)

UAC, by itself, is a reason to dump Windows XP and move to Windows 7.  There is a vast amount of malware out there that requires administrative privileges and thus will cause damage on a Windows XP machine but will do no harm on Windows Vista or Windows 7.

The list of security design changes introduced in Windows Vista, and further refined in Windows 7, is long (and I’m not going to go over them all).  Application isolation features let browsers such as Internet Explorer 7 (and later) and Google Chrome implement a “protected mode” in which browser windows operate in a sandbox that limits the damage they can do.  This is likely one of the key reasons that the new Internet Explorer 9 won’t run on Windows XP.  The more Internet Explorer tries to take advantage of the secure environment provided by Windows 7 (and Windows Vista), the harder it is to keep Internet Explorer running on Windows XP.  I think the team finally said “hey, if we really want to focus on security then let’s only run on secure operating systems”.

Other features such as DEP, ASLR, and a host of hardening features in the 64-bit version of Windows (which everyone should be using when they install on a new machine) make it very difficult for malware to infect systems.  Some of these, for example DEP, have their roots all the way back in Windows XP SP2.  But while DEP was first introduced in SP2, its use has been ramped up over time, so that in a Windows 7 64-bit environment it is always on, whereas in Windows XP SP2 it was rarely turned on due to compatibility issues.

The results of Microsoft’s focus on security in Windows Vista and Windows 7 show.  Consider this quote from SIRv9: “Windows 7 has consistently had a lower infection rate over the past four quarters than versions of Windows Vista, which have consistently had lower infection rates than versions of Windows XP since the original release of Windows Vista in 2006.”

Of course, in addition to all these design improvements that enhance security, the newer Windows versions introduce many new security features.  Many of these are targeted at enterprise users, and others are used under the covers.  For example, if you run the latest version of Microsoft Security Essentials on Windows 7, it uses the new Windows Filtering Platform to block malware at the network level.

The key takeaway here is that Windows XP was not designed for, and is inappropriate for, today’s Internet threat environment.  Moving away from Windows XP makes you, and the entire Internet, safer.  Newer Microsoft offerings, particularly Windows 7, are designed for the modern threat environment.  So is the latest version of Mac OS X.  What are you waiting for?

Posted in Computer and Internet, Microsoft, Security, Windows | 5 Comments

Microsoft IE9 Tracking Protection Lists (Part 2 – The How)

In Part 1 of my blog series on the Microsoft Internet Explorer 9 (IE9) Tracking Protection List (TPL) feature I discussed why it was important to do something about tracking.  Now let’s talk just a little bit about how tracking and TPLs work, and then spend most of the article on which TPLs you should use with IE9.

When you view a web page, the various elements on that page typically come from multiple sources.  So, for example, if you go to www.msnbc.com the advertising on the page doesn’t come from MSNBC; it might actually come from DoubleClick (which is owned by Google, quite ironic since MSNBC is co-owned by Microsoft).  In fact many web pages are made of elements from dozens of different sources.  Some of those sources might not even display actual content; they might be invisible.  These invisible elements are there just to gather information, specifically that you visited the page.  These invisible tracking elements, or beacons, are an example of the techniques used for third-party tracking.  Another example is third-party tracking cookies.  Tricks like beacons are increasingly popular because most browsers have either blocked or limited third-party cookies for some time now.  IE has long blocked third-party cookies by default unless the third party publishes what is known as a “compact privacy policy”.  OK, that turned out to be a weak solution, so you can optionally set IE to always block third-party cookies if you wish.  But that can lead to web pages that don’t render properly.  Safari on the iPad and iPhone blocks third-party cookies by default, which is no doubt why some pages don’t render properly on those devices.  A better solution would be to only block third-party cookies, beacons, etc. that are specifically used for tracking.  This is what the TPL was designed to do.
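To make the beacon mechanism concrete, here is a minimal Python sketch (all domain names and cookie values are hypothetical) of the log a third-party tracker accumulates: every page embedding the tracker’s invisible image triggers a request that carries the tracker’s own cookie plus a Referer header naming the page, which is all it takes to assemble a cross-site browsing history.

```python
# Sketch of the server-side log a third-party tracking beacon builds.
# All domains and cookie values here are hypothetical.

def record_beacon_hit(profiles, cookie_id, referer):
    """Invoked for each request to the tracker's invisible 1x1 image.

    cookie_id -- the tracker's own cookie, identifying this browser
    referer   -- the Referer header, i.e. the page embedding the beacon
    """
    profiles.setdefault(cookie_id, []).append(referer)

# Simulate one browser (cookie "abc123") viewing three unrelated pages,
# each of which happens to embed the same tracker's beacon.
profiles = {}
record_beacon_hit(profiles, "abc123", "https://news.example/politics")
record_beacon_hit(profiles, "abc123", "https://shop.example/baby-gear")
record_beacon_hit(profiles, "abc123", "https://health.example/oncology")

# The tracker now holds a cross-site history for that single browser,
# even though the user never visited the tracker's site directly.
print(profiles["abc123"])
```

Note that the sites need not cooperate beyond embedding the beacon; the browser volunteers the cookie and Referer on every request, which is exactly why blocking the third-party request itself is so effective.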

Tracking Protection Lists are simply lists of web addresses that Internet Explorer should NOT communicate with unless the user directly goes to the site.  In other words, they tell Internet Explorer not to engage in third-party communications with that web address.  TPLs can also include instructions to explicitly allow third-party communications with a web address.  I’ll explain why that is important in a minute.  You could actually use TPLs in a number of ways beyond what Microsoft has explicitly designed them for.  For example, beyond limiting tracking you can also use them to actually block the display of third-party ads.  But we’ll focus on tracking protection.
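To give a feel for what these lists actually contain, here is a sketch of a TPL file following Microsoft’s published format as I understand it (the domain names are hypothetical): lines starting with “-d” block third-party requests to a domain, “+d” lines explicitly allow them, and an Allow rule takes precedence over a Block rule, which is what makes combining lists from different publishers workable.

```
msFilterList
: Expires=5
# Block third-party requests to these (hypothetical) trackers
-d tracker.example.net
-d beacon.example.org
# Explicitly allow this (hypothetical) party as a third party
+d trusted-ads.example.com
```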

What Microsoft hasn’t done is create an actual Tracking Protection List for users; it is relying on third parties to do that.  You can find a set of TPLs by clicking on the tools (gear) icon in IE9, then Safety->Tracking Protection.  There you will find a link to get “Tracking Protection Lists Online”.  Clicking it will show you a set of lists, with very poor descriptions of what they do.  Ed Bott did some analysis of the lists as of last February, and I’m using his data in my analysis.  You can use more than one list, and which one(s) you choose depends on what your goal is.

In Part 1 of this series I broke the actors doing the tracking down into three categories, which I’ll recast here.  “Good Guys” are those who are tracking you so they can provide a better web experience and who have strong privacy protection policies in place.  “Careless Guys” are those whose motives may be good, but whose privacy protection policies and/or operations are suspect.  “Bad Guys” are everybody else.  The question you have to ask yourself is: are you willing to let some of these actors (e.g., the Good Guys) track you while blocking the others, or do you just not want to be tracked at all?  As a reminder, letting yourself be tracked results in more appropriate ads (e.g., diaper ads for a new mother and sports car ads for an empty-nest guy going through a mid-life crisis).  And in the future you might find websites that block your access to content unless you allow them to track you and display appropriate ads.  So you need to decide just how much you want to give in to tracking paranoia.

The most important list you need to know about, and the one I recommend, is the EasyPrivacy Tracking Protection List.  This list is the most aggressive at blocking tracking.  The EasyPrivacy list is taken directly from the popular Adblock Plus add-in for Firefox, so it has a long history and a very active volunteer community dedicated to eliminating third-party tracking.  But of course, it is going to block the “Good Guys”.  Fortunately for all of us, if you want to re-enable the “Good Guys” then you can simply add another list.  TRUSTe’s TRUSTed Tracking Protection List is a list of what are supposed to be the “Good Guys” as determined by TRUSTe.
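That “add a second list to re-enable the Good Guys” trick works because of the precedence rule between lists.  Here is a small Python sketch of the combining logic (an illustration of the documented behavior, not Microsoft’s code, and with hypothetical domains): an explicit Allow from any installed list wins; otherwise any Block rule blocks.

```python
# Sketch of how multiple Tracking Protection Lists combine in IE9.
# Rule: an explicit Allow entry on any list overrides Block entries.
# All domain names are hypothetical.

def third_party_allowed(domain, lists):
    """lists is a sequence of (blocked, allowed) pairs of domain sets."""
    if any(domain in allowed for _, allowed in lists):
        return True   # some list (e.g. TRUSTe's) explicitly vouches for it
    if any(domain in blocked for blocked, _ in lists):
        return False  # blocked, and no list vouched for it
    return True       # mentioned on no list: not blocked

# An aggressive block list that also catches a "Good Guy"...
easyprivacy = ({"tracker.example.net", "goodguy.example.com"}, set())
# ...and a second list that explicitly allows that "Good Guy".
truste = (set(), {"goodguy.example.com"})

print(third_party_allowed("tracker.example.net", [easyprivacy, truste]))  # False
print(third_party_allowed("goodguy.example.com", [easyprivacy, truste]))  # True
```

This is also why the order in which you install lists doesn’t matter: the Allow side of every installed list is consulted before any Block rule fires.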

That sounds easy, doesn’t it?  Add one list if you are “paranoid” about tracking, then add a second if you want some tracking protection but don’t want to interfere too much with the web experience.  There is a caveat, of course: you have to trust TRUSTe to only have the “Good Guys” on its list.  I think for most users this is an OK assumption, though the TRUSTe “Good Guy” list is long enough that it makes me wonder if some “Careless Guys” are actually on it.  Still, nothing is perfect.

There are other choices in Tracking Protection Lists.  For example, instead of the two lists I mention above you could use the lists from PrivacyChoice.  There are two.  One blocks all third-party tracking that PrivacyChoice knows about; this would be the list for the “paranoid”.  The other only blocks tracking sites that are not part of the Network Advertising Initiative (NAI).  The NAI is another attempt at identifying the “Good Guys”.  So if you believe the NAI has a better program than TRUSTe, then the PrivacyChoice TPL blocking companies without NAI oversight is for you.

There are other lists, such as Abine’s (which includes one list specifically designed to block tracking on kid-oriented web sites) and Fanboy’s, which seems similar to EasyPrivacy.  These might be of interest, but require more investigation.

There you have it.  If I were you I’d go turn on IE9’s Tracking Protection List feature immediately.  Either EasyPrivacy + TRUSTe or EasyPrivacy alone.

Posted in Computer and Internet, Microsoft, Privacy | 2 Comments

Microsoft IE9 Tracking Protection Lists (Part 1 – The Why)

A very cool feature in Microsoft’s Internet Explorer 9 (IE9) is its support for Tracking Protection Lists (TPLs).  TPLs are a mechanism for limiting how much advertisers (and others) can track your activities on the web.  I want to explore why you might want to use a TPL; then in Part 2 I’ll go into a little about how they work and make recommendations about which TPLs you should use under various circumstances.

There are a couple of things to realize about “tracking” before getting into TPLs.  The first is that tracking is not necessarily bad.  Take Amazon.com as an example.  Amazon has long been known for tracking what its customers do on its site and then using that information to customize the user’s experience.  It is one of the things that people love about Amazon.  Many other web sites do the same.  The key here is that the tracking information stays within the site.  It is not shared with third parties.  And that is the key differentiator we are talking about here, so-called “first-party” vs. “third-party” tracking.  TPLs, and most (though not all) privacy concerns, are around third-party tracking.  The basic problem with third-party tracking is that it allows your behavior to be tracked across many web sites, allowing these third parties to know more about you than anyone on the planet besides yourself.  And doing so without your permission.  And potentially selling that information to anyone they desire, again without your permission.

If I know what books you bought, what medical conditions you researched, that you looked at adult toy websites, what health products you ordered, which political web sites you read, etc., I could draw an amazing number of conclusions about you.  Advertisers want to use this for seemingly innocuous purposes.  For example, you are sitting at a news site and they have to display an ad.  Should it be for “Cancer Centers”, “Baby Diapers”, “Sports Cars”, or “Dating Services”?  If “they” have enough information about your web surfing habits to suspect you have cancer, then guess which ad they display.  So what’s the problem?  Take a simple example: what if one of your co-workers notices that you are always getting ads for cancer-related products and services?  Did you really want them to know about your health situation?  Or what if a potential employer could get this profile information?  Information you are neither required to disclose, nor that they might be permitted to ask about, could impact whether you are hired.

How about a concrete example from a slightly different domain?  When you use a supermarket loyalty card, the supermarket tracks every one of your purchases.  A lawyer friend once told me why she registered her card as “Mrs. Denzel Washington” rather than under her actual name.  She knew of a case where a prosecutor in a drunk driving trial had subpoenaed the supermarket’s tracking information and used it to show that the defendant had purchased beer earlier in the day.  They might not have been able to prove the defendant drank the beer, but it was still a way for the prosecutor to sway the jury.  Your tracking information on the web could be similarly obtained in criminal cases, civil lawsuits, or just government witch hunts (recall the FBI trying to get its hands on library records, looking for those reading books that might suggest they are potential terrorists).  Imagine surfing the web trying to understand Islam while also researching the US weapons they keep talking about on the news, and thereby becoming a target for investigation as a terrorist.  It sounds far-fetched, but if you think back to those first few months and years after 9/11 you know it is all too possible.

Hopefully I’ve made third-party tracking sound really awful, and thus you are wondering why it is even legal.  Well, let’s talk about the good side of tracking.  I’ve already mentioned the Amazon.com example of first-party tracking leading to great site personalization.  And I’ve given the bad side of how third-party tracking could disclose private information via advertising.  But what about a good use of this same data?  It is buried in my bad example.  If you are going to display an ad for me, and have me take any interest in it, then please display the “Sports Car” ad.  Some of my single friends and cousins really could use that “Dating Service”.  And I know a few people with newborns who are no doubt in the market for lots of diapers.  Third-party tracking is generally the only way that the right ad is going to get to each audience.  Some very large sites may have their own ad inventory and be able to use the profile from first-party tracking to choose which ad to display.  But most web sites call on a third-party service to display an ad, and that service of course relies on the tracking data it has acquired when deciding which ad to display.

There are three other important things for you to think about before we get to TPLs specifically.  The first is that most web sites are supported by advertising, and thus anything that degrades the advertising experience reduces their revenue stream.  As a result, web sites are investigating blocking the display of some of their content when visited by a user who has disabled tracking.

The second is a sea-change taking place in how advertising is personalized.  Think about the analog world for a minute and you realize that most media is targeted at a broad demographic.  For example, Men’s Health magazine has a demographic that is primarily health-conscious males in their 20s and 30s.  If you are an advertiser looking for this demographic then you place an ad, tuned for that demographic, in Men’s Health.  The fact that I subscribe to Men’s Health means I’ve opted into the demographic, even if it doesn’t exactly fit me.  And sure enough, both the articles and advertising in Men’s Health are slightly off for me because I don’t fit the age demographic.  But since they can’t print a Men’s Health magazine specifically for me, I (and the advertisers) live with a suboptimal match.  Of course, in the digital world you can do much more specific targeting.  And so it is easy for a web site to dynamically build web pages tuned for multiple fine-grained demographics.  Men’s Health could in fact customize both editorial content and advertising so that it became “Middle Age Men’s Health” or “Senior Men’s Health”.  And if all this third-party tracking did was flag you as fitting a broad demographic such as “Married, No Kids, Middle Age”, and the tracking data was then thrown away, it might not be that threatening.  But instead we are going in the opposite direction.  The web lets us do things like offer up coupons, and so it becomes important to know the answers to very detailed questions such as “Likes Steak” or “Vegetarian” so the right offers go to the right people.  Thus simply gathering data and then throwing away the details is becoming less and less attractive to advertisers.

The third point is that we can probably break the actors involved in tracking into three broad categories:

1) Those who both try to use the tracking data in a way that benefits the consumer (as well as their own economics) and are very conscientious about protecting the consumer’s privacy.  They will subscribe to some industry set of best practices for doing so, and may offer some means of at least partially opting out of tracking.

2) Those whose intent is benign, but who really aren’t that careful about taking reasonable measures to protect your privacy.

3) Those who either don’t display much concern about protecting privacy as long as they don’t violate the letter of the law, are negligent, or have outright malicious intent.

What IE9’s Tracking Protection Lists do is let you choose how much you are willing to be tracked, and by which of the three kinds of actors.  So those seriously concerned about anyone tracking their activities on the web, and willing to give up having a personalized browsing experience and potentially access to some web pages, can do so.  Meanwhile, those interested in still having a personalized experience and full access to web pages, but wanting to limit tracking to only those parties who take reasonable measures to protect privacy, also have that choice.  In Part 2 I’ll describe how to achieve this.

Posted in Computer and Internet, Microsoft, Privacy | 1 Comment

A Data Breach with real damage

Some of you aren’t going to be sympathetic on this one, but have you heard of PornWikiLeaks?  I was looking at CNet this morning, and at the top of the list of most popular links was the story of how someone had set up a site to out the real identities of porn actors and actresses.  The data was apparently stolen from a medical testing service the porn industry uses to test for STDs.

My first reaction was that this must be some porn fan trying to have a little fun, but that clearly isn’t the case.  PornWikiLeaks is promising to release information about each actor’s STD status.  And a look at a couple of profiles they are building shows quite a bit of derogatory commentary.  This looks uncomfortably close to the serial killer who attempts to justify his killing spree by claiming he was only ridding the world of prostitutes.

In real life PornWikiLeaks will almost certainly destroy a few families and even has the potential to lead to deaths (e.g., from suicide).  And while many will not be sympathetic to this particular demographic’s plight, very similar things could happen to any of us.

The loss of privacy occurring due to “bad guys” exploiting the Internet, and in particular through data breaches, is horrifying.  It is sad to think that it will take people actually dying as a result before industry, government, and individuals take the problem seriously enough.

Posted in Computer and Internet, Privacy, Security | Comments Off on A Data Breach with real damage