Consider the first sentence of Internet Engineering Task Force (IETF) RFC 4272, published in 2006, and be afraid, be very afraid. It reads: “Border Gateway Protocol 4 (BGP-4), along with a host of other infrastructure protocols designed before the Internet environment became perilous, was originally designed with little consideration for protection of the information it carries.” Or consider this statement from the February 2003 US Department of Homeland Security report The National Strategy to Secure Cyberspace: “Of the many routing protocols in use within the Internet, the Border Gateway Protocol (BGP) is at greatest risk of being the target of attacks designed to disrupt or degrade service on a large scale.” So in the 8 years since DHS highlighted the risks in BGP and called for it to be replaced by a secure version, how much progress has been made in addressing this problem? Close to zero. There has been a lot of talk, and in 2009 (6 years after the report) DHS finally started funding research into securing BGP. But practical progress? Zero.
BGP isn’t the only Internet protocol that has security problems. While few users have even heard of BGP, many have at least seen the initials DNS (for Domain Name System). DNS is what takes Internet names (e.g., www.mydomain.com) and translates them into addresses (e.g., 192.168.0.1) so you can actually reach them. Imagine typing www.mybank.com into your browser and, instead of going to your bank’s web page, landing on a phishing site that steals your bank account password. All too easy if DNS is compromised. The lack of security in DNS was recognized in 1990, but it took 20 years before the rollout of DNS Security Extensions (DNSSEC) started to gather steam. It will be a few more years until DNS has been fully secured.
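To see why an unauthenticated DNS answer is so dangerous, here is a toy sketch (not a real resolver — the names, addresses, and cache layout are invented for illustration). Classic DNS simply trusts whatever answer arrives, which is exactly the gap DNSSEC's signatures are meant to close:

```python
cache = {}  # hostname -> IP address, a stand-in for a resolver's cache

def accept_answer(hostname, ip):
    """Classic DNS behavior: trust whatever answer arrives."""
    cache[hostname] = ip

def resolve(hostname):
    return cache.get(hostname)

# The legitimate answer arrives first...
accept_answer("www.mybank.com", "203.0.113.10")

# ...but a spoofed answer can silently replace it, because nothing in
# the original protocol authenticates who sent the response.
accept_answer("www.mybank.com", "198.51.100.66")  # attacker's server

print(resolve("www.mybank.com"))  # the browser now connects to the attacker
```

DNSSEC addresses this by having zones cryptographically sign their records, so a resolver can reject an answer that doesn't verify, rather than accepting whichever response shows up.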
I could go on, but I think I’ve already made the point. The Internet is a house of cards that could collapse any time.
So, what’s the problem here? For one thing, the Internet was never supposed to be a mass market success. If you go back to the early 90s the Internet was an academic environment, and most technologists predicted that interconnected commercial utilities (e.g., AOL, or Microsoft’s original MSN) would become the mainstream network solutions. Even many of those who believed in the Internet thought that a commercialized parallel to the academic network would emerge, rather than having the existing academic network simply opened up to the public. (For full disclosure, in 1993 I expected that a parallel commercial Internet would appear, with utilities as islands within that network, each offering a community experience that the typical end-user found more comfortable than just being thrust into the wilds of the network.) So what really happened? The “Academic Internet” was opened to the public and was adopted so quickly that it overwhelmed all alternative solutions. The industry was forced to “go with the flow”, and that included living with a set of protocols that hadn’t been designed for the potential hostilities that having all the world’s communications and commerce traveling over the network might attract.
So if the existing Internet protocols aren’t secure, and we’ve known that for quite some time, why don’t we fix them more quickly? Quite simply, because we are more afraid of disrupting the Internet than we are of the security risks. Just think about SPAM for a minute. How frustrated are you when mail you really want to receive ends up in your Junk folder? Now imagine that we fixed the SMTP protocol so that only fully authenticated mail was ever delivered to you. That would eliminate a lot of SPAM, but at the same time there would be a period of perhaps years in which even more mail you really wanted would either not be delivered at all or would end up in your Junk folder. That would happen because not every email server and client would (or could) upgrade at the same time. So instead we have some extensions that make it easier for anti-SPAM filters to recognize valid email, but we haven’t made a real dent in SPAM. Now take that another step. Imagine a rollout of a secure BGP in which some ISPs were actually unable to connect to the Internet until they upgraded to “Secure BGP”. We can’t just flip a giant switch and instantaneously get all the ISPs on these new protocols at the same time. It can take years to roll them out. To put this in a more concrete perspective, imagine a small ISP covering a Midwestern town in the US. What if they didn’t have the money to buy new “Secure BGP” compatible routers, or the manpower to perform the transition? What happens when we declare January 1, 2012 “Secure BGP” flip-the-switch day? They probably go out of business and leave that town with no Internet access at all.
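Those "extensions" follow a pattern like SPF: a domain publishes a policy in DNS listing which addresses may send mail on its behalf, and receivers can check a sender against it without breaking anyone who hasn't upgraded. Here is a deliberately simplified sketch that handles only `ip4:` mechanisms and `-all` (real SPF, per RFC 7208, is far richer, and the record contents here are invented):

```python
import ipaddress

def spf_allows(record, sender_ip):
    """Toy SPF-style check: is sender_ip authorized by this policy record?"""
    ip = ipaddress.ip_address(sender_ip)
    for term in record.split():
        if term.startswith("ip4:") and ip in ipaddress.ip_network(term[4:]):
            return True   # sender is in an authorized address block
        if term == "-all":
            return False  # explicit fail: everyone else is unauthorized
    return False          # no verdict in the record: treat as unauthorized

policy = "v=spf1 ip4:192.0.2.0/24 -all"
print(spf_allows(policy, "192.0.2.25"))    # True: a listed sender
print(spf_allows(policy, "198.51.100.7"))  # False: fails the -all rule
```

Note the backward compatibility built in: a receiver that has never heard of SPF just delivers the mail as before, which is precisely why such extensions could be deployed incrementally — and also why, as filter inputs rather than hard requirements, they haven't eliminated SPAM.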
There is another factor here. While we know that these protocols aren’t secure, they haven’t actually been compromised. We have had ISPs accidentally misconfigure routers in ways that BGP’s weaknesses turned into major Internet outages. But we haven’t had someone intentionally exploit BGP’s problems to misroute Internet traffic. On the other hand, the reason that the DNSSEC rollout is accelerating is that security researchers have actually demonstrated the ability to exploit flaws in DNS.
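The mechanics of how a misconfiguration (or an attack) misroutes traffic are simple: routers forward packets to the most specific matching route, and classic BGP gives them no way to verify that whoever announced a route actually owns that address space. A minimal sketch, with invented prefixes and "next hop" labels:

```python
import ipaddress

# A toy forwarding table: (prefix, who we'd send traffic toward).
routes = [
    (ipaddress.ip_network("203.0.113.0/24"), "legitimate ISP"),
]

def best_route(dest, table):
    """Longest-prefix match, the core of a router's forwarding decision."""
    matches = [(net, hop) for net, hop in table if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1] if matches else None

dest = ipaddress.ip_address("203.0.113.42")
print(best_route(dest, routes))  # "legitimate ISP"

# A misconfigured (or malicious) network announces a MORE SPECIFIC prefix
# covering the same addresses; the longer prefix automatically wins, and
# classic BGP offers no check on who really owns the address space.
routes.append((ipaddress.ip_network("203.0.113.0/25"), "hijacker"))
print(best_route(dest, routes))  # "hijacker"
```

This is essentially what happened in the well-known accidental outages: a bogus, more-specific announcement propagated worldwide and traffic followed it. A "Secure BGP" would require announcements to carry cryptographic proof of authorization before routers accept them.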
So in the absence of actual disasters, and with a desire to avoid disrupting the Internet experience, the industry simply does nothing. That is a little too harsh, but it is all too close to the truth. We will wait until something bad happens, potentially very bad, before we get serious about fixing the Internet protocols.
What constitutes very bad? Well, of course we could have some “hackers” decide to exploit BGP’s weaknesses for commercial gain. But I think a more likely scenario is a cyberwarfare one. The weaknesses in BGP and the other Internet protocols are well-known. I would think that every nation with a cyberwarfare operation has figured out how it could disrupt these protocols in practice, and is just sitting on those techniques until it needs them. Yes, even a minor player like Libya might have the capability to disrupt or steal information transmitted over the Internet, and could decide to do so in response to the UN-approved no-fly zone. Nations like North Korea and Iran almost certainly have this capability. It is sad that we’ll likely wait until one of them demonstrates it before we get serious about fixing the Internet protocols.
One doesn’t have to wait for an attack on Internet protocols to see how slow we are to respond to the cyberwarfare threat. DHS’ 2003 strategy also called for securing SCADA (supervisory control and data acquisition) systems. The big news in 2010 was Stuxnet, a worm that targets these systems. Most believe it was created by government entities specifically to target Iranian nuclear facilities. I guess in this case many of us are happy that there were vulnerabilities that could be exploited. But Iran uses generally available commercial equipment, which means that many other facilities in many industries around the world could be similarly attacked. I wonder if suppliers and users of the systems targeted by Stuxnet, and similar systems from other suppliers, are rushing to secure them? Keeping in mind that it is 8 years since DHS called for them to be secured, how many more years will it take for the vast majority of these systems to actually be secured?
The bottom line here is that we keep making the wrong cost/benefit tradeoff in security. We tolerate bad security in the name of better user experience, lack of customer disruption, and so on, until something really bad happens. We need to swap our priorities and make preventing really bad things from happening more important than preserving the status quo. There are tradeoffs to be made here for sure (e.g., UAC in Windows 7 vs. UAC in Windows Vista), but the bottom line is that having a secure system has to come first. And we need to get our existing systems and protocols into a more secure state ASAP.