
By Geoff Huston

In defining what is meant by “Internet Fragmentation” it is useful to briefly describe its opposite, an “Open and Coherent Internet”. As we explored in the previous section, “coherence” implies that the elements of the Internet are orchestrated to work together to produce a seamless whole that does not expose the boundaries between discrete elements. Coherence also implies consistency, in that the same trigger actions by a user produce the same response from the networked environment, irrespective of the user’s location and choice of service provider. “Openness” implies the ability to integrate and build upon existing tools, technologies and services to create new ones, and in turn to allow others to further evolve the technology and service environment.

“Fragmentation”, on the other hand, encompasses the appearance of diverse pressures in the networked environment that lead to outcomes which are no longer coherent or consistent. In the context of the Internet, fragmentation also covers the various ways in which openness is impaired, and can extend to critical service elements and the fragility of arrangements in which the supply of such services is left to a very small number of providers.

This section contains some notes on where and how fragmentary pressures are driving apart aspects of the Internet and creating various “islands” of differentiated functionality and connectedness. It concentrates on the technical aspects of these pressures and does not attempt to analyse their public policy implications.

IP level Fragmentation

The issues around address exhaustion in IPv4 and the transition to IPv6 deserve attention in relation to any discussion of potential Internet fragmentation.

The transition to IPv6 is still a process without clear coherence or assured outcomes. It is possible that the work already undertaken by a relatively small number of retail Internet access providers, including notably large ones such as AT&T, Comcast, Deutsche Telekom and KDDI, will generate sufficient impetus in the market to pull both content providers and other ISPs along with them in embarking on IPv6 services. This is by no means an assured outcome, however, and the continued expansion of Network Address Translators (NATs) in the IPv4 Internet appears to have no immediate end in sight. The market signals are as yet unclear and public policy actions have not provided adequate impetus, with the result that the general response from the majority of players has been insufficient to make any real progress towards shutting down the use of IPv4 in the Internet.

Due to the exhaustion of IPv4 addresses, increased use is being made of Carrier Grade NATs (CGNs) to share this scarce address resource across a greater number of users. In other words, IPv4 address exhaustion is creating larger and larger networks of “semi-opaque” connectedness within the public network. IPv4 addresses used in conjunction with NATs no longer have a clear association with a single end user, and the most probable outcome is that parts of the net will “go dark” in the sense that users’ actions within this “dark” network are effectively untraceable. These devices also compromise other aspects of robustness in the engineering of the Internet at this level of operation. The requirement to pass all traffic to and from an external site through the same address translation unit impairs forms of robust network operation that use diverse points of interconnection and diverse connectivity; instead, this form of state-based middleware creates critical single points of failure. Given the critical importance of content delivery in many networks, the presence of CGNs creates incentives to place selected content distribution functions on the “inside” of the CGN. This runs the risk of the network discriminating between content delivery systems through its ability to position some content in an advantaged position compared to others. The longer-term pressures are difficult to discern at this stage, but the longer this address hiatus lasts, the greater the levels of address pressure, and the greater the address pressure on the IPv4 network, the greater the fragility and complexity of networks using address sharing.
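
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of per-flow state a CGN has to keep in order to share one public address: many internal endpoints are mapped onto ports of a single external address, so from the outside every flow appears to originate from that one address. All addresses and port values here are invented for illustration.

```python
# Minimal sketch of the per-flow state a Carrier Grade NAT keeps in order to
# share one public IPv4 address. Addresses and ports are illustrative only.
class CarrierGradeNat:
    def __init__(self, public_address):
        self.public_address = public_address
        self.next_port = 1024
        self.bindings = {}                 # (private_ip, private_port) -> public_port

    def translate_outbound(self, private_ip, private_port):
        """Map an internal flow onto a port of the shared public address."""
        key = (private_ip, private_port)
        if key not in self.bindings:
            self.bindings[key] = self.next_port
            self.next_port += 1            # real CGNs reuse and time out ports; omitted here
        return self.public_address, self.bindings[key]

nat = CarrierGradeNat("192.0.2.1")                 # one shared public address (documentation prefix)
print(nat.translate_outbound("10.0.0.5", 51515))   # ('192.0.2.1', 1024)
print(nat.translate_outbound("10.0.0.6", 51515))   # ('192.0.2.1', 1025)
```

Externally, traffic from both internal hosts appears to originate from the single address 192.0.2.1, which is exactly the loss of the one-to-one association between an address and an end user described above.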

Another side effect of IPv4 address exhaustion is address trading. This market has appeared organically, and there is growing evidence that transferred IPv4 addresses are not all being registered in the established address registries. Some of this is evidently due to address “leasing”, where the lessee is not registered as the current beneficial user of the addresses, but it is also sometimes due to a reluctance of the address holder to enter the transfer into the address registry because of concerns over address title policies or similar concerns on the part of the parties involved. The larger the pool of unregistered addresses, the greater the pressure to fracture the address space, and there is no clear way back if and when the space fractures in this manner.

With the exhaustion of the address allocation framework for IPv4, and the established common belief that addresses are plentiful in IPv6, much of the original rationale for the regional address registry structure is weakened.

Much of the original rationale for the regional Internet address distribution framework lay in the perception of scarcity in the supply of addresses in the IPv4 address plan, and the need to perform a complex rationing operation. The clearly finite pool of addresses and the larger visions of the Internet’s future implied that it was not possible to simply allocate an adequate pool of addresses to each network operator to meet perceived needs; instead, each regional registry devised a rationing scheme based around the principle of “demonstrated need”. The original objective of this process was to ration the consumption of IPv4 addresses until such time as IPv6 was prevalent and there was no further need for IPv4 addresses. Without the need for further rationing and its associated administrative overhead, and with a reversion to a potentially far simpler registry model, the case for regional fragmentation of the registry function is an open question.

However, not all of the pressures in this space are directed towards aggregation of the registry function into a single operation. When coupled with a cyber security perspective that it is “good to know where every address is in a country”, it is reasonable to anticipate further pressure to fracture the regional structures into national structures. In the Asia-Pacific region, APNIC already has China, India, Indonesia, Korea, Japan, Taiwan and Vietnam all operating such national address registries, and in Latin America there are comparable structures in Brazil and Mexico. It is an open question whether this will spread in response to these pressures of national security and the effective end of the conservative address allocation function.

Routing Fragmentation

The routing system is intended to ensure that every switching element is loaded with consistent information, such that every attached device on the Internet is reachable by any other device. The Internet uses a two-level routing hierarchy. At the first level, local routing domains (or “Autonomous Systems” (ASes)) use a variety of interior routing protocols; as these domains do not directly interact with each other, this diversity is not an issue. The second level (the “inter-domain” space) uses a single routing protocol, the Border Gateway Protocol (BGP).

Both BGP itself and the broader Internet routing space are under various pressures.

The AS identification field was originally defined as a 16-bit number. The Internet community is close to exhausting this identifier space and needs to move to a larger 32-bit field. Over the past 20 years the problem has been identified, technical standards produced, software deployed by vendors, a transition strategy defined, and the process started. In Europe the process is well under way, while in North America (Canada and the United States) the process has stalled and almost no 32-bit AS numbers are in use. The difference appears to lie in the use of AS-specific communities: Canadian and United States ISPs appear to use these communities to express routing policy, and are reluctant to use 32-bit AS numbers for this reason, whereas European ISPs appear to make more use of routing registries to describe routing policies, and those registries are largely agnostic about the size of the AS number field. It is unclear how the North American ISPs are going to resolve this, given that the 16-bit AS number pool will be exhausted in the coming months.
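
A small worked example may help to show why AS-specific communities are an obstacle: a classic RFC 1997 community is a single 32-bit value conventionally written as “ASN:value”, which leaves only 16 bits for the AS number. The sketch below, using invented AS numbers, simply does the arithmetic.

```python
# A classic RFC 1997 community is one 32-bit value, conventionally written as
# "ASN:value" with 16 bits for each half. The arithmetic below shows why a
# 32-bit AS number cannot be carried in this form. AS numbers are illustrative.
def classic_community(asn, value):
    if not (0 <= asn <= 0xFFFF and 0 <= value <= 0xFFFF):
        raise ValueError("both halves must fit into 16 bits")
    return (asn << 16) | value          # the on-the-wire 32-bit encoding

print(classic_community(64500, 100))    # a 16-bit AS number fits
try:
    classic_community(196608, 100)      # a 32-bit-only AS number (AS3.0 in "asdot")
except ValueError as err:
    print("cannot encode:", err)
```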

The routing system is under constant pressure from false routing advertisements. Some of these are local scope advertisements intended to comply with national directives to filter certain IP addresses as part of a national content filtering directive. Some are the result of mistakes in router configuration. Others are deliberately malicious advertisements designed to redirect traffic in some manner, or to disrupt the genuine service. Efforts to improve the security of the routing system are being explored by the Internet Engineering Task Force (IETF), but the measures being contemplated imply additional overheads in routing, increased complexity and increased brittleness. The security is most effective when the entirety of the routing space adopts the technology, and balancing the local costs against the common benefit that is contingent on universal adoption is a significant issue for this approach.

The routing space is not a uniform space: different address blocks are visible in different parts of the Internet, often with no clear reason why. There are “ghost routes”, where a route withdrawal has not successfully propagated across the entire network and some networks still carry reachability information for the old route. There are “islands” of more specific routes, which are blocked from universal propagation by the prefix length filters applied by various routers. There is the selective absence of routing information because some routing domains use Route Flap Damping and others do not. Local routing policies may apply differential treatment to various routes, as seen in the distinction between transit, peering and customer relationships as implemented in the routing space. The result is that there is no clear consensus as to what constitutes “the Internet” in a routing sense. Each AS sees its own routing view of the Internet, and there are invariably some subtle distinctions between these local views. It is therefore not assured that every single end point on the Internet can send a packet to any other connected end point at any time; in some cases the routing system simply does not contain the information to support this universal form of connectivity, though for most intents and purposes the overwhelming majority of end points can be reached.

ISP Peering and Transit

The Internet architecture does not expose user-level transactions to the network, and inter-network arrangements are not based on transaction accounting. At its heart, every user of the Internet pays for his or her own access, and in turn their ISP undertakes to provide connectivity to the rest of the Internet. It cannot do this without the support of other ISPs. Two ISPs that interconnect and exchange traffic typically do so under one of two broad models. One is the transit relationship, where one party pays the other and in return is provided with all of the routes available on and via the other network. The transit model is used in open competitive markets when the interconnection is perceived as being asymmetric: one party has significant network assets, and the other party wants to access those assets. There is no particular assurance under this model that a customer of a transit provider necessarily sees the entirety of the Internet.

The typical transit arrangement is that the customer is given access to the route set controlled by the transit service provider, routes that the customer cannot obtain as efficiently by any other means. The other broad model is a peering model, where neither party pays the other and each party learns only the customer routes of the other. Through peering, ISPs can reduce their transit costs, as they do not need to purchase transit for that traffic. To save interconnection costs, ISPs establish or make use of Internet Exchange Points (IXPs), where they can peer with multiple networks at the same time. The peering model is often seen in open competitive markets where the two providers bring, in each party’s perception, approximately equal assets to the connection, so neither party believes that there is undue leverage of the investments and assets of the other. Peering arrangements are at times challenging to sustain: some networks grow and want to change their position to become a seller of transit connectivity, and may then opt to de-peer some networks in order to force them to become customers. There are also various hybrid approaches that combine peering of customer networks with the option of also purchasing a transit service. For example, Hurricane Electric has an open peering policy, while at the same time selling an optional transit service to the networks it peers with.
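
The following sketch, with invented prefixes and purely illustrative policy names, models the route-export behaviour described above: a transit customer is sent every route the provider knows, while a settlement-free peer is sent only the provider’s customer routes.

```python
# Illustrative model of the export policies described above. The prefixes and
# the neighbour categories are invented; real policies are far richer.
routes_learned = {
    "customer": ["198.51.100.0/24"],    # routes learned from paying customers
    "peer":     ["203.0.113.0/24"],     # routes learned from settlement-free peers
    "upstream": ["0.0.0.0/0"],          # routes learned from an upstream transit provider
}

def routes_exported_to(neighbour):
    if neighbour == "transit_customer":
        # a customer buying transit is sent everything this network can reach
        return (routes_learned["customer"]
                + routes_learned["peer"]
                + routes_learned["upstream"])
    if neighbour == "peer":
        # a settlement-free peer is sent only the customer cone
        return routes_learned["customer"]
    return []

print(routes_exported_to("transit_customer"))
print(routes_exported_to("peer"))
```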

The market-based approach to connectivity represented by this model of interconnection is efficient and relatively flexible, and it embraces the large majority of inter-provider relationships in the Internet. Its divergence from the model used in telephony remains a source of continuing tension in certain international circles. Efforts by certain countries to assert some form of paid relationship, by virtue of their exclusive role as the point of access to a national user base, have in general been relatively self-harming, as the consequence has been limited external visibility for the very national user community that was used as leverage in such negotiations. Nonetheless, where commercial negotiations do take place in the absence of sufficient competition, one player may leverage its position to endeavour to extract higher rents from others. In those instances, and this is a reason why the Internet has been so successful in competitive markets, ISPs have the option to bypass each other using transit if they find that more economical.

Other tensions have appeared when the two parties bring entirely different assets to a connection, as is the case when Content Distribution Networks connect with Internet access providers. One party brings content that is presumably valued by the users; the other brings access to users that is vital for the content distribution function. Whether or not a party can leverage a termination monopoly in such situations depends on the competitive market situation of the location in which it operates. For example, Free in France for a while demanded paid peering from Google and would not upgrade saturated interconnects, but in 2015 upgraded its peering with Google without receiving payment.

Name Space Fragmentation

The name space has been under continuous fragmentation pressure since its inception.

The original public name space has been complemented by various locally scoped name spaces for many years. As long as the public name space used a static list of top level domains, these private name spaces were able to occupy unused top level names without serious side effects. The expansion of the gTLD space challenges this assumption, and the collision of public and private name spaces leads to the possibility of information leakage.

Other pressures have, from time to time, taken the form of augmenting the public root with additional TLD name spaces through the circulation of “alternate” root servers. These alternate roots generate a fractured name space with the potential for collision, and as such have not, in general, proved sustainable. The problem with these alternate systems is that a name that refers to a particular location and service in one domain may refer to an entirely different location and service in another. The “use model” of the Internet’s user interface is based on the uniqueness of a domain name, and in fact on the graphical representation of a domain name on a user’s device. So if a user enters a network realm of an alternate root name space, where a previously known and trusted domain name is mapped to a different location, this can be exploited in many ways to compromise the user and the services they use. The confidence of users and the trust they place in their use of the Internet is based on a number of premises, and one of the more critical is that the domain name space is consistent, unfragmented and coherent. This premise is broken when alternate root systems are deployed.

As well as these fragmentary pressures driven by an objective to augment the name space in some fashion, there are also pressures to filter the name space, so that users are unable to access the name information for certain domain names. In some cases these are specific names, while in other cases it has been reported that entire TLD name spaces are filtered.

It has been observed that the TLD for Israel, .il, is filtered in Iran, such that no user in Iran is able to resolve a DNS name under the .il TLD.

Source: An Analysis of New GTLD Universal Acceptance

The resolution process of the DNS is also under pressure. The abuse of the DNS to launch hostile attacks has generated pressures to shut down open resolvers, to filter DNS query patterns and, in certain cases, to alter a resolver’s responses to force the querier to use TCP rather than UDP.

This abuse has also highlighted another aspect of the DNS, namely that for many service providers the operation of a DNS resolution service is a cost centre rather than a generator of revenue. With the advent of high quality, high performance external DNS resolution services in the form of Google’s Public DNS, the OpenDNS resolver system and Level 3’s long-standing open resolver service, many users and even various ISPs have decided to direct their DNS queries to these open resolvers. Such a response has mitigated the effectiveness of local name filtering, and at the same time allowed these open providers to gain a substantial share of the Internet’s DNS activity.
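
As a sketch of how simple this redirection is in practice (assuming the third-party dnspython package is installed), an application or an ISP resolver can ignore the locally configured resolver entirely and send its queries straight to a public open resolver:

```python
# A sketch of redirecting DNS queries to a public open resolver. Requires the
# third-party dnspython package ("pip install dnspython", version 2.x).
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)   # ignore the ISP-supplied settings
resolver.nameservers = ["8.8.8.8", "8.8.4.4"]       # Google Public DNS

for record in resolver.resolve("example.com", "A"):
    print(record.address)
```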

Google recently noted that “Overall, Google Public DNS resolvers serve 400 billion responses per day.”

Not only is the DNS protocol enlisted to launch attacks; the DNS itself is under attack. Such attacks are intended to prevent users from obtaining genuine answers to certain queries, substituting a deliberately false answer instead. The DNS does not use a “protected” protocol, and such substitution of false answers is often challenging to detect. This property of the DNS has been used by both attackers and state actors when implementing various forms of DNS name blocking. Efforts to alter the DNS protocol to introduce channel security have been contemplated from time to time and have some role in certain contexts (such as primary to secondary zone transfers), but they have not been particularly effective in the area of resolver queries. Another approach is to allow the receiver of a response to validate that the received data is authentic, and this is the approach behind DNSSEC. Like many security protocols, DNSSEC is most effective when universally adopted, since at that point any attempt to alter DNS resolution responses would be detectable. With the current piecemeal level of adoption, where a relatively small number of DNS zones are signed (and even where zones are signed, DNSSEC uptake at the individual domain name level is vanishingly small, even amongst banks and large-scale ecommerce providers), the value of this security approach is significantly smaller than would be the case with general adoption.
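
At its simplest, a client can only check whether a validating resolver claims to have performed DNSSEC validation, by looking at the AD (“authenticated data”) flag in the response. The sketch below, again assuming dnspython, shows that check; an application doing its own validation would instead fetch and verify the RRSIG and DNSKEY chain itself.

```python
# A sketch (assuming dnspython 2.x) of the simplest client-side DNSSEC check:
# ask a validating resolver and look for the AD flag in its response.
import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("example.com", "A", want_dnssec=True)
query.flags |= dns.flags.AD                    # signal that we want authenticated data
response = dns.query.udp(query, "8.8.8.8", timeout=3.0)

if response.flags & dns.flags.AD:
    print("resolver reports this answer as DNSSEC-validated")
else:
    print("no validation is claimed for this answer")
```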

The contents of the DNS are also under some pressure for change, and there are various ways that applications have chosen to handle them. This is evident in the introduction of scripts other than ASCII (so-called “Internationalized Domain Names”, or “IDNs”). Due to a concern that DNS implementations were not necessarily 8-bit clean, the introduction of DNS names using characters drawn from the Unicode character space requires the application to transform the original Unicode string into an encoded string in the ASCII character set that strictly obeys the “letter, digit, hyphen” rule, and similarly to map back from this encoded name to a displayed name. The integrity of the name system with these IDN components is critically dependent on every application using precisely the same set of mappings. While this is desirable, it is not an assured outcome.

A second issue concerns the “normalisation” of name strings. The ASCII DNS is case-insensitive, so query strings are “normalised” to a single case when searching for the name in the DNS. Normalisation of characters from non-ASCII scripts presents issues of common-use equivalence: what one community of users of a given language regards as equivalent characters may not be regarded as equivalent by another community of users of the same language. In recent years, engineers and linguists within the ICANN community have been working towards a common set of label generation rules, and have been making real progress. This is a particularly complex issue in the case of Arabic script, which has many character variants (even within individual languages) and uses some characters that are not visible to humans (zero-width joiners). The DNS is incapable of handling such forms of localisation of script use.

Despite being available for 15 years, IDNs still do not work seamlessly in every context or application in which an ASCII domain name is used. This issue is called “universal acceptance”. Although it applies equally to other new gTLDs, it is far more complex and challenging to overcome for IDNs. Examples include IDN email addresses: while Google announced last year that Gmail will support IDN addresses, this only applies when both the sender and the receiver have Gmail accounts. IDN email addresses are not supported by any of the major application providers in the creation of user accounts (which often use email addresses as the user’s unique account identifier), nor in digital certificates, DNS policy data or even many web browsers.
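
The mapping described above can be illustrated with Python’s built-in “idna” codec, which implements the older IDNA 2003 rules; it is used here only to show the shape of the transform, not as the definitive mapping every application should apply.

```python
# Illustration of the IDN transform: normalise the Unicode form, then encode
# it into the ASCII "letter, digit, hyphen" form (Punycode with the "xn--"
# prefix) used in actual DNS queries.
import unicodedata

name = "bücher.example"
normalised = unicodedata.normalize("NFC", name)   # settle on one canonical Unicode form
ascii_form = normalised.encode("idna")            # b'xn--bcher-kva.example'
print(ascii_form)
print(ascii_form.decode("idna"))                  # maps back to 'bücher.example'
```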

In a test involving some 304 new generic top level domains, a problem was observed with Punycode-encoded IDNs where the combination of Adobe’s Flash engine, the Microsoft Windows platform and either the Internet Explorer or Firefox browser was incapable of performing a scripted fetch of an IDN. The problem illustrates the care needed in application handling when entirely distinct internal strings (in this case the ASCII Punycode form and its Unicode equivalent) refer to the same object.

Source: An Analysis of New GTLD Universal Acceptance

The fragmentation risk is that the next billions of Internet users who are not Latin-script literate, for example the 80% of India’s 1.2 billion population unable to speak English, will not be able to benefit from the memorability and human usability of the domain name system. The problem has been masked to some extent by the demographics of Internet uptake to date, but is likely to become more apparent as the next billion users come online. Another possibility is that such populations will simply not use the domain name system. Uptake of domain name registrations (both ASCII and IDN) in the Arab States and the Islamic Republic of Iran is extremely low, and stands in stark contrast to the enthusiastic uptake of social network platforms (Egypt has 13 million Facebook users; Saudi Arabia’s Twitter usage grew by 128% in 2013, to 1.8 million users).

Another issue with the use of IDNs concerns the “homograph” problem, where different characters drawn from different scripts use precisely the same display glyph on users’ screens. The risk here is of “passing off”, where a domain name is registered with a deliberate choice of script and characters that will be displayed using the same glyphs as a target name. This has led to different applications behaving differently when handling exactly the same IDN domain name: some applications may choose to display the Unicode string, while others may elect to display the ASCII encoding (Punycode) rather than the intended non-ASCII Unicode form.

http://en.wikipedia.org/wiki/IDN_homograph_attack

https://wiki.mozilla.org/IDN_Display_Algorithm
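
As a concrete illustration of the homograph issue referenced above, the two names below render identically on most screens, yet they are entirely distinct DNS names once encoded. The Cyrillic substitution is chosen purely for illustration.

```python
# Two names that look identical on screen but are different DNS names.
latin = "example.com"
mixed = "ex\u0430mple.com"         # Cyrillic "а" (U+0430) in place of Latin "a" (U+0061)

print(latin == mixed)              # False: the underlying code points differ
print(latin.encode("idna"))        # b'example.com'
print(mixed.encode("idna"))        # an "xn--..." form: a completely different name
```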

Another quite different precedent has been created by the IETF when it opened up the “Special Use” domain name registry (RFC 6761). There was a common convention that names that looked like DNS names were in fact DNS names, and were resolvable by performing a DNS query. Other name forms, and other non-DNS forms of name resolution, were identified principally through the URI name scheme, which used the convention of a scheme identifier and a scheme-defined value. The registration of the top level name “.onion” is anomalous in this respect, in that names under .onion are not labels in the public DNS and are not resolvable using a conventional DNS query. Similar considerations apply to names under the “.local” top level domain, which are intended to be local-scope names resolved by multicast DNS queries. Such implicit scope creep of the DNS label space, to encompass names that are not resolvable by the public DNS yet otherwise resemble conventional DNS names, offers greater levels of potential confusion.

Application Level Fragmentation

Similar considerations apply at the application level, where there is a tension between maximizing interoperability with other implementations of the same application and advantaging the users of a particular application.

Apple’s iMessage application used a similar framework to other chat applications, but elected to use encryption with private key material that allowed it to exchange messages with other Apple chat clients, but with no others.

There is an aspect of fragmentation developing in the common network applications, typically over behaviours relating to mutual trust, credentials and security. For example, the mail domain is heavily fractured, predominantly because of the efforts of the “mainstream” mail community to filter connections from those mail agents and domains used by spammers. It is no longer the case that any mail agent can exchange mail with any other mail domain; in many cases the mail agent needs to demonstrate its credentials and satisfy the other agent that it is not propagating spam.

Similar pressures are emerging to place the entirety of application level interactions behind secure socket and channel encryption, using the service’s domain name as the key. It is possible that unsecured services will increasingly be treated as untrustworthy, and there may be a visible line of fracture between applications that use security as a matter of course and those that operate in the clear.

Another potential aspect of fragmentation concerns the demarcation between the application, the host operating system, the local network and the ISP environment. The conventional view of the Internet sees this as a supply chain that invokes trust dependencies. The local network collects IP addresses and DNS resolver addresses from its ISP; it uses the ISP’s DNS resolver and relies on the ISP to announce the local network’s address to the Internet. Devices attached to the local network are assigned IP addresses as a local network function, and may also use the local network gateway as a DNS resolver. Applications running on these devices use the network services operated by the system running on the device to resolve domain names and to establish and maintain network connections. Increasing awareness of the value of protecting personal privacy, coupled with the increasing use of these devices in every aspect of users’ lives and to underpin many societal functions, implies increasing levels of concern about information containment and protection. Should a local network trust the information provided through the ISP, or should it use some other provider of service, such as a VPN provider or a DNS resolution service? Should an attached device trust the local network and the other attached devices? Common vectors for computer viral infection leverage these levels of trust within local networks, and services such as shared storage systems can be vectors for malware infection. Similarly, should an application trust its host? Would a cautious application operate with a far greater level of assured integrity were it able to validate DNS responses using DNSSEC directly within the application? How can the application protect both itself and the privacy of the end user unless it adopts a suitably cautious stance about the external environment in which it operates?

As computing capability becomes ever more ubiquitous and the cost, size and power requirements of complex computing fall, there is less incentive to create application models that build upon external inputs; instead, the incentive is to draw those inputs back under the explicit control of the application and to explicitly validate external interactions rather than accepting them on trust. This does not necessarily fragment the resultant environment, but it does make the interactions of the various components in this environment far more cautious in nature.

Security Fragmentation

It has often been observed that security was an afterthought in the evolution of the Internet protocol suite, and there is much evidence to support this view. Many of the protocols and applications we use are overly naive in their trust models, and are ill-equipped to discriminate between authentic transactions and various forms of passing off and deception. None of the original protocols operated in a “private” mode, allowing eavesdroppers to have a clear view of network transactions. The routing system itself forms a large mutual trust environment where rogue actors are often difficult to detect. This is coupled with an environment where edge devices are similarly vulnerable to subversion, and the combination of the two has proved to be exceptionally challenging.

Today’s situation appears to be that the very openness of the network and its protocols is being turned against the Internet, and a cohesive view of how to improve the situation is still evolving and incomplete. Instead, it is evident that there is a certain level of fragmentation and diversity in current endeavours in network security.

The domain name security framework is a good example of this. When the requirement emerged to associate a cryptographic key pair with a service delivered from a given domain name, the logical answer would have been to bind the public key to the domain name in the same fashion as an address is bound to a domain name, namely through the use of a DNS resource record and publication within the DNS. However, this was not possible at the outset, due to the lack of mechanisms in the DNS to permit validation of a DNS response and the lack of a standard way of representing a public key in the DNS. The interim measure was the enrolment of a set of third party certification authorities who generated certificates that associated a given domain name with a particular public key. This set has since expanded to many hundreds of these third party Certification Authorities (CAs), all of whom are trusted by end users for their security needs.

The central problem now is that the user does not know in advance which certification authority has issued a public key certificate for which name, so the user and the subject of an issued certificate are both forced to trust the entire set of CAs. If any CA is compromised, it can be coerced into issuing a fake certificate for any domain name, compromising the privacy and integrity of the service offered under that domain name. This is not a framework that naturally induces high quality and integrity: the system is only as strong as the weakest CA, and the risk inherent in this framework lies not in the choice of a particular CA to certify a domain name, but in the level of integrity of the entire collection of CAs.
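
The sketch below (hostname purely illustrative) shows how little a client knows in advance: it opens a TLS connection, accepts whichever issuer is named in the presented certificate, and trusts it as long as that issuer chains to any CA in the local root store.

```python
# A sketch of how a TLS client learns which CA vouches for a name: it simply
# accepts whatever issuer the presented certificate names, provided that
# issuer chains to some CA in the local root store.
import socket
import ssl

hostname = "example.com"
context = ssl.create_default_context()          # trusts every CA in the system root store

with socket.create_connection((hostname, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        issuer = dict(item for rdn in cert["issuer"] for item in rdn)
        print("certificate issued by:", issuer.get("organizationName"),
              "/", issuer.get("commonName"))
```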

Google posted this entry on their online security blog in March 2015:

“On Friday, March 20th, we became aware of unauthorized digital certificates for several Google domains. The certificates were issued by an intermediate certificate authority apparently held by a company called MCS Holdings. This intermediate certificate was issued by CNNIC.

“CNNIC is included in all major root stores and so the misissued certificates would be trusted by almost all browsers and operating systems. Chrome on Windows, OS X, and Linux, ChromeOS, and Firefox 33 and greater would have rejected these certificates because of public-key pinning, although misissued certificates for other sites likely exist.

“We promptly alerted CNNIC and other major browsers about the incident, and we blocked the MCS Holdings certificate in Chrome with a CRLSet push. CNNIC responded on the 22nd to explain that they had contracted with MCS Holdings on the basis that MCS would only issue certificates for domains that they had registered. However, rather than keep the private key in a suitable HSM, MCS installed it in a man-in-the-middle proxy. These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons. The employees’ computers normally have to be configured to trust a proxy for it to be able to do this. However, in this case, the presumed proxy was given the full authority of a public CA, which is a serious breach of the CA system. This situation is similar to a failure by ANSSI in 2013.

“This explanation is congruent with the facts. However, CNNIC still delegated their substantial authority to an organization that was not fit to hold it.

“As a result of a joint investigation of the events surrounding this incident by Google and CNNIC, we have decided that the CNNIC Root and EV CAs will no longer be recognized in Google products.”

Improving this situation has proved to be exceptionally challenging. Adding digital credentials into the DNS to allow DNS responses to be validated can provide a robust mechanism to place public keys into the DNS, but the cost of such a measure is increased fragility of the DNS, increased complexity in zone registration and administration, increased time to perform a DNS query and much larger DNS responses. In addition, the chosen form of DNS security is one that interlinks parent and child zones, so that piecemeal adoption of this form of security has limited benefit. All of these considerations, coupled with the incumbency of a thriving CA industry, have proved to be inhibitory factors in adopting a more robust form of associating domain names with public keys to improve the integrity of secure communication.
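
The “keys in the DNS” approach referred to above publishes a certificate association as a TLSA record under a name derived from the service (for example _443._tcp.<name>), to be validated with DNSSEC. A minimal query sketch, again assuming dnspython and using an illustrative name, looks like this:

```python
# A sketch (assuming dnspython 2.x) of looking up a TLSA record, the record
# type that carries a certificate association for a named service. The name
# used here is illustrative and most names publish no such record.
import dns.resolver

name = "_443._tcp.example.com"
try:
    for record in dns.resolver.resolve(name, "TLSA"):
        print(record)          # certificate usage, selector, matching type and digest
except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
    print("no TLSA record published for", name)
```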

Similar issues have been encountered in the efforts to retrofit secure credentials into other protocols. The Internet’s inter-domain routing protocol, the Border Gateway Protocol (BGP), is a mutual trust environment in which lies, whether deliberate or inadvertent, readily propagate across the Internet, causing disruption through the diversion of traffic to unintended destinations. Efforts to improve the situation by using public key cryptography to provide a framework that allows routing information to be validated against a comprehensive set of digital credentials involve considerable complexity, add a potentially large computational overhead to the routing function, and contribute further fragility to a system that is intended to be robust. Such systems can only authenticate information that has been signed; false information is invariably indistinguishable from unsigned information, so the ability of the system to detect all attempts at abuse is predicated on universal adoption of the technology. This provides little incentive for early adopters, and such retrofitted security systems are forced to compromise between a desire for complete integrity and a system that can provide incremental benefits in scenarios of partial adoption.
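
A toy model of the route origin validation idea may make the partial-adoption problem clearer: a signed authorisation binds a prefix to the AS permitted to originate it, but any announcement not covered by such an authorisation can only be classed as “not found”, never “invalid”. All prefixes and AS numbers below are invented.

```python
# A toy model of route origin validation. A ROA binds a prefix to the AS that
# may originate it; announcements outside the signed set can only be judged
# "not found". The real system also carries a maximum prefix length, omitted here.
import ipaddress

roas = {
    ipaddress.ip_network("198.51.100.0/24"): 64500,   # only AS64500 may originate this prefix
}

def origin_validation(prefix, origin_as):
    network = ipaddress.ip_network(prefix)
    for covered, permitted_as in roas.items():
        if network.subnet_of(covered):
            return "valid" if origin_as == permitted_as else "invalid"
    return "not found"                                # unsigned space: cannot be judged

print(origin_validation("198.51.100.0/24", 64500))    # valid
print(origin_validation("198.51.100.0/24", 64999))    # invalid: wrong origin AS
print(origin_validation("203.0.113.0/24", 64500))     # not found: no ROA covers it
```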

Similar problems have been encountered in email systems and the scourge of spam. The original open trust model of email has been comprehensively abused for some decades, and efforts to add digital credentials into mail have also been unable to gather a critical mass of adoption. The weaknesses in the security model of email have induced many end users to subscribe to the free email services provided by the very largest of mail providers, such as Gmail and Yahoo, simply because of their ongoing investment in spam filters, at the cost of a level of digital privacy.

Insecurity does not only occur at the level of the application protocols that sit above the transport services provided by the IP protocol suite; the underlying protocols are also becoming the subject of abuse. The User Datagram Protocol (UDP) is now a major contributor to Distributed Denial of Service (DDoS) attacks, and there are few clear practical responses that would mitigate, let alone prevent, this. “Just block UDP” is tempting, but two of the most critical services on the Internet, the DNS and the Network Time Protocol (NTP), are feasible only with this lightweight query/response interaction, and it is the DNS and NTP that are being used in these DDoS attacks. This raises the question of how this lightweight, efficient query/response protocol can continue to be used if its abuse reaches levels that overwhelm the Internet itself.
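
The attraction of these protocols to attackers is straightforward arithmetic: a small spoofed query elicits a much larger response directed at the victim. The figures below are representative round numbers, not measurements.

```python
# Back-of-the-envelope arithmetic for UDP-based reflection attacks: a small
# query with a forged source address elicits a much larger response aimed at
# the victim. The byte counts are representative round numbers only.
query_bytes = 64                   # a short DNS query over UDP
response_bytes = 3000              # a large answer, e.g. from a DNSSEC-signed zone

amplification = response_bytes / query_bytes
print(f"amplification factor is roughly {amplification:.0f}x")

# so 1 Gbps of spoofed queries becomes on the order of this much attack traffic:
print(f"roughly {amplification:.0f} Gbps directed at the victim")
```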

The outlook is not good. When the overall environment becomes toxic, the motivation of individual actors is to spend resources defending themselves rather than attempting to eliminate the source of the insecurity. This increase in defensive capability induces ever larger and more toxic attacks, and the ensuing escalation ensures that all other actors who cannot afford such large budgets to defend their online presence against continual attack are placed in a precarious position. As with electronic mail, the business of content hosting is now shifting into a role performed only by a small number of highly capable, large scale hosting providers. The provision of DNS services is undergoing a similar shift, where the activity is only viable when undertaken by a large scale incumbent operator with the wherewithal to protect its service from this continual onslaught of attack. This is no longer an open and accessible market for the provision of such services.

Written by Geoff Huston, Author & Chief Scientist at APNIC
