2015-10-05

I’m sure we’ve all heard about “the Open Internet.” The expression builds upon a rich pedigree of the term “open” in various contexts. For example, “open government” is the governing doctrine which holds that citizens have the right to access the documents and proceedings of the government to allow for effective public oversight, a concept that can trace its antecedents back to the Age of Enlightenment in 17th century Europe. There is the concept of the “open society,” a theme that was developed in the mid 20th century by the Austrian philosopher Karl Popper. And of course in the area of technology there was the Open Systems Interconnection model of communications protocols that was prominent in the 1980’s. And let’s not forget “Open Source,” which today is an extremely powerful force in technology innovation. So we seem to have this connotation that “open” is some positive attribute, and when we use the expression “the Open Internet” it seems that we are lauding it in some way. But in what way?

So let’s ask the question: What does the “Open Internet” mean?

The Federal Communications Commission of the United States has published its views on this question:

‘The “Open Internet” is the Internet as we know it. It’s open because it uses free, publicly available standards that anyone can access and build to, and it treats all traffic that flows across the network in roughly the same way. The principle of the Open Internet is sometimes referred to as “net neutrality.” Under this principle, consumers can make their own choices about what applications and services to use and are free to decide what lawful content they want to access, create, or share with others. This openness promotes competition and enables investment and innovation.

‘The Open Internet also makes it possible for anyone, anywhere to easily launch innovative applications and services, revolutionizing the way people communicate, participate, create, and do business—think of email, blogs, voice and video conferencing, streaming video, and online shopping. Once you’re online, you don’t have to ask permission or pay tolls to broadband providers to reach others on the network. If you develop an innovative new website, you don’t have to get permission to share it with the world.’

http://www.fcc.gov/openinternet

The FCC’s view of an “Open Internet” appears to be closely bound to the concept of “Net Neutrality,” a concept that attempts to preclude a carriage service provider from explicitly favouring (or disrupting) particular services over and above any other.

Wikipedia offers a slightly broader interpretation of this term, one that reaches beyond carriage neutrality and touches upon the exercise of technological control and power.

“The idea of an open internet is the idea that the full resources of the internet and means to operate on it are easily accessible to all individuals and companies. This often includes ideas such as net neutrality, open standards, transparency, lack of internet censorship, and low barriers to entry. The concept of the open internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some as closely related to open-source software.”

http://en.wikipedia.org/wiki/Net_neutrality#Open_Internet

In this essay I’d like to expand upon this theme of openness and extend it to include considerations of coherence within the Internet and also consider fragmentary pressures. I’d like to see if we can provide a considered response to the question: Is today’s Internet truly “Open?”

What does the “Open Internet” mean?

Let’s examine the attributes of an “Open Internet” through the lens of its component technologies. The questions being addressed here for each of these major technology activities that support the Internet are: What would be the expectations of an “Open Internet”? What would a truly open and coherent Internet look like?

The technology model used here is an adaptation of the earlier Open Systems Interconnection reference model (ISO/IEC 7498-1), where each layer of this reference model uses services provided by the layer immediately below it, and provides services to the layer immediately above it. It is a conventional taxonomy for networking technologies. An Internet-specific protocol reference model is shown in Figure 1, based on the technology model used in RFC 1122 (the “Host Requirements” RFC).


Figure 1 – A Protocol Reference Model for the Internet (after RFC 1122)

The concept of “Openness” as applied to networks carries an obvious connotation of accessibility. This is not the same as a free service, but it is a service where access is not limited or restricted in arbitrary ways. An open network is an accessible network. This concept of general accessibility encompasses more than accessibility as a potential consumer of the network’s service. It also implies that there are no inherent restrictions or arbitrary inhibitions for anyone to provide services, whether as a provider of transmission capacity, switching, last-mile access, mobility, names, applications, or any of the other individual components that make up the Internet. This concept of openness also extends to the consequent marketplace of services that exists within this networked environment. Consumers can make their own choices about the applications and services that they choose to use in such an open network. The environment promotes competition in the supply of goods and services, and stimulates investment in innovation and development that provides evolutionary pressure to expand and diversify the ways in which we make use of this common network.

Such outcomes are the result of the application of the same set of overall principles into each of the areas of technology that form the essential components of the Internet.

An Open Switched Network

The theoretical model of an open and coherent network is a restatement of an interpretation of the end-to-end principle in packet-switched networks, where the network’s intended role is strictly limited to the carriage of individual packets from source to destination, and all users, and all of the functions and services that populate the network, are located on devices that sit outside of the network itself. These devices communicate between themselves in a manner that is largely opaque to the packet-switched network. Furthermore, edge devices are not expected to communicate with packet switching devices within the network, and equally, packet switching devices within the network do not directly communicate with edge devices (with the one exception of the generation of packet control messages, ICMP messages in the context of the Internet Protocol). Network consistency implies that all active (packet switching) elements within the network that perform switching functions on IP packets use a consistent single interpretation of the contents of an IP packet, supporting precisely the same IP protocol specification.

The seminal work on the end-to-end principle is the 1981 paper “End-to-End Arguments in System Design” by J.H. Saltzer, D. P. Reed, and D. D. Clark, published in Proceedings of the Second International Conference on Distributed Computing Systems. Paris, France. April 8–10, 1981. IEEE Computer Society, pp. 509-512.

A simple restatement of the end-to-end principle is that the network should not replicate the functions that can be performed by communicating end systems.

A further paper on the topic is “Tussle in Cyberspace: Defining Tomorrow’s Internet” by D. D. Clark, K. R. Sollins, J. Wroclawski and R. Braden, published in SIGCOMM ’02, August 19-23, 2002.

A restatement of this paper’s thesis is that in an unbundled environment each actor attempts to maximize their own role and value, and when this is applied to networks and applications it can create conflicting situations that undermine a pure end-to-end network design.

The Internet Protocol has chosen a particular form of packet switching which is a “stateless” switching function. Within each active switching element each IP packet is forwarded towards its intended destination without reference to any preceding or following packets, and without reference to any pre-configured state within the switching element. This implies that every IP packet contains a destination address that is not relative to any assumed network state or topology, and that this address has an identical unique interpretation across the entire network domain. In this way IP addresses are not relative to any locality, network or scope, and each IP address value is required in this architecture to be unique across the entire Internet.
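
As a concrete illustration of this stateless, destination-based forwarding model, the sketch below (Python, using only the standard library) matches each packet’s destination address against a hypothetical forwarding table, independently of any other packet and with no per-flow state. The prefixes and next-hop names are invented for the example.

    import ipaddress

    # A hypothetical forwarding table: prefix -> next hop label
    FORWARDING_TABLE = {
        ipaddress.ip_network("0.0.0.0/0"): "upstream-A",        # default route
        ipaddress.ip_network("192.0.2.0/24"): "customer-B",
        ipaddress.ip_network("192.0.2.128/25"): "customer-C",
    }

    def next_hop(destination: str) -> str:
        """Forward on the destination address alone, using longest-prefix match."""
        addr = ipaddress.ip_address(destination)
        matches = [prefix for prefix in FORWARDING_TABLE if addr in prefix]
        best = max(matches, key=lambda prefix: prefix.prefixlen)
        return FORWARDING_TABLE[best]

    print(next_hop("192.0.2.200"))   # customer-C: the more specific /25 wins
    print(next_hop("203.0.113.1"))   # upstream-A: only the default route matches

Each lookup depends only on the packet itself and the table; nothing about preceding or following packets is consulted, which is the essence of the stateless switching function described above.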

This claim is honored these days more in the breach than in the observance. Network operators have eschewed passing all responsibility for packet transmission to end points and have responded by constructing internally segmented networks that rely on various forms of virtual state within the network. This extends from the extensive use of VLANs in Ethernet-switched data services to the almost ubiquitous use of MPLS in wide area networks. The current enthusiasm for SDN is no exception to this bias towards the use of virtual circuits within networks.

An Open Consistent Address Space

In an open and consistent Internet every destination on the Internet is reachable from any location on the Internet. The way this is achieved is the universal ability to send a packet to any destination, and this implies that all such destinations require an IP address that everyone else may use. These IP addresses must be allocated and administered such that each address is uniquely associated with a single attached network and with a single attached device within that network. The network itself cannot resolve the inconsistency of address clashes where two or more devices are using the same address, so the responsibility for ensuring that all addresses are used in a manner that is unique is left to the bodies who administer address allocation and registration.

This has been an evolutionary process. The original address administration and registry function was managed through the US research agencies, and the evolution of this model has led to the creation of five “Regional Internet Registries,” each of which serves the address allocation and registry function needs of its regional community. The administration of the central pool of unallocated addresses is part of the IANA function. The policies that govern the administration of the distribution and registration functions within each of these regional registries are determined by the regional communities themselves, in a so-called “bottom-up” self-regulatory manner.

The practices relating to access to address space through allocation and assignment are based on policies developed by the respective address communities in each region. The general theme of these address distribution policies is one of “demonstrated need” where addresses are available to applicants on the proviso that the applicant can demonstrate their need for these addresses within their intended service infrastructure.

Open End-to-End Transport

The service model of a stateless packet switched network is one of unreliable datagram delivery. This service model is inadequate for most useful network services. The Internet has commonly adopted a single end-to-end stream protocol, the Transmission Control Protocol (TCP), that is conventionally used by communicating end systems to transform the network’s unreliable datagram delivery service into a reliable lossless byte stream delivery service.

This is not the only end-to-end transport protocol in common use. Another protocol, the User Datagram Protocol (UDP), is a minimal abstraction of the underlying IP datagram behavior, commonly used by simple query/response applications, such as the DNS resolution protocol.

While many other transport protocols have been defined, common convention in the Internet has settled on TCP and UDP as the two “universal” end-to-end transport protocols, and all connected systems in an open coherent network would be expected to be able to communicate using these protocols. The uniform adoption of end-to-end transport protocol behaviors is a feature of such an open network, in that any two endpoints that both support the same transport protocol should be able to communicate using that protocol. In this open network model, the operation of these end-to-end protocols is completely opaque to the packet-switched network, as it concerns only the communication signaling between the two end systems.
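
The sketch below, using only Python’s standard socket library over the loopback interface, shows the two transport behaviours side by side: TCP presents a connection-oriented, reliable byte stream, while UDP exposes the underlying datagram model almost directly. It is a minimal illustration rather than a realistic client or server.

    import socket
    import threading

    def tcp_demo():
        # TCP: a connection is established and bytes arrive reliably and in order.
        server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        server.bind(("127.0.0.1", 0))              # pick an ephemeral port
        server.listen(1)
        port = server.getsockname()[1]

        def serve():
            conn, _ = server.accept()
            conn.sendall(b"echo:" + conn.recv(1024))
            conn.close()

        threading.Thread(target=serve, daemon=True).start()
        client = socket.create_connection(("127.0.0.1", port))
        client.sendall(b"hello over a byte stream")
        print(client.recv(1024))
        client.close()
        server.close()

    def udp_demo():
        # UDP: a single datagram, no handshake and no delivery guarantee.
        receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        receiver.bind(("127.0.0.1", 0))
        port = receiver.getsockname()[1]

        sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sender.sendto(b"a single query datagram", ("127.0.0.1", port))
        print(receiver.recvfrom(1024))
        sender.close()
        receiver.close()

    tcp_demo()
    udp_demo()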

This perspective of the end-to-end protocols in use in the Internet also makes a critical assumption about the nature of the flow control processes. This model assumes that TCP is the predominant protocol used by end hosts and, most critically, that the flow control algorithm used by all TCP implementations behaves in very similar ways. This model assumes that there is no central method of allocation or governance of network resource allocation to individual end-to-end conversation flows, and instead the model relies on the aggregate outcome of the TCP flow control protocols to provide a fair share allocation of common network resources, where an approximately equal proportion of network resources is utilized by each active conversation flow.

The conventional flow control process is one of additive increase in flow rates (slow) and multiplicative decrease (fast), or “AIMD”. TCP sessions have no arbitrary speed settings, and each TCP session will both impose pressure on other concurrent sessions and respond to pressure from other concurrent sessions to try and reach a meta-stable equilibrium point where the network’s bandwidth is, to some level of approximation, equally shared across the concurrent active flows.

Packet loss is the signal of over-pressure, so a flow will gradually increase its sending rate to the point of onset of packet loss, and at that point it will immediately halve its sending rate and once more gradually probe increased rates until the next packet loss event.
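
The following toy simulation (a deliberately simplified sketch, not a faithful TCP model) shows why this AIMD behaviour tends towards a fair share: each flow adds a fixed increment per round trip, and all flows halve their rate when their combined demand exceeds an assumed link capacity, which stands in for the packet loss signal.

    LINK_CAPACITY = 100.0   # arbitrary capacity units
    INCREASE = 1.0          # additive increase per round-trip time
    DECREASE = 0.5          # multiplicative decrease on packet loss

    def simulate(rates, rounds=2000):
        for _ in range(rounds):
            rates = [r + INCREASE for r in rates]        # additive increase
            if sum(rates) > LINK_CAPACITY:               # the link is full: "packet loss"
                rates = [r * DECREASE for r in rates]    # multiplicative decrease
        return rates

    # Two flows that start far apart end up with roughly equal rates.
    print(simulate([90.0, 1.0]))

The additive phase preserves the difference between the two flows while the multiplicative phase halves it, so over many loss events the flows converge towards an approximately equal share of the link.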

TCP implementations that use a different flow control algorithm normally fare worse, as their efforts to place greater flow pressure on concurrent flows often result in higher packet loss rates in their own flows. However, there has been a significant body of research into flow control algorithms, and there are TCP flow control algorithms that can secure a greater relative share of the network than a conventional AIMD flow control algorithm without this element of self-damage. These algorithms are capable of exerting “unfair” pressure on other concurrent TCP flows, and can consume a greater proportion of network resources as a result.

One aspect of the “network neutrality” debates is the assumption of a relatively passive network where the network’s resources will be equitably allocated due to the general fair-shared outcome that is achieved by the uniform use of a particular TCP flow control behaviour. The TCP ecosystem is changing, with entrants such as Akamai’s use of FAST, Google’s use of QUIC with Chrome and some Linux distributions using CUBIC, and these assumptions about the general equity of outcome of competing end-to-end streaming sessions are now an increasingly approximate set of assumptions.

“TCP Protocol Wars”, http://ipj.dreamhosters.com/wp-content/uploads/2015/07/ipj18.2.pdf

An Open Consistent Name Space

This open and coherent model of the Internet is not limited to the network packet switching and end-to-end transport functions. A critical component of the Internet is implemented as a distributed application that sits alongside clients and servers at the “edge” of the network rather than within the network’s direct purview. This is the Internet’s symbolic name space, the Domain Name System (DNS).

This name space is the combination of a name structure and a name resolution function that allows a user level discourse using familiar symbols and terms to refer to service points connected to the Internet that are identified by IP addresses and transport protocol port numbers.

While it is conceivable to think about many diverse name spaces and even many diverse name resolution protocols, and the Internet as such would not necessarily prevent such an outcome, a coherent view of the Internet requires that the mapping of symbols to IP addresses follows a uniform and consistent convention across the entire network. Irrespective of where and how a DNS query is generated, the response should reflect the current state of the authentic information published in the DNS. The implication here is that an open and consistent DNS uses the hierarchical name space derived from a single and unique root zone, and that all name resolvers perform the resolution of a name query using a search within this same uniquely rooted name space. This is the essential element of a consistent name space for all of the Internet.
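
The expectation of a single coherent name space can be tested directly: the same query sent to different recursive resolvers should return answers drawn from the same authoritative data. The rough sketch below assumes the third-party dnspython package is installed and uses two well-known public resolver addresses (Google Public DNS at 8.8.8.8 and OpenDNS at 208.67.222.222) purely as examples.

    import dns.message
    import dns.query
    import dns.rdatatype

    RESOLVERS = {
        "Google Public DNS": "8.8.8.8",
        "OpenDNS": "208.67.222.222",
    }

    def lookup_a(name, resolver_ip):
        query = dns.message.make_query(name, dns.rdatatype.A)
        response = dns.query.udp(query, resolver_ip, timeout=3)
        return sorted(rr.address
                      for rrset in response.answer
                      if rrset.rdtype == dns.rdatatype.A
                      for rr in rrset)

    name = "www.example.com"
    print({label: lookup_a(name, ip) for label, ip in RESOLVERS.items()})

In a coherent name space the answer sets agree (allowing for deliberate variation such as CDN-based answers); systematic divergence between resolvers is a symptom of filtering, interception or an alternate root.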

Open Applications

The context and content of individual conversations in this open coherent network model is also the subject of a number of common conventions, where certain commonly defined application level protocols are defined for common services. For example, applications wishing to pass email messages are expected to use the SMTP protocol, the retrieval of web pages to use the HTTP protocol, and so on.

This implies that the protocols used to support network-wide functions, including for example data transfer, electronic mail, instant messaging, and presence notification, all require the adoption of openly available protocol specifications to support that application, and that these specifications are openly implementable and not encumbered by restrictive claims of control or ownership.
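
This is easy to see in practice: because the HTTP specification is openly published, a client written from scratch in any language can talk to any conforming server. The minimal sketch below uses only Python’s standard library and fetches the front page of example.com as an arbitrary illustration.

    import http.client

    conn = http.client.HTTPConnection("example.com", 80, timeout=5)
    conn.request("GET", "/", headers={"User-Agent": "open-protocol-demo"})
    response = conn.getresponse()
    print(response.status, response.reason)          # e.g. "200 OK" from any conforming server
    print(response.getheader("Content-Type"))
    conn.close()

No permission from the server’s operator, and no proprietary library, is needed to implement the client side of an openly specified application protocol.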

Much of today’s environment also relies heavily on the concept of “open source” technologies. The Unix operating system, originally developed at AT&T Bell Labs in the 1970’s and distributed as open source, is now the mainstay of much of today’s environment. The implementation of the TCP/IP protocol suite by the Computer Systems Research Group at the University of California, Berkeley in the 1980’s was made available as open source, and the ready availability of this software package was part of the reason behind the rapid adoption of this protocol suite as the common computer networking protocol in the 1990’s. Subsequent “open” implementations of popular applications, such as sendmail for mail, BIND for the DNS, and Apache for web servers, added further momentum to this use of open source, and these days the concept of open source is fundamental to much of the technology base of not only the Internet but the entire information technology world.

Open Security

Security functions include both the open and unrestricted ability for communicating end users to invoke protection from third party eavesdropping and the ability for these end users to verify the identity of the remote party with whom they are communicating, and to authenticate that the communication as received is an authentic and precise copy of the communication as sent. This is useful in many contexts, such as open communications environments that use the radio spectrum, or environments that trade goods and services, where authentication and non-repudiation are vitally important.

To allow such functions to be openly available to all users requires the use of unencumbered cryptographic algorithms that are generally considered to be adequately robust and uncompromised, and the associated availability of implementations of these algorithms on similar terms of open and unencumbered availability.
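
As a small illustration of this point, the sketch below uses an openly specified, unencumbered algorithm (HMAC with SHA-256, available in Python’s standard library) to let a receiver verify that a message is an authentic and unaltered copy of what was sent, given a key the two parties are assumed to share.

    import hashlib
    import hmac

    shared_key = b"a key the two endpoints have agreed on out of band"
    message = b"the communication as sent"

    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    def verify(key, received_message, received_tag):
        expected = hmac.new(key, received_message, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, received_tag)   # constant-time comparison

    print(verify(shared_key, message, tag))                       # True: an authentic, unaltered copy
    print(verify(shared_key, b"a tampered communication", tag))   # False: altered in transit

Confidentiality would similarly rely on openly specified and unencumbered encryption algorithms; the point is that none of this requires licensing a proprietary mechanism.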

An Open Internet

One view of an open network is a consistent network, in that the same actions by a user will produce the same response from the networked environment, irrespective of the user’s location and their choice of service provider. In other words, the interactions between the application on the user’s device and the application that serves the referenced content should not be altered by the network in any way, and users should see identical outcomes for identical inputs across the entire network.

These considerations of the prerequisites of an open coherent Internet do not imply the requirement for an Internet that is operated by a single operator, or one where services are provided via a single service delivery channel or technology. While the Internet is an amalgam of tens of thousands of component networks, populated by millions of services, and served by thousands of suppliers of services and technologies, it is still feasible that this collection of service providers are individually motivated to follow common conventions, and to operate their component services in a fashion that is consistent with all other providers. The property of coherence in an open Internet is an outcome of individual interests to maximize their effectiveness and opportunities by conforming to the common norms of the environment in which they operate.

An Open Internet is not one where open access equates to costless access. The considerations of openness in such a model of an open network relate to the absence of arbitrary barriers and impositions being placed on activities.

What these considerations imply is the ability to evolve the Internet through incremental construction. A novel application need not require the construction of a new operating system platform, or a new network. It should not require the invention and adoption of a new network protocol or a new transport protocol. Novel applications can be constructed upon the foundation of existing tools, services, standards and protocols. This model creates obvious efficiencies in the process of evolution of the Internet.

The second part of the evolutionary process is that if a novel application uses existing specifications and services then all users can access the application and avail themselves of its benefits if they so choose. Such an open unified environment supports highly efficient processes of incremental evolution that leverage the existing technology base to support further innovation. The process of evolution is continual, so it is no surprise that the Internet of the early 1990s is unrecognizable from today’s perspective. But at the same time today’s Internet still uses the same technology components from that time, including the IP protocol, the TCP and UDP end-to-end transport protocols, the same DNS system, and even many of the same application protocols. Each innovation in service delivery in the Internet has not had to reinvent the entire networked environment in order to be deployed and adopted.

Much of the Internet today operates in a way that is consistent with common convention and is consistent with this model of an open, unified and accessible public resource. But that does not mean that all of the Internet environment operates in this manner all of the time, and there are many fragmentary pressures.

Such pressures appear to have increased as the Internet itself has expanded. These fragmentary pressures exist across the entire spectrum of technologies and functions that together make up the Internet.

Some of these fragmentary pressures are based in technology considerations, such as the use of the Internet in mobile environments, or the desire to make efficient use of high capacity transmission systems. Other pressures are an outcome of inexorable growth, such as the pressures to transition the Internet Protocol itself to IPv6 to accommodate the future requirements of the Internet of Things. There are pressures to increase the robustness of the Internet and improve its ability to defend itself against various forms of abuse and attack.

How these pressures are addressed will be critical to the future of the concept of a coherent open Internet. Our ability to transform responses to such pressures into commonly accepted conventions that are accessible to all will preserve the essential attributes of a common open Internet. If instead we deploy responses that differentiate between users and uses, and construct barriers and impediments to the open use of the essential technologies of the Internet, then not only will the open Internet be threatened, but the value of the digital economy and the open flow of digital goods and services will be similarly impaired.

The Where and How of “Internet Fragmentation”

In defining what is meant by “Internet Fragmentation” it is useful to briefly describe what is meant by its opposite, an “Open and Coherent Internet”. As we’ve explored in the previous section, “coherence” implies that each of the elements of the Internet is orchestrated to work together to produce a seamless Internet that does not expose the boundaries between discrete elements. Coherence also implies consistency, in that the same trigger actions by a user produce the same response from the networked environment, irrespective of the user’s location and their choice of service provider. Openness also implies the ability to integrate and build upon existing tools, technologies and services to create new technologies and services, and in turn to allow others to further evolve the technology and service environment.

“Fragmentation” on the other hand encompasses the appearance of diverse pressures in the networked environment that leads to diverse outcomes that are no longer coherent or consistent. In the context of the Internet, fragmentation also encompasses various ways in which openness is impaired, and also can include consideration of critical elements of service and the fragility of such arrangements when the supply of such services is left to a very small number of providers.

This section contains some notes on where and how there are fragmentary pressures that are driving apart aspects of the Internet and creating various “islands” of differentiated functionality and connectedness. It concentrates on the technical aspects of these pressures for fragmentation and does not attempt to analyse public policy implications.

IP level Fragmentation

The issues around address exhaustion in IPv4 and the transition to IPv6 deserve attention in relation to any discussion of potential Internet fragmentation.

The transition to IPv6 is still a process without clear coherence or assured outcomes. It is possible that the work undertaken already by a relatively small number of retail Internet access providers, including notably large ones such as AT&T, Comcast, Deutsche Telekom and KDDI, will generate sufficient impetus in the market to pull both content providers and other ISPs along with them in embarking on IPv6 services. This is, however, by no means an assured outcome, and the continued expansion of Network Address Translators (NATs) in the IPv4 Internet appears to have no immediate end in sight. The market signals are as yet unclear and the public policy actions have not yet provided adequate impetus, with the result that the general response from the majority of players has been insufficient to make any real progress in trying to shut down the use of IPv4 in the Internet.

Due to the address exhaustion of IPv4, increased use is being made of Carrier Grade NATs (CGNs) to share this scarce address resource across a greater number of users. In other words, address exhaustion in IPv4 is creating larger and larger networks of “semi-opaque” connectedness within the public network. IPv4 addresses used in conjunction with NATs no longer have a clear association with a single end user, and the most probable outcome is that parts of the net will “go dark” in the sense that users’ actions within this “dark” network are effectively untraceable. These devices also compromise other aspects of robustness in the engineering of the Internet at this level of operation. The requirement to pass all traffic to and from an external site through the same address translation unit impairs some forms of robust network operation that use diverse points of interconnection and diverse connectivity, and instead this form of state-based middleware creates critical single points of failure. Given the critical importance of content delivery in many networks, the presence of CGNs creates incentives to place selected content distribution functions on the “inside” of the CGN. This runs the risk of the network discriminating between various content delivery systems through this ability to position some content in an advantaged position as compared to others. The longer-term pressures are difficult to discern at this stage, but the longer this hiatus in addresses lasts, the greater the levels of address pressure. The greater the address pressure on the IPv4 network, the greater the fragility and complexity of networks using address sharing.
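
A back-of-the-envelope sketch makes the sharing arithmetic concrete. The figures below are illustrative assumptions rather than measurements of any particular CGN deployment.

    TOTAL_PORTS = 65536
    RESERVED_PORTS = 1024            # the well-known port range, typically not handed out
    PORTS_PER_SUBSCRIBER = 2000      # an assumed per-subscriber port allocation

    usable_ports = TOTAL_PORTS - RESERVED_PORTS
    subscribers_per_address = usable_ports // PORTS_PER_SUBSCRIBER
    print(subscribers_per_address)   # roughly 32 subscribers behind one public address

Under assumptions like these, a log entry that records only a public source address identifies a pool of users rather than an individual; without the source port and an accurate timestamp the action is effectively untraceable.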

Another side effect of IPv4 address exhaustion is address trading. This market has appeared organically, and there is growing evidence that transferred IPv4 addresses are not all being registered in the established address registries. Some of this is evidently due to address “leasing”, where the lessee is not registered as the current beneficial user of the address, but it is also sometimes due to a reluctance of the address holder to enter the address into the address registry because of concerns over address title policies or similar concerns for the parties involved. The larger the pool of unregistered addresses, the greater the pressure to fracture the address space. There is no clear way back when or if the space fractures in this manner.

With the exhaustion of the address allocation framework for IPv4 and the established common belief that addresses are plentiful in IPv6, much of the original rationale for the regional address registry structure is weakened.

Much of the original rationale for the regional Internet address distribution framework lay in the perceptions of scarcity in the supply of addresses in the IPv4 address plan, and the need to perform a complex rationing operation. The clearly finite pool of addresses and the larger visions of the Internet’s future implied that it was not possible to simply allocate an adequate pool of addresses to each network operator to meet perceived needs, and instead each regional registry devised a rationing scheme based around the principle of “demonstrated need”. The original objective of this process was to ration the consumption of addresses in IPv4 until such time as IPv6 was prevalent and there was no further need for IPv4 addresses. Without the need for further rationing and its associated administrative overhead, and with a reversion to a potentially far simpler registry model, the case for maintaining regionally separate registry functions is an open question.

However, not all of the pressures in this space are directed towards aggregation of the registry function into a single operation. When coupled with a cyber security perspective that it’s “good to know where every address is in a country”, it’s reasonable to anticipate further pressure to fracture the regional structures into national structures. In the Asia-Pacific region, APNIC already has China, India, Indonesia, Korea, Japan, Taiwan and Vietnam all operating such national address registries, and in Latin America there are comparable structures in Brazil and Mexico. It is an open question whether this will spread in response to these pressures of national security and the effective end of the conservative address allocation function.

Routing Fragmentation

The routing system is intended to ensure that every switching element is loaded with consistent information, such that every attached device on the Internet is reachable by any other device. The Internet uses a two-level routing hierarchy. The local routing domains (or “Autonomous Systems” (ASs)) use a variety of routing protocols; as they do not directly interact with each other, this is not an issue at all. The second level (the “inter-domain” space) uses a single routing protocol, the Border Gateway Protocol (BGP).

Both BGP and the broader Internet routing space are under various pressures.

The AS identification field was defined as a 16-bit number field. The Internet community is close to exhausting this identifier space, and needs to move to a larger 32-bit field. Over the past 20 years the problem has been identified, technical standards have been produced, software has been deployed by vendors, the transition strategy has been defined, and the process has been started. In Europe the process is well under way, while in North America (Canada and the United States) the process has stalled and almost no 32-bit AS numbers are in use. The subtle difference appears to be the use of AS-specific communities. The Canadian and United States ISPs appear to make use of these AS-specific communities for routing policy, and are reluctant to use 32-bit AS numbers for this reason. The European ISPs appear to make more use of routing registries to describe routing policies, and these registries are largely agnostic about the size of the AS number field. It is unclear how the North American ISPs are going to resolve their issues, given that the 2-byte AS number pool will be exhausted in the coming months.
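
The numbering change itself is simple arithmetic: the original AS field holds values from 0 to 65535, while a 32-bit AS number is often written in the “asdot” form of <high 16 bits>.<low 16 bits> described in RFC 5396 (which prefers the plain decimal “asplain” form). A small conversion sketch:

    def to_asdot(asn: int) -> str:
        """Render a 32-bit AS number in asdot notation; 16-bit values are unchanged."""
        high, low = divmod(asn, 65536)
        return f"{high}.{low}" if high else str(asn)

    def to_asplain(asdot: str) -> int:
        """Convert asdot notation back to a plain 32-bit integer."""
        if "." in asdot:
            high, low = (int(part) for part in asdot.split("."))
            return high * 65536 + low
        return int(asdot)

    print(to_asdot(64512))     # 64512 -- fits within the original 16-bit field
    print(to_asdot(65536))     # 1.0   -- the first AS number beyond the 16-bit pool
    print(to_asplain("2.1"))   # 131073

The operational difficulty is not this conversion but the fact that the widely used AS-specific BGP community attribute only has room for a 16-bit AS number, which is the reluctance noted above.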

The routing system is under constant pressure from false routing advertisements. Some of these are local scope advertisements intended to comply with national directives to filter certain IP addresses as part of a national content filtering directive. Some are the result of mistakes in router configuration. Others are deliberately malicious advertisements designed to redirect traffic in some manner, or to disrupt the genuine service. Efforts to improve the security of the routing system are being explored by the Internet Engineering Task Force (IETF), but the measures being contemplated imply additional overheads in routing, increased complexity and increased brittleness. The security is most effective when the entirety of the routing space adopts the technology, and balancing the local costs against the common benefit that is contingent on universal adoption is a significant issue for this approach.

The routing space is not a uniform space: different address blocks are visible in different parts of the Internet, and there is often no clear reason why. There are “ghost routes” where the original withdrawal has not successfully propagated across the entire network and some networks are still carrying reachability information for the old route. There are “islands” of more specific routes, which are blocked from universal propagation by various prefix length filters being applied by various routers. There is the selective absence of routing information because some routing domains use “Route Flap Damping” and others do not. Local routing policies may apply differential treatment to various routes, as is seen in the distinction between transit, peering and customer relationships as implemented in the routing space. The result is that there is no clear consensus as to what constitutes “the Internet” in a routing sense. Each AS appears to see its own routing view of the Internet, and there are invariably some number of subtle distinctions between each of these local views. The result is that it is not assured that every single end point on the Internet can send a packet to any other connected end point at any time; in some cases the routing system simply does not contain the information to support this universal form of connectivity, though for most intents and purposes the overwhelming majority of all end points can be reached.

ISP Peering and Transit

The Internet architecture does not expose user level transactions to the network, and inter-network arrangements are not based on transaction accounting. At its heart, every user of the Internet pays for his or her own access. In turn, their ISP undertakes to provide connectivity to the rest of the Internet. It cannot do this without the support of other ISPs. Two ISPs that interconnect and exchange traffic typically do so under one of two broad models. One model is the transit relationship, where one party pays the other, and in return is provided with all of the routes available on and via the other network. The transit model is used in open competitive markets when the interconnection is perceived as being asymmetric: one party has significant network assets, and the other party wants to access those assets. There is no particular assurance from this model that a customer of a transit provider necessarily sees the entirety of the Internet.

The typical transit arrangement is that the customer is given access to the route set controlled by the transit service provider, routes that the customer ISP cannot obtain as efficiently by any other means. The other broad model is a peering model, where neither party pays the other, and each party learns only the customer routes of the other. Through the use of peering, ISPs can reduce their transit costs, as they do not need to purchase transit for that traffic. To save interconnection costs, ISPs establish or make use of Internet Exchange Points (IXPs), where they can peer with multiple networks at the same time. The peering model is often seen in open competitive market situations where the two providers bring, in each party’s perception, approximately equal assets to the connection, so neither party believes that there is undue leverage of the investments and assets of the other. Peering arrangements are at times challenging to sustain. Some networks grow and want to change their position to become a seller of transit connectivity. They may then opt to de-peer some networks in order to force them to become customers. There are also various hybrid approaches that combine peering of customer networks with the option of also purchasing a transit service. For example, Hurricane Electric has an open peering policy, while at the same time selling an optional transit service to the networks it peers with.

The market-based approach to connectivity represented by this model of interconnection is efficient, and relatively flexible, and it embraces the overwhelming proportion of the inter-provider relationships in the Internet. Its divergence from the model supported by telephony is still a source of continuing tension in certain international circles. Efforts by certain countries to assert some form of paid relationship by virtue of their exclusive role as the point of access to a national user base have, in general, been relatively self-harming, as the consequence was limited external visibility for the very national user community that was used as leverage in such negotiations. Nonetheless, where commercial negotiations do take place and in the absence of sufficient competition, one player may leverage their position to endeavor to extract higher rents from others. In those instances, and this is the reason why the Internet has become so successful in competitive markets, ISPs have the option to bypass each other using transit if they find that more economical.

Other tensions have appeared when the two parties bring entirely different assets to a connection, as is the case with Content Distribution Networks connecting with Internet Access Providers. One party is bringing content that presumably is valued by the users, the other party is bringing access to users that is vital for the content distribution function. Whether or not a party can leverage a termination monopoly in such situations depends on the competitive market situation of the location it operates in. For example Free in France for a while demanded paid peering from Google and would not upgrade saturated interconnects, but in 2015 upgraded its peering with Google without receiving payment.

Name Space Fragmentation

The name space has been under continuous fragmentation pressure since its inception.

The original public name space has been complemented by various locally scoped name spaces for many years. As long as the public name space used a static list of top level domains, these private name spaces were able to occupy unused top level name spaces without serious side effects. The expansion of the gTLD space challenges this assumption, and the collision of public and private name spaces leads to the possibility of information leakage.

Other pressures have, from time to time, taken the form of augmenting the public root with additional TLD name spaces through the circulation of “alternate” root servers. These alternate roots generate a fractured name space with the potential for collision, and as such have not, in general, proved sustainable. The problem with these alternate systems is that a name that refers to a particular location and service in one domain may refer to an entirely different location and service in another. The “use model” of the Internet’s user interface is based on the uniqueness of a domain name, and in fact based on the graphical representation of a domain name on a user’s device. So if a user enters a network realm of an alternate root name space where a previously known and trusted domain name is mapped to a different location, then this can be exploited in many ways to compromise the user and the services they use. The confidence of users and the trust that is placed in their use of the Internet is based on a number of premises, and one of the more critical premises is that the Domain Name space is consistent, unfragmented and coherent. This premise is broken when alternate root systems are deployed.

As well as these fragmentary pressures driven by an objective to augment the name space in some fashion, there are also pressures to filter the name space, so that users are unable to access the name information for certain domain names. In some cases these are specific names, while in other cases it has been reported that entire TLD name spaces are filtered.

It has been observed that the TLD for Israel, .il, is filtered in Iran, such that no user in Iran is able to resolve a DNS name under the .il TLD.

http://www.potaroo.net/reports/2015-07-GTLD-Universal-Acceptance-report-v1.pdf

The resolution process of the DNS is also under pressure. The abuse of the DNS to launch hostile attacks has generated pressures to shut down open resolvers, to filter DNS query patterns, and in certain cases to alter a resolver’s responses to force the query to be repeated using TCP rather than UDP.

This abuse has also highlighted another aspect of the DNS, namely that for many service providers the operation of a DNS resolution service is a cost centre rather than a generator of revenue. With the advent of high quality, high performance external DNS resolution services, in the form of Google’s Public DNS, the OpenDNS resolver system and Level 3’s long standing open DNS resolver service, many users and even various ISPs have decided to direct their DNS queries to these open resolvers. Such a response has mitigated the effectiveness of local name filtering, and at the same time allowed these open providers to gain a substantial market share of the Internet’s DNS activity.

Google recently noted that “Overall, Google Public DNS resolvers serve 400 billion responses per day.”

http://googlewebmastercentral.blogspot.fr/2014/12/google-public-dns-and-location.html

Not only is the DNS protocol enlisted to launch attacks; the DNS itself is under attack. Such attacks are intended to prevent users from obtaining genuine answers to certain queries, substituting deliberately false answers instead. The DNS does not use a “protected” protocol, and such substitution of false answers is often challenging to detect. This property of the DNS has been used by both attackers and state actors when implementing various forms of DNS name blocking. Efforts to alter the DNS protocol to introduce channel security have been contemplated from time to time and have some role in certain contexts (such as primary to secondary zone transfers), but they have not been overly effective in the area of resolver queries. Another approach is to allow the receiver of a response to validate that the received data is authentic, and this is the approach behind DNSSEC. Like many security protocols, DNSSEC is most effective when universally adopted, in that at that point any attempt to alter DNS resolution responses would be detectable. With the current piecemeal level of adoption, where a relatively small number of DNS zones are signed (and even where zones are signed, DNSSEC uptake at the individual domain name level is vanishingly small, even amongst banks and large-scale ecommerce providers), the value of this security approach is significantly smaller than would be the case with general adoption.
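
A rough sketch of what validation looks like from the client side is shown below. It assumes the third-party dnspython package and a validating recursive resolver (Google’s public resolver at 8.8.8.8 validates DNSSEC); the resolver sets the AD (“authentic data”) flag only when the signature chain from the root verifies. The example names reflect signing status at the time of writing.

    import dns.flags
    import dns.message
    import dns.query
    import dns.rdatatype

    def validated_by_resolver(name, resolver_ip="8.8.8.8"):
        query = dns.message.make_query(name, dns.rdatatype.A, want_dnssec=True)
        response = dns.query.udp(query, resolver_ip, timeout=3)
        return bool(response.flags & dns.flags.AD)

    print(validated_by_resolver("www.ietf.org"))     # True if the zone's signature chain validates
    print(validated_by_resolver("www.google.com"))   # False: the zone is not signed

Note that trusting the resolver’s AD flag still means trusting the path to the resolver; validating the signatures within the end system itself is the more robust, and more expensive, option.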

The contents of the DNS are also under some pressure for change, and there are various ways that applications have chosen to handle them. This is evident in the introduction of scripts other than ASCII (so-called “Internationalized Domain Names,” or “IDNs”). Due to a concern that DNS implementations were not necessarily 8-bit clean, the introduction of DNS names using characters drawn from the Unicode character space required the application to perform a transform of the original Unicode string to generate an encoded string in the ASCII character set that strictly obeys the “letter, digit, hyphen” rule. Similarly, the application is required to map back from this encoded name to a displayed name. The integrity of the name system with these IDN components is critically dependent on every application using precisely the same set of mappings. While this is desirable, it is not an assured outcome.

A second issue concerns the “normalisation” of name strings. The ASCII DNS is case-insensitive, so query strings are “normalized” to monocase when searching for the name in the DNS. The normalisation of characters from non-ASCII scripts presents some issues of common use equivalence, and what may be regarded as equivalent characters by one community of users of a given language may not be regarded as equivalent by another community of users of the same language. In recent years, engineers and linguists within the ICANN community have been working towards a common set of label generation rules, and have been making real progress. This is a particularly complex issue in the case of Arabic script, which has many character variants (even within individual languages) and uses some characters which are not visible to humans (zero-width joiners). The DNS is incapable of handling such forms of localisation of script use.

Despite being available for 15 years, IDNs are still not working seamlessly in every context or application in which an ASCII domain is used. This issue is called “universal acceptance”. Although it applies equally to other new gTLDs, it is far more complex and challenging to overcome for IDNs. Examples include IDN email addresses: while Google announced last year that Gmail will support IDN addresses, this only applies when both the sender and the receiver have Gmail accounts. IDN email addresses are not supported by any of the major application providers in the creation of user accounts (which often use email addresses as the user’s unique account identifier), nor in digital certificates, DNS policy data or even many web browsers.
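
The encoding transform described above can be seen directly in a couple of lines of Python. The standard library’s “idna” codec implements the older IDNA2003 rules (the separate idna package implements IDNA2008), which is itself a small example of how different mapping rules can creep into different applications.

    name = "bücher.example"

    ascii_form = name.encode("idna")       # the form actually used in DNS queries
    print(ascii_form)                      # b'xn--bcher-kva.example'

    displayed = ascii_form.decode("idna")  # the reverse mapping used for display
    print(displayed)                       # bücher.example

The integrity of the name space with IDN components depends on every application applying exactly the same mapping in both directions, which, as noted above, is desirable but not assured.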

In a test involving some 304 new generic top level domains, a problem was observed with punycode-encoded IDNs, where a combination of Adobe’s Flash engine, the Microsoft Windows platform and either the Internet Explorer or Firefox browsers was incapable of performing a scripted fetch of an IDN. The problem illustrates the care needed in application handling where entirely distinct internal strings (in this case the ASCII punycode and the Unicode equivalent) refer to the same object.

http://www.potaroo.net/reports/2015-07-GTLD-Universal-Acceptance-report-v1.pdf

The fragmentation risk is that the next billions of Internet users who are not Latin-script literate – for example, 80% of India’s 1.2 billion population is unable to speak English – will not be able to benefit from the memorability and human usability of the domain name system. The problem has been masked to some extent by the demographics of Internet uptake to date, but is likely to become more apparent as the next billion comes online. Another possibility is that such populations will simply not use the domain name system. Uptake of domain name registrations (both ASCII and IDN) in the Arab States and the Islamic Republic of Iran is extremely low, and stands in stark contrast to the enthusiastic uptake of social network platforms (Egypt has 13 million Facebook users; Saudi Arabia’s Twitter usage grew by 128% in 2013, to 1.8 million).

Another issue with the use of IDNs concerns the “homograph” issue, where different characters drawn from different scripts use precisely the same display glyph on users’ screens. The risk here is of “passing off”, where a domain name is registered with a deliberate choice of script and characters that will be displayed using the same character glyphs as a target name. This has led to different applications behaving differently when handling exactly the same IDN domain name. Some applications may choose to display the Unicode string, while others may elect to display the ASCII encoding (punycode) and not display the intended Unicode string.

http://en.wikipedia.org/wiki/IDN_homograph_attack
https://wiki.mozilla.org/IDN_Display_Algorithm
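
The underlying issue is easy to demonstrate: two names that render with the same glyphs are nevertheless different code point sequences, and therefore different DNS names with different encoded forms. The sketch below uses a Cyrillic lookalike character in an invented example name.

    latin = "paypal.example"       # all Latin characters
    lookalike = "pаypal.example"   # the second character is U+0430, CYRILLIC SMALL LETTER A

    print(latin == lookalike)                       # False, despite identical rendering
    print(lookalike.encode("idna"))                 # an xn-- label quite unlike the Latin name
    print([hex(ord(ch)) for ch in lookalike[:3]])   # exposes the Cyrillic code point

Whether a user ever sees the difference depends entirely on how the application chooses to display the name, which is precisely the inconsistency described above.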

Another quite different precedent has been created by the IETF when it opened up the “Special Use” domain name registry (RFC 6761). There was a common convention that names that looked like DNS names were in fact DNS names, and were resolvable by performing a DNS query. Other name forms, and other non-DNS forms of name resolution, were identified principally through the URI name scheme, which used the convention of a scheme identifier and a scheme-defined value. The registration of the top level name “.onion” is anomalous in this respect, in that the names under .onion are not DNS labels and are not resolvable using a conventional DNS query. Similar considerations apply to names under the “.local” top level domain, which are intended to be local scope names resolved by multicast DNS queries. Such implicit scope creep of the DNS label space, to encompass names that are not resolvable by the public DNS yet otherwise resemble conventional DNS names, creates greater levels of potential confusion.

Application Level Fragmentation

Similar considerations apply at the application level, and there are tensions between maximizing interoperability with other implementations of the same application and a desire to advantage the users of a particular application.

Apple’s iMessage application used a similar framework to other chat applications, but elected to use encryption with private key material that allowed it to exchange messages with other Apple chat applications, but with no others.

There is an aspect of fragmentation developing in the common network applications, typically over behaviors relating to mutual trust, credentials and security. For example, the mail domain is a heavily fractured domain, predominantly because of the efforts of the “mainstream” mail community to filter connections from those mail agents and domains used by mail spammers. It is no longer the case that any mail agent can exchange mail with any other mail domain, and in many cases the mail agent needs to demonstrate its credentials and satisfy the other agent that it is not propagating spam mail.

Similar pressures are emerging to place the entirety of application level interactions behind secure socket and channel encryption, using the service’s domain name as the key. It is possible that unsecured services will increasingly be treated as untrustworthy, and there may be a visible line of fracture between applications that use security as a matter of course and those that operate in the clear.

Another potential aspect of fragmentation concerns the demarcation between the application, the host operating system, the local network and the ISP environment. The conventional view of the Internet sees this as a supply chain that invokes trust dependencies. The local network collects IP addresses and DNS resolver addresses from its ISP. The local network uses the ISP’s DNS resolver and relies on the ISP to announce the local network’s address to the Internet. Devices attached to the local network are assigned IP addresses as a local network function. They may also use the local network gateway as a DNS resolver. Applications running on these devices use the network services operated by the system running on the device to resolve domain names, and to establish and maintain network connections. Increasing awareness of the value of protecting personal privacy, coupled with the increasing use of these devices in every aspect of users’ lives and the increasing use of these devices to underpin many societal functions, implies increasing levels of concern about information containment and protection. Should a local network trust the information provided through the ISP, or should it use some other provider of service, such as a VPN provider or a DNS resolution service? Should an attached device trust the local network and the other attached devices? Common vectors for computer viral infection leverage these levels of trust within local networks, and services such as shared storage systems and similar can be vectors for malware infections. Similarly, should an application trust its host? Would a cautious application operate with a far greater level of assured integrity were it able to validate DNS responses using DNSSEC directly within the application? How can the application protect both itself and the privacy of the end user unless it adopts a suitably cautious stance about the external environment in which it operates?

As computing capability becomes ever more ubiquitous and the cost, size and power requirements of complex computing fall, there is less incentive to create application models that build upon external inputs; instead there are incentives to draw those inputs back under the explicit control of the application, and to use explicit validation of external interactions rather than accepting them on trust. This does not necessarily fragment the resultant environment, but it does make the interactions of the various components in this environment far more cautious in nature.

Security Fragmentation

It has often been observed that security was an afterthought in the evolution of the Internet protocol suite, and there is much evidence to support this view. Many of the protocols and applications we use are overly naive in their trust models, and are ill-equipped to discriminate between authentic transactions and various forms of passing off and deception. None of the original protocols operated in a “private” mode, allowing eavesdroppers to have a clear view of network transactions. The routing system itself forms a large mutual trust environment where rogue actors are often difficult to detect. This is coupled with an environment where edge devices are similarly vulnerable to subversion, and the combination of the two has proved to be exceptionally challenging.

Today’s situation appears to be that the very openness of the network and its protocols is being turned against the Internet, and a cohesive view of how to improve the situation is still evolving and incomplete. Instead, it is evident that there is a certain level of fragmentation and diversity in the current endeavours in network security.

The domain name security framework is a good example of this. When the requirement emerged to be able to associate a cryptographic key pair with a service delivered from a given domain name, the logical answer would have been to bind the public key to the domain name in the same fashion as an address is bound to a domain name, namely through the use of a DNS resource record and publication within the DNS. However, this was not possible at the outset, due to the lack of mechanisms in the DNS to permit validation of a DNS response and the lack of a standard way of representing a public key in the DNS. The interim measure was the enrolling of a set of third party certification authorities who generated certificates that associated a given domain name with a particular public key. This set has expanded to many hundreds of these third party Certification Authorities (CAs), all of whom are trusted by end users for their security needs.

The central problem now is that the user does not know in advance which certification authority has issued a public key certificate for which name, so the user and the subject of an issued certificate are both forced to trust the entire set of CAs. If any CA is compromised, then it can be coerced into issuing a fake certificate for any domain name, compromising the privacy and integrity of the service offered by that domain name. This is not a framework that naturally induces high quality and integrity. The system is only as strong as the weakest CA, and the risk inherent in this framework does not lie in the choice of a particular CA to certify a domain name, but in the level of integrity of the entire collection of CAs.
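
The shape of this trust model is visible in a few lines of standard library Python: the client accepts the server’s certificate if it chains to any of the certification authorities in the local root store, and nothing in the exchange tells the client which CA should have issued it. The host name is an arbitrary example.

    import socket
    import ssl

    context = ssl.create_default_context()    # loads the platform's full set of trusted root CAs
    hostname = "www.example.com"

    with socket.create_connection((hostname, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
            print(cert["issuer"])              # whichever CA happened to issue this certificate
            print(cert["subject"])

Because the domain name holder’s choice of CA is invisible to the client, a compromise of any one of the hundreds of trusted CAs can be used against any name.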

Google posted this entry on their online security blog in March 2015:

“On Friday, March 20th, we became aware of unauthorized digital certificates for several Google domains. The certificates were issued by an intermediate certificate authority apparently held by a company called MCS Holdings. This intermediate certificate was issued by CNNIC.

“CNNIC is included in all major root stores and so the misissued certificates would be trusted by almost all browsers and operating systems. Chrome on Windows, OS X, and Linux, ChromeOS, and Firefox 33 and greater would have rejected these certificates because of public-key pinning, although misissued certificates for other sites likely exist.

“We promptly alerted CNNIC and other major browsers about the incident, and we blocked the MCS Holdings certificate in Chrome with a CRLSet push. CNNIC responded on the 22nd to explain that they had contracted with MCS Holdings on the basis that MCS would only issue certificates for domains that they had registered. However, rather than keep the private key in a suitable HSM, MCS installed it in a man-in-the-middle proxy. These devices intercept secure connections by masquerading as the intended destination and are sometimes used by companies to intercept their employees’ secure traffic for monitoring or legal reasons. The employees’ computers normally have to be configured to trust a proxy for it to be able to do this. However, in this case, the presumed proxy was given the full authority of a public CA, which is a serious breach of the CA system. This situation is similar to a failure by ANSSI in 2013.

“This explanation is congruent with the facts. However, CNNIC still delegated their substantial authority to an organization that was not fit to hold it.



“As a result of a joint investigation of the events surrounding this incident by Google and CNNIC, we have decided that the CNNIC Root and EV CAs will no longer be recognized in Google products.”

https://googleonlinesecurity.blogspot.ca/2015/03/maintaining-digital-certificate-security.html

Improving this situation has proved to be exceptionally challenging. Adding digital credentials into the DNS to allow DNS responses to be validated can provide a robust mechanism to place public keys into the DNS, but the cost of such a measure is increased fragility of the DNS, increased complexity in zone registration and administration, increased time to perform a DNS query and much larger DNS responses. In addition, the chosen form of DNS security is one that interlinks parent and child zones, so that piecemeal adoption of this form of security has limited benefit. All of these considerations, coupled with the incumbency of a thriving CA industry, have proved to be inhibitory factors in adopting a more robust form of associating domain names with public keys to improve the integrity of secure communication.

Similar issues have been encountered in the efforts to retrofit secure credentials that allow authentication into other protocols. The Internet’s inter-domain routing protocol, the Border Gateway Protocol (BGP), is a mutual trust environment where lies, whether deliberate or inadvertent, readily propagate across the Internet, causing disruption through the diversion of traffic to unintended destinations. Efforts to improve the situation by using public key cryptography to provide a framework that allows routing information to be validated against a comprehensive set of digital credentials involve considerable complexity, add a potentially large computational overhead to the routing function, and contribute further fragility to a system that is intended to be robust. Such systems can only authenticate as valid information that is already signed. False information is invariably indistinguishable from unsigned information, so the ability of the system to detect all attempts at abuse is predicated on universal adoption of the technology.
