2013-08-07



‘Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know About Technology Is in Your Brain’ by Jeff Stibel (Palgrave Macmillan; July 23, 2013)

Table of Contents:

i. Introduction/Synopsis

PART I: NATURAL NETWORKS: THE ANT COLONY AND THE HUMAN BRAIN

Section A: Ants

1. Natural Networks Exhibit A: The Ant Colony

2. How Ant Colonies Work

3. An Ant Colony Is More than the Sum of Its Parts

a. Emergence

b. The Mature Ant Colony

4. The Evolution of an Ant Colony

a. Growth

b. Breakpoint

c. Equilibrium

Section B: Brains

5. Natural Networks Exhibit B: The Human Brain

a. How the Human Brain Works

6. The Evolution/Life Cycle of the Human Brain

a. Growth

b. Breakpoint

i. In the Life Cycle

ii. In the Evolution of the Human Species

c. Equilibrium

PART II: THE INTERNET AND INTERNET-RELATED NETWORKS

Section C: The Growth Phase of the Internet

7. The Evolution of the Internet

a. The Beginnings of the Internet: The ARPAnet

b. The Internet Moves to the National Science Foundation

c. The Internet Goes Global (and Adopts the World Wide Web)

8. The Energy Problems (and Solutions)

9. The Continuing Growth of the Internet (and the Challenges This Poses)

Section D: The Internet’s Interface: The World Wide Web (and Its Websites)

10. The Evolution of the World Wide Web (and How We Navigate It)

a. Yahoo!

b. The First Search Engines

c. The Rise of Google

11. The Breakpoint of the Web (and How to Help it Reach Equilibrium Gracefully)

a. The Breakpoint of the Web

b. How the Web Can Be Improved

12. How Websites and Web Networks Can Be Improved

PART III: THE FUTURE OF THE INTERNET & CONCLUSION

13. The Future of the Internet (The Internet as Brain)

14. Conclusion

i. Introduction/Synopsis

This is not a book about the end of the internet, as the controversial title may seem to suggest. Rather, it’s a book about networks (meaning groups of interconnected people or things) and how networks evolve; and its main focus is on internet-related networks and the internet itself (which is one enormous network). The author, Jeff Stibel, argues that there are certain natural laws that govern the unfolding of networks, and that understanding these laws can help us understand how the internet (and other internet-related networks) is likely to evolve over time, and also how we should approach these networks in order to get the most out of them (including making money off of them).

When it comes to the evolution of a network, Stibel argues that there are three main stages here: 1) Growth; 2) Breakpoint; and 3) Equilibrium. In the growth phase, the network grows in size, usually at a very quick (often exponential) pace. This is a precarious time for networks, for if they do not grow fast enough and large enough they will simply wither away and die (the vast majority of networks do in fact die at this stage).

Though a network must grow very quickly in the growth phase just to survive, this initial rate of growth is not something that can be sustained indefinitely. For all networks have a natural carrying capacity that limits how large they can be. This carrying capacity is defined by two factors: energy and organizational complexity. When it comes to energy, a network needs physical energy in order to sustain itself, and thus it is limited by how much energy the environment makes available and how much of that energy the network is able to access (and physical energy is never infinite, so all networks must ultimately have a physical limit).

When it comes to organizational complexity, as a network grows in size it also increases in complexity, and it eventually reaches a point where it grows so complex that it becomes unwieldy and begins to lose its utility. Thus a network has an optimal level of organizational complexity, and this optimal level defines its carrying capacity. (Whether a network hits its carrying capacity due to energy limits or complexity limits depends on the network itself—but whichever limit is met first defines the carrying capacity of that network.)

Now, while each network has a natural carrying capacity, a healthy, successful network will almost always grow beyond its carrying capacity during its growth phase. This is because a network never actually knows what its carrying capacity is beforehand, and can only discover this by feeling the effects of having gone beyond it. Once a network exceeds its carrying capacity it begins to perform in a suboptimal way, until eventually, if it keeps on growing, it collapses. The point at which a network collapses is the breakpoint (the second stage in the evolution of a network).

Now, if a network has grown too far beyond its carrying capacity (often due to human interference), it may collapse entirely. However, if the network is allowed to reach its breakpoint naturally, it will usually just collapse in a way that leads it to shrink back in size and complexity to its natural carrying capacity. If the former happens, the network dies; if the latter happens, the network reaches the third and final stage: equilibrium. In the equilibrium stage the network may lose some of its size, but it is at this stage that it begins to improve in quality and stability.
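
Stibel presents this model in prose rather than equations, but the growth/breakpoint/equilibrium arc he describes is well captured by a discrete logistic model, which naturally overshoots its carrying capacity before settling back. Here is a minimal sketch (all names and numbers are invented for illustration; the 10,000 carrying capacity anticipates the harvester-ant example later in the summary):

```python
def network_size(steps=30, capacity=10_000, rate=1.8):
    """Discrete logistic growth: fast early growth, an overshoot past
    the carrying capacity (the breakpoint), then a collapse back to
    equilibrium."""
    size = 100.0
    history = []
    for _ in range(steps):
        # Growth slows as size nears capacity, and reverses beyond it
        size += rate * size * (1 - size / capacity)
        history.append(round(size))
    return history

trajectory = network_size()
print(max(trajectory))   # the peak overshoots 10,000: the breakpoint
print(trajectory[-1])    # the tail settles back near 10,000: equilibrium
```

Run it, and the trajectory climbs quickly, spikes past 10,000, then damps back down to it: growth, breakpoint, equilibrium.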

Take an ant colony, for example. A successful ant colony grows in size until it reaches its breakpoint (sometimes due to an energy limit, but most often due to a complexity limit), at which point it begins shedding off ants to form new colonies. This downsizing process continues until the colony shrinks back to its natural carrying capacity—at which point it enters its equilibrium phase. It is only when it reaches equilibrium that the ant colony becomes as efficient and stable as it can be, and hitting this stage most often allows the colony to persist well into the future.

Or take the human brain. The brain generates new neurons and connections at an incredibly quick pace in the beginning. Eventually, though, it hits a breakpoint, at which time it begins culling back neurons and connections until it reaches equilibrium. It is at this stage that the brain begins developing real intelligence and even true wisdom.

When it comes to the internet—the network that is the focus of the book—we learn that this network is still in its growth phase, and thus it still has much evolving to do before it reaches maturity. Specifically, the internet must still grow beyond its carrying capacity, reach its breakpoint, and collapse back to equilibrium. What this means is that the internet stands to go through some very significant changes in the coming years.

Drawing on evidence from other networks, Stibel seeks to chart out what is likely to happen to the internet (and other internet-related networks) as it passes through its various phases on its way to equilibrium. Stibel predicts that the journey will feature some real growing pains, but that ultimately the internet will emerge better and smarter than ever (and may even develop consciousness).

*To check out the book at Amazon.com, or purchase it, please click here: Breakpoint: Why the Web will Implode, Search will be Obsolete, and Everything Else you Need to Know about Technology is in Your Brain

What follows is a full executive summary of Breakpoint: Why the Web Will Implode, Search Will Be Obsolete, and Everything Else You Need to Know About Technology Is in Your Brain by Jeff Stibel

PART I: NATURAL NETWORKS: THE ANT COLONY AND THE HUMAN BRAIN

The various networks we see around us differ in many ways, but for Stibel, there are some deep similarities between them that we must appreciate if we are to understand them properly.

Now, many of the networks that are most familiar to us (including the internet, and the World Wide Web) were created by us, through conscious intention. But networks do in fact sprout up in nature without conscious intention, and it is instructive to take a look at these networks first, for they can teach us much about the ones that we do create consciously (loc. 145).

One place where networks do sprout up in nature is in communities of social species (of which we are one). Indeed, the communities of social species are nothing but complex networks—and some of the most complex networks around—and thus this is a very appropriate place to begin.

Now, the most complex communities of social species belong to a particular class of these species—known as the eusocial species—and of these there are but a handful. They include only bees, wasps, termites, ants, and humans (loc. 1326). And of these, the ant stands out as the one species whose communities are closest to our own—for the simple reason that their communities come closest to matching the sophistication and complexity of ours (loc. 2369).

Communities of ants are so much like ours, in fact, that it has been suggested that though chimpanzees are closest to us in evolutionary terms, ants are the species most similar to us in functional terms (loc. 2369). As researcher Mark Moffett puts it, “no chimpanzee group has to deal with issues of public health, infrastructure, distribution of goods and services, market economies, mass transit problems, assembly lines and complex teamwork, agriculture and animal domestication, warfare and slavery” (loc. 2369). And ants not only deal with all of these issues, but they handle them in a way that approximates the sophistication that we display.

Given, then, that ant colony networks are some of the most complex and sophisticated in nature, it is appropriate that we begin with them.

*It is quite possible to skip the section on ants and still understand the sections on the human brain and the internet. However, understanding the ant colony does provide some valuable context for understanding the latter two networks.

Section A: Ants

1. Natural Networks Exhibit A: The Ant Colony

A mature ant colony is a truly remarkable thing. Take the leaf-cutter ant, for instance. Leaf-cutter ants derive their name from the fact that they cut and collect leaves to bring back to their nest (they can often be seen lugging leaves several times their own size [loc. 2312]). Interestingly, leaf-cutter ants don’t actually eat these leaves (loc. 2312). Rather, they use them as mulch, which they feed to a fungus that they cultivate in their nests, and the fungus is what they live on (loc. 2312). As Stibel explains, “leaf-cutter ants eat fungus that they nurture, fertilize, and harvest themselves. The fungus thrives on leaves, hence all the leaf cutting and transporting” (loc. 2312).


This whole process requires a very elaborate set-up in the nest (to say the least). Here is Stibel describing the nest of a community of leaf-cutter ants that was excavated back in 1994, in Botucatu, Brazil: “a marvel of modern engineering, one mound covered an above-ground surface area of nearly 725 square feet. The largest nest had tunnels extending 229 feet below the earth, making the entire structure as large as a skyscraper and as wide as a city block. Its construction required the ants to move untold tons of soil. The extensive labyrinths of the largest nest contained 7,863 chambers reaching as far down as 23 feet, each with a specific purpose: there were garden compartments, nurseries, even trash heaps. The tunnel system connecting the chambers looked like a superhighway system, complete with on-ramps, off-ramps, and local access roads. The structure itself looked as if it had been designed by an architect” (loc. 2307).


And that’s not all. In addition to constructing garden chambers for the express purpose of cultivating their fungi, the ants also organize their nests in a way that optimizes the growing conditions therein. For example, their nests feature an elaborate air-recycling system that keeps fresh air flowing in and stale air flowing out (loc. 634, 2309). What’s more, they actively open and close air ducts that regulate the temperature and humidity in their growing chambers to make them just right for the needs of their precious fungi (loc. 2315).

Other varieties of ants live very differently from the leaf-cutters, but often no less impressively. Take the slave-making ants, for instance. The slave-making ants make a living pretty much as their name suggests. Here’s Stibel to explain: “slave-making ants don’t clean house, cook food, or take care of their babies. They actually don’t even know how to do any of those things. They’re pretty much good at only one thing: finding others to do their work. Slave-makers raid the nests of other ant colonies and steal all their eggs. Those ants grow up as slaves, and they do pretty much everything for their masters: groom them, feed them, defend them from bigger insects, you name it. If the colony moves to a new nest, the slaves will even carry their masters to their new abode… In order to steal the eggs of another colony, the slave-makers must first go to war—these prodigious ants ruthlessly kill any ant that gets in the way. Opportunistic slave-maker queens follow these raiders into a colony and take advantage of the chaos created by the raid. The young aspiring queen slips into the nest, finds the queen ant, and literally chokes her to death. Then she eats the old queen so that she smells like the queen’s pheromones. The rest of the ants never know the difference, giving the young slave-maker queen an instant colony of her own” (loc. 644).


Now, from a moral point of view we may well admire the leaf-cutter ant much more than the slave-maker. But from a strictly organizational point of view, we must grant that both varieties of ant are incredibly sophisticated in how they make a living.

2. How Ant Colonies Work

The sophistication of ants is all the more impressive when we consider just how limited their brains are. As Stibel explains, “their brains have something on the order of 250,000 cells (compared to the 16 million brain cells of the average frog)” (loc. 167). And if you take an ant out of its colony, the behavior that it exhibits reflects just about what you might expect from something with 250,000 brain cells. For example, as the author explains, “certain ants outside of their colony will move in circles until they die from exhaustion” (loc. 202).

However, if you were to remove that ant from its dizzy circle and plop it back into its colony, it would start looking a lot less confused. For despite their unimpressive brains, ants are able to do one thing very well: communicate with one another and coordinate their behavior based on this communication (loc. 1751). As the author explains, “they communicate through chemical pheromones that pass information from ant to ant. They decide which tasks to undertake at any given moment based on information they receive from other ants” (loc. 169).

To illustrate how this works, ant researcher Deborah Gordon provides the following example: “‘an ant operates according to a rule such as, “If I meet another ant with odor A about three times in the next 30 seconds, I will go out to forage; if not, I will stay here”’” (loc. 672). Thus it is the pattern of interactions that an ant experiences in the recent past that determines what it decides to do next. Or, as Gordon puts it, “the pattern of interaction itself, rather than any signal transferred, acts as the message’” (loc. 672; see also loc. 1754).

This is significant because it shows that the coordinated behavior of ants is not directed by some central command center (the queen, say). Rather, the interactions between the individual ants dictate the overall pattern of behavior. It’s a bottom-up, self-organizing system, as opposed to a top-down one (loc. 861-68). As Stibel puts it, “no one is in charge” (loc. 868).
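
Gordon’s rule is concrete enough to sketch in code. Here is a hypothetical implementation of a single ant’s decision function; the odor label, three-encounter threshold, and 30-second window come from the quote above, while everything else is invented:

```python
from collections import deque

def make_forager(window_secs=30, threshold=3, odor="A"):
    """Return a decision function implementing Gordon's rule:
    forage only if enough recent encounters carried odor A."""
    encounters = deque()  # timestamps of recent odor-A meetings

    def on_meeting(timestamp, odor_met):
        # Record the meeting if it carries the right odor
        if odor_met == odor:
            encounters.append(timestamp)
        # Forget meetings that fell outside the time window
        while encounters and timestamp - encounters[0] > window_secs:
            encounters.popleft()
        # Decide: forage once the threshold is reached, else stay put
        return "forage" if len(encounters) >= threshold else "stay"

    return on_meeting

ant = make_forager()
print(ant(0, "A"), ant(10, "A"), ant(20, "A"))  # stay stay forage
```

Note that the decision function carries no global state about the colony: each ant reacts only to its own recent meetings, which is precisely what makes the system bottom-up.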

Communicating and coordinating their actions in this way, ants are able to practice a division of labor in their colonies and thus perform some very complex and sophisticated tasks (despite the fact that ‘no one is in charge’). For example, as Stibel explains, “groups of ants learn and remember sophisticated routes and can return to them to gather food. They protect their queen and defend their territory from predators and imperialistic ant colonies. They also keep their nests clean and in good repair and nurture the newborn ants who will eventually go out into the world, mate, and create new colonies” (loc. 172). And, as we have seen, certain ants also practice agriculture, and enslave other ants.

3. An Ant Colony Is More than the Sum of Its Parts

a. Emergence

Thus ants collectively accomplish tasks that none of them would be able to accomplish individually. One way to put this would be to say that though individual ants themselves are not very intelligent, the colony itself is extremely intelligent. As Stibel puts it, “here we have this tiny biological machine, the ant, that’s very primitive in terms of intellectual capacity, but the colony does tremendously sophisticated things… When mature ants act as a group, a single unit, they defy logic. It turns out that the intelligence of ants does not lie with the individual—it lies with the group. ‘Ants aren’t smart,’ but the colonies are downright brilliant” (loc. 178).

Another way to put this would be to say that an ant colony displays characteristics and abilities not possessed by any of the ants that make it up, and thus that the colony is more than the sum of its parts. This phenomenon is known as emergence (on account of the fact that qualities not present at one level of organization ‘emerge’ one level higher up), and it is a key feature of all complex networks (loc. 2348-72) (as we shall see below, it applies to both human brains and internet networks as well).

b. The Mature Ant Colony

While a mature ant colony operates as an extremely efficient, stable and intelligent network, it turns out that it takes time for it to reach its peak efficiency, stability and intelligence. As Stibel explains (speaking of mature ant colonies), “their reactions to various incidents become quicker, more precise, and more consistent. Dr. Gordon knows this because she goes out and harasses the ants—messing up their nests, spreading toothpicks everywhere and the like. She has learned that when she does these experiments with colonies that are five years or older… they are consistent in their reactions from one time to the next. They’re ‘much more homeostatic. The worse things get, the more I hassle them, the more they act like undisturbed colonies, whereas the young, small colonies are much more variable’” (loc. 214).

The added efficiency, stability and intelligence of a mature ant colony is also evident under entirely natural conditions. For example, when it comes to the way that an ant colony protects itself from rain, once again, it becomes most efficient at this only once it reaches full maturity. With regards to leaf-cutter ants, for instance, as Stibel explains, “smaller, younger colonies close up their nest entrances during rain to prevent flooding, which leads to high carbon dioxide levels and suboptimal fungus growth conditions. Larger, mature colonies… work around this problem. Their numerous nest openings and deeper chambers enable the ants to allow carbon dioxide gases to escape while maintaining a low risk of flooding” (loc. 2320).

Mature ant colonies are superior to more junior ones in other ways as well, including in ventilation practices and foraging strategies (loc. 2320-29; loc. 172).

How is it that an ant colony becomes more intelligent as it develops (until eventually peaking in this regard)? To understand how, we must take a look at the evolution of an ant colony.

4. The Evolution of an Ant Colony

a. Growth

A new ant colony (in this case a colony of harvester-ants) normally begins in the following way: “it all starts with a single female winged ant who leaves her home to mate with one or more male ants, who immediately die. After mating, she flies out into the wild, finds a suitable piece of real estate, gets rid of her wings, and digs a small nest in the dirt to lay her eggs. She takes great care of her first group of eggs, nursing them to adulthood. The young adult ants at that point begin to forage for food, dig and maintain the nest, and take care of the young larvae. The original female ant is now queen of her own colony, where she lives deep inside the nest, her sole responsibility the laying of eggs” (loc. 160).

When the colony is still young the queen lays huge numbers of eggs, and the colony quickly grows in size—so long, that is, as the environmental conditions are suitable; if not, the young colony will simply wither away and die. (As it turns out, it is fairly difficult to find the right environmental conditions in which to raise an ant colony: the vast majority of ant colonies perish before they are one year old. As Stibel explains, “more than 90 percent of harvester ant colonies fail in their first year” [loc. 2374].)

Again, though, should the conditions prove suitable, the colony quickly grows in size (loc. 160). Successful harvester-ant colonies normally continue to grow for the first five years or so of their existence, until they reach about 10,000 ants (or slightly more than this) (loc. 310). At this point, the colony stops growing, and it starts sending out any excess ants to form new colonies, such that its population drops off a bit and levels off at around 10,000 total (loc. 310).

b. Breakpoint

Just why do harvester-ant colonies top off at around 10,000 ants? Ant colonies require physical resources in order to sustain themselves, of course (both in terms of food, and materials needed to build and maintain the nest [loc. 660]). So we might imagine that the resources available even in a very propitious environment would only be enough to support some 10,000 harvester ants. As it turns out, though, most ant colonies never even come close to exhausting the resources in their environment (loc. 660).

Rather, the limiting factor of a harvester-ant colony is almost always organizational in nature. That is, too many ants leads to confusion in the nest, so growing too large is counter-productive (loc. 310). As Stibel explains, “ants communicate mainly through scents. If you’ve ever tried to cover a bad odor with a combination of bleach, 409, and Febreze, you know that piling too many scents on top of each other is a bad thing… An ant operates according to a rule such as, ‘If I meet another ant with odor A about three times in the next 30 seconds, I will go out to forage; if not, I will stay here.’ Ants are not the best at counting and have short memories, so you can imagine that too many ants make it simply too distracting for an individual ant to focus on the task at hand. Instead of growing in numbers, it makes more sense for a mature colony to form a stable population” (loc. 677).

So the breakpoint of a harvester ant colony is determined by organizational constraints—which begin to set in when the colony has reached a population just over 10,000. Once the colony hits breakpoint it begins to downsize until it reaches 10,000, at which point it stabilizes. Thus we may say that the natural carrying capacity of the harvester-ant colony is 10,000 ants.

c. Equilibrium

It is only after the population of the colony has stabilized at 10,000 that the colony really hits its stride in terms of efficiency and intelligence (as we have seen above). As Stibel explains, “at this point, things change for the colony… the ant colony grows smaller and paradoxically gets wiser” (loc. 212). Elsewhere the author adds that “after completing its explosive growth phase, the colony seems to change its focus from quantity to quality. The colony itself becomes an intelligent network… And when you look to nature more broadly, it quickly becomes clear that this pattern is true across all biological networks” (loc. 218).

Section B: Brains

5. Natural Networks Exhibit B: The Human Brain

The pattern outlined above may hold across all biological networks, but we shall limit ourselves here to but one additional example (before moving on to internet networks): the human brain. Like the ant colony, the human brain is itself a complex network (loc. 189). And as such, though the human brain has a very different function from that of an ant colony, the two have some very interesting similarities. We shall now focus on the network that is the human brain, being sure to compare (and contrast) it with the ant colony.

In the network that is the human brain, ants are replaced with brain cells (neurons) (loc. 190), and the pheromonal communication between ants is replaced with the electrical and chemical communication between neurons (loc. 193). Also, whereas there are around 10,000 ants in a mature ant colony, there are around 100,000,000,000 (100 billion) neurons in a mature human brain (loc. 189).

a. How the Human Brain Works

Just as with the ants, individual neurons are actually pretty dumb. All they do is flip on and off (loc. 193). However, just as dim ants communicate and coordinate their behavior to produce something that is pretty smart (the colony), so too do dim neurons communicate and coordinate their behavior to produce something that is pretty smart (us).

Thoughts and decisions emerge out of the patterns of neuronal firings that occur in the brain. Specifically, neurons fire in response to stimuli, and the nature of the stimuli determines which other neurons each neuron passes its message on to and activates. The overall pattern dictates thoughts and decisions. As Stibel explains, “what we do know is that our network of neurons acts as a crowd. Each neuron performs a minor task that collectively forms a pattern. We see a snake and certain neurons fire. The snake bites us and other neurons fire. The next time we see a snake, both sets of neurons fire—we see the pattern—and then something unexpected happens: an entirely new group of neurons fire, the ones that make us jump. The scale at which this happens is truly epic: this minor interchange requires tens of millions of neurons firing in a linked chain of events… Our intelligence comes from all of those little charges in our head going on and off in a constant stream eventually leading to actions that define us” (loc. 1338).

But hold on, you may say, there is a fundamental difference between the ant colony and the brain, for while no single ant directs the activity of the ant colony, there is something in charge of the brain: consciousness. Actually, no. All of the evidence indicates that consciousness does not so much direct the activity of the brain as ‘emerge’ as the result of brain activity. In other words, the brain is another one of these things that has qualities at one level of organization (the brain itself, or ‘the mind’) that do not exist at a lower level of organization (neurons), but which nonetheless arise entirely as the result of activity at this lower level. (For more on this topic you may wish to check out my summary of Who’s in Charge?: Free Will and the Science of the Brain by Michael Gazzaniga.)

Thus what we perceive as ‘our’ intelligence does not derive from ‘us’ (consciousness) but from the collective action of our dumb neurons. Thoughts and decisions (and our awareness of these thoughts and decisions) ‘emerge’ at the level of the whole structure, but they arise entirely out of the collective activity of neurons operating one level down. As Stibel explains, “neurons communicate with one another through electrical and chemical transmitters. These tightly packed neurons work together in a distributed network, forming patterns that allow us to perform tasks such as walking, speaking, remembering someone’s name, and even reading this book” (loc. 2457). Elsewhere, the author adds that “our brains are far more distributed than we once thought. We have a brain that sees patterns rather than individual pixels of information” (loc. 2560).

O.k., but how, exactly, does the brain come up with a solution to a problem (such as jumping at the sight of a snake)? It does so in the way that it solves any problem: by consulting past experience, imagining different possible courses of action (and how they are likely to turn out), and selecting whichever course of action it predicts will have the best outcome (at the neuronal level, this involves recalling neuronal patterns that have been stored away in memory, as well as mixing and matching them to formulate new ones). As Stibel explains, “to figure out what to do at any given moment, the brain must gaze into the future and imagine. The brain studies its environment, watches what others are doing, and simulates possible future scenarios. Then the brain evaluates those scenarios to guess which are most likely. And then, to save energy (so that it doesn’t have to interpret, calculate, and guess again and again), it learns from those simulations [that is, it stores them away in long-term memory for future use]… Forward thinking is the brain’s way of chipping away at the edges of uncertainty. It makes bets based on past experiences. The human brain learns and remembers not only what happened, but also what didn’t happen. And it turns the sum of this disconnected, limited information into real insight. As [Steven] Pinker notes, we make ‘fallible guesses from fragmentary information’” (loc. 1963).

Again, though, these ‘fallible guesses’ emerge from the bottom up, through the collective activity of neurons, and not from the top down (from something or someone in charge). Just as the ant colony is able to behave intelligently on the basis of the collective action of the individual ants, so too are we able to behave intelligently on the basis of the collective action of our neurons.

6. The Evolution/Life Cycle of the Human Brain

The human brain is not only similar to an ant colony in how it functions, it is also similar in terms of how it evolves; for the same three-stage pattern (growth, breakpoint, and equilibrium) is also at play here.

a. Growth

To begin with, in the growth phase of the brain, neurons and the connections between them proliferate at an astonishing rate. Take neuronal growth, for example. As the author explains, “a fetus can generate an astronomical 250,000 neurons per minute” (loc. 284). This growth phase (called neurogenesis) does not last long, though, as neurons stop being generated when they reach about 100 billion (a point that is arrived at while the fetus is still in utero) (loc. 284).

While the number of neurons in the brain peaks even before birth, the number of connections between these neurons continues to grow until the age of 5. In the end, by the time we are 5 years old, our 100 billion neurons are linked up to one another by way of an incredible 1,000 trillion connections (loc. 311).

We do not retain all of these neurons and the connections between them for the rest of our lives, though. Rather, many of our neuronal connections start getting pruned away, and many of the neurons themselves die. Specifically, when it comes to the neuronal connections, “through a process of selective pruning, the 1,000 trillion connections shrink to roughly 100 trillion by adulthood” (loc. 310). As for our neurons, these continue to die throughout our adult lives, such that by the time we reach the age of 75 we will have lost about 1/10th of the 100 billion neurons that we are born with.

b. Breakpoint

i. In the Life Cycle

As with the ant colony, then, our brain has a breakpoint when it comes to the number of neurons and connections therein. Once it reaches this breakpoint, it begins to cut back on these neurons and their connections. Just why does this happen? Well, when we are still young, the enormous number of neurons and connections in our brains allows us to learn about our environment (and our language) at a very quick pace (loc. 1812-30). Once we have learned the general structure of our environment (and our language), though, we can afford to sacrifice some of the neurons and connections that were needed to do this learning (loc. 1824).

What’s more, it makes more sense to devote our resources to the neurons and connections that we use the most and are most important, rather than seek to maintain them all. For the fact is that our neurons use up an enormous amount of energy (to be precise, the brain uses up 20% of our body’s energy, despite taking up just 2% of our body mass [loc. 419]), and thus the utility of more neurons and connections must be weighed against the costs of having to find more food to feed them (loc. 1824). In our evolutionary past, the set-up that we have now was found to be most beneficial in terms of optimizing survival and reproduction, and thus it is the one that we were left with.

Unlike with the ant colony, then—which begins shedding off ants due to organizational constraints—the brain starts shedding off neurons and their connections mainly because of energy constraints (loc. 1824).

ii. In the Evolution of the Human Species

Incidentally, the benefits and costs of neurons and their connections not only explain why we lose some of them over the course of our lives; they also explain why we have the specific number of neurons and connections that we do. Specifically, in our evolutionary past, the benefit of evolving more neurons and connections (in the form of evolving bigger brains) had to be weighed against the costs of having to find more food to feed them; and when the costs of having to secure more food began to outweigh the benefits of evolving more neurons and connections, our brains stopped growing (loc. 422, 2044-64).

Luckily for us, before our brains stopped growing we found a way to increase the amount of calories that we were able to take in (and pass on to our growing brains) without greatly increasing the energy needed to procure these calories. The solution was cooking. As Stibel explains, “to grow our brains from ape-sized to human sized would have required spending well over nine hours crunching veggies and chewing on raw meat each day. That would have left little time for anything else… Cooking food actually changes its composition, which allows cooked food to be consumed more quickly and digested faster. By cooking food, our ancestors consumed many more calories than they would have otherwise, which provided fuel for their hungry growing brains and left them with extra time to use those brains” (loc. 527). This example is important because it demonstrates that the energy constraints of a network can be overcome to a degree through clever ways of harnessing more energy (more on this below, in the section on the internet).

Over and above increasing the number of calories that we were able to take in (which helped keep our brains growing), we also found a way to get more bang for our buck out of our brains. In other words, we (or at least evolution) found a way to get more intelligence out of the specific number of neurons and connections in our heads. The solution involved reorganizing the brain itself, such that different mental tasks were assigned different locations in the brain. As Stibel explains, “our brains compartmentalize different functions to increase efficiency. Brain scientists call this modularity. We have distinct regions for language, vision, memory, and most other high-level cognitive functions. Speed and efficiency are the hallmarks of a modular system—it is much more economical when many of the areas that control a specific function are close together. Just imagine an airplane with half of the controls in the cockpit and the rest in the rear lavatory, and you’ll get the idea” (loc. 553). This example is important because it demonstrates that the power of a network can be increased by way of coming up with clever ways of optimizing its efficiency (more on this below, in the section on the internet).

c. Equilibrium

Returning to the topic of neurons and their connections for a moment, while it may be the case that we lose some of our neurons and their connections over the course of our lives, this does not mean we get dumber as we age. Just the opposite, in fact. As with the ant colony, once our brains have passed the breakpoint and reached equilibrium, they actually become more and more intelligent.

The main reason for this is that though we sacrifice some neurons and connections, we devote more and more of our energy resources to the ones that we use the most and are most important to us (as mentioned above). As Stibel explains, “the brain prunes its weakest links regularly and removes faulty neurons in a natural process called ‘cellular suicide.’ It replaces sheer quantity with quality, making us smarter without the need for additional volume. When the brain stops growing and reaches a point of equilibrium in terms of quantity, it starts to grow in terms of quality. We gain intelligence and become wise” (loc. 207).
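
A toy sketch of that prune-the-weakest-links process (the graph, weights, and threshold here are all invented; the weight stands in for how heavily a connection is used):

```python
# Toy synaptic pruning: keep only the connections that are used enough.
connections = {
    ("n1", "n2"): 0.90,  # heavily used link: survives
    ("n1", "n3"): 0.05,  # rarely used link: pruned
    ("n2", "n4"): 0.60,
    ("n3", "n4"): 0.02,
}

PRUNE_THRESHOLD = 0.10  # links used less than this are removed

pruned = {link: w for link, w in connections.items() if w >= PRUNE_THRESHOLD}
print(pruned)  # {('n1', 'n2'): 0.9, ('n2', 'n4'): 0.6}
```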

Of course, we should not neglect to mention that intelligence and wisdom also come from the fact that we build up experiences over the course of our lifetimes (loc. 1943-46) (which does involve laying down new connections in the brain). For these built-up experiences allow us to make better decisions and take better actions as we age (as seen above) (loc. 1946-56). Still, the number of overall connections in the brain continues to decline as we age, and even the new ones that are not used enough get pruned away so that attention can be focused on the more important ones.

PART II: THE INTERNET AND INTERNET-RELATED NETWORKS

Now that we have seen how certain complex networks function and evolve in nature, we are prepared to see how the lessons here can help us understand and approach the man-made networks of the internet (and the internet itself). We shall begin by way of examining how the internet has evolved to this point.

Section C: The Growth Phase of the Internet

7. The Evolution of the Internet

a. The Beginnings of the Internet: The ARPAnet

The internet was invented in the mid-to-late 1960s by a group of scientists and researchers working for the branch of the US military that specializes in technology, known as ARPA (Advanced Research Projects Agency) (loc. 430-36) (the agency has since been renamed DARPA—Defense Advanced Research Projects Agency).

The group at ARPA wanted a fast and convenient way to communicate amongst themselves and other scientists and researchers, so they quite simply took their computers and linked them together by way of telephone lines (loc. 436). Ta da! The internet was born (loc. 433) (actually, it was called the ARPAnet at first [loc. 430]). It didn’t take long for the early internet to experience its first crash either. As Stibel explains, “the first two mainframes were connected in 1969; the first letters, L, then O, crossed the internet soon thereafter; a third letter—G—led to the first internet crash minutes later (so much for logging on)” (loc. 436).

It was clear that the network needed a more efficient way to send information back and forth, and so the researchers went about modifying the telephone lines in order to do just this. By 1973, the researchers had developed Ethernet lines that could carry more information and thus were less prone to crashing the system (loc. 436). Still, though, as the network grew in size and was plugged into more and more computers, the energy demands of the system increased to the point where crashes once again became a threat.

In order to counteract the threat, the researchers eventually came up with a way to slow down the flow of information over the system when the quantity of information being transmitted became too much for the physical infrastructure to handle. The innovation is known as TCP (Transmission Control Protocol [loc. 235]), and it was applied to the network in 1983 (loc. 435). As Stibel explains, “TCP is a simple and elegant network technique that allows efficient transmission of information. It works by monitoring the speed of information retrieval and sending additional information only at that same speed. If information flow is fast—because relatively few people are on the internet at that time—information return will be fast; otherwise, TCP will slow down the internet. With this, it creates a state of equilibrium, thereby avoiding a risk that the internet will become congested and stop altogether” (loc. 240).
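
Stibel’s summary compresses the mechanics, but the feedback rule at the heart of TCP congestion control is commonly described as additive increase, multiplicative decrease (AIMD): ramp the sending rate up gently while transfers succeed, and cut it sharply at any sign of congestion. A minimal sketch of that rule (the function and all numbers are illustrative, not from the book):

```python
def aimd_rate(events, rate=1.0, increase=0.5, decrease_factor=0.5):
    """Additive-increase / multiplicative-decrease: speed up gently
    while transfers succeed, back off sharply on congestion."""
    rates = []
    for congested in events:
        if congested:
            rate *= decrease_factor  # congestion: cut the send rate in half
        else:
            rate += increase         # all clear: probe for more bandwidth
        rates.append(round(rate, 2))
    return rates

# False = acknowledgment came back promptly, True = congestion detected
print(aimd_rate([False, False, False, True, False, False]))
# [1.5, 2.0, 2.5, 1.25, 1.75, 2.25]
```

The sawtooth this produces is what keeps senders hovering near the network’s capacity without tipping it into collapse.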

Interestingly, TCP was not an entirely novel innovation. Indeed, it turns out that it is something that has long been applied in natural networks—including ant colonies, and our own brains. As Stibel explains, “in 2012, none other than Deborah Gordon and one of her colleagues realized that ants use TCP to forage for food. Ants are sent out of the colony in clusters to determine food availability. When food is plentiful, more ants are sent to forage; when food is scarce, TCP restricts the flow of ants. Gordon and her colleague predictably dubbed their findings ‘the anternet.’ But TCP is not uniquely an anternet peculiarity. The brain, too, regulates the flow of information. In fact, the brain has built-in TCP filters that limit the rate of information flow. The brain regulates that information transmission based on neuronal feedback. In other words, each neuron independently regulates the flow of information depending on the capacity of the network and the task at hand” (loc. 246). (All of this is a clear indication that we should be looking to the solutions of natural networks when we run into problems with our man-made ones [we shall return to this theme repeatedly below]).

b. The Internet Moves to the National Science Foundation

In any event, by 1990 other government and public organizations had begun establishing their own internets, and it was then determined that it would be best to begin connecting up these networks to one another. ARPA wanted no part in this process, though, so it handed the project over to the National Science Foundation (NSF). After taking over the internet in 1990, the NSF quickly went about expanding it to other networks, such that the entire network “doubled in size every seven months and grew to 50,000 networks, including 4,000 institutions, at its peak” (loc. 441).

By this time, though, there was mounting pressure to let private organizations in on the network (loc. 1994). And so, in 1994, the NSF set up four major Internet Exchange Points (“one each in California, New York, Chicago, and Washington, D.C.” [loc. 444]), and thereafter allowed the network to pass into the public domain (loc. 444).

c. The Internet Goes Global (and Adopts the World Wide Web)

Now, as a network that allowed for instant communication between individuals, the internet was already very valuable to people. And therefore, it is likely that it would have exploded in use even if it had not already evolved beyond its original function. However, the internet had recently welcomed an interface that suddenly made it far more valuable to people than it would have been otherwise. That interface was none other than the World Wide Web, which was invented in 1993 (loc. 708, 865).

The World Wide Web allowed internet users to create web sites whereon information could be stored and accessed by all. Thus with this addition the internet moved from a communication device to a communication and information storage device. Or, as Stibel likes to put it, the internet gained a memory (thus making it much more like the brain)—and this made it much more valuable to people. As the author explains, “on the internet, websites are the parallel to memories… [The] World Wide Web… is the usable layer of the internet—the websites and programs that allow us to communicate, store memory, and transmit ideas over the physical internet. When the World Wide Web was invented in 1993, it changed the internet overnight. Prior to that, the internet was a cool idea; with the web, it became an indispensable phenomenon” (loc. 708).

Equipped with its new interface, and opened up to the world, the internet took off. As Stibel explains, “no longer an island bounded by government regulations, the internet exploded into its exponential growth phase. It grew from a couple hundred thousand university and government users in early 1995 to over 16 million users across all industries by the end of 1995” (loc. 446). As for the internet’s interface, the World Wide Web, growth here was just as impressive: “there were no websites in 1993, 20 million websites in 2002, and 600 million sites by 2012” (loc. 713).

8. The Energy Problems (and Solutions)

As you can well imagine, this incredible explosion in the use of the internet quickly put a serious strain on the infrastructure that had been established for it. Even with TCP there to slow down traffic when demand became too extreme, the massive amount of traffic threatened to slow the entire system down to a complete standstill. The problems began the very year that the internet was unleashed on the public. As the author explains, “in 1994, AOL openly admitted that it could not handle the load or demand of the internet. It started limiting the number of users online during peak times, almost begging customers to switch to competitors” (loc. 484).

In 1995, the creator of Ethernet himself, Bob Metcalfe, declared that the internet “would ‘soon go spectacularly supernova and in 1996 catastrophically collapse’” (loc. 481). Metcalfe was not the only observer to make this prediction (loc. 482), and in 1996 these predictions began to look prophetic, as this was the very year that AOL experienced a major crash. As Stibel explains, “the problems culminated in August 1996 with a huge outage that affected six million AOL users and ultimately forced AOL to refund millions of dollars to angry customers. Clearly, the population of the internet had overshot the carrying capacity of its environment—its bandwidth” (loc. 487).

The internet was on its way to imploding, but then, just in the nick of time, innovations started coming in that expanded the carrying capacity of the network. First modems were invented that could operate at higher speeds, and then the cables that carried information themselves were improved to allow for more traffic. As Stibel explains, “in 1991, modems worked at a speed of 14.4 kilobits per second (kbps). By 1996, the year in which Bob Metcalfe said the internet would collapse, we were cruising at around 33.6 kbps, which many considered to be the upper limit of speed available through a standard four-wire phone cable. But it wasn’t. The 56 kbps modem was invented in 1996 and became widely available in 1998… It became increasingly clear that the phone network and the four-wire phone cable weren’t cut out for transmitting all this new digital data… Cable broadband internet was introduced in the mid-1990s and became widespread at the turn of the century. Using the existing cable television network and its corresponding coaxial wiring, new cable modems, plus Metcalfe’s eight-wire RJ45 Ethernet cords, we were able to radically increase data speeds—from 56 kbps to between 1000 and 6000 kbps—or 1 to 6 megabits per second [mbps]” (loc. 507).

From here, both modems and cables have advanced even further. When it comes to cables, for instance, as the author explains, “we invented larger and faster cables—T1, T3, fiber optics” (loc. 510). And when it comes to modems, “many cable modems are currently capable of speeds up to 30 mbps” (loc. 507).

9. The Continuing Growth of the Internet (and the Challenges This Poses)

With the advances that have come in modems and internet cable technology, the internet has again and again avoided collapse and continued to grow at an astonishing pace. Indeed, whereas the internet quickly achieved 16 million users by 1995, the number has continued to grow since then, such that by the turn of the century “the number was over 300 million. And we broke a billion users a mere ten years later. Today the number is an astronomical 2.4 billion users, or roughly 34 percent of the world’s population” (loc. 446).

Thus just as the invention of cooking allowed our ancestors to increase the amount of calories that they were able to take in (which allowed their brains to keep growing), so too has the invention of more and more advanced internet technologies allowed us to increase the amount of energy devoted to the internet (which has allowed it to keep growing). As Stibel explains, “think of all the things that use energy: cars, factories, drilling, China. None of them individually compares to the consumption growth of the internet, which recent estimates peg at roughly 2 percent of all energy consumed” (loc. 536).

Now, you may be thinking that the growing energy demands of the internet may not, in the end, be a problem; for already 1/3 of us are online, and once virtually everyone on the planet becomes connected the internet will simply stop growing. However, this is not entirely true, as we are not the only ones that are becoming connected. Indeed, virtually anything that can be tagged with a sensor can be hooked up to the internet, and we are increasingly doing just this. To take a few examples, there are sensors on cows that monitor their location, health and whether they are in heat (loc. 454); sensors on cars that monitor maintenance needs (as well as your driving habits) (loc. 456); sensors in agricultural soil that monitor for moisture and nutrients (loc. 459); sensors in refrigerators that tell you when food items are set to expire (loc. 462); sensors in our bodies that monitor for health issues (loc. 462); and even sensors in our brains that pick up electrical activity (loc. 465). And all of these sensors can and are being hooked up to the internet (loc. 450-65).

Thus the number of devices hooked up to the internet already exceeds the world’s population, and it is set to skyrocket in the coming years (as more people, and things, become connected). As the author explains, “in 2012 the number of devices exceeded 9 billion (well over the number of people on earth), and Cisco predicts the number will skyrocket to 50 billion devices by 2020” (loc. 451). And Stibel himself predicts that Cisco’s “number is likely understated by a factor of 4” (loc. 451).

At this rate of growth, the energy demands of the internet are set to increase sharply in the coming years. As the author explains, “the internet is on track to consume 20 percent of the world’s power, just as the brain consumes 20 percent of the body’s power. At the internet’s current rate of growth, it will get there within ten years” (loc. 544).

Of course, one way to reduce the energy demands of a growing internet would be to make it more efficient. However, we have already wrung a great deal of efficiency out of the internet in several ways. For instance, the internet’s biggest power drains are already located near one another and right next to large sources of energy. As Stibel explains, “we have structured large parts of the internet into what are called server farms, massive storage facilities housed near one another. Part of the reason for this is power constraints, but it is also an efficiency trick. Huge speed efficiencies result from having Facebook, Netflix, Amazon, and all of the smaller guys sharing space” (loc. 555). Elsewhere, Stibel adds that “the data centers for Google, Facebook, Netflix, and many other companies are housed near abundant and cheap energy sources. Some sit near water dams, others near wind power, still others near coal, natural gas or nuclear power” (loc. 541)—for the very reason that being near to these power sources increases efficiency (loc. 535-40).

Over and above these measures, we have also instituted cloud computing as a measure to increase the efficiency of the internet. As Stibel explains, “most people think of cloud computing as a way to store information, which it is, but clouds do more than that. Computing clouds allow for independent computations to happen across the internet, giving individuals access to virtually unlimited computing resources. Where you were once limited to your own computers or servers to process information, the cloud allows you to tap the resources of universities, governments, and large companies such as Amazon, Google, IBM, and Microsoft. There is incredible efficiency associated with this model, as large entities can rent out idle computing resources at a fraction of the cost” (loc. 569).

Thus just as the brain was able to increase its efficiency by resorting to modularity, so too have cloud computing and the strategic placement of internet servers increased the efficiency of the internet (in fact, for Stibel, these latter measures are nothing but brain modularity transposed to the world of the internet, for they follow the exact same principles [loc. 553, 565]).

Still, despite the progress we have made in advancing the power and efficiency of the internet, we are still set to run into carrying-capacity limitations eventually. Indeed, this can hardly be avoided (for if we do not run into energy limitations, we will inevitably run into complexity limitations) (loc. 622). But for Stibel, this may not be such a bad thing. For, as we have seen, complex networks stand to improve in quality once they have reached their breakpoint in terms of quantity or size. As the author explains, “the internet continues to evolve, grow, and increase its overall carrying capacity, but eventually we will run out of virtual lichen on our island. When that happens, it will not necessarily be a bad thing. Just as the brain gains intelligence as it overshoots and collapses, so too may the internet. The brain can be our guide to the internet because the two are so similar. We have substituted hardware for wetware, but the fundamental structures are the same: they are both complex networks capable of calculating, remembering, and communicating. Carrying capacity is never infinite, so we will eventually hit a breakpoint. But when that happens, the results will be exciting to see and will likely yield a smaller, yet more efficient, nimble, and—dare I say it—intelligent internet” (loc. 622).

Section D: The Internet’s Interface: The World Wide Web (and Its Websites)

10. The Evolution of the World Wide Web (and How We Navigate It)

In order to understand just how the internet may gain intelligence after it reaches its breakpoint, it is important that we first delve deeper into the internet’s interface: the World Wide Web. As mentioned above, the World Wide Web was invented in 1993, and it quickly caught on. Whereas there were no websites in 1993, there were 20 million within a decade, and 600 million a decade after that. This massive growth led to an interesting evolution.

a. Yahoo!

To begin with, when the web first came on-line no one really knew much about it. Search engines did not yet exist, nor would they have been of much use—for people didn’t even know what was there to be searched (loc. 1101). The first truly helpful guide to the web was the Yahoo! site. This site was not a search engine at all, but consisted of a list of the best of the best sites on the web, as compiled by David Filo and Jerry Yang, two PhD students out of Stanford University (loc. 1095). As Stibel explains, “early iterations of the site were merely David and Jerry’s lists of favorites, broken down into categories and subcategories. Real people, intelligently choosing the web’s best content to present to others… It wasn’t a search engine; in fact, it was meant to eliminate the need for a search engine. Yahoo! was a web portal, a kind of ‘welcome to the internet’ home page, and in 1994 this was desperately needed. The average user had no idea what was available on this newfangled ‘internet’—she needed a tour guide, and Yahoo!’s portal filled that role” (loc. 1101).

b. The First Search Engines

Once people became familiar with the web, though, this hand-holding method of navigating it became constricting. People now had a general idea of what was out there, and they wanted a way to explore it on their own. Enter the search engine. The earliest search engines were very simple. All they did was index websites according to keywords, so when you punched a keyword into a search bar, all the sites associated with that keyword popped up (loc. 1118). What these search engines did not do was rank websites according to quality—rather, they relied on keyword relevance alone (loc. 1122).
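
A minimal sketch of that keyword-only approach (the sites and keywords are invented): build an inverted index from keyword to sites, and a query simply returns every matching site, with no ranking by quality:

```python
from collections import defaultdict

# Toy corpus: site -> keywords (all names invented for illustration)
sites = {
    "antfacts.example.com": ["ants", "colonies", "biology"],
    "brainblog.example.com": ["neurons", "brain", "biology"],
    "spamfarm.example.com": ["ants", "brain", "cheap", "pills"],
}

# Build the inverted index: keyword -> sites mentioning it
index = defaultdict(set)
for site, keywords in sites.items():
    for kw in keywords:
        index[kw].add(site)

# Early-search-engine behavior: every match returns, in no particular order
print(index["ants"])  # antfacts and spamfarm alike (set order may vary)
```

Note that the spam site surfaces just as readily as the legitimate one, which is exactly the quality problem described next.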

Now, this method of search may have worked well enough for a while, but as the number of websites grew and grew—with quality varying wildly—it quickly became clear that listing search results according to keyword relevance alone simply would not do. As Stibel explains, “in a world where several billion pages are added to the web every single day—some good, some great, but most completely worthless—the primary goal of search engines is to filter. Not to find, but to eliminate” (loc. 1122).

c. The Rise of Google

Enter Google. Google was the first search engine to come up with an adequate solution to the problem of quality control in search results. Google (meaning Larry Page and Sergey Brin) solved the problem by way of harnessing the importance of links. Specifically, Google designed algorithms that ranked each website based on the number and quality of other websites that linked to it. As Stibel explains, Google’s algorithms assume that “the importance of a website is directly proportional to how many other websites link to it. And it is a matter not only of the number of links but also the quality of those links; the thinking being that the best websites should have many other reputable websites that link to them… on Google, Jerry [of Yahoo!] is replaced by the millions of webmasters who create each site and choose their links. The idea is that if a webmaster links to a page he endorses it” (loc. 1132).

Interestingly, Google’s method of evaluating websites is very much similar to the brain’s way of evaluating the importance of neurons and their connections. As Stibel explains, “Google’s method for assessing link relevance works precisely the way a simple neural network does. Links between neurons are weighted based on how relevant (or connected) they are to one another, and that weighting triggers or suppresses activity. Google uses a similar structure to rank or suppress websites through its search results” (loc. 1140).
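
Stibel never names the algorithm, but what he describes here is essentially PageRank. A minimal power-iteration sketch (the toy link graph and parameter values are invented for illustration):

```python
def pagerank(links, damping=0.85, iterations=50):
    """Rank pages so that a link from a high-ranked page counts for
    more: the link-weighting idea described above."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            for target in outlinks:
                # Each link passes on a share of its page's own rank
                new_rank[target] += damping * rank[page] / len(outlinks)
        rank = new_rank
    return rank

# Invented toy web: every page links to "hub", so "hub" ranks highest
toy_web = {"a": ["hub"], "b": ["hub", "a"], "hub": ["a"]}
print(pagerank(toy_web))
```

Because rank flows along links, an endorsement from an already well-endorsed page is worth more, which is just the neural-style weighting Stibel describes.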

As ingenious as Google’s algorithms are, though, it turns out that the landscape of the internet is currently shifting in a way that challenges Google’s model—and that of the web more generally.

11. The Breakpoint of the Web (and How to Help it Reach Equilibrium Gracefully)

a. The Breakpoint of the Web

The phenomenon that is causing this shift is that the web has in fact already hit its breakpoint. Indeed, though the number of websites continues to grow, the rate of increase has dropped precipitously.
