2014-02-21

The phrase, strong views, weakly held, has crossed my radar multiple times in the last few months. I didn’t think much about it when I first heard it, beyond noting that it seemed to be an almost tautological piece of good advice. Thinking some more though, I realized two things: that the phrase neatly characterizes the first member of my favorite pair of archetypes, the hedgehog and the fox, and that I am actually much better described by the inverse statement, which describes foxes: weak views, strongly held.

If this seems counterintuitive or paradoxical to you, chances are it is because your understanding of the archetypes actually maps to more commonplace degenerate versions, which I call the weasel and cactus respectively.



True foxes and hedgehogs are complex and relatively rare individuals, not everyday dilettantes or curmudgeons. A quick look at the examples in Isaiah Berlin’s study of the archetypes is enough to establish that: his hedgehogs include Plato and Nietzsche, and his foxes include Shakespeare and Goethe. So neither foxes, nor hedgehogs, nor conflicted and torn mashups thereof such as Tolstoy, conform to simple archetypes.

The difference is that while foxes and hedgehogs are both capable of changing their minds in meaningful ways, weasels and cacti are not. They represent different forms of degeneracy, where a rich way of thinking collapses into an impoverished way of thinking. 

I seem to have been dancing around these ideas for about a year now, over the course of three fox/hedgehog talks last year and even a positioning of my consulting practice based on them, but I was missing the clue of the strong views, weakly held phrase.

It took a while to think through, but what I have here is a rough and informal, but relatively complete, account of the fox-hedgehog philosophy that covers most of the things that have been bugging me over the past year. So here goes.

Views and Holds

Let’s first make the connection between the fox/hedgehog pair and the views/holds pair explicit.

The basic distinction between foxes and hedgehogs is Archilochus’ line, the fox knows many things, the hedgehog knows one big thing. The connection to views and holds is this: many things refers to weak views; one big thing refers to strong views. We’ll get to views and why this connection holds in a minute, but let’s take a quick look at strong and weak holds, about which Archilochus has nothing explicit to say. There is an implicit assertion in the definition though.

To get a hedgehog to change his/her mind, you clearly have to offer one big idea that is more powerful than the one big idea they already hold. To the extent that their incumbent big idea has a unity based on ideological consistency rather than logical consistency (i.e., it is a religion rather than an axiomatic theory), you have to effect a religious conversion of sorts. The hedgehog’s views are lightly held in the sense of being dependent on only a few core or axiomatic beliefs: only a few key assumptions anchor the big idea. That is the whole point of seeking consistency of any sort: to reduce the number of unjustified beliefs in your thinking to the minimum necessary.

To get a fox to change his or her mind on the other hand, you have to undermine an individual belief in multiple ways and in multiple places, since chances are, any idea a fox holds is anchored by multiple instances in multiple domains, connected via a web of metaphors, analogies and narratives. To get a fox to change his or her mind in extensive ways, you have to painstakingly undermine every fragmentary belief he or she holds, in multiple domains. There is no core you can attack and undermine. There is not much coherence you can exploit, and few axioms that you can undermine to collapse an entire edifice of beliefs efficiently. Any such collapses you can trigger will tend to be shallow, localized and contained. The fox’s beliefs are strongly held because there is no center, little reliance on foundational beliefs and many anchors. Their thinking is hard to pin down to any one set of axioms, and therefore hard to undermine.

This means that it is actually easier to change a hedgehog’s mind wholesale: pick the right few foundational beliefs to challenge or undermine, and you can convert a hedgehog overnight. It is the reason the most fervent true believers in a religion are the new converts. It is the reason the most strident atheists are the once-religious. Hedgehogs whose Big Ideas are undermined through betrayal by idols can turn into powerful enemies overnight. 

Now let’s talk weak and strong views.  

The Strength of Views

We don’t hold our beliefs as large collections of atomic propositions. Instead, the bulk of our beliefs are organized into clusters we call views that correspond to beliefs about specific domains. One or a few of these views may be deep views, representing one or more home domains.

If all our views happen to be connected and relatively consistent, we call it a world view. Both foxes and hedgehogs have views. Hedgehogs in addition have world views.

Consider the difference between strong and weak religiosity. In a Christian culture, the former typically evokes the image of a Biblical literalist, who believes every little detail in the Bible literally. The latter typically evokes the image of somebody who believes in an eclectic subset of moral principles captured in favored proverbs, parables and allegories.

The latter belief system is robust to challenges to a vast majority of literal details in the Bible. The former will be forced to defend many more fronts against attack, ranging from the 7-days-of-creation belief to the core belief in the literal resurrection of Christ.

A view is generally a belief complex: a set of interdependent beliefs. Some are so fundamental, they are practically axiomatic (in either an ideological or logical sense). Undermine those fundamental beliefs and everything else falls apart. Others are so peripheral, nothing depends on them.

Within views, you find complex structures that behave differently under different interpretations. For example, if you treat a religious view as metaphoric, it becomes a lot harder to undermine than if you treat it as literal. Metaphoric views create strong holds because they are weak interpretations.

So a view is a belief complex with a non-random structure (there are more and less fundamental elements), along with an interpretation: a system of justification that allows you to reach less fundamental beliefs from more fundamental ones.

A strong view is one that encompasses a large number of beliefs in a domain, and is defended with the most literal interpretation available. By contrast, a weak view is one that encompasses only a few critical beliefs, and is defended with the most robust interpretation available.

A strong view is strong in two senses of the word.

It is powerful. Because it says so much, and so literally, it is very useful to the extent that it is true or unfalsifiable. A detailed and literal religiosity is a fully featured operating system for a lifestyle. A vague and figurative spirituality may be more defensible in debates with atheists, but offers very little by way of practical prescriptions for life. A detailed prediction about the future of an industry, with predictions about individual companies down to the future behavior of their stocks, is something you can bet money on. A loose and figurative prediction at best allows you to quickly interpret events as they unfold in detail.

It is tedious to undermine even though it is lightly held. A strong view requires an opponent to first expertly analyze the entire belief complex and identify its most fundamental elements, and then figure out a falsification that operates within the justification model accepted by the believer. This second point is subtle. You cannot undermine a belief except by operating within the justification model the believer uses to interpret it. A strong view can only be undermined by hoisting it with its own petard, through local expertise.

A view of a home domain has a third source of strength: a great many (in fact, most) beliefs are not explicitly articulated at all, but must be inferred from habits and behaviors. More on that later.

For most views we hold today, we often have no idea what is fundamental and essential, and what is peripheral and dispensable. There is so much knowledge in the world today about major issues such as global warming and the obesity epidemic that nobody seems able to zero in on the fundamental premises within any given goat rodeo.

This was apparent in the recent evolution versus creationism debate between Bill Nye and Ken Ham. I watched part of it and was struck by how thoroughly pointless it seemed. Neither side could convince the other within their own schemes of justification and interpretation, in large part because neither side really understood the fundamental beliefs in their own view, let alone on the other side.

Changing Your Mind

We change our minds all the time when we are dealing with isolated, atomic beliefs. We might experience a minor stab of chagrin when somebody proves us wrong with a real-time Google search, but it’s a pinprick that passes.

When people talk about the difficulty of changing minds, both their own and others’, they generally mean changing views, complete belief complexes about a particular domain, or world views, belief complexes that encompass the totality of human existence.

Changing a view is like uninstalling a software program from your computer, installing a substitute, and learning the new software. Changing a world view is like switching entire operating systems.

Strong views represent a kind of high sunk cost. When you have invested a lot of effort forming habits, and beliefs justifying those habits, shifting a view involves more than just accepting a new set of beliefs. You have to:

Learn new habits based on the new view

Learn new patterns of thinking within the new view

The order is very important. I have never met anybody who has changed their reasoning first and their habits second. You change your habits first. This is a behavioral conditioning problem largely unrelated to the logical structure and content of the behavior. Once you’ve done that, you learn the new conscious analysis and synthesis patterns.

This is why I would never attempt to debate a literal creationist. If forced to attempt to convert one, I’d try to get them to learn innocuous habits whose effectiveness depends on evolutionary principles (the simplest thing I can think of is A/B testing; once you learn that such tests work, and then understand how and why they work, you’re on a slippery slope towards understanding things like genetic algorithms, and from there to an appreciation of the power of evolutionary processes).
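To make the slippery slope concrete, here is a minimal sketch, in Python, of an A/B testing loop that is already a one-mutation-per-generation genetic algorithm. The conversion model, the bit-vector encoding of a “variant,” and all names are hypothetical illustrations, not a real testing framework:

```python
# A minimal sketch of the A/B-testing-to-evolution slippery slope:
# repeatedly mutate a challenger variant, keep whichever variant
# converts better, and selection pressure does the rest.
import random

def conversion_rate(variant, trials=10_000):
    # Toy model: conversion tracks the variant's (hidden) quality.
    # In a real A/B test this would be measured on live traffic.
    quality = sum(variant) / len(variant)
    return sum(random.random() < quality for _ in range(trials)) / trials

def mutate(variant):
    # Flip one random "gene" (think: a feature toggle) to get variant B.
    b = variant[:]
    i = random.randrange(len(b))
    b[i] = 1 - b[i]
    return b

champion = [random.randint(0, 1) for _ in range(10)]  # variant A
for generation in range(50):
    challenger = mutate(champion)                     # variant B
    # The A/B test itself: keep whichever variant converts better.
    if conversion_rate(challenger) > conversion_rate(champion):
        champion = challenger

print(champion)  # selection alone drives this toward all 1s
```

Keep the winner, mutate, repeat: the moment you see why this loop converges, you have already conceded the power of variation plus selection.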

Paradoxically, this again means it is harder to change fox minds than hedgehog minds. A fox’s lack of deep expertise means there are fewer strong doer habits anchoring beliefs; instead, beliefs are anchored by beliefs in other domains. Belief modification through behavior modification is harder because there are fewer behaviors to modify.

We now have a basic account of holds and views, strength and weakness, and thumbnail portraits of foxes and hedgehogs in those terms. So what does it mean to have a strong view, weakly held? What is the best-case behavioral profile of an enlightened, non-degenerate hedgehog?

Strong Views, Weakly Held

Strong views, weakly held is a powerful heuristic because it suggests you cultivate the ability to switch full-blown hedgehog world views very fast. In the software metaphor, you get very good at switching out software packages and rebuilding your operating environment anew, and even complete operating systems (in the case of deep conversions).

The key to holding your views weakly is recognizing a basic fact about human thinking: it is far easier to recognize when one of your fundamental beliefs has been undermined than to figure out which of your beliefs is fundamental. It is harder to recognize all the ways in which you can be checkmated than to recognize an opponent’s specific move as a path to an inevitable checkmate (the sign is fear-uncertainty-doubt as all sorts of things start going wrong for you).

This means you learn faster when there is an adversary trying to undermine your beliefs.

This is because for hedgehogs, habits are far more fundamental than the beliefs associated with the habits. But for an adversary who does not have your habits, the logical structure of your beliefs (some of which you may not even be aware of) is all that is relevant. All the behavioral clutter and inertia has been eliminated. They are freer to spot your fundamental premises and attack them (it is a behavioral analog to being hoist by your own petard: to have your own habits used against you).

Once you recognize that your adversary has an advantage in learning some things about you, due to the lack of the baggage of habits, you see an adversary as a teacher or a learning aid, rather than somebody who is just there to be defeated (that too, of course).

This is the idea of rapid reorientation (or “fast transients”) in the OODA-loop view of decision-making, and sheds light on what precisely is involved in achieving this ability:

Learning to recognize when your views have been completely undermined, versus lightly damaged.

Immediately switching to the default assumption that every other belief within the view is likely suspect now, even if it hasn’t yet been specifically undermined.

In building a new view, on top of new habits, treating old beliefs as false and irrelevant unless proven true and relevant. In other words, if a new habit collides with an old habit or belief, the latter must be assumed guilty until proven innocent.

These represent three levels of sophistication and a path of enlightenment for the hedgehog.

Beginners don’t even know how to recognize when they are completely screwed, and soldier on bravely until somebody puts them out of their misery. Intermediate level thinkers recognize when their position has been completely undermined, but fail to immediately put a question mark on every other belief in the view, and switch into salvage mode rather than reconstruction mode. Advanced thinkers do both, but can be sloppy about preventing old habits and beliefs from contaminating new view formation.

A hedgehog who learns to achieve fast transients is set up to play an infinite game rather than a finite game.

Now, what about the enlightened fox, if there is such a thing?

Weak Views, Strongly Held

I do not hold truly strong views because I do not have much of a capacity for deep domain-specific detail, and outside of very narrow areas, am not much of a doer, which means I have far fewer specialized habits of expertise than powerful doers.

In most areas (politics, culture, governance, technology, startups, and all the other topics about which I offer views from an armchair), my thinking could be characterized as weak views, strongly held. Even in areas where I have some home-domain expertise and doer skills, I don’t hold particularly strident and detailed views. To a large extent, I have no home domain. I am a cognitive nomad.

So by weak views, I mean I primarily approach all areas (including my nominal home domains) as an outsider, with the intent of identifying and forming opinions about fundamental premises, rather than achieving insider status and mastery.

This is not always possible, because often the most important beliefs within a view are buried too deep inside the technical part of a belief system, and inaccessible to casual outsider tourists. For example, in formal logic, a casual tourist is unlikely to encounter, let alone appreciate, the idea that the axiom of choice is fundamental.

Or worse, the most fundamental beliefs may never have been articulated at all. This is the strong form of Taleb’s model of antifragile doer-knowledge, where the set of habits mark out a bigger space of cognitive dark matter than the set of explicit beliefs cover, leading to the possibility that the most important beliefs have not yet been stated, and might never be.

By strongly held, I mean I tend to accept or reject any locally fundamental beliefs I find based on justifications from other domains (often driven by analogy or metaphor). What is strong about the “holding” is that it is anchored by many independent justifications in unrelated domains, just as strong views are anchored by many details and unexamined habits in one domain.

Take for example one of my favorite technical ideas: that “random inputs drive open-ended learning.” It’s an idea you find in machine learning, control theory, theories of fitness (ideas like “muscle confusion”) and dieting, cybernetics (Ashby’s law), signal processing, information theory, biological evolution, automata theory, metallurgy, and optimization theory. It also happens to be a key idea in Taleb’s notion of antifragility — the “disorder” that things gain from, though not the key idea (that would be convexity, an equally ubiquitous many-domains idea). To my knowledge, nobody has come up with a canonical grand-unified version of the set of instances of the principle (Ashby’s version comes closest, but is pretty weak).

I have no particular attachment to the version of the idea in my home domain of control theory (where it is known as persistency of excitation). It is the version I understand best, but not by much. I use the version of the idea that fits the immediate pattern recognition problem the best.
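For the control-theory version, here is a minimal sketch of what persistency of excitation means in practice: a least-squares learner trying to identify the two parameters of a toy system. The system, the noise level, and all numbers are hypothetical, chosen only to contrast random and constant inputs:

```python
# A minimal sketch of "random inputs drive open-ended learning" in its
# control-theory guise: identifying y = a*u[t] + b*u[t-1] by least squares.
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, -0.5  # true parameters, unknown to the learner

def identify(u):
    # Regressor matrix built from the input history, fit by least squares.
    U = np.column_stack([u[1:], u[:-1]])
    y = a * u[1:] + b * u[:-1] + 0.01 * rng.standard_normal(len(u) - 1)
    est, *_ = np.linalg.lstsq(U, y, rcond=None)
    return est

constant_input = np.ones(200)            # not persistently exciting
random_input = rng.standard_normal(200)  # persistently exciting

print(identify(constant_input))  # a and b are not separable: only
                                 # their sum is pinned down
print(identify(random_input))    # recovers roughly [2.0, -0.5]
```

With the constant input, the two regressor columns are identical, so the data can pin down only the sum a + b; the random input separates them. That is the precise sense in which random inputs drive learning.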

If the basic challenge for the hedgehog is to get better at fast transients in the sense of switching rapidly from one strong view to another, and quickly shifting habits, the basic challenge for the fox is to shift quickly from one pattern of organization of related beliefs in multiple domains to another. Hedgehog fast transients are like paradigm shifts in physics. Foxy fast transients are like shifting from one organization scheme for stamp collecting to another.

As with hedgehog fast transients, when this works, foxes are set up to play the infinite game rather than the finite game. They do not get locked into any particular habits of pattern recognition that can be used against them (such as 2×2 diagrams, archetypes or specific narrative structures).

When a belief works across so many domains and seems fundamental to many, you naturally hold on to it strongly, and even if it is apparently undermined in one, you don’t reject it, because it has independently demonstrated value and credibility in other domains. You tend to suspect a local mistake, exceptional conditions, or a pattern recognition error on your own part.

This means weak views, strongly held is a heuristic for collecting domain-independent truths.

But hedgehogs presumably encounter and collect domain-independent truths too. What makes the two different is what they do with these collections.

What can you do with domain-independent truths? You can either form totalizing world views, or you can end up with a refactoring mindset. The former is the hedgehog strategy, the latter is the fox strategy.

This gives us two types of religion.

Heuristic and Doctrinaire Religions

The deep difference between foxes and hedgehogs comes down to their preferred styles of thinking and doing outside their home domains. To get at this difference, you have to first get beyond the coarse distinction between generalists and specialists. There are really no pure generalists or pure specialists. Everybody is what career counselors call a T-shaped professional (an awful term, but it’s stuck).  

The difference is that hedgehogs are fat-stemmed Ts who explore the world in a dominantly depth-first way, prioritizing home-domain expertise first, while foxes are fat-bar Ts who explore the world in a dominantly breadth-first way, doing the minimum necessary for survival in a home domain.  

Each has an “antilibrary” of the unknown, to use Taleb’s term, of the complementary shape, as shown below (black is known, white is unknown). Foxes have lots of books 30% read and a few 100% read; hedgehogs have lots of books 5% read (judged by their cover) and a few 300% read (repeatedly and closely re-read).

[Figure: fox and hedgehog antilibraries as complementary T-shapes]

Don’t make the mistake of thinking one is top heavy while the other is solidly rooted. There is no gravity in this T-metaphor.

Let’s consider thinking and doing in turn. Thinking first.

Foxes prefer to rely as little as possible on what they know from their home domain (for the very good reason that as weak-stemmed Ts, they don’t trust their home-domain expertise much anyway), and instead rely on ad hoc metacognition based on freewheeling use of metaphor, narrative, analogy and other kinds of cheap tricks. They eschew Platonic abstractions and grand-unification formalisms. This makes their religions highly heuristic. Catholicism and Hinduism in daily practice (as opposed to theological study) are highly heuristic religions. In Kahneman’s terms, foxy religions are System 1 religions.

Heuristic religions are based on fragmentary, unintegrated collections of meta-knowledge. Practicing them involves a lot of energetic and lively metacognition. You have to work with analogies, metaphors, stories, patterns and so forth, in order to form beliefs in new domains.  They require heavy-bar T personalities and cognitive styles.

A non-religious example is the kind of refactoring I do on this blog: applying a grab-bag collection of tools and ideas to sets of meta-knowledge, without attempting to coalesce them into grand unified theories. To think using a refactoring approach, you merely try a number of likely-seeming tools, and work with the first one that fits and does something vaguely useful. This is why you can think fast with a foxy religion. The heavy-bar T helps speed you up.

Hedgehogs rely a great deal on what they know from their home domain (because they know a lot there, and are inclined to milk that knowledge) and prefer to apply that knowledge through abstraction and reasoning based on those abstractions. They eschew ad hoc metacognition and work hard to form strong and efficient metanorms instead. This makes their religions highly doctrinaire. Islam and Protestant Christianity are highly doctrinaire. In Kahneman’s terms, hedgehog religions are System 2 religions.

Doctrinaire religions are highly integrated collections of meta-knowledge, where the integration is achieved through inductive generalization based on abstract categories inspired by privileged instances of patterns from a home domain. In other words, you use a thick stem to sustain a thin bar on your T.  In order to form beliefs in new domains, you first fit the new domain to your efficient abstractions, and then reason with those abstractions (carefully, because your abstractions come with a home-domain bias).

A non-religious example is Taleb’s philosophy of antifragility. To think using his philosophy of antifragility, you have to first cast the ideas in a particular domain into the abstractions used in his model: disorder, convexity and so forth, and then apply careful formal reasoning. This is why you can only think slow with a hedgehog religion. The light-bar T doesn’t help you much.

On the other hand, when it comes to doing in a new domain, the advantages are flipped. Foxy religions aren’t very useful for guiding actions, quick or otherwise, even though they offer quick-and-dirty appreciations of novelty. Foxy religions are naturally self-limiting. They limit you to an armchair.

Hedgehog religions, on the other hand, yield strong guides to action in alien territory. This is because abstract categories lead to very quick actions once you can fit data to them. These are metanorms: behavioral principles based on abstractions. An example from the antifragility philosophy is Taleb’s principle of integrity: if you see fraud and don’t say fraud, you are a fraud.

This requires a strong abstraction associated with the concept of fraud.

It is no accident that the metanorm here supports a decisive separation into good and evil. Much of the action driven by efficient metanorms involves actions of ideologically driven inclusion or exclusion. Another example from Taleb is his assertion that much of academic scholarship outside of physics and some parts of cognitive science, is nonsense.

Much of the time, inclusion/exclusion is the only kind of action we take in alien domains anyway. We decide whether or not to visit certain cities or countries. We decide whether or not certain people are worth paying attention to. We decide whether or not certain subjects are worthy of further study.

So far, I haven’t done myself and my foxy brethren any favors. Foxy religions are sloppy, error-prone, quick-and-dirty ways of manufacturing insight porn from armchairs, and are useless in guiding action. Hedgehog religions are careful, reliable and slow-and-steady ways of manufacturing solid guides to action in alien territory.

The hedgehogs win on home turf too. They are solid and expert doers on home-ground, thanks to their thick-stemmed T personalities. Foxes, if their T’s even have a respectable stem, are rarely respected experts and leaders in their fields.

At best, they are credited with being the imaginative and flighty idea people in their home domains, where they don’t produce much, but make for lively party guests and occasionally get their more formidable peers unstuck on some minor point (a capacity usually attributed to luck).

Is there any value at all to being a fox?

The Tetlock Edge

The one slim area where foxiness is generally acknowledged to be an advantage is anticipation. By a slim margin, and based on relatively sparse evidence from one domain (political trend prediction), foxes appear to be somewhat less wrong when it comes to predicting the future than hedgehogs.

It isn’t much, and given the half-life of facts, the presumptive advantage may not last long, but we’ll take it. Beggars can’t be choosers.

Where does this advantage, let’s call it the Tetlock edge, come from? I have a speculative answer.

It comes from eschewing abstraction and preferring the unreliable world of System 1 tools: metaphor, analogy and narrative; tools that all depend on pattern recognition of one sort or another, rather than classification into clean schemas. Fox brains are in effect constantly doing meta-analyses with unstructured ensembles, rather than projecting from abstract models.

That’s where the advantage comes from: eschewing abstraction.

Abstraction creates meta-knowledge via inductive generalization, and can grow into doctrinaire world views. The way this happens is that you try to formalize the interdependencies among all your generalized beliefs. Your one big idea as a hedgehog is an idea that covers everything, the whole T, so to speak.  Abstraction provides you with ways to compute beliefs and actions in domains you haven’t even encountered yet, thereby coloring your judgment of the novel before the fact.

Pattern recognition creates meta-knowledge through linkages among weak views in multiple domains. The many things you know start getting densely connected in a messy web of ad hoc associations. Your collection of little ideas, densely connected, does not cover everything, since there are fewer abstractions. So you can only form beliefs about new domains once you encounter some data about them (which means you have an inclusion bias). And you cannot act decisively in those domains, since you lack strong metanorms. This means pattern recognition leaves you with a fundamentally more open mind (or less strongly colored preconceptions about what you do not yet know).



The way you slowly gain a Tetlock advantage, if you live long enough to collect a lot of examples and a very densely connected mind full of little ideas, is as follows: the more you see instances of a belief in various guises, the better you get at recognizing new instances. This is because the chance that a new instance will fall recognizably close to an existing instance in your collection increases, and also because patterns color the unknown less strongly.
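That claim, that new instances land ever closer to remembered ones as the collection grows, can be made concrete with a minimal sketch (one-dimensional “instances,” purely illustrative):

```python
# A minimal sketch of the growing-collection intuition: as a fox stores
# more instances, a new case lands ever closer to something already seen.
import random

def nearest_distance(collection, x):
    return min(abs(x - c) for c in collection)

random.seed(0)
for n in (10, 100, 1000, 10000):
    collection = [random.random() for _ in range(n)]
    # Average distance from fresh cases to their nearest stored instance.
    avg = sum(nearest_distance(collection, random.random())
              for _ in range(1000)) / 1000
    print(n, round(avg, 5))  # shrinks roughly as 1/n in one dimension
```

A tenfold bigger collection puts a fresh case roughly ten times closer to something already seen; the fox’s recognition sharpens simply because the collection gets denser.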

As you age, your mind becomes a vessel for accumulating a growing global context to aid in the appreciation of novelty.

Abstraction offers you a satisfyingly consistent and clean world view, but since you generally stop collecting new instances (and might even discard ones you have) once you have enough to form an abstract belief through inductive generalization, it is harder to make any real use of new information as it comes in. There is already a strongly colored opinion in place and guides to action that don’t rely on knowing things. Your abstractions also accumulate metanorms, and give you an increasing array of reasons to not include new information in your world view.

The Fox-Hedgehog Duality

If you’ve been following along closely, you might have foxily jumped to a conclusion pregnant with irony: foxiness is antifragile metacognition and fragile doing. It gains from disorder in the form of new, non-local information.

This is what it means to have a thick-bar/thin-stem T. Since foxy thinking operates via associations among instances of patterns, there is no single point of failure for a broad-based belief. The belief might not even exist in reified form as an abstraction, much as hedgehog doer-beliefs might only exist in the form of unconscious habits.

Hedgehog thinking is fragile metacognition coupled with antifragile doing. It gains from disorder in a local domain, but the associated pattern of metacognition gets progressively weaker, less reliable and more exclusionary.

This is a very strange conclusion, but there is an interesting analogy (heh!) that suggests it is correct — the distinction between structured and unstructured approaches to Big Data, relying on RDBMS technology and NoSQL technology respectively.

Foxes are fundamentally Big Data native people. They operate on the assumption that it is cheaper to store new information than to decide what to do with it. Hedgehogs are fundamentally not Big Data native. If they can’t structure it, they can’t store it, and have to throw it away. If they can structure it with an abstraction, they don’t need to store most of it: only a few critical details to fit the Procrustean bed of their abstraction.
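A minimal sketch of the two storage postures, with a hypothetical record and field names (this is the generic schema-on-write versus schema-on-read distinction, not any particular database’s API):

```python
# Hedgehog: define a schema up front and discard whatever doesn't fit.
# Fox: store the raw record and decide what it means at read time.
HEDGEHOG_SCHEMA = ("species", "knows")  # decided before any data arrives

def hedgehog_store(record):
    # Anything outside the schema is thrown away at write time.
    return {k: record[k] for k in HEDGEHOG_SCHEMA if k in record}

def fox_store(record):
    # Keep everything; interpretation is deferred to read time.
    return dict(record)

record = {"species": "fox", "knows": "many things",
          "habitat": "hen-house", "source": "Archilochus"}
print(hedgehog_store(record))  # the extra fields are gone for good
print(fox_store(record))       # all details retained for later patterns
```

The fox pays in storage and retrieval effort; the hedgehog pays in everything the schema silently discarded.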

Because foxes resist the temptation of abstraction (and therefore the temptation to throw away examples of patterns once an inductive generalization and/or metanorm has been arrived at, or to stop collecting), they slowly gain an advantage over time, as the data accumulates: the Tetlock edge.

We can restate the Archilochus definition in a geeky way: the fox has one big, unstructured dataset; the hedgehog has many small, structured datasets.

But this takes a long time and a lot of stamp collecting, and foxes have to learn to survive in the meantime. Young foxes can be particularly intimidated by old hedgehogs, since the latter are likely to have accumulated more data in absolute terms.

So how do foxes survive at all? Why haven’t we gone extinct as a cognitive species?

A behavior of wild foxes is very revealing. The metaphor of the fox in the hen-house is based on a characteristically foxy behavior: when given an opportunity to sneak into a hen-house, a fox will kill every chicken in sight. What the metaphor does not capture is the reason foxes do this: far from having a bloody-minded taste for indiscriminate slaughter, they operate on the assumption that they have to lock in gains on the rare occasions they do get a chance to score big (foxes bury their extra kills all over their territory, according to an Attenborough documentary I once watched).

By contrast, when a general belief exists only as an abstract principle strongly anchored by a detailed understanding in one preferred home domain, and the finer details of distant, alien examples (including details that potentially conflict with clean-edged abstractions) have been discarded, the belief becomes fragile to logical error or new alien counterexamples. Over time, the world view becomes more unreliable.

To the extent that the abstract beliefs have been assembled into an entire abstract religion, the complete belief structure can unravel if foundational abstract beliefs are undermined, leading to an existential crisis.

Ultimately, the fox-hedgehog duality is a result of bounded rationality. You only have so much room in your head. You have to choose where to put in a lot of detail.

Can you get past this fundamental limit? I don’t know yet. Possibly with computational prosthetics.

The conclusion Isaiah Berlin drew from his study of Tolstoy was this: Tolstoy’s talents were those of a fox, but he believed one ought to be a hedgehog. The resulting tension informs all his work (especially his later work, when he grew religious).

When I first tried to put Taleb’s views in relation to my own, it struck me that his talents are those of a fox, but he believes one ought to be a hedgehog (for a while, I thought that description applied to me, until I looked in my Talebian evil-twin mirror and realized it didn’t).

The account I’ve developed so far, I think, accounts for the lives of both. With Tolstoy, it explains his later moralistic fiction to my satisfaction. With Taleb, it explains the curious build-up of ideological tension in his books (what was mere winner’s glee in Fooled by Randomness turns into virtuoso abstraction in The Black Swan and ideological hatred by the time you get to Antifragile). It explains the paradox of his preference for trial-and-error and heuristic thinking within domains, but abstraction and System 2 logic across domains.

John Boyd appears to have been another mashup character, somewhere in between the two, but closer to Taleb than to Tolstoy.

The effort to transcend the fox-hedgehog dichotomy in one way or another, is certainly laudable. I might even try it myself one day. But for most of us, most of the time, the bigger challenge is avoiding degeneracy. Which brings us to weasels and cacti.

Dissolute Foxes and Hidebound Hedgehogs

It is easy to forget that in Berlin’s original essay, the subjects he was analyzing were all renowned artists, writers or thinkers. But we commonly limit ourselves to thinking about dissolute foxes and hidebound hedgehogs, where both archetypes are reduced to negative stereotypes with no redeeming qualities. What precisely is involved in this reduction?

To see why strong views, strongly held and weak views, weakly held represent degenerate hedgehogs and foxes respectively, consider an alternative representation of each of the four archetypes as 2×2 matrices, where you have rows marked views and holds and columns labeled strong and weak.

Full-blown hedgehogs and foxes will give you a 2×2 matrix that is “full” in a sense (what mathematicians call “full rank”), where you cannot delete any row or column without losing some information. This makes their belief systems truly two dimensional: they have views and world views, cognition processes and meta-cognition processes, norms and metanorms. Where they differ is a matter of emphasis on the bar or stem of the T, and the strength of their coloring of the unknown.
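The full-rank claim can be illustrated with a toy numerical sketch; the weights are hypothetical, meant only to show that the degenerate archetypes collapse the two rows into one:

```python
# Toy illustration of the rank claim: rows are views and holds,
# columns are strong and weak.
import numpy as np

hedgehog = np.array([[0.9, 0.1],   # views: strong
                     [0.1, 0.9]])  # holds: weak
cactus   = np.array([[0.9, 0.1],   # strong views, strongly held:
                     [0.9, 0.1]])  # the rows are no longer independent

print(np.linalg.matrix_rank(hedgehog))  # 2: genuinely two-dimensional
print(np.linalg.matrix_rank(cactus))    # 1: a row can be deleted
                                        # without losing information
```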

Strong views, strongly held is the stuff of dogma. Cognition without meta-cognition.  A hidebound inability to change views at all, via a disconnection from reality through elevation of fundamental beliefs to unfalsifiable sacredness.

Weak views, weakly held is the stuff of bullshit. Metacognition without cognition. A dissolute  and ephemeral engagement of ideas in purely relative terms.

The first is the pattern of degeneracy that threatens hedgehogs who never unroll from a balled-up state to scurry to another place, turning into de facto cacti.

The second is the pattern of degeneracy that threatens foxes who become unmoored from any kind of ground reality, becoming impossible to pin down, but also incapable of telling truth and falsehood apart, turning into weasels.

Bullshit Detection

Non-degenerate foxes and hedgehogs are both capable of weathering bullshit.

Foxes are bullshit resistant.  They do not build complex and fragile edifices of metacognitive abstraction that might collapse, and are also agnostic to the state of detailed truths of any domain. There’s not much more to say about how they weather bullshit.

A hedgehog weathers bullshit by detecting it. This is a more complex way to weather bullshit.

When one strong view collides with another, sincere ideological opposition from other hedgehogs is easy to detect. In the simplest case, you get the opposite view by flipping all the truth values. Big-picture opposition from a non-bullshitting fox is also easy to detect, because you get coherence with respect to fundamental beliefs.

But an insincere opposition to a strong view will reveal itself by having a random relationship to the elements of the strong view: a bullshit-detector will fire.

If you’re a hedgehog, here’s an explicit little bullshit detection test you can run in specific situations: make a list of 20 basic and obscure yes/no beliefs in your domain. Now ask another person, with about the same level of claimed home-comfort in that domain, for their beliefs on those questions.

Now do what computer scientists call an XOR between the two sets of yes/no beliefs. If members of a pair of beliefs are both yes or both no, you get a 0, an agreement. If not, you get a 1, a disagreement.

A soul-mate will give you all zeros. Perfect alignment. An idealized nemesis should give you all ones.  Perfect opposition. A fox-hedgehog type opposition should give you either alignment or opposition on the basic beliefs, and random results for the obscure ones. This is an evil-twin relationship.

But if the results are all random, you are either dealing with a bullshitter, or are yourself a bullshitter.
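Here is a minimal sketch of that test in code; the belief vectors and the ten/ten split into basic and obscure questions are hypothetical:

```python
# A minimal sketch of the XOR bullshit-detection test described above.
def xor_profile(mine, theirs):
    # 0 where the two belief vectors agree, 1 where they disagree.
    return [int(a != b) for a, b in zip(mine, theirs)]

def diagnose(mine, theirs, n_basic):
    diffs = xor_profile(mine, theirs)
    basic = diffs[:n_basic]
    if not any(diffs):
        return "soul-mate: perfect alignment"
    if all(diffs):
        return "idealized nemesis: perfect opposition"
    if all(basic) or not any(basic):
        return "evil twin: coherent on the basic beliefs"
    return "random everywhere: someone here is bullshitting"

mine   = [True] * 10 + [True, False] * 5  # 10 basic + 10 obscure answers
theirs = [True] * 10 + [False, False, True, True, False,
                        True, False, True, False, True]
print(diagnose(mine, theirs, n_basic=10))  # evil twin
```

The diagnosis rule is just the verbal one above: all zeros, all ones, coherent on the basics, or random everywhere.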

Note #1: In one of his guest posts last year on the Tempo blog, Greg Rader came at this same theme from a different angle, that of developmental trajectories to foxhood or hedgehoghood: The Cloistered Hedgehog and the Dislocated Fox.

Note #2: For those of you new to this theme, Tempo might be a useful read, and the glossary I posted recently might be a useful aid. 
