2017-01-12



Vox Day kindly reprinted the review of Uncertainty by Jayne at the Journal of American Physicians and Surgeons. Vox also has a copy of the book himself and there may be more discussions on the way. Stay tuned.

Many good comments at Vox Popoli, and I thought I’d answer some questions here so that they might be collected in one place. It is obvious that Icicle was right when he suggested adapting a popular slogan into MMGA. Together we can Make Models Great Again!

Dc.sunsets said:

Regarding this book, it’s a testament to how much of what people currently take for granted is nothing but a vast network of rationalizations for what amounts to a beehive of effort that produces nothing real and serves largely to support a labyrinth where most people are both robber and victim.

This is surely true in certain fields; something near the entirety of education, vast swaths of sociology, the politicized portions of psychology, and so on come to mind. In these places, Theory searches for Evidence, of which even the most tenuous is accepted as conclusive.

But it’s not so everywhere. Uncertainty builds on absolute certainty, on truths of which, as the late great David Stove said, the ordinary fellow is a millionaire. Our strongest and best beliefs are those which are not scientific but philosophical, not physical but metaphysical. And this must be so if any thinking is to get off the ground. That is part of the key message of Uncertainty (the book).

The other is that the nature of the cause of some observable thing need not be known, and almost never is known, for us to make decent probabilistic predictions. Ordinary methods that assert, or seem to assert, or pretend to assert, that a cause has been discovered are all in error, all are misleading, and all produce mountainous over-certainty.

MIG asks (see also pyrrhus, wrf3, and Noah B The MacroAggressor), “If randomness does not exist, what about random events like the decay of radioactive atoms?”

Random means unknown or unpredictable (and both of these terms have absolute, i.e. universal, and local, i.e. conditional, definitions). Randomness therefore is wholly epistemic and is only a statement of non-certainty. Given this understanding, we can re-ask MIG’s question: “If randomness does not exist, what about unpredictable events like the decay of radioactive atoms?”

There are two considerations: (1) the amount of predictability, and (2) cause. About (1): we can make excellent probabilistic predictions about single atoms or collections of the same, saying things like, “Given the following assumptions, knowledge of the size of the radioactive source, and past observations, the probability there will be 12 pings on the Geiger counter in the next 5 minutes is X.” As long as we are careful, these predictions will be found to be calibrated (itself a multi-dimensional concept) with reality.
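For those who like to see the arithmetic, here is a minimal sketch of that kind of prediction, assuming only that the pings follow a Poisson process with a rate guessed from past observations; the rate and counts below are invented for illustration and are not from the book.

```python
from math import exp, factorial

def prob_n_pings(n, rate_per_min, minutes):
    """Probability of exactly n counts in the window, assuming a Poisson
    process with the stated (assumed known) rate. Both assumptions are
    premises: the answer is conditional on them."""
    lam = rate_per_min * minutes
    return lam**n * exp(-lam) / factorial(n)

# Hypothetical source observed to ping about 2.5 times per minute:
# probability of exactly 12 pings in the next 5 minutes.
print(prob_n_pings(12, rate_per_min=2.5, minutes=5))
```

Change the premises (the assumed rate, the assumed process) and the probability changes with them; that is the point.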

For (2) suppose we have one uranium atom in front of us. It will decay eventually, and we can predict calibrated probabilities for when. But we cannot say why the decay happens when it does, nor can we say why it doesn’t when it doesn’t. That is, we can suggest why it decays at all, by referencing more fundamental physics; but about the timing, we are ignorant. We do not know the cause of the timing.
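To make the “probabilities for when” concrete, here is a small sketch assuming exponential decay with an invented half-life (the numbers are illustrative, not the book’s):

```python
from math import log, exp

def prob_decay_within(t, half_life):
    """P(the atom decays within time t), assuming exponential decay with
    the given half-life (same time units for both arguments)."""
    lam = log(2) / half_life          # decay constant implied by the half-life
    return 1 - exp(-lam * t)

# Hypothetical half-life of 10 minutes: chance of decay in the next 3 minutes.
print(prob_decay_within(3, half_life=10))
```

Notice the calculation says nothing about why the decay happens at one moment rather than another; it only quantifies our uncertainty in the timing.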

And it looks like we cannot know. Bell’s Theorem and its kin (for those who have heard of them) are probabilistic, meaning they are about uncertainty, and uncertainty is epistemological, meaning it concerns our knowledge. Bell tells us that, assuming certain information that might (linearly, additively) be associated with cause, the predicted probabilities for certain events are not well calibrated. This means that if we did know the information Bell supposed, it would not help us in predicting or in ascertaining cause.

Now, before it decays, the atom is in potential to decay; and when it decays, some actual thing must have actualized that potential; which is to say, some real, actual thing must have caused the decay. But what this actual thing is, we do not know, and perhaps we may never know. It cannot have been “randomness” (or “chance”) that was the cause, because there is no such actual thing as randomness (or chance). These things do not have actuality; they therefore can have no causal powers.

As you might have suspected, I go into this in more detail in Uncertainty. Noah B The MacroAggressor is unhappy with this section because I do not hold with “quantum superdeterminism”. He says my take is “little more than an assertion that all quantum phenomena must be causal, ignoring the possibility that what we perceive to be causality ultimately arises from noncausal systems.” Why it must be causal I have just explained (although only in the briefest terms in this post). But Noah is right: on metaphysical grounds I do not hold with noncausal systems of any kind.

Teapartydoc says (in part):

I’m a doctor and I can tell you that the vast majority of medical research is worthless crap. Some of what used to be the most prestigious journals are now reduced to having their chief articles being about achieving health care equities and sociological baloney. The reviewer is right. P-values are excuses for publishing nonsense and nonsense gets published anyway without them.

This is so, and it is so because of the ease of writing papers like these. Cobble together a few “instruments”, i.e. questions about emotions and beliefs with pseudo- and ad hoc quantifications, ask them of some patient group, and voilà: you have a paper and, if you’re clever enough, the basis of a new “disparity.”

Dc.sunsets again:

I still recall well the Principal Investigator for whom I worked in 1983 going before the US Congress to testify against a proposed ban on the use of animals in experimental research.

Advocates of the ban proposed that “computer models” could be substituted for living creatures.

This was 34 years ago and such delusions have only metastasized…

What’s amusing about this is that since the computer models are all deterministic, even if they have “randomness” built into them in the form of simulations (which, as I show in the book, is not necessary and is misleading), the answers to all medical questions are built into the models. They must be, else the models could not simulate the disease-cure process. This means we need not ever do a real experiment again. What a tremendous savings of time and money!

Hauen Holzwanderer says, “Chasing zero from a sociological point of view through better p-values and all of the various perturbations thereof sounds like another manifestation of mental illness. One that’s asymptomatic.” I agree, except for the last statement. Many symptoms emerge. Such as the desire to write grants and say things like “More research is needed.”

Seeingsights says, “A determinist could say that the randomness is only apparent, due to our lack of knowledge of all the actions and reactions involved.” It doesn’t have to be a determinist only who says this: I agree with it.

Tom’s comment about how what is happening in science happened in finance should be read by all. He ends with a disheartening but believable prediction:

But here is a guy [Briggs] who is saying that (If I read it right) the process itself is imperfect and one way to fix it would be to change it so that the fools and charlatans can’t fake it any more. Make them actually produce something that in finance is represented as ‘profit’. That kind of imperative was always present in my industry, so it’s rare that the fools or charlatans ever do anything except waste the time of people trying to hire quants.

If his ideas are adopted, I expect it will lead to an increase in fraudulent data like we see in finance, but hopefully no one will be hurt in the process.

If Tom is right, his hope will be dashed. Because so much statistics happens in “people” fields, like medicine, psychology, and so on, if there is fraud there will be pain.

Kevin has a long comment, the gist of which is:

I now have reached a similar conclusion as VD’s summary of the book suggests – I only regard as real science randomized studies, models that can accurately predict, or science that is hierarchal in nature in the real world (the next experiment or application reveals the flaws in the previous implicitly). Most epidemiology and economics is preliminary, though not without its uses.

Since random means unknown, adding unknowingness (to coin a phrase) to an experiment can only harm it. It is never “randomizing” (“unknowingizing”?) an experiment that lends it validity; it is control. The more control the better. The more you can exclude alternate explanations of the observable of interest, the better your experiment is.

Now sometimes you want to ensure spurious control is not added, where by spurious control I mean cheating or confirmation bias (self-cheating). We have referees flip coins because we do not trust any system devised by men to provide for fair starts to games, and because we assume (and only assume) the referee cannot control the outcome of the coin. But since the flip is a simple physical system, controlling it can be done (I was better at cheating coin flips when I practiced more; ask any magician for a demonstration).

A Deplorable Paradigm Is More Than Twenty Cents writes:

Some of the book’s key insights are: Probability is always conditional.

That statement is strange. It looks like an attack on basic probability itself. The classic coin flip is an independent trial with two equally probable outcomes, to say it is conditional means it is not independent. But to say a simple coin flip is conditional leads down some rabbit trails that become more mystical than anything else – is the outcome of my coin flip pre-determined by some previous set of events going all the way back to the beginning of time, for example?

Perhaps the comment is something taken out of context; all probabilities within some area, such as medicine, are conditional. That could be demonstrated.

But “all probabilities are conditional” rewrites a fundamental premise of probability theory. I sure hope I’m missing something.

No, sir, Twenty Cents, you have missed nothing. Uncertainty is an attack on the classical notions of probability. I prove (not suggest: prove) the contention that all probability is conditional. I also prove that terms like “independent” need to go, to be replaced by things like relevant and irrelevant.

I have written about all probability being conditional so often that I won’t repeat it here. Except to give you an exercise. Try writing down a probability which has no antecedents whatsoever, that has no conditioning information, that has no premises that support it. You will find the exercise either impossible or containing an error.
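Here, as an illustration of what I mean (the particular premises are only an example), is the familiar coin written with its conditions on display:

```latex
\[
  \Pr(\mathrm{H} \mid E) = \frac{1}{2}, \qquad
  E = \text{``a two-sided object, one side labeled H, one labeled T, and only one side can show.''}
\]
```

Erase E and the bare symbol Pr(H) has no value at all; there is nothing left from which to deduce one.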

Finally (as of the last time I checked) Jose writes:

Like most polite people writing about uncertainty, Briggs underestimates the big problem with the use of statistical evidence: selection biases (personal and built into the “peer review” system — a pre-publication veto system in reality), faked data, and p-hacking (more sophisticated faked data). The most sophisticated model in the world, and the best prediction-based testing strategy are no match for made-up data and papers selected according to fit to current narratives.

This is the first time I’ve been called polite, so I am at a loss how to deal with it. But the gentleman is correct. I left most of that kind of thing out of the book, and instead leave it for the blog.

Addendum: I saw the comment about Jaynes from buybuydandavis after I had written the above. “I’m particularly interested in the specifics of how you’d differentiate yourself from Jaynes, and what *results* you can produce that he can’t.” The results have to do with finite-discrete settings of all problems, and then taking them to the limit, as Jaynes himself recommended. I also correct a theorem where Jaynes hoped to find (but could not, as it turns out) finite exchangeability. And a few other things. See the book’s Table of Contents.