2013-08-30

Guest blog by Ilja Schmelzer, a right-wing anarchist and independent scientist

A nice summary of standard arguments against de Broglie-Bohm theory can be found at R. F. Streater's "Lost Causes in Theoretical Physics" website. Ulrich Mohrhoff [broken link, sorry] also combines the presentation of his position with an interesting rejection of pilot wave theory. I consider these arguments in a different file. Here, I consider the arguments proposed in several articles of Luboš Motl's blog "The Reference Frame": "David Bohm born 90 years ago", "Bohmists & segregation of primitive and contextual observables", "Anti-quantum zeal", and in off-topic responses to "Nonsense of the day: click the ball to change its color". Below, we refer to Luboš Motl simply as lumo (his nick on his blog).

Another argument (also with lumo's participation), related to Lorentz invariance, I have considered elsewhere.

If you know other interesting pages critical of de Broglie-Bohm pilot wave theory, Nelsonian stochastics, non-local hidden variable theories in general, or ether theories, please tell me about them.

The most important thing: Measurement theory

The most important part of physics is, of course, experiment. Moreover, this is also the point where lumo is simply wrong, so it is worth starting with it:

... it is not true that the de Broglie-Bohm theory gives the same predictions in general. It can be arranged to do so in the case of one spinless particle. But in the real quantum theories we find relevant today, such as quantum field theory, de Broglie-Bohm theory cannot be constructed to match probabilistic QFT exactly, and one can see that its very framework contradicts observable facts.
At another place, we find a hint of where his misunderstanding is located:

Your equations about \(X\) are completely irrelevant for the measurement of the spin. The problem is not when one wants to measure \(X\). Indeed, the measurement of \(X\) might occur analogously to its measurement in the spinless case. The problem occurs when one actually wants to measure the spin itself.

The projection of the spin \(j_z\) is an observable that can have two values, in the spin \(1/2\) case, either \(+1/2\) or \(-1/2\). It is a basic and completely well-established feature of QM that one of these values must be measured if we measure it.

How is your 17th century deterministic theory supposed to predict this discrete value? Like with \(X\), it must already have a classical value for this quantity. Except that in this case, it has to be discrete, so it can't be described by any continuous equation. ...

Preemptively: you might also argue that any actual measurement of the spin reduces to a measurement of \(X\). But it's not true. I can design gadgets that either absorb or not absorb the electron depending on its \(j_z\). So they measure \(j_z\) directly. deBB theories of all kinds will inevitably fail, not being able to predict that with some probability, the electron is absorbed, and with others, they're not. This has nothing to do with \(X\) or some driving ways. It is about the probability of having the spin itself.
The last paragraph gives the hint: lumo has interpreted the claim that all measurements reduce to position measurements as "all measurements of the electron reduce to position measurements of the electron". If that were true, I would concede that lumo's polemics against pilot wave theorists would be justified. This was, by the way, the state of the art before Bohm's measurement theory appeared in 1952. Thus, lumo's arguments illustrate in a nice way why de Broglie gave up pilot wave theory.

Once the question has been asked how a 17th century deterministic theory manages to predict discrete values, let's tell this story. As a 17th century theory, with real aristocratic origin, it leaves the hard work to servants (quantum operators), reserving for itself the final (and most important) decisions ;-).

First, there is some interaction of the wave function of the electron with the wave function of the measurement device. (There is of course also an equation for the position of the electron \(q_{el}\) – the \(X\) in lumo's text – but it is completely irrelevant, not only at this stage, but in the whole process.) The result of the measurement is, as usual, a wave function of the type\[

|\psi\rangle = \alpha_1|{\rm up}\rangle|q_1\rangle + \alpha_2|{\rm down}\rangle|q_2\rangle

\] This exploitation of standard QT is not enough – now decoherence will be exploited in an equally shameless way. We leave it to decoherence considerations to decide which observables of the measurement device become amplified or macroscopic. Assume the quantum states \(|q_1\rangle, |q_2\rangle\) are decoherence-preferred. In this case, decoherence amplifies the microscopic measurement results \(|q_1\rangle, |q_2\rangle\) into classical, macroscopically different states \(|c_1\rangle, |c_2\rangle\). After finishing this hard job, it presents the following state:\[

|\psi\rangle = \alpha_1|{\rm up}\rangle|c_1\rangle + \alpha_2|{\rm down}\rangle|c_2\rangle

\] Now everything is prepared; it remains to make the really important decision: which of the wave packets is the best one ;-). At this moment a hidden variable enters the scene. But, surprise, it is not the hidden variable of the electron \(q_{el}\) (lumo's \(X\)), but that of the classical measurement device \(q_c\).

The job of \(q_c\) is not a really hard one. After driving around (no, being driven around by quantum guides) in an almost unpredictable way, it simply takes the wave packet prepared for it by the quantum operators at its point of arrival ;-). In other words, we simply have to put the actual value of \(q_c(t)\) into the full wave function \(|\Psi\rangle\) to obtain the (unnormalized) effective wave function:\[

\psi(q_{el}) = \Psi(q_{el}, q_c(t))

\] What we need for this scheme to work as an ideal quantum measurement is not much. We need that the two states of the macroscopic device, \(|c_1\rangle, |c_2\rangle\), do not (significantly) overlap as functions of the hidden variable \(q_c\). In this case, whatever the value of \(q_c\), the result \(\psi(q_{el})\) will be a unique choice between two effective wave functions: \(|{\rm up}\rangle\) if \(q_c\) is in the support of \(|c_1\rangle\), and \(|{\rm down}\rangle\) otherwise. And we need the quantum equilibrium assumption for \(q_c\) to obtain the probabilities of these two choices as \(|\alpha_1|^2\) and \(|\alpha_2|^2\), respectively.
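To see the selection step at work numerically, here is a minimal sketch (my own illustration, not part of the original argument; the Gaussian packet shapes, widths, and amplitudes are arbitrary assumptions): the pointer variable \(q_c\) is sampled from the quantum equilibrium density \(|\Psi(q_c)|^2\), and the measurement result is read off from which packet \(q_c\) lands in.

```python
import numpy as np

# Minimal sketch of the selection step: the pointer configuration q_c is
# assumed distributed in quantum equilibrium, i.e. with density |Psi(q_c)|^2,
# and the two macroscopic packets c_1, c_2 have (almost) disjoint supports.
# All packet parameters below are arbitrary illustrative assumptions.

rng = np.random.default_rng(0)

alpha1, alpha2 = 0.6, 0.8          # |alpha1|^2 + |alpha2|^2 = 1
q = np.linspace(-10.0, 10.0, 4001)
dq = q[1] - q[0]

def packet(q, center, width=0.5):
    """Normalized Gaussian pointer packet."""
    g = np.exp(-(q - center) ** 2 / (2 * width ** 2))
    return g / np.sqrt(np.sum(g ** 2) * dq)

c1 = packet(q, -5.0)               # device state "up was measured"
c2 = packet(q, +5.0)               # device state "down was measured"

# Equilibrium density of q_c; the cross term is negligible because the
# packets do not overlap significantly.
rho = np.abs(alpha1 * c1 + alpha2 * c2) ** 2
p = rho * dq
p /= p.sum()

# Sample the actual pointer configuration q_c and read off which packet's
# support it lands in: this is the "decision" described in the text.
samples = rng.choice(q, size=200_000, p=p)
p_up = np.mean(samples < 0.0)

print(f"empirical P(up) = {p_up:.4f}, Born rule |alpha1|^2 = {alpha1 ** 2:.4f}")
```

Note that only the equilibrium distribution and the non-overlap of the packets enter; the detailed dynamics of \(q_c\) are irrelevant, which is why the probabilities come out as \(|\alpha_1|^2\) and \(|\alpha_2|^2\).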

Thus, everything works as in quantum theory – the Born rule as well as state preparation by measurement (only without any ill-defined wave function collapse or subdivision of the world into a classical and a quantum part, and without the equally ill-defined "subdivision of the world into systems" used in many worlds or other decoherence-based approaches).

But maybe one of the two assumptions we have used is wrong? Given Valentini's subquantum H-theorem, together with the numerical results of Valentini and Westman, which show a remarkable relaxation to equilibrium already in the two-dimensional case within a quite short period of time (arXiv:quant-ph/0403034), there is not much hope for observations of non-equilibrium in our universe.
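For reference, Valentini's coarse-grained H-function (a sketch of the standard definition, not something taken from lumo's text) is\[

\bar{H}(t) = \int dq\, \bar{\rho}\, \ln\!\left(\frac{\bar{\rho}}{\overline{|\psi|^2}}\right),

\] where the bars denote averaging over small coarse-graining cells in configuration space. One has \(\bar{H} \ge 0\), with \(\bar{H} = 0\) exactly in quantum equilibrium \(\bar{\rho} = \overline{|\psi|^2}\), and the subquantum H-theorem states that \(\bar{H}\) decreases, given initial conditions without fine-grained micro-structure.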

One can, of course, also doubt that macroscopically different states do not have a significant overlap in the hidden variables. Such doubts have been expressed, for example, by Wallace and Struyve for pilot wave field theories. See my paper "Overlaps in pilot wave field theories", arXiv:0904.0764, for the solution of this problem.

About the zeros of the wave function

There is a second point where experiment is involved, with an easy solution:

How do we know that \(m=l_z/\hbar\) must be an integer? Well, it is because the wave function \(\psi(x,y,z)\) of the m-eigenstates depends on \(\phi\), the longitude (one of the spherical or axial coordinates), via the factor \(\exp(i\cdot m\cdot\phi)\) which must be single-valued. Only in terms of the whole \(\psi\), we have an argument.

However, when you rewrite the complex function \(\psi(r,\theta,\phi)\) in the polar form, as \(R\exp(iS)\), the condition for the single-valuedness of \(\psi\) becomes another condition for the single-valuedness of S up to integer multiples of \(2\pi\). If you write the exponential as \(\exp(iS/\hbar)\), the "action" called S here must be well-defined everywhere up to jumps that are multiples of \(h = 2\pi\hbar\).
That's a nice argument, and, because of this argument, the original form of de Broglie's "pilot wave theory" is preferred today over the "Bohmian mechanics" version proposed by Bohm in 1952. In pilot wave theory, the pilot wave is really a wave, and one can apply the original argument to show that these observables are quantized. In Bohm's second-order version, this is different, and the quantization of certain observables becomes, indeed, problematic. This has been another reason for me (beyond history, see arXiv:quant-ph/0609184) to prefer the name "pilot wave theory" over "Bohmian mechanics".
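To spell out the argument in both forms (a standard computation, sketched here in my notation): single-valuedness of the wave requires\[

\psi \propto e^{im\phi}, \qquad \psi\big|_{\phi+2\pi} = \psi\big|_{\phi} \;\Rightarrow\; e^{2\pi i m} = 1 \;\Rightarrow\; m \in \mathbb{Z},

\] while in the polar form \(\psi = R\, e^{iS/\hbar}\) the same fact becomes a quantization condition on the circulation of the phase,\[

\oint \nabla S \cdot d\vec{x} = 2\pi m \hbar = mh,

\] which holds automatically as long as the single-valued wave \(\psi\) is fundamental, but has to be imposed by hand in a second-order theory in which only the velocity field \(\vec{v} = \nabla S/m\) is fundamental.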

More generally, something very singular seems to be happening near the \(R=0\) strings in the Bohmian model of space.
The "model of space" in pilot wave theory is a trivial one, nothing strange happens there if R = 0. The singularity of the velocity at these points is harmless – a simple rotor localized in a string, moreover, there is nothing in the place where velocity becomes undefined.

So even though the Bohmian mechanics stole the Schrödinger equation from quantum mechanics, the superficially innocent step of rewriting it in the polar form was enough to destroy a key consequence of quantum mechanics - the discreteness of many physical observables.
If there were property rights for equations or functions, one could argue as well that Schrödinger stole the wave function from de Broglie's pilot wave theory. Fortunately, such nonsense does not exist in science. But there is a point worth mentioning: without pilot wave theory, there would be no Schrödinger picture, and we would have to use the Heisenberg formalism all the time. And if some Bohm had found the Schrödinger equation later, it would as well have been named an unnecessary superconstruction and banned from physics, for almost the same reasons.

About relativistic symmetry and the preferred frame

Last but not least, there are some claims that pilot wave theories will be unable to recover QFT predictions in the relativistic domain. Unfortunately for this argumentation, the equivalence theorem remains a theorem even in the relativistic domain – nothing used in it has any connection to the particular choice of spacetime symmetry. Thus, if the quantum theory has relativistic symmetry for its observable predictions, the same holds for the observable predictions of pilot wave theory.

More concretely, it is inconsistent with modern physics in many ways, as we will see.

Special relativity combined with the entanglement experiments is the most obvious example. Bell's theorem proves that if a similar deterministic theory reproduces the high correlations observed in Nature (and predicted by conventional quantum mechanics), namely the correlations that violate the so-called Bell's inequalities, the objects in the theory must actually send physical superluminal signals.

But superluminal signals would look like signals sent backward in time in other inertial frames. It follows that at most one reference frame is able to give us a causal description of reality where causes precede their effects. At the fundamental level, basic rules of special relativity are inevitably violated with such a preferred inertial frame.
I was already afraid that lumo does not even understand that in a preferred frame everything is fine with causality. At least the introduction has the high drama typical of such crank cases.

I like the formulation "at most". It sounds as if we really wanted to have more reference frames and are now very disturbed that at most one preferred frame is available ;-).

You might think that the experiments that have been made to check relativity simply rule out a fundamentally privileged reference frame. Well, the Bohmists still try to wave their hands and argue that they can avoid the contradictions with the verified consequences of relativity.
Who is waving hands here? Lumo might, of course, think that experiments rule out a hidden preferred frame. But it is his job, in this case, to point out which observations rule out such a preferred frame. As long as he fails to do this, I don't even have any contradiction with a verified consequence of relativity to wave my hands about.

I wonder whether they actually believe that there always exists a preferred reference frame, at least in principle, because such a belief sounds crazy to me (what is the hypothetical preferred slicing near a black hole, for example?).
I'm happy to answer this question: the preferred coordinates are harmonic. Given, additionally, the global CMBR frame, with the time after the big bang as the time coordinate, this prescription is already unique. For a corresponding theory of gravity – mathematically almost exactly GR on a flat background in harmonic gauge, physically with a preferred frame and an ether interpretation – see my generalization of the Lorentz ether to gravity.
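For the record, the condition that the coordinates \(x^\mu\) are harmonic is the standard one:\[

\square x^\mu = \frac{1}{\sqrt{-g}}\,\partial_\nu\!\left(\sqrt{-g}\, g^{\nu\mu}\right) = 0.

\]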

But it is possible to see that one can't get relativistic predictions of a Bohmian framework for all statistically measurable quantities at the same moment, not even in principle. If a theory violates the invariance under boosts "in principle", it is always possible to "amplify" the violation and see it macroscopically, in a statistically significant ensemble. If such a violation existed, we would have already seen it: almost certainly.

I would be interested to learn more about this mystical way to amplify high energy violations of Lorentz symmetry into the low energy domain, without access to the necessary high energies. So far, it is lumo who is waving his hands.

I know that there are some nice observations which use the extremely large distances light has to travel in astronomical settings to obtain bounds on a frequency dependence of the velocity of light. Some of the bounds obtained in this and other ways even suggest that such Lorentz-violating effects are absent for distances below the Planck length. But the Planck length is merely the distance where quantum gravity becomes important. The fundamental distance where our continuous field theories start to fail may be different.

In proper quantum mechanics, locality holds. If one considers a Hamiltonian that respects the Lorentz symmetry - such as a Hamiltonian of a relativistic quantum field theory - the Lorentz symmetry is simply exact and it guarantees that signals never propagate faster than light.

In proper quantum mechanics, one can define the operators that generate the Poincaré group and rigorously derive their expected commutators. Also, it is exactly true that operators in space-like-separated regions exactly commute with each other. This fact is sufficient to show that the outcome of a measurement in spacetime point B is never correlated with a decision made at a space-like-separated spacetime point A.

These facts allow us to say that quantum field theory respects relativity and locality. The actual measurements can never reveal a correlation that would contradict these principles. And it is the actual measurements that decide whether a statement in physics is true or not. Bohmian mechanics is different because these principles are directly violated. You may try to construct your mechanistic model in such a way that it will approximately look like a local relativistic theory but it won't be one. Consequently, you won't be able to use these principles to constrain the possible form of your theory. Moreover, tension with tests of Lorentz invariance may arise at some moment.

First, there is no reason not to use, for one part of the theory, some symmetry principles which do not hold for another part of it. For example, the symplectic structure of the classical Hamilton formalism has a different symmetry group – the group of all canonical transformations – than the whole theory including the Hamiltonian.

Then, to postulate a fundamental Poincaré symmetry is, of course, a technically easy way to obtain a theory with Poincaré symmetry. But what is the purpose of a postulated global Poincaré symmetry in a situation where the observable symmetry is different and depends on the physics, as in general relativity? Whatever the representation of \(g_{\mu\nu}(x)\) on the Minkowski background, it will (except for simple conformally trivial cases) have a different light cone almost everywhere. If the Minkowski background light cone is the smaller one, one has to violate the background Poincaré symmetry somewhere. It may always be the other way around. But in this case, the axioms of the theory restrict only the background Minkowski light cone, not the physical light cone. Thus, tensions with the physical Lorentz invariance may arise in the same way: the theory only looks like one which, at the particular point \(x\), has Lorentz invariance for the metric \(g_{\mu\nu}(x)\). Really, it is a theory with Lorentz invariance for a different metric \(\eta_{\mu\nu}\), with a larger light cone, and thus it allows superluminal information transfer relative to \(g_{\mu\nu}(x)\).

String theory, as far as I understand, obtains gravity as a spin-two field on a Minkowski background. This requires that this problem is solved in string theory. Fine. That means it is a solvable one.

The contradiction between relativity and semi-viable Bohmian models (that violate Bell's inequalities, and they have to in order not to be ruled out by experiments) is a very profound problem of these models. It can't really be fixed.
Again, a nice formulation. It sounds as if the poor Bohmians had tried hard not to violate Bell's inequalities and finally given up. "Semi-viable" is also a nice word. But the "very profound problem" remains hidden. (A nice place for problems in a hidden variable theory ;-).)

Instead, I prefer to follow the weak suggestions one can obtain from mathematical equivalence proofs. When I construct a pilot wave theory based on a relativistic QFT, it seems really hard to escape the consequence of this theorem that no observable violation of Lorentz invariance can arise. At least, I don't know how to manage this. We obtain a pilot wave theory which does not violate the observable relativistic symmetries, simply because there is an equivalence proof for the observables.

Today, we have some more concrete reasons to know that the hidden-variable theories are misguided. Via Bell's theorem, hidden-variable theories would have to be dramatically non-local and the apparent occurrence of nearly exact locality and Lorentz invariance in the world we observe would have to be explained as an infinite collection of shocking coincidences.

I'm impressed by the verbal power of "dramatically non-local", and even more by the "infinite collection of shocking coincidences". Sounds really impressive. But I would not call a nonlocality which, because of an equivalence theorem, cannot be used even for information transfer, and can be observed only indirectly, via violations of Bell's inequality, a dramatic one. Instead, it seems to me the most non-dramatic one possible. As well, I would distinguish the simple and straightforward consequences of an equivalence theorem from an "infinite collection of shocking coincidences". Instead, I would be more surprised if a quantum-equilibrium, large-distance, low-energy limit did not change anything in the symmetry group of a theory.

Last but not least, the Lorentz group is simply the invariance group of a quite prosaic wave equation, an equation we find almost everywhere in nature. And such a wave equation (or its linearization) usually also defines an effective (and in general curved) Lorentz metric, such that the wave equation becomes the harmonic equation of this Lorentz metric. As a consequence, for everything which follows such a wave equation we obtain local Lorentz symmetry. (See arXiv:0711.4416, arXiv:gr-qc/0505065 for overviews.)
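Explicitly (the standard construction from the analogue gravity literature cited above, sketched in my notation): if a field \(\phi\) obeys a second-order hyperbolic equation \(f^{\mu\nu}(x)\,\partial_\mu\partial_\nu\phi + (\text{lower order}) = 0\), the principal part \(f^{\mu\nu}\) defines an effective Lorentz metric \(g_{\mu\nu}\) via \(\sqrt{-g}\, g^{\mu\nu} \propto f^{\mu\nu}\), and the equation takes the form of the harmonic (curved-space wave) equation\[

\frac{1}{\sqrt{-g}}\,\partial_\mu\!\left(\sqrt{-g}\, g^{\mu\nu}\, \partial_\nu \phi\right) = 0.

\] Everything propagating according to such an equation then sees the light cones of \(g_{\mu\nu}\), i.e. a local Lorentz symmetry, whatever the underlying medium.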

To assume that a symmetry which appears so often, and for very different materials, as an effective symmetry in condensed matter theory is fundamental is a hypothesis which seems quite unnatural to me.

... and the ether ...

The similarity with the luminiferous aether seems manifest. ...

I just don't think that this is a rationally sustainable belief. It's just another repetition of the old story of the luminiferous aether.
About the similarity with the aether I fully agree with lumo ;-)))). But what is irrational in the belief that there is an ether? I would like to hear some details. It would be really interesting to hear which of the beliefs expressed in my ether model for particle physics are not rationally sustainable.

Now, it seems, we have finished with the claims of empirical inadequacy. It's time to consider the metaphysical arguments.

About signs of the heavens

It is not surprising in any way that the new, Bohmian equation for \(X(t)\) can be written down: it is clearly always possible to rewrite the Schrödinger equation as one real equation for the squared absolute value (probability density) and one for the phase (resembling the classical Hamilton-Jacobi equation). And it is always possible to interpret the first equation as a Liouville equation and derive the equation for \(X(t)\) that it would follow from. There's no "sign of the heavens" here.
I think there are "signs of the heavens" here. First, the guiding equation for the velocity is a nice, simple, and local (in configuration space) equation. The derivation mentioned by lumo could as well have led to a dirty nonlocal one.

Then, the equation for the phase resembles the classical Hamilton-Jacobi equation, and for constant density becomes simply identical to it. Now, the same guiding equation is also part of classical Hamilton-Jacobi theory – a theory which was in no way related to the conservation law of the first derivation.
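Explicitly (the standard decomposition, written here for a single particle in my notation): inserting \(\psi = R\, e^{iS/\hbar}\) into the Schrödinger equation and separating real and imaginary parts gives\[

\partial_t \rho + \nabla \cdot \left(\rho\, \frac{\nabla S}{m}\right) = 0, \qquad \rho = R^2,

\]\[

\partial_t S + \frac{(\nabla S)^2}{2m} + V - \frac{\hbar^2}{2m}\frac{\nabla^2 R}{R} = 0.

\] The first is the continuity (Liouville) equation for the density \(\rho\); the second is the classical Hamilton-Jacobi equation plus the quantum potential term, which disappears for constant \(R\) (or in the limit \(\hbar \to 0\)). The guiding equation \(\dot{q} = \nabla S(q,t)/m\) is exactly the velocity field appearing in both.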

Now, Hamilton-Jacobi theory is really beautiful mathematics; it has all the properties of a "sign of the heavens" even taken alone. See arXiv:quant-ph/0210140 for an introduction. That one and the same simple law for the velocity gives, on the one hand, Hamilton-Jacobi theory in the classical limit and, on the other hand, a Liouville equation is, at least for me, a sufficiently strong hint from the mathematical heaven. In many worlds I have not seen any comparable signs of beauty.

And there is, of course, the really beautiful derivation of the whole quantum measurement formalism.

How to distinguish useful improvements from unnecessary superconstructions

The mechanistic models add a new layer of quantities, concepts, and assumptions.
Indeed, every new, more fundamental theory adds a new layer of quantities, concepts, and assumptions. So what?

[Einstein] called the picture an unnecessary superconstruction.
Appeal to authority does not count. And there is no reason to expect that the father of relativity would like a theory which violates his child. But how does one distinguish unnecessary superconstructions from interesting, more fundamental theories? Both add something to the old theory. But useful more fundamental theories also allow one to explain something of the old theory: some postulates of the old theory can now be derived. So, one has to compare what one has to add with what one can now derive.

This relation is quite nice for pilot wave theory: The new layer is, essentially, the configuration together with a single additional equation – the guiding equation for the configuration. What can be derived from this equation is, instead, the whole measurement theory of quantum mechanics, including the Born rule and the state preparation by measurement. Compared with the Copenhagen interpretation, the additional layer also replaces the "classical part" of this interpretation and removes the collapse from the theory.

These last two points have been a major motivation of other reinterpretations as well. In particular, for many worlds this seems to be the only aim. The interpretation I prefer to name "inconsistent histories" is focussed on this aim too. Thus, two things which were obtained first in pilot wave theory are widely recognized today as important contributions to the foundations of quantum theory. One can object that pilot wave theory does not get rid of the classical part, but even extends it into the quantum domain. This depends on what one considers problematic about the classical part: if the problem is the imprecision of this notion, the absence of well-defined rules for this part, then it is clearly solved in pilot wave theory. Anyway, pilot wave theory was the first interpretation with completely unitary dynamics for the wave function, without a collapse.

One can perhaps create classical mechanistic models that mimic the internal workings of quantum mechanics in many situations. For example, one can write a computer simulation. But you can't say that the details of such a program or Bohmian picture is justified as soon as you confirm the predictions of conventional quantum mechanics.
There is no necessity to justify every detail. The important point of the pilot wave interpretation is that, to explain the observable facts, there is no necessity to reject classical logic or realism, or to introduce many worlds, inconsistent histories, correlations without correlata, or other quantum strangeness and mysticism. We have at least one simple, realistic, even deterministic explanation of all observable facts. That's enough to reject quantum mystery. Why should we justify every detail of some particular realistic model? There may be several realistic models compatible with observation. I would expect this anyway, given large distance universality.

The mechanistic models add a new layer of quantities, concepts, and assumptions. They are not unique and they are not inevitable. The similarity with the luminiferous aether seems manifest. If they only reproduce the statistical predictions of quantum mechanics, you could never know which mechanistic model is the right one: it could be a computer simulation written by Oracle for Windows Vista, after all.
But what's the problem with this? Is Nature obliged to work with theories which can be uniquely reconstructed by internal creatures? You could never know? Big problem. Anyway, our theories are only guesses about Nature, and we can never know if they are really true. If you doubt this, I recommend reading Popper. (I ignore here, for simplicity, the modern ways to recognize the truth of theories, like counting the number of papers written about them, or getting inspiration about the language in which God wrote the world.)

Moreover, science has developed lots of criteria which allow one to compare theories which do not make different predictions: internal consistency, simplicity, explanatory power, symmetry, mathematical beauty. Lumo uses such arguments himself, so he is aware of their power. They are usually sufficient to rule out most of the competing models. And if there remain a few different theories, all in agreement with observation, this is not problematic at all – it is even useful: it allows one to see the difference between the empirically established parts of these theories – the parts which will be shared by all viable theories – and the remaining, metaphysical parts, which may be very different in the different theories. Thus, they serve as a useful tool to show the boundaries of what science can tell at a given moment.

For example, today the existence of pilot wave theory shows that almost all of the quantum strangeness – in particular the rejection of realism, "quantum logic", and the esoterics of many worlds – is in no way forced on us by any empirical evidence, but consists of purely metaphysical choices of some particular interpretations.

What are the fundamental beables?

I could make things even harder for the Bohmian framework by looking into quantum field theory. What are the real, "primitive" properties in that case?
In the simplest case of a scalar field, the natural candidate for the "primitive property" or the "beable" is simply the field \(\phi(x)\). This is a very old idea, proposed already by Bohm. But the effective fields of the standard model are bad candidates for really fundamental beables: they are, after all, only effective fields, not fundamental ones. In my opinion, one needs a more fundamental theory to find the true beables.

My proposal for such more fundamental beables can be found in my paper about the cell lattice model, arXiv:0908.0591. Even if pilot wave theory is not mentioned at all in this paper, it is quite obvious that the canonical quantization proposal for fermion fields I have made there allows one to apply the standard formalism of pilot wave theory to obtain a pilot wave version of this theory.

Problems with spin and with particle ontology in quantum field theories

A large part of lumo's arguments is directed against two particular versions of pilot wave theory which, strangely enough, I don't like either. The first one is the idea of describing particles with spin by putting the spin only into the wave function, leaving the configuration without spin. In this case, the wave function is no longer a complex function on configuration space, but a function with values in some higher-dimensional Hilbert space. As a consequence, however, the very nice pilot wave way to obtain the classical limit via Hamilton-Jacobi theory no longer works, and one would have to use the dirty old way based on wave packets to obtain some classical limit.

There are other examples of such pilot wave theories. First, this trick was used by Bell, who proposed a pilot-wave-like field theory with beables for fermions, but not for bosons. Now, one can argue that this is already sufficient and leave the bosons without beables. The reverse situation was a theory by Struyve and Westman for the electromagnetic field. Again, it has been argued that this is sufficient. And, for the purpose of obtaining a realistic theory which is able to recover the QFT predictions, it is. But I think that such pilot wave theories are sufficient only for one purpose: to be used as a quick and dirty existence proof for realistic theories in situations where some parts of the theory cause problems. For this purpose, they are indeed sufficient, provided the part of the theory represented in the beables is large enough to distinguish all macroscopic states – a quite weak requirement. If one doubts that a theory without fermions, or without bosons, is sufficient for this, one should think about renormalization: if we use these incomplete theories to describe one type of the bare fields (at some energy), then all types of the dressed fields already depend on this single type.

The second type of theory I don't want to defend is the theory with particle ontology in the domain of field theory. One reason is that semiclassical gravity shows nicely that fields are more fundamental, and the pilot wave beables have to be, of course, fundamental. Then, handling variable particle numbers is a dirty job. There should be something more beautiful. Particles which pretend to the status of beables should at least be conserved.

Therefore, the parts of the argumentation where lumo attacks particle theories I can leave unanswered. Let's note only that a short look at the particle-based approach to field theory in arXiv:quant-ph/0303156 suggests that lumo's arguments don't hit this target either. This version introduces stochastic jumps into the theory (showing, by the way, that pilot wave theorists are not preoccupied with determinism). But I can leave the comparison to the reader.

About the "segregation" among observables

Because experiments eventually measure some well-defined quantities, the likes of Bohm think that there must exist preferred observables - and operators - that also exist classically. They are classical to start with, they think. Positions of objects are an important example.

But the quantum mechanical founding fathers have known from the very beginning that this was a misconception. All Hermitean operators acting on a Hilbert space may be identified with some real classical observables and none of them is preferred.

I think it is a misconception to interpret pilot wave theory as preferring some observables. It is not an accident that Bell even proposed another word, beables, for the configuration space variables in pilot wave theory. In particular, measurements of the beables play no special role at all, neither in the classical limit nor anywhere else in pilot wave theory. To derive the measurement theory, we don't need them (this would be circular anyway). What we need are the actual values of the beables, not some results of observations. Indeed, let's assume for simplicity that we consist of atoms, which are the beables of some simplified pilot wave theory. Then a theory about our observations does not need anything about our observations of atoms – if we "observe" them at all, then only in a quite indirect way, and most people do not observe atoms at all. Therefore, observations of atoms cannot play any role in an explanation of our everyday observations. Of course, in these explanations atoms have to play a role, at least indirectly – as constituent parts of our brain cells. But these atoms inside our brain cells are nothing we observe in everyday life. Thus, we use only the atoms themselves, not observations of atoms, in such explanations of our observations.

Thus, as observables the beables play no special role – the theory of their measurements can be derived in the same way, without danger of circularity. In particular, their measurements have to be described by self-adjoint operators or POVMs, like those of every other observable. In this sense, there are no preferred observables in pilot wave theory.

And this construction is actually very unnatural because it picks \(X\) as a preferred observable in whose basis the wave vector should be (artificially) separated into the probability densities and phases
Configurations (I prefer \(q\) instead of \(X\), because \(X\) is associated with usual space, while \(q\) is associated with configuration space) indeed play a special role. But this is the same special role they play in the Lagrange formalism as well as in Hamilton-Jacobi theory. Both are very beautiful, useful approaches. I don't remember having heard any objection that the Lagrange formalism is unnatural because it picks \(q\) as a preferred observable. Instead, the Lagrange formalism is an extremely important tool in modern physics, in quantum field theory as well as in general relativity. Moreover, this "segregation" is a very natural one: if nothing changes, the configuration remains the same, while the velocities have to be zero. I, instead, have always found the symmetry between such different things as position and momentum in the Hamilton equations (and, similarly, in the canonical approach to quantum theory) strange and unnatural (even if, because of its symmetry, beautiful).

So why does lumo not fight against the segregation in the Lagrange formalism? The segregation is the same; the poor momentum variables are degraded to the role of "derivatives". (Or maybe he does? I have not checked. Anyway, the important role of the Lagrange formalism in modern science, which is based on exactly the same "segregation", is a fact which shows that there is nothing wrong with this particular segregation.)

In order to celebrate the Martin Luther King Jr Day, I will dedicate the rest of the text to a fight against the segregation of observables. :-) So my statement is very modest – that observables can't be segregated into the "real" primitive ones and the "fictitious" contextual ones – a fact that trivially rules out all theories (such as the Bohmian ones) that are forced to do so.

... I guess that you must agree that the "philosophical democracy" between all observables is pleasing and natural.
I see no reason at all to find such a "democracy" pleasing. You can observe an honest guy telling us the truth. As well, you can observe a liar telling us lies. Both are observable. There may be even more symmetry between them. They may even make the same claims: "I have seen that he has stolen the money." That means that without a segregation among observables, without destroying the observable symmetry, we would have to give them equal status. I don't plan to follow this idea, and will always prefer a segregation between truth and lies, even if this destroys some observable symmetries.

The segregation between contextual and non-contextual observables is less important, but it is part of our everyday life as well. You can ask somebody about things he has not decided yet. He will think about them, possibly argue with you, and, maybe, give you an answer. This answer did not exist before you started to argue with him; arguing with somebody else, he could have made a different decision. (Last but not least, this is one purpose of communication – to modify our decisions if we hear good arguments for doing so.) In other words, this answer is contextual. But in a different situation, he has already decided about the question, and the answer was already part of the reality of his thoughts when you asked him. In this case, the answer is not contextual. Both answers we observe as results of complex verbal interactions, and they are, in this sense, on an equal footing. Nonetheless, a realistic theory about his thoughts has to segregate between them. Without segregation, he would have to be either almighty, able to think about and decide all imaginable questions before you ask him, or completely dependent, deciding nothing before you ask.

In all these cases, the same "formalism" is used to obtain the results – communication in human language. Thus, that the same formalism – that of self-adjoint operators or, more generally, of POVMs – is used to describe the results of interactions in quantum theory is in no way an argument against this particular segregation.

Clearly, some quantities in the real world look more classical than others. But what are the rules of the game that separates them? The Bohmists assume that everything that "smells" like \(X\) or \(P\) is classical while other things are not. ...

Clearly, they want some quantities that often behave classically in classical limits.
Clearly not. The "segregation" in pilot wave theory is between configuration and momentum variables, and it is in no way related to one of them being "more classical". In classical situations, both behave classically, and the same segregation exists in classical theory too, in the Lagrange formalism as well as in Hamilton-Jacobi theory. There is no place in pilot wave theory where one has to take care that something in the behaviour of the configuration is "classical": in the classical limit, it follows automatically, from the classical Hamilton-Jacobi equation, that everything behaves classically. For other questions this is simply irrelevant.

It is the many worlds community which is focussed on the classical limit. That's reasonable – they have a very hard job constructing something which at least sounds plausible (at least if one uses words like "contains" for a linear relation between some points in a Hilbert space, talks about the "evolution" of branches without defining any evolution law, and applies decoherence techniques without explaining how to obtain the decomposition into systems one needs to apply them).

In order to simplify their imagination, the Bohmists imagined the existence of additional classical objects – the classical positions.
Simplification has, it seems, been removed from the aims of science. Ockham's razor is out, simple theories have to be rejected. The higher the dimension, the better.

But the objects are in no way additional. They have been part of the Copenhagen interpretation: its classical part contains, in particular, all the measurement results. And Schrödinger's cat proves that a unitary wave function alone is not sufficient, that we need something else: either some non-unitary collapse, or some particular configuration as in pilot wave theory. Something – be it the collapsed wave function or some different entity – has to describe the reality we see: either the dead or the living cat. Many worlds claims something different, but introduces for this purpose the "branches" – some sort of collapsed wave functions without collapse, or configurations without a guiding equation – which are claimed to be "contained" in the wave function. (How a decomposition of some vector into a linear combination of others defines a containment relation remains unclear. A concept where a function like \(\psi(q) = 42\) "contains" all possible universes has its appropriate place in the Hitchhiker's Guide to the Galaxy, not in scientific journals.) The approach named "consistent histories" leaves us with many inconsistent histories, subdivided into families.

Theories with physical collapse need dirty and artificial non-unitary modifications of the Schrödinger equation. The branches of many worlds are, it seems, left today without any equations at all. (A very scientific approach, indeed. Time to rename it "many words".) Only pilot wave theory gives us a nice, simple, and beautiful equation for this "additional" entity. Moreover, it allows one, essentially for free, to derive the whole measurement formalism of quantum theory.

Imagination is completely irrelevant for these questions. I see, of course, no reason to object if a theory allows us to simplify our imagination too. Instead, I would count it as one additional advantage of a theory. But I recognize that this attitude is not shared by other scientists. And there are, indeed, good reasons to prefer theories which are complex and mystical. Imagine you are in the company of nice girls (or boys, whatever you prefer), and they ask you what you are doing. Isn't it much more impressive if you can tell them about curved spacetimes, large dimensions, a strange new quantum realism, or even quantum logic, many worlds, and other strange quantum things? Compare this with the poor 17th century scientist, the fighter against any form of mystery, the classical loser in every popular mystery film. The choice is quite obvious.

About history

Louis de Broglie wrote these equations for the position of one particle, David Bohm generalized them to N particles.
Not correct: the configuration space version of pilot wave theory was presented by de Broglie already at the Solvay conference. See de Broglie, L., in "Electrons et Photons: Rapports et Discussions du Cinquième Conseil de Physique", ed. J. Bordet, Gauthier-Villars, Paris, 105 (1928); English translation: G. Bacciagaluppi and A. Valentini, "Quantum Theory at the Crossroads: Reconsidering the 1927 Solvay Conference", Cambridge University Press, and arXiv:quant-ph/0609184 (2006).

I think that in analogous cases, we wouldn't be using the name of the "updater" for the final discovery.
After having read something about the history of this theory (I do not care that much about history), I use "pilot wave theory" instead of "Bohmian mechanics". But Bohm has a point too: de Broglie had abandoned his theory as not viable, being unable to develop the general measurement theory. This was done by Bohm. Therefore, if I use names, I now use the combination "de Broglie-Bohm".

Of course that I have always known that Bell constructed his inequalities because he wanted to prove exactly the opposite than what he proved at the end. He was unhappy until the end of his life. Bad luck. Nature doesn't care if some people can't abandon their prejudices.
This sounds as if lumo thinks that Bell tried to prove, with his inequalities, that quantum mechanics is wrong. This does not sound very plausible. It is quite clear that he liked Bohmian mechanics, that he saw its nonlocality as an argument against it, and that he tried to remove this argument by showing that this nonlocality is a necessary property of all hidden variable theories. About his expectations before the experiments were performed, there is the following quote: "In view of the general success of quantum mechanics, it is very hard for me to doubt the outcome of such experiments. However, I would prefer these experiments, in which the crucial concepts are very directly tested, to have been done and the results on record. Moreover, there is always the slim chance of an unexpected result, which would shake the world." (Freire, arXiv:quant-ph/0508180, p. 20)

[arguing against "I've read that the Broglie-Bohm theory makes the same predictions that the normal quantum randomness theory makes but the latter was chosen because it was conceived first.":]

Concerning the first point, people can have various theories in the first run. But once they have all possible alternative theories, they can compare them.

Second, it is not true that the probabilistic interpretation was conceived "first". Quite on the contrary. Technically, it's true that de Broglie wrote his pilot wave theory in 1927, one year after Max Born proposed the probabilistic interpretation, but the very idea that the wave connected with the particle was "real" was studied for many years that preceded it. Both de Broglie (1924) and Schrödinger (1925) explicitly believed that the wave was real which is incorrect.
Given that de Broglie gave up pilot wave theory shortly after 1927, unable to find a viable measurement theory for observables other than position, one can say that pilot wave theory appeared in a viable form only in 1952, with Bohm's measurement theory. At that time, the Copenhagen interpretation was already well established (even if the label "Copenhagen interpretation" was coined only later). So there was an advantage of historical accident for the standard interpretation.

In 1952, Bohm wrote down a very straightforward multi-particle generalization of de Broglie's equations and added a very controversial version of "measurement theory". Is it a substantial improvement you expect from 25 years of progress?
That depends on how many people have worked on it during this time. In this case, for most of these 25 years nobody worked on it. In particular, de Broglie himself had abandoned it, because he was unable to find the "very controversial" measurement theory found later by Bohm. Bohm, who was only 10 years old in 1927, had not worked in this domain for most of this time either. Thus, very few man-years were sufficient to transform a theory abandoned by its creator as not viable into a viable theory. I would call this a sufficiently efficient and substantial improvement.

The next important defender of this theory – again almost alone for a long time – was Bell. The results of his work in the foundations of quantum theory are also well known. Despite their foundational character, they have caused large experimental activity. Thus, again a quite efficient relation between man-years and results.

(Given that lumo has not understood the main point of Bohm's measurement theory, we can ignore the characterization of this theory as "very controversial".)

About decoherence and the classical limit

Moreover, the question which of them will emerge as natural quantities in a classical limit cannot be answered a priori. Which observables like to behave classically? Well, it is those whose eigenstates decohere from each other.
The role of decoherence in the classical limit is largely exaggerated; see the Hyperion discussion about this (Ballentine, "Classicality without Decoherence: A Reply to Schlosshauer", Found. Phys. 38, 916-922 (2008), DOI 10.1007/s10701-008-9242-0; Schlosshauer, "Classicality, the ensemble interpretation, and decoherence: Resolving the Hyperion dispute", Found. Phys. 38, 796-803 (2008), DOI 10.1007/s10701-008-9237-x, arXiv:quant-ph/0605249; Wiebe and Ballentine, Phys. Rev. A 72, 022109 (2005), also arXiv:quant-ph/0503170).

Essentially, you can measure every operator together with every other one, as long as the accuracy of the joint measurement stays below the bounds of the uncertainty relations. And in the classical limit \(\hbar \to 0\) they all like to behave classically.

Everything in this real world is quantum while the classical intuition can only be an approximation, and it is a good approximation only if decoherence is fast enough i.e. if the interference between the different eigenstates is eliminated. If it is so, the quantum probabilities may be imagined to be ordinary classical probabilities and Bell's inequalities are restored.

So if you want to know whether a particular quantity may be imagined to be classical, you need to know how quickly its eigenvectors decohere from each other. And the answer depends on the dynamics. Decoherence is fast if the different eigenvectors are quickly able to leave their distinct fingerprints in the environment with which they must interact.
A nice description of the decoherence paradigm. The dirty little secret of decoherence is that it depends on some decomposition of the world into systems. Such a decomposition can be found without problems if we have some classical context, as in the Copenhagen interpretation, or some well-defined configuration of the universe, as in pilot wave theory, by considering an environment of the actual state of the universe. But without such a background structure you have nothing to start these decoherence considerations with. The different systems we see around us – cats, for example – cannot be used for this purpose, at least not if we want to avoid circular reasoning (see arXiv:0901.3262). The Hamilton operator, taken alone, is not enough to derive a decoherence-preferred basis uniquely.
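To illustrate the quoted mechanism (and where the decomposition into systems enters), here is a minimal numerical sketch of my own; the dimensions, amplitudes, and random environment states are arbitrary assumptions:

```python
import numpy as np

# Sketch of decoherence through environmental "fingerprints": the off-diagonal
# element of the system's reduced density matrix is suppressed by the overlap
# <e1|e2> of the environment states. Note that the very first step, arranging
# the total state as a (system x environment) matrix, already presupposes a
# decomposition of the world into systems.

dim_env = 50
rng = np.random.default_rng(1)

def random_env_state():
    v = rng.normal(size=dim_env) + 1j * rng.normal(size=dim_env)
    return v / np.linalg.norm(v)

alpha1, alpha2 = 0.6, 0.8                        # branch amplitudes
e1, e2 = random_env_state(), random_env_state()  # nearly orthogonal for large dim_env

# Total state Psi = alpha1 |0>|e1> + alpha2 |1>|e2> as a (2, dim_env) matrix.
Psi = np.vstack([alpha1 * e1, alpha2 * e2])

# Reduced density matrix of the system: partial trace over the environment.
rho_sys = Psi @ Psi.conj().T

print("diagonal:", np.real(np.diag(rho_sys)))    # ~ (|alpha1|^2, |alpha2|^2)
print("|off-diagonal|:", abs(rho_sys[0, 1]))     # ~ |alpha1 * alpha2 * <e2|e1>|
print("|<e1|e2>|:", abs(np.vdot(e1, e2)))        # small, shrinking with dim_env
```

With a different choice of the system-environment split (a different factorization of the same Hilbert space), the same total state generally shows large off-diagonal terms again; the Hamiltonian alone does not select the split.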

Mechanistic models of state-of-the-art quantum theories are not available: it is partly because it's not really possible and it's not natural but it is also partly because the champions of Bohmian mechanics are simply not good enough physicists to be able to study state-of-the-art quantum theories. They're typically people with philosophical preconceptions who simply believe that the world has to respect their rules of "realism" or even "determinism".
I have a quite nice "mechanistic model" for the standard model of particle physics, one which essentially allows one to compute the SM gauge group (as a maximal group which fulfills a few simple "mechanistic" axioms). How many more years (and how many more man-years) does string theory need to reach something comparable?

The idea of "philosophical preconceptions" is quite funny. My concept is quite pragmatic: if there is a simple way to do things, use it. Simplicity is a good thing, independent of the age or the popularity of the particular concept. About determinism I do not care even today; in particular, I have certain sympathies for Nelsonian stochastics. And I have also looked at non-realistic interpretations of quantum theory, like the concept I prefer to name "inconsistent histories". But I think there should be really good evidence to justify the rejection of such simple, general, fundamental, and beautiful principles as realism. And pilot wave theory would be preferable even without realism, simply for the beauty of the guiding equation.

Last but not least, some funny but unimportant polemics

The attempts to return physics to the 17th century deterministic picture of the Universe are archaic traces of bigotry of some people who will simply never be persuaded by any overwhelming evidence – both of experimental and theoretical character – if the evidence contradicts their predetermined beliefs how the world should work.
Well formulated. I like such polemics. Especially replacing the standard "19th century" of such flames by "17th century" is nice. But there is room for improvement. In philosophy of science, I follow Popper, who liked to trace the origin of some of his ideas back to Ancient Greece. I also prefer the economic system based on the ideas of Adam Smith to the much more modern ones developed by Lenin and Mao, so one can identify this sympathy for old ideas as deeply rooted in my personality. Indeed, I think there is nothing wrong with old ideas.

To describe pilot wavers as "predetermined" sounds really nice, but is, unfortunately, wrong. There are, of course, people who follow predetermined ideas. But these are the ideas they learned in their youth. Where are the proponents of pilot wave ideas supposed to have learned them? What I was taught was quantum theory and Marxism-Leninism, not pilot wave theory and Adam Smith. And I remember, in particular, some uncritical fascination when learning von Neumann's proof of the impossibility of a classical picture. I had neither a prejudice for 17th century determinism nor any of the "bourgeois prejudices" the communists liked to argue against.

It was not predetermination, but the power of arguments (in particular, of Bell's "Speakable and Unspeakable in Quantum Mechanics") which persuaded me to switch to pilot wave theory. And an important part of this argumentative power was the simple proof of equivalence between pilot wave theory and quantum theory. There simply is no experimental evidence against pilot wave theory.

And, indeed, the "experimental evidence" presented by lumo was (in his polarizer argument, and similar ones about spins) based on the common error of not taking into account the measurement device, or (in his quantization argument) not applicable to de Broglie's version of pilot wave theory. About the theoretical evidence, judge for yourself.

But the very fact that the Bohmists actually don't work on the cutting-edge physics of spins, fields, quarks, renormalization, dualities, and strings is enough to lead us to a very different conclusion: they're just playing with fundamentally wrong toy models and by keeping their focus on the 1-particle spinless case, they want to hide the fact that their obsolete theory contradicts pretty much everything we know about the real world.
It is always fun to compare the "very facts" of such claims with reality. The one-particle spinless case has never been the focus of my interest, except where it appears sufficient to show some serious problems of other interpretations (arXiv:0901.3262, arXiv:0903.4657). The results of my work with spins, fields, and quarks I have already mentioned. And even renormalization is on my todo list, even if some other problems have, for now, a higher priority for me.

I'm not sure that naming strings and dualities "cutting-edge physics" is justified. This is clearly a domain of research I leave to lumo – it may have value as a nice exercise in mathematics, which is an important part of human culture, even if it has nothing to do with physics. Of course, one never knows – results of pure mathematicians who were proud of doing things which would never find an application are applied today in cryptography. It would be a really nice joke if some result found by lumo found a physical application in some hidden variable ether theory ;-).
