CLOUD: First, some off-topic news. The CLOUD experiment at CERN published some new results. Previously, they showed that sulphuric acid alone doesn't seem to be enough to produce more clouds. Now, if the acid is combined with oxidized organic vapours from plants etc., low-lying clouds may be created and cool the planet. Cosmic rays may still modulate the cloud cover and matter a lot, but only if the concentration of the two previous ingredients is low enough for the cosmic rays to be helpful.
Loopholes like that are probably not too interesting.
Steven Weinberg posted a playful quant-ph preprint two days ago:
Quantum Mechanics Without State Vectors
The title is a bit misleading. What Weinberg mostly tries to do is to study possible symmetry transformations of density matrices that don't descend from transformations of pure states. This blog post has two comparable parts. The first one is dedicated to Weinberg's negative, knee-jerk reactions to the foundations of quantum mechanics. The second part is dedicated to his hypothetical non-pure-state-based density matrices.
Anti-quantum noise as an introduction
The beginning of the paper reflects Weinberg's personal dissatisfaction with quantum mechanics.
Two unsatisfactory features of quantum mechanics have bothered physicists for decades.
However, there are no (i.e. zero) unsatisfactory features of quantum mechanics, so what has bothered these physicists was their own inadequacy and stubbornness, not unsatisfactory features of quantum mechanics.
The first is the difficulty of dealing with measurement.
There is no difficulty of dealing with measurement. On the contrary, measurement – or observation or perception (these two words may ignite different emotions in the reader's mind but the physical content is equivalent) – is the process whose outcomes quantum mechanics is predicting, and it is doing so probabilistically.
Weinberg continues:
The unitary deterministic evolution of the state vector in quantum mechanics cannot convert a definite initial state vector to an ensemble of eigenvectors of the measured quantity with various probabilities.
It cannot and indeed, it doesn't and shouldn't. The unitary deterministic evolution is just a time-dependence of the state vector or the density matrix or the operators whose purpose is to calculate the probabilities of measurements. And one must know what the questions are before he demands that a quantum mechanical theory calculates the answers for him.
Later, the paper discusses this point in detail and I am sure that as soon as Weinberg gets sober, he will agree with me. By the ensemble, he really means a particular decomposition of a density matrix as\[
\rho = \sum_i p_i \ket{\psi_i}\bra{\psi_i}
\] into state vectors and probabilities. If we allow the \(\ket{\psi_i}\) not to be orthogonal to one another, \(\rho\) may be expanded in infinitely many ways according to this template. But it's obvious that a particular choice to decompose the density matrix is unphysical. All predictions about the physical system are encoded in the density matrix itself. Weinberg himself admits this trivial fact – if the choice of the decomposition were real, its change could be used to send information superluminally.
Weinberg sort of suggests that he discovered that only the density matrix matters (while the particular decomposition does not). Sorry to say but at least in Prague (and I guess that almost everywhere else), those things were taught as the elementary stuff to the undergraduates and it was correctly claimed that these basic constructions and interpretations of the density matrix were pioneered by the fathers of the density matrix – Felix Bloch, Lev Landau, and John von Neumann. It's very clear why only the density matrix and not some decomposition matters: everything that quantum mechanics may predict are the probabilities and all of them may be calculated via formulae that only depend on \(\rho\) and not some "finer information" about \(\rho\) such as a decomposition.
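The non-uniqueness of the decomposition is easy to check numerically. In this short sketch (the particular states are my arbitrary illustrative choices), the maximally mixed qubit state is written both as an equal mixture of \(\ket 0,\ket 1\) and as an equal mixture of \(\ket +,\ket -\) – and the resulting density matrix, and hence every prediction, is identical:

```python
import numpy as np

# Decomposition 1: equal mixture of |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
rho1 = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket1, ket1)

# Decomposition 2: equal mixture of |+> and |-> (a completely different ensemble)
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
rho2 = 0.5 * np.outer(plus, plus) + 0.5 * np.outer(minus, minus)

# Identical density matrices, hence identical predictions for every measurement
assert np.allclose(rho1, rho2)
```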
See my blog post on the density matrix and its classical counterpart.
The same comments apply to the state vector. If a physical system is described by the state vector, all predictions – the probabilities – may be calculated from the state vector itself, so e.g. a particular decomposition of the form\[
\ket\psi = \sum_i c_i \ket{\phi_i}
\] can't possibly be physical. The decomposition of a pure state to the eigenstates of a particular observable is more directly useful for those who are just planning to measure this observable. But that's it. A decomposition may be more useful than another one; but it cannot be "more right".
Now, the density matrix is a more general object than the state vector. The state vector describes the state of a physical system of which we have the "maximum knowledge" allowed by the laws of quantum physics. It typically means that we have measured the eigenvalues of a complete set of commuting observables. Even with this "maximum knowledge", i.e. a state vector, almost all predictions are inevitably just probabilistic. This directly follows from the uncertainty principle. In the case of the maximum knowledge, all the predictions may still be calculated from the density matrix\[
\rho = \ket \psi \bra \psi
\] using the same formulae we are using for the most general density matrix. None of these claims is new in any way. All of them were fully understood in the late 1920s, undergraduate students of quantum mechanics should understand them in the first semester, and Weinberg or other contemporary physicists shouldn't try to take credit for these things.
Here we seem to be faced with nothing but bad choices. The Copenhagen interpretation assumes a mysterious division between the microscopic world governed by quantum mechanics and a macroscopic world of apparatus and observers that obeys classical physics.
Like always in science, we are facing both good choices and bad choices. Quantum mechanics – as explained by the "Copenhagen interpretation" – precisely formulates what the laws of physics may do and may not do, what is physical and what is unphysical. Only results of measurements are real facts and the laws of physics may calculate the (conditional) probabilities that the observations of a certain property or quantity will be something or something else if the results of previous measurements were something or something else.
This basic paradigm – that only observations are meaningful and only probabilities may be predicted – isn't open to "interpretations". These conceptual assumptions are the postulates of quantum mechanics – the physical theory, not a direction in philosophers' or artists' babbling – in the same sense as the equivalence of different inertial systems is a postulate of the special theory of relativity. And indeed, every good "modern" way of talking about quantum mechanics – consistent histories, quantum Bayesianism, and perhaps others – agrees with these basic pillars of modern physics. Every wrong way of talking about quantum physics – de Broglie-Bohm theories, many worlds, and Ghirardi-Rimini-Weber or Penrose-Hameroff kindergarten-like-real collapses, among others – tries to deny that physics has irreversibly switched from the classical foundations to new, quantum foundations.
Quantum mechanics – and by that, I mean what the heroic Copenhagen folks discovered and what is studied by those who respect the basic foundations of quantum mechanics, not the wrong, ideologically motivated pseudoscientific delusions about what quantum mechanics "should be" – doesn't introduce any mysterious division between the microscopic world and the macroscopic world. Instead, all objects in the world, whether they are microscopic or macroscopic, obey the laws of quantum mechanics. This fact has been known since the 1920s, too. In fact, people have developed the quantum theory of crystals, conductors, gases (including Fermi-Dirac and Bose-Einstein statistics), paramagnets, diamagnets, ferromagnets, and other macroscopic materials and objects in the late 1920s and early 1930s. It is absolutely ludicrous to suggest that quantum mechanics has any problem with macroscopic objects.
What is true is that for large enough objects, classical physics works as well in the sense that it is a qualitatively good approximation of quantum mechanics. In the limit \(\hbar\to 0\), quantum mechanics may generally be approximated by a classical theory. One must still realize that quantum mechanics is always right and always exact while classical physics is only sometimes right and it is only approximately right: it is a limit of quantum mechanics. This limiting procedure has many aspects and implications. The existence of the classical limit is needed for nearly classical observers like us to be able to talk about the predicted observations and their probabilities using the same language that is used in classical physics (a fact – something has happened or not – means exactly the same thing in quantum physics as it does in classical physics; probabilities mean the same thing in quantum mechanics and classical physics, too; only the quantum mechanical rules that allow us to deduce that something will probably occur out of the knowledge of something else that has occurred in the past are different than they are in classical physics).
Decoherence is a calculable process – caused by sufficiently strong interactions of the object of interest with sufficiently many degrees of freedom in the environment – by which the information about the relative phases encoded in the state vector or, more generally, about the off-diagonal elements of the density matrix in a basis (one that ultimately agrees with – and defines – the common-sense decomposition) is rapidly being lost. That process implies that the probabilities encoded in the density matrix may be approximated by classical ones because the potential for characteristic quantum phenomena in the future – in particular, re-interference of parts of the state vector – has been practically lost.
But decoherence doesn't weaken quantum mechanics in any way; it is not an addition to quantum mechanics that has to be made to fix a "bug" in quantum mechanics. Quantum mechanics has no bugs. Decoherence is a consequence of the laws of quantum mechanics that justifies the classical reasoning as an approximation in certain situations. You may still insist on the exact quantum interpretation of all the probabilities, however! None of these insights weakens the fact that quantum mechanics is a perfectly and exactly valid theory of the microscopic world as well as the macroscopic world. Niels Bohr and Werner Heisenberg not only knew it but they helped to lay the foundations of the actual modern quantum mechanical theories of many macroscopic objects and phenomena. It is a disrespectful untruth for someone to suggest that something fundamental was missing in the Copenhagen school's description of macroscopic objects. This untruth has almost become the consensus of popular books on quantum mechanics which doesn't make it any less outrageous. Such attacks on the universal validity of quantum mechanics are as outrageous as the attacks against heliocentrism voiced 90 years after the Galileo trial.
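To make the suppression of the off-diagonal elements tangible, here is a toy sketch (the exponential damping factor is a standard dephasing model; the rate is my illustrative choice, not a derivation): the diagonal, classical-like probabilities survive untouched while the quantum interference terms die away.

```python
import numpy as np

# Pure superposition (|0>+|1>)/sqrt(2): maximal off-diagonal coherences
psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj())

def dephase(rho, gamma_t):
    """Suppress off-diagonal elements by exp(-gamma*t): a toy model of decoherence."""
    out = rho.copy()
    out[0, 1] *= np.exp(-gamma_t)
    out[1, 0] *= np.exp(-gamma_t)
    return out

rho_late = dephase(rho, 10.0)

# Diagonal (classical) probabilities are untouched; interference terms are gone
assert np.allclose(np.diag(rho_late), [0.5, 0.5])
assert abs(rho_late[0, 1]) < 1e-4
```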
If instead we take the wave function or state vector seriously as a description of reality, and suppose that it evolves unitarily according to the deterministic time-dependent Schrödinger equation, we are inevitably led to a many-worlds interpretation [2], in which all possible results of any measurement are realized.
There exists no sequence of logical arguments with reasonable assumptions that would imply that "all possible results are realized". And there doesn't even exist any "theory" that would describe at least basic features of the world around us in agreement with the paradigm that "all possible results are realized". Every time someone "concludes" that all results are realized, the conclusion follows either from sloppy and circular thinking, cheating, a brain defect, or a combination of these three reasons. Weinberg derives many things as a real perfectionist so it's due to a non-Weinbergian trait when he suddenly claims that something clearly invalid and indefensible may be derived from something else (which is also wrong, but in a different way) – that we are "inevitably led" somewhere. We are surely not. If there were a glimpse of an argument that makes sense, Weinberg would show it instead of screaming words like "inevitably".
These comments about the non-existence of the "logical derivation" are not too important because the assumption, the claim that "the state vector describes the objective reality", is demonstrably wrong.
To avoid both the absurd dualism of the Copenhagen interpretation and the endless creation of inconceivably many branches of history of the many-worlds approach, some physicists adopt an instrumentalist position, giving up on any realistic interpretation of the wave function, and regarding it as only a source of predictions of probabilities, as in the decoherent histories approach [3].
The "instrumentalist" position is exactly what the founders of orthodox quantum mechanics (Copenhagen school: Bohr, Heisenberg, Jordan, Born, Pauli, Dirac, and so on) would defend. They may have called it a "philosophy", and a "positivist" one, not an "instrumentalist" one, and the wordings and sociology may have changed but the physical content is exactly the same. The content is also the same as the content of the slogan "shut up and calculate". You just shouldn't insist on talking about things that cannot be measured. You may talk about them but it is perfectly fine for a theory to declare some or all unmeasurable questions to be physically meaningless and a person criticizing a theory for its not talking about unmeasurable things is simply not acting as a scientist!
In my opinion, this is the most correct interpretation of the positivist/instrumentalist/shut-up-and-calculate attitude and the general philosophy of this attitude was really uncovered by Einstein's thoughts about relativity, too. At least Heisenberg would always credit Einstein with bringing this general positivist philosophy to physics. Einstein realized that a theory doesn't have to define the objective meaning of the "simultaneity of two events" because there exists no objective instrumental test of whether or not two distant events occurred simultaneously. The only difference between relativity and quantum mechanics is that relativity only declared a small number of things "subjective" (well, observer-dependent) and people got used to it while many things remained Lorentz-invariant. Quantum mechanics makes every and any knowledge fundamentally subjective but the logic of why it's fine is qualitatively the same as it is in the case of the simultaneity of events in relativity! The fundamental universal reason why it's fine for a theory to declare things subjective, i.e. observer-dependent, is that an observer is needed to make observations (sure!). So in general, every part of the observation may depend on the observer. There are things that the observers will ultimately agree upon (Will a fast broom be caught in a barn in relativity? Has the Schrödinger's Cat started a nuclear war a minute ago?) but the agreement may be a nontrivial derived fact while the intermediate steps in the derivations may be different for different observers. The agreement between different observers doesn't have to be and isn't due to the fundamentally and exactly objective character of almost everything in the world!
The other problem with quantum mechanics arises from entanglement [4]. In an entangled state in ordinary quantum mechanics an intervention in the state vector affecting one part of a system can instantaneously affect the state vector describing a distant isolated part of the system.
Entanglement isn't a problem. Entanglement is the generic as well as the most general quantum description of correlation(s) between two subsystems. Almost all states are entangled, i.e. they refuse to be tensor-factorized into independent states of the subsystems. Quantum mechanics would be reduced to "nothing" or would lose its "quantumness" if entanglement were "forbidden". The predictive interpretation of the entanglement is exactly the same as the predictive interpretation of correlations in classical physics. But quantum mechanics and its entanglement may actually imply predictions that can't follow from any classical model – like simultaneously guaranteed correlations in many pairs of quantities, high correlations violating Bell's inequalities, and other things. But that's simply because it's a different theory. The class of classical theories may have looked large to many people but it's too small and constraining for the correct theories of Nature and the correct theories of Nature are quantum mechanical and refuse to belong to the classical class!
It is true that in ordinary quantum mechanics no measurement in one subsystem can reveal what measurement was done in a different isolated subsystem, but the susceptibility of the state vector to instantaneous change from a distance casts doubts on its physical significance.
It doesn't just cast doubts. It proves that the state vector – or the density matrix – can't be viewed as an objective feature of reality. Indeed, the wave function may rapidly change and if it were a piece of the objective reality, this "collapse" would be in conflict with relativity. But the actual, correctly interpreted quantum mechanics has no problem with relativity. It may be compatible with relativity and indeed, quantum field theory and string theory are guaranteed to be compatible with relativity.
The state vectors or density matrices are data summarizing the subjective knowledge about the physical system. And the "collapse" is nothing else than the subjective process – taking place in the brain – that allows us to replace the original complicated probability amplitudes encoding distributions of all quantities by the conditional probability distributions in which the already known outcomes of measurements are taken into account as facts.
I could go on for a while. Although Weinberg avoids writing "clearly and atrociously wrong" claims about the foundations of quantum mechanics that others like to produce, I don't really feel comfortable with a single sentence he is writing about the right interpretation of quantum mechanics, its applicability, or its history.
Generalizing transformations of density matrices
But these provocative comments about the foundations of quantum mechanics are not supposed to be the key content of the preprint, I guess. Instead, Weinberg wants to generalize symmetry transformations that may apply to density matrices.
The basic "modest proposal" is that the density matrix is more fundamental than a state vector. Well, I agree with that, kind of. The pure density matrix\[
\rho = \ket\psi \bra\psi
\] is a special case of the density matrix corresponding to the "maximum knowledge". Note that the overall phase of the state vector \(\ket\psi\) does not affect the predictions because the phases cancel in \(\rho\) and all predicted probabilities may be calculated using this \(\rho\). While the state vector (pure state) may be viewed as a special example of a density matrix, it is sort of sufficient, too, because the most general density matrix may be written as a mixture (a real linear combination) of projectors \(\ket{\psi_i}\bra{\psi_i}\) onto pure states, with some probabilities as the coefficients (see the formula at the top and use it to diagonalize the density matrix; the vectors on the right hand side will be orthogonal to each other in this case). The probabilities predicted from a density matrix are therefore weighted averages of the probabilities predicted from pure states – the weighted averaging is no different than in the corresponding classical calculation. For this reason, the "quantum essence" of the predictions is hiding in the pure states and the density matrices may be – but don't have to be – considered an "engineering addition" added on top of the calculus of pure states. The probabilistic ignorance from both sources (the unavoidable uncertainty hiding already in the pure states; and the probabilistic, classical-like mixing from the density matrices) gets mixed up and both types of ignorance may be treated together using the natural and simple formalism of density matrices which is why it's totally OK to think that the equations involving density matrices are "fundamental".
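The orthogonal decomposition mentioned in the parenthesis is just a diagonalization. A short numpy sketch (with an arbitrarily chosen mixed state; its eigenvalues come out as \(0.8\) and \(0.2\)):

```python
import numpy as np

# A generic mixed 2x2 density matrix (Hermitian, unit trace, positive)
rho = np.array([[0.7, 0.2 - 0.1j],
                [0.2 + 0.1j, 0.3]])

# Diagonalize: eigenvalues are probabilities, eigenvectors are orthogonal pure states
p, V = np.linalg.eigh(rho)

# Rebuild rho = sum_i p_i |psi_i><psi_i| -- the orthogonal decomposition from the text
rho_rebuilt = sum(p[i] * np.outer(V[:, i], V[:, i].conj()) for i in range(2))

assert np.allclose(rho, rho_rebuilt)
assert np.all(p >= 0) and np.isclose(p.sum(), 1.0)
```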
What does it mean to have a symmetry in quantum mechanics? We mean a linear operator \(U\) that acts as\[
\ket\psi \to U \ket\psi
\] Simple. The corresponding bra-vector \(\bra \psi\) is just the Hermitian conjugate so it transforms accordingly:\[
\bra\psi\to \bra\psi U^\dagger.
\] Because the density matrix is a combination of \(\ket\psi\bra\psi\) objects, it transforms as\[
\rho \to U \rho U^\dagger.
\] Great. If the density matrix as well as operators \(L\) transform by this conjugation and if \(UU^\dagger=U^\dagger U = 1\) i.e. if the transformation is unitary (linear and preserving probabilities), then the expectation values\[
{\rm Tr}(\rho L_1 L_2\dots )
are conserved because the factors \(U^\dagger U\) cancel everywhere, including the beginning and the end (due to the cyclic property of the trace). All predicted probabilities may be written in this form as well, with \(L_i=P_i\) chosen as some projection operators on the Yes subspace of the Yes/No questions, so the predicted probabilities are invariant under the symmetry transformations, too.
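One may verify the cancellation numerically; in the following sketch, the unitary and the operators are random illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    """A random unitary via QR decomposition of a random complex matrix."""
    Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return Q

def random_hermitian(n):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return A + A.conj().T

n = 3
U = random_unitary(n)

# Build a valid density matrix: positive semidefinite with unit trace
M = random_hermitian(n)
rho = M @ M.conj().T
rho /= np.trace(rho)

L1, L2 = random_hermitian(n), random_hermitian(n)

# Tr(rho L1 L2) is unchanged when rho, L1, L2 are all conjugated by U
before = np.trace(rho @ L1 @ L2)
after = np.trace((U @ rho @ U.conj().T)
                 @ (U @ L1 @ U.conj().T)
                 @ (U @ L2 @ U.conj().T))
assert np.isclose(before, after)
```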
(Consistent histories work with a mild generalization of this formula for probabilities in which we consider probabilities of whole histories i.e. traces \[
{\rm Tr}(P_n\dots P_2 P_1\rho P_1 P_2 \dots P_n)
\] of products of the density matrix and a multiplicative sequence of several projection operators encoding different properties at different times. For the different histories to be mutually exclusive in the classical sense, we demand a sort of "orthogonality" consistency conditions for these pairs of histories. The consistent histories aren't really quite new; they're just the normal Copenhagen formalism adapted to composite questions about several properties of the system at different times.)
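A minimal numerical instance of the history formula (a single qubit with trivial evolution between the two measurements; the projectors are my illustrative choices, and the resulting probability is \(1/4\)):

```python
import numpy as np

# Projectors onto |0> and |+> (an illustrative history: "first |0>, then |+>")
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
P1 = np.outer(ket0, ket0)   # property measured at time t1
P2 = np.outer(plus, plus)   # property measured at time t2 (trivial evolution assumed)

rho = 0.5 * np.eye(2)       # maximally mixed initial state

# Probability of the history via Tr(P2 P1 rho P1 P2)
p_history = np.trace(P2 @ P1 @ rho @ P1 @ P2).real
assert np.isclose(p_history, 0.25)
```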
The main technical question that Weinberg is addressing in the paper is whether there may be transformations of the density matrix\[
\rho \to g(\rho)
\] that are not descendants of transformations of the pure state vectors\[
\ket\psi \to g(\ket\psi ).
\] If we assume that the transformations descend from linear transformations of the pure states, they are pretty much elements of the \(U(N)\) group acting on the \(N\)-dimensional Hilbert space. But the density matrix has \(N^2\) different real parameters (if we allow the trace to be anything) – it would be \(2N^2\) if the entries were complex but the Hermiticity reduces the number of parameters exactly to one-half. And Weinberg and others could think about transformations that may mix these \(N^2\) entries in more general ways than ways descended from the pure state vector transformations i.e. different from \(\rho\to U\rho U^\dagger\). In other words, he wants to act on the matrix entries of the density matrix directly, i.e. via \(U(N^2)\) transformations of a sort (or some useful subgroup that acts on the entries differently than the action descended from the \(U(N)\) transformations of the pure states).
This is a potentially interesting business of looking for loopholes and exceptional structures.
For the time evolution (i.e. the transformation by time-translations), there is a well-known generalization of the "normal" transformation given by the Lindblad equation\[
\eq{
\dot\rho&=-{i\over\hbar}[H,\rho]+\\
&+\sum_{n,m = 1}^{N^2-1} h_{n,m}\left(L_n\rho L_m^\dagger-\frac{1}{2}\left(\rho L_m^\dagger L_n + L_m^\dagger L_n\rho\right)\right).
}
\] The commutator term describes the "normal continuous differential evolution" of the density matrix that is derived from the Schrödinger's evolution of the pure state vector. The terms involving the mutually orthogonal operators \(L_m\) are new. Such a form of the time evolution may be obtained for open systems, i.e. from tracing over some environmental degrees of freedom. When you do it in this way, the evolution is naturally irreversible, time-reversal-asymmetric, and that's why Weinberg talks about the semi-group structures (a semi-group is almost like a group but the inverse element isn't required; for example, the group of renormalization group "flows" is really a semi-group because the "integrating out of the degrees of freedom" is irreversible).
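The trace-preservation of the Lindblad flow is easy to check numerically. The following toy sketch (a single qubit with one jump operator describing amplitude damping, a crude Euler integrator, and my illustrative parameters; \(\hbar=1\)) verifies that \({\rm Tr}\,\rho=1\) along the whole evolution:

```python
import numpy as np

# One-qubit toy model: H = sigma_z/2, a single jump operator L = sigma_minus
H = np.diag([0.5, -0.5])
L = np.array([[0.0, 1.0], [0.0, 0.0]])   # sigma_minus: maps |1> to |0>
gamma = 0.3                               # illustrative damping rate

def lindblad_rhs(rho):
    """Right-hand side of the Lindblad equation with one jump operator (hbar = 1)."""
    comm = -1j * (H @ rho - rho @ H)
    LdL = L.conj().T @ L
    diss = gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return comm + diss

psi = np.array([1.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).astype(complex)

# Crude Euler integration; the trace stays 1 at every step (the flow is trace-preserving)
dt = 0.01
for _ in range(500):
    rho = rho + dt * lindblad_rhs(rho)
    assert np.isclose(np.trace(rho).real, 1.0)
```

Note that the evolution is irreversible, which matches the semi-group structure mentioned above: the excited-state population decays and never returns.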
At a fundamental level, one expects the "normal transformation" of the density matrix to be the only physically kosher one and there are partial theorems that Weinberg acknowledges. But he is looking for loopholes. The Lindblad equation is one such loophole.
But he wants to focus on more exotic, exceptional, cheeky ways to access the individual matrix entries of the density matrix. His first provocative example is\[
\rho = \pmatrix {a_1 & b_3& b^*_2\\
b^*_3 & a_2 & b_1\\
b_2 & b^*_1 & a_3 }
\] where \((b_1,b_2,b_3)\) are supposed to transform as a complex triplet under an \(SU(3)\) group while \(a_1,a_2,a_3\) are three real singlets, not transforming at all. Note that these \(SU(3)\) transformations are preserving the trace (well, even the individual diagonal entries) and the Hermiticity of the density matrix.
This \(SU(3)\) action on the density matrix is different from the descendant of the usual \(U(3)\) action on the pure states in the Hilbert space. Why? Well, under the \(SU(3)\) subgroup of the latter, the density matrix transforms as the adjoint i.e.\[
{\bf 3}\otimes \overline{\bf 3} = {\bf 8} \oplus {\bf 1}.
\] On the other hand, Weinberg's proposed mutated density matrix transforms as another 9-dimensional representation, namely\[
{\bf 3}\oplus \overline{\bf 3} \oplus {\bf 1} \oplus {\bf 1} \oplus {\bf 1}.
\] There's no adjoint representation here at all. OK, in the group-theoretical terminology that Weinberg seems to avoid for unknown reasons, we may ask with him: Are there some interesting physical models in which the density matrix transforms differently than in the adjoint representation of the \(U(N)\) symmetry acting on the Hilbert space (into which all the symmetry transformations are normally embedded)?
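One may check Weinberg's ansatz numerically. In this sketch (the entries and the unitary \(W\) acting on the triplet are my arbitrary choices), the mutated transformation preserves Hermiticity and even the individual diagonal entries, but it clearly differs from the adjoint action:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutated_rho(a, b):
    """Weinberg's ansatz: diagonal singlets a_i, off-diagonal triplet (b1, b2, b3)."""
    a1, a2, a3 = a
    b1, b2, b3 = b
    return np.array([[a1, b3, b2.conj()],
                     [b3.conj(), a2, b1],
                     [b2, b1.conj(), a3]])

a = np.array([0.5, 0.3, 0.2])                              # real singlets
b = np.array([0.05 + 0.02j, 0.01 - 0.03j, 0.04 + 0.01j])  # complex triplet

# A unitary acting on the triplet (b1, b2, b3) only -- the "mutated" transformation
W, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

rho_before = mutated_rho(a, b)
rho_after = mutated_rho(a, W @ b)

# Hermiticity and the trace (even each diagonal entry) are preserved...
assert np.allclose(rho_after, rho_after.conj().T)
assert np.isclose(np.trace(rho_after).real, np.trace(rho_before).real)

# ...but this is not the adjoint action rho -> U rho U† for U = W
assert not np.allclose(rho_after, W @ rho_before @ W.conj().T)
```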
There are tons of reasons why the usual "adjoint representation option" is the only physically meaningful one, and Weinberg describes several of them. For example, the final portion of the preprint is dedicated to positivity – to the requirement that the general mutated transformations of the density matrix must preserve the non-negativity of its eigenvalues. Also, it's obvious that the "adjoint representation option" is the only possible one if we allow all operators, including all projection operators on arbitrary pure states, and we require the traces \({\rm Tr}(\rho L)\) to be conserved by the symmetry transformations.
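A textbook example of how positivity kills seemingly innocent entry-wise transformations is the partial transpose (my illustration, not Weinberg's): it is linear in the entries of \(\rho\), Hermiticity- and trace-preserving, and not of the form \(U\rho U^\dagger\), yet it produces a negative eigenvalue when applied to a Bell state:

```python
import numpy as np

# Bell state |Phi+> = (|00> + |11>)/sqrt(2) on two qubits
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)
rho = np.outer(phi, phi)

# Partial transpose on the second qubit: a linear, Hermiticity- and trace-preserving
# map on the entries of rho that is NOT of the form U rho U†
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

assert np.isclose(np.trace(rho_pt), 1.0)
assert np.allclose(rho_pt, rho_pt.conj().T)

# ...and it destroys positivity: one eigenvalue equals -1/2
eigs = np.linalg.eigvalsh(rho_pt)
assert np.isclose(eigs.min(), -0.5)
```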
But it seems to me that Weinberg doesn't articulate the most obvious reason why we want to insist on the "adjoint representation option": the trace \({\rm Tr}(\rho L)\) is contracting the operator \(L\) with something else, so this something else should mathematically transform as an operator, too. Otherwise there are no nontrivial singlets in the bilinear product.
The density matrix isn't really an observable – the probabilities can't be measured by a single measurement – but in some sense, it is the "operator of probabilities" (the eigenvalues are probabilities of the corresponding eigenstates, if we choose this basis) and it must transform in the same way as operators. Observables must transform in the adjoint because they are operators, stupid. We're supposed to know how to multiply them, in an associative way, so they're really matrices or generalized, infinite-dimensional matrices of a sort. Observables have transformation properties "derived" from the pure states because they may be defined as something that depends on the pure states. Another interesting issue is whether the "algebra of observables" has a preferred representation. In the simple models like non-relativistic quantum mechanics and quantum field theory (and therefore string theory in well-known backgrounds which admit a quantum-field-theory-based description), we're used to the "Yes" answer, at least morally. In the quantum mechanics generalized in the way I still count as quantum mechanics, the answer is demonstrably always "Yes"; the space of pure states is "canonical". Note that this space is large and unifies the spaces with all eigenvalues of whatever you could consider "Casimirs of an algebra". If we talk about quantum fields etc., we are considering all operators and their products, not just a limited set of symmetry generators.
One may weaken the requirement that all these things are in the adjoint representation in some way and try to look for exceptional solutions (loopholes) to these weakened requirements, and that's what Weinberg is doing. But at the end, or at the beginning, he should have asked what are the broader rules of the game or the motivation behind this whole business. Of course that if one weakens some postulates sufficiently, i.e. doesn't require the operators and/or density matrix to transform in the adjoint, there will generically be new solutions to the weakened constraints. But we must ask:
Are these new solutions physically relevant for our world or worlds that enjoy at least some qualitative kinship to our world (e.g. some highly exotic string vacua)?
Are these new solutions mathematically interesting so that these new non-adjoint exceptions are exciting to be studied for mathematical reasons?
I would say that if the answers to both questions were "No", then the research of these exceptions would be pretty much worthless. If it is not worthless, which of these two questions is answered by a "Yes"? Maybe both questions are answered by a "Yes"? That would be thrilling, indeed. The answer to the first question is more likely to be "No" and the "Yes" answer would be shocking but there may be something new waiting over there.
If the answer to the second question is "Yes", then these new solutions could be analogous to the exceptional Lie groups. Someone could think that \(SU(N),SO(N),USp(2N)\) are the only compact simple Lie groups. But there actually exist the exceptional groups \(E_6,E_7,E_8,F_4,G_2\), too. We could have overlooked them but we may find them if we're careful, too. Similarly, all the transformations on the space of density matrices are typically embedded into \(U(N)\) by assuming that the density matrix transforms in the adjoint representation. But it could perhaps transform as another representation of the group, perhaps a different group than \(U(N)\).
If such an interesting exception exists, the formulation of the "theory" using these mutated density matrices must forbid pure states. The "theory" would only work in terms of the density matrices. Is that possible? One thing to notice is that it must be impossible to fully identify a pure state by measuring a complete set of commuting observables. Pure states just shouldn't be allowed – otherwise the "theory" would have to tell us how the pure states transform as well, and the density matrices' transformation laws would have to be derived from that.
What does it mean that pure states aren't allowed in the theory? It means that there is no "classical-like knowledge" in the theory. Creatures living in that theory can't ever be 100% certain about pretty much anything. Their freedom to measure the observables (general operators/matrices on the Hilbert space) is fundamentally restricted in some universal way. If they were certain about something – i.e. if they had a pure state – then pure states would probably be back in the game. So yes, I think that the existence of the classical limit of the usual sort also forces us to admit the usual "adjoint representation option" for the density matrices. Yes, I tend to think that in our world, at least e.g. in an \(n\)-qubit quantum computer embedded into the real world, it's possible to design a (usually complicated, composite) procedure to measure an arbitrary observable (given by any matrix on the \(2^n\)-dimensional Hilbert space). Such a procedure would have to be banned in "Weinberg's realm of loopholes". To avoid direct contradictions with the engineering tests, with the ability of quantum computation experts to measure almost anything, the observables that may be measured in Weinberg's realm of loopholes, at least approximately, should be at least slightly "dense" in the space of operators.
But cannot there be something that is physically "close" to the adjoint representation option but is fundamentally different? Maybe it could describe the world around us, too. Let me tell you a scenario that I can prove to be impossible but for a while, at least if you are just smoking marijuana, you could think that it is an ingenious idea. Maybe the density matrix transforms as a large irreducible representation of the monster group and physical symmetries we know are only approximated by transformations embedded into the monster group!
Again, I can show that our world can't be like this particular proposal – and no world that looks like a "related" vacuum (e.g. other conventional enough vacua of string theory) can behave like that, either. (The monster group is relevant for the quantum description of all black holes states in the maximally curved \(AdS_3\) background of pure 3D gravity, as Witten has argued, but I think that this theory still allows arbitrary pure states and respects the decomposition of density matrices to pure states; maybe there's some natural way to restrict allowed values of the density matrix to a rational subset, however.) But of course, it is conceivable that some overlooked scenario involving mutated, twisted, and strangely constrained density matrices exists. It is possible that this is a gem that is waiting to be discovered and one must weaken some assumptions or axioms to find it.
A research project often tries to promote its own importance and that's just illegitimate in the absence of evidence
However, as always, I think it's critically important not to degrade science to the industry of rationalization of wishful thinking, of a posteriori justification of some "cool" conclusions that are actually assumptions of the industry. I think that even the question whether there exists an interesting, at least remotely physical, mutated formalism for non-adjoint density matrices is a scientific question that must be approached rationally and scientifically. Scientists must compare the evidence in favor and against the answer "yes, such an interesting generalization exists".
And for me, i.e. as far as I can evaluate the available evidence including the newest paper by Weinberg, the odds are way over 99.7% that such an interesting generalization doesn't exist. I am less certain about this answer than about the claim that "Bohmian, real many worlds, and objective collapses as a rewriting of quantum mechanics will always remain stinky piles of šit" – but I am still sufficiently certain that I would be willing to bet one million crowns on that assuming that the criteria of the bet would be sufficiently "objective".