2016-02-25

Interview by Richard Marshall.



Thomas K. Metzinger is full professor and director of the theoretical philosophy group and the research group on neuroethics/neurophilosophy at the department of philosophy, Johannes Gutenberg University of Mainz, Germany. From 2014 to 2019 he is a Fellow at the Gutenberg Research College. He is the founder and director of the MIND group and Adjunct Fellow at the Frankfurt Institute for Advanced Studies, Germany. His research centers on analytic philosophy of mind, applied ethics, and philosophy of cognitive science. The MIT Press is about to publish Open MIND, a two-volume set edited by him and Jennifer M. Windt: the most comprehensive collection of papers on consciousness, brain, and mind available. It gathers 39 original papers by leaders in the field, each followed by a commentary written by an emerging scholar and a reply by the original paper's authors. Taken together, the papers, commentaries, and replies provide a cross-section of cutting-edge research in philosophy and cognitive science. Open MIND is an experiment in both interdisciplinary and intergenerational scholarship, a Robin Hood style project in which he got the money from somewhere else and threw out the whole material FOR FREE first, for all the poor countries in the world, demonstrating that this could be done faster and better than any electronic journal or publisher.
In this interview he thinks aloud about his long-standing interest in consciousness, the epistemic agent model of the self, the ego tunnel as a metaphor of conscious experience, the problem with the idea of a ‘first-person’ point of view, introspective Superman and Superwoman as advanced practitioners of classical mindfulness meditation, why nothing lives in the ego tunnel, what the rubber hand illusion shows, why we’re unconscious and mind-wander most of the time, what the narrative default-mode does, the impact of culture on the ego tunnel, why trendy ‘illusion talk’ annoys him, what dreaming shows us, why AI is ethically dangerous, why meditation and spirituality need the cold bath of good analytic philosophy and the challenges facing young philosophers of cognitive science and what he is trying to do to help them. Take your time with this one: after all, that ego tunnel, it’s you…

3:AM: What made you become a philosopher?

Thomas K Metzinger: At 19, I could never understand how someone would embark on their life without having first confronted and clarified the truly fundamental questions. But then, I was quite disappointed in the Frankfurt philosophy department. Of course, I strongly sympathized with Habermas and the philosophers representing the Frankfurt school, but I also saw the lack of conceptual clarity, and perceived the not-so-revolutionary self-importance in the epigones of Horkheimer, Adorno, and Habermas. Of course, a lot of the debates were completely beyond me in the beginning but I did sometimes sense some complacency and pretentiousness as well – young people are quite sensitive to this. But most of all, the whole thing was not politically radical enough for me. At 19, I basically held the position that if you were intellectually honest and really wanted to get in touch with political reality then you had to smell tear-gas. (It was the time of big protests and violent demonstrations about the expansion of the Frankfurt airport and squats in the Westend. To be fair, I should also admit that most of the time – unlike our future Foreign Minister and Vice Chancellor Joschka Fischer – I belonged to those guys who started running first.) I almost dropped out of philosophy, when, in a seminar on Descartes’ Passions de l’Ame, a young lecturer by the name of Gerhard Gamm first made me see that if the mind has no spatial properties, then there could never be a spatial locus of causal interaction with the world, neither in the pineal gland, nor anywhere else in the brain or the physical world. That point got me hooked. I ended up writing a thesis on recent contributions to the mind-body problem, from U.T. Place to Jaegwon Kim, and got drawn deeply into Anglo-Saxon analytical philosophy of mind.

But of course, there is an earlier personal history too. I guess I was always a slightly critical spirit, and I clearly remember how, at the age of 8, I cried when I first understood that everybody had to die. And how disappointed I was by those adults’ absolutely silly attempts at comforting me – they seemed strange to me, and just didn’t get the point. At around 12 years of age a little scholar awakened in me as I bought and immediately devoured my first two books ever. By today’s standards, these were really bad books – popular science books reporting on purported parapsychological results, titled “Radar of the Psyche” and “PSI: Psychic Discoveries behind the Iron Curtain”. But the fledgling investigator in me immediately came to the firm conclusion that all of this was not only highly relevant, but also that quite obviously there was a truly scandalous lack of rigorous research! So while all the other boys wanted to become locomotive drivers, space explorers or sky divers I decided I had to become a professor of parapsychology. And, as I soon found out, there was only one single professorial chair in all of Germany! At around 15, in high-school, we read quite a bit of Sartre and Camus. Today I think it is pedagogically dubious to give this type of philosophy to young people in the age of puberty. The next serious pedagogical mistake was then made by my father, who actually had a copy of Aldous Huxley’s “Doors of Perception” lying around. My father died last year, and only recently have I come to notice how big his influence on my early intellectual development actually was. At 16 (the parapsychologist long having turned into a staunch Trotskyist) I was ill, had to stay in bed for three weeks, and was terribly bored. On his nightstand I found a copy of Georg Grimm’s Die Lehre des Buddho. Another turning point in my intellectual life, and in more than one way.
Since then I have always read some Indian philosophy on and off, also travelled in India and Asia quite a bit, but I have, perhaps wisely, perhaps unfortunately, always kept this on the private, purely amateur level. I have never systematically integrated it into my teaching or my academic publications, as an increasing number of Western philosophers of mind are now doing, and in very interesting and innovative ways. Just after finishing high-school I flew to Montreal, hitch-hiked down to New York, back up via the Transcanada Highway all the way through to Vancouver, down to Berkeley and back, 21,800 km all in all. I clearly remember that during those 5 months there were two books in my backpack all the time: Theodore Roszak’s “The Making of a Counter Culture” and Patanjali’s “Yoga Sutras”. Then I hit the Frankfurt philosophy department.



3:AM: You’re interested in the philosophy of consciousness and the self.

TM: Yes, it is true that I have had a long-standing interest in consciousness. In 1994 I founded the Association for the Scientific Study of Consciousness together with Bernard Baars, the late William Banks, George Buckner, David Chalmers, Stanley Klein, Bruce Mangan, David Rosenthal, and Patrick Wilken. I hung around in the Executive Committee and various committees of the ASSC for much too long, even acted as president in 2010, but a couple of thousand e-mails and 19 conferences later it is satisfying to see how a bunch of idealists actually managed to create an established research community out of the blue. The consciousness community is now running perfectly, it has a new journal, brilliant young people are joining it all the time and my personal prediction is that we will have isolated the global neural correlate of consciousness by 2050. I also tried to support the overall process by editing two collections, one for philosophers and one of an interdisciplinary kind: Conscious Experience (1995, Imprint Academic) and Neural Correlates of Consciousness (2000, MIT Press).



In the beginning of the ASSC, foundational conceptual work by philosophers was very important, followed by a phase in which the neuroscientists moved in with their own research programs and contributions. Observing the field for more than a quarter century now, my impression is that what we increasingly need is not so much gifted young philosophers who are empirically informed in neuroscience or psychology, but more junior researchers who can combine philosophical metatheory with a solid training in mathematics. The first formal models of conscious experience have already appeared on the horizon, and as we incrementally move forward towards the first unified model of subjective experience there are challenges on the conceptual frontier that can only be met by researchers who understand the mathematics. We now need open-minded young philosophers of mind who can see through the competing formal models in order to extract what is conceptually problematic – and what is really relevant from a philosophical perspective.

In the early Sixties it was Hilary Putnam who, in a short series of seminal papers, took Turing’s inspiration and transposed concepts from the mathematical theory of automata – for example the idea that a system’s “psychology” would be given by the machine table it physically realizes – into analytical philosophy of mind, laying the foundations for the explosive development of early functionalism and classical cognitive science. One major criticism was that some mysterious and simple “intrinsic” qualities of phenomenal experience exist (people at the time called them “qualia”) and that they couldn’t be dissolved into a network of relational properties. The idea was that there are irreducible and context-invariant phenomenal atoms, subjective universals in the sense of Clarence Irving Lewis (1929:121-131), that is, maximally simple forms of conscious content, for which introspectively we possess transtemporal identity criteria. But the claim was shown to be empirically false and nobody could really say what “intrinsicality” actually was. Then we got the interdisciplinary turn, neural networks, dynamical systems theory, embodiment, emotions, evolutionary psychology and cognitive robotics became directly relevant, providing valuable “bottom-up constraints” for philosophy of mind. Simultaneously we saw the great renaissance of consciousness research – many of the best minds began to share the intuition that it was now becoming empirically tractable, ripe for a combined interdisciplinary attack. Today, the old philosophical project of developing a “universal psychology” – for example, a general theory of consciousness that is hardware- and species-independent – may return in new guise.
Perhaps Giulio Tononi’s Integrated Information Theory or the Predictive Processing Approach following Karl Friston’s 2010 proposal for a unified theory of brain dynamics already supply us with first analytical building blocks for a truly general and abstract theory of what consciousness and an individual first-person perspective really are, in all systems that have them. It may also be that in the end we arrive at a new and deeper understanding of why there just cannot be a universal theory of what “phenomenality” or “subjectivity” are. But it seems clear that armchair metaphysics won’t help. Today it needs people who can penetrate the mathematics of theoretical neurobiology.

3:AM: Your claim is that no one is or ever has been a self, that the self is a myth. Rather you say that we are transparent self-models. Can you unpack the general claim for us?

TM: Oh no, that would be a serious misunderstanding. I certainly don’t say that you are the phenomenal self-model active in the brain right now! You are the whole person, abstract, social, psychological properties and all, the person that now asks this question. That person as a whole is the epistemic subject, it wants to know things, it is a cognitive agent. In this person’s head, however, there is a complex subpersonal state, namely the ongoing neurocomputational dynamics generating a phenomenal self. Often, but not always, the conscious representational content of this model is one of a person and of an epistemic agent.

This epistemic agent model, or EAM for short, is a highly specific and philosophically interesting content-layer in phenomenal self-consciousness – or at least I think so. For example, the notion of an EAM might pick out more precisely what we really mean when we discuss the “origin of the first-person perspective”. One central point is that the transition from subpersonal to personal-level cognition is enabled by this specific form of conscious self-representation, namely, a global generative model of the cognitive system as an entity that actively constructs, sustains and controls knowledge relations to the world and itself. In recent publications I have shown that we only possess an EAM for about one third of our conscious lifetime.

However, under SMT (the “self-model theory of subjectivity”) the crucial and more basic point is that, due to the phenomenal transparency of the self-model in our heads we – the whole organism – identify with its content. If it is a model of a person, then the organism begins to behave like a person. I have always been interested in this dynamic relationship between the epistemic subject (the person or biological organism as a whole) and the phenomenal self (constituted by a subpersonal state in the brain): How exactly do we first become epistemic subjects by functionally and phenomenologically identifying with the conscious model of a knowing self in our brains and operating under it? My second German monograph was called Subjekt und Selbstmodell (plus an obscenely long subtitle: “The Perspectivalness of Phenomenal Consciousness against the Background of a Naturalistic Theory of Mental Representation”) – and it was perhaps a mistake to call the expanded English version Being No One. But at the time I thought Americans just need something like that.

I should perhaps also add that SMT, for me, is not a ready-made theory, but a self-defined research program that I have been pursuing for three decades now. A lot of my work can be seen in this context: For example, given the failure of German idealist models of self-consciousness (e.g. in Fichte), it was always clear to me that the bulk of human self-consciousness really is a “pre-reflexive” affair, as many agree today. So it was natural to search for the minimal form of selfhood, and the interim result – coming out of our virtual reality experiments and interaction with dream researchers – is that it is what I call “transparent spatiotemporal self-location”, and in principle independent of any form of bodily or mental agency, as well as of emotional or explicit spatial content.

For example, Jennifer Windt has interestingly shown that there can be a robust form of phenomenal self-consciousness in so-called “bodiless dreams”, in which the self is consciously represented as an extensionless point in space. Another example of how I have tried to incrementally fill in the holes was to go directly into what has recently been called a “reputation trap”, namely, by reading up on the (thin) empirical literature on out-of-body experiences and discussing it in Being No One. I can recommend trying to actively ruin your reputation. It generates innovative research: Olaf Blanke had caused an OBE by direct electrical brain stimulation in 2002 and was in search of a theoretical framework, we got together and all of the experimental work on full-body illusions, on virtual embodiments in avatars and robots came out of it. Josh Weisberg has dubbed this the “method of interdisciplinary constraint satisfaction”. As a philosopher, you define constraints for any good theory explaining what you are interested in, then you go out and search for help in other disciplines. You find out that these people are much smarter than you and that you were completely wrong about almost everything you had thought about the issue so far. It is a really rewarding strategy, because it not only ruins your reputation and minimizes your chances on the job market, it also leaves you in a state of complete confusion.

Speaking of which, the last time I did this was by looking into empirical work on “mind wandering” and “spontaneous task-unrelated thought”. Having moved upwards through the different levels of content constituting the experience of “embodiment” in the human self-model, it is now time to look at the cognitive self-model for some time. As it turns out, cognitive agency (contrary to what many philosophers intuitively assume) is a very rare phenomenon, as is mental autonomy. This discovery may help to get us closer to an empirically grounded and much more differentiated conceptual understanding of what we were actually asking for when, in the past, we talked about “a” or “the” first-person perspective. I think there is phenomenal self-consciousness without perspectivalness and I am interested in the transition.

3:AM: Why do you use the metaphor of conscious experience as an ego tunnel? Why a tunnel?

TM: Because phenomenal properties supervene locally. Conscious experience as such is an exclusively internal affair: Once all functional properties of your brain are fixed, the character of subjective experience is determined as well. If there was no unidirectional flow of time on the level of inner experience, then we would live our conscious lives in a bubble, perhaps like some simple animals or certain mystics – locked into an eternal Now. However, our phenomenal model of reality is not only 3D, but 4D: Subjective time flows forward, the phenomenal self is embedded into this flow, an inner history unfolds. That is why it is not a bubble, but a tunnel: There is movement in time. But of course one of the interesting characteristics of the Ego Tunnel is that it creates (as Finnish philosopher Antti Revonsuo called it) a robust “out-of-the brain experience”, a highly realistic experience of not operating on internal models, but of effortlessly being in direct and immediate contact with the external world – and oneself.

Please note that The Ego Tunnel is the only non-academic book I have ever written. It was aimed at the interested lay person, an experiment in the public understanding of philosophy. The very first sentence of this book reads: “This book has not been written for philosophers or scientists.” And the Ego Tunnel-metaphor is of course inspired by VR-technology. If what I just said is correct then experiential externality is virtual, as is the “prereflexive” phenomenology of being infinitely close to oneself – phenomenal consciousness truly is appearance. Virtual reality is the representation of possible worlds and possible selves, with the aim of making them appear as real as possible – ideally, by creating a subjective sense of “presence” and full immersion in the user. Interestingly, some of our best theories of the human mind and conscious experience itself describe it in a very similar way: Leading current theorists of brain dynamics like Karl Friston, Jakob Hohwy or Andy Clark describe it as the constant creation of internal models of the world, predictively generating hypotheses – virtual neural representations – about the hidden causes of sensory input through probabilistic inference. Slightly earlier, philosophers like Revonsuo and myself have pointed out at length how conscious experience is exactly a virtual model of the world, a dynamic internal simulation, which in standard situations cannot be experienced as a virtual model because it is phenomenally transparent, diaphanous in the sense of G. E. Moore – we “look through it” as if we were in direct and immediate contact with reality.
What is historically new, and what creates not only novel psychological risks, but also entirely new ethical and legal dimensions, is that one virtual reality now gets ever more deeply embedded into another virtual reality: As VR-technology hits the mass consumer market in 2016 the conscious mind of human beings, which has evolved under very specific conditions and over millions of years, now gets causally coupled and informationally woven into technical systems for representing possible realities. Increasingly, it is not only culturally and socially embedded, but also shaped by a technological niche that over time itself quickly acquires a rapid, autonomous dynamics and ever new properties. This creates a complex convolution, a nested form of information flow in which the biological mind and its technological niche influence each other in ways we are just beginning to understand. Michael Madary and I have just published the first Code of Ethical Conduct for Virtual Reality ever. In doing this, our main goal was to provide a first set of ethical recommendations as a platform for future discussions, a set of normative starting points that can be continuously refined and expanded as we go along.

3:AM: There seems to be a problem with linking third person descriptions of the mind to first person ones. Is this because we can examine the ontology of the ego tunnel from the objective, third person, scientific point of view but can only experience it from the first person point of view? Is this why we can’t experience the models as models?

TM: Not quite, it is a bit more complicated. First, it is completely unclear what “from the first-person point of view” really means – all we have is a visuo-grammatical metaphor. Sometimes, in my darker moments, I think that there is an ongoing conspiracy in the philosophical community, an organized form of self-deception, as in a cult, to simply all together pretend that we knew what “first-person perspective” (or “quale” or “consciousness”) means, so that we can keep our traditional debates running on forever. Second, a conscious model is only transparent if our introspective attention has no access to its underlying neurodynamic construction process, to earlier processing stages in the brain (to its “non-intentional” or “vehicle” properties, if you prefer more traditional terminology). Unconscious models – the large majority of representational processes unfolding in the brain – are neither transparent nor opaque. What many people don’t see is that there are abundant examples of phenomenal opacity: It is one of the most interesting features of the human conscious model of reality that, first, it can contain elements that are not experienced as mind-independent, as unequivocally real, as immediately given, and second, that there is a “gradient of realness” in which one and the same content can be experienced transparently or in an opaque fashion.

Examples of phenomenally opaque states are sensory illusions, pseudo-hallucinations, two-dimensional images floating in the space in front of you (as in synaesthesia or in hypnagogic imagery), and most importantly, the phenomenology of conscious thought. In these cases, it is part of the phenomenology itself that you are confronted with or operating on representations which might be true or false. Often there is a subjective character of misrepresentationality, like “This isn’t real!” or “This certainly is not mind-independent!” And about the phenomenology of “realness” – I believe we should really take our own phenomenology more seriously. What a good theory of consciousness must explain is the variance in this subjective sense of realness: There clearly is a phenomenology of “hyperrealness”, for example during religious experiences or under the influence of certain psychoactive substances. But, as recent research shows, there can also be extremely high degrees of “existential certainty” during certain types of epileptic seizures. On the other hand, during traumatic experiences or, for example, in patients suffering from depersonalization/derealization disorder we find a dreamlike, dissociated quality – the world and one’s own body may be experienced as misrepresentational and “unreal”.

In ordinary life, the phenomenology of embodied emotions is an excellent example for dynamic changes between transparency and opacity: You can “directly perceive” that your wife is cheating on you, or you can become aware of the possibility that maybe it is you who has a problem, that your “immediate” emotional representation of social reality might actually be a misrepresentation.

In short, I believe that if we carefully applied the distinction between transparency and opacity to the different layers of the human self-model, looking at self-consciousness in a much more careful and fine-grained manner, then we might also arrive at a new answer to your original question: What a “first-person perspective” really is.

3:AM: Superman, with his enhanced speed, would presumably not be able to remain conscious on your view would he, because he’d be able to see the models?

TM: False. Introspective Superman would enjoy a non-centred phenomenology of “global opacity”: Everything would appear to him as what – at least under certain currently popular descriptions – it actually is from a third-person perspective, namely the content of one big representation, like one big thought or one big pseudo-hallucination. He would lose the phenomenology of naïve realism, and most importantly, what I have termed “the phenomenology of identification”. Introspective Superman’s biological body would not experientially identify with the content of his current conscious self-model any more.

“Introspective Superman” … I like your thought experiment! It has great potential, because it brings out a nice logical possibility that has been explored in Buddhist philosophy or in Advaita for many centuries: Imagine Introspective Superwoman, an advanced practitioner of classical mindfulness meditation, plus a global, opaque state of consciousness that is like “lucid waking” (like a dream in which you have become aware that you are dreaming, but during the wake state). Her attentional capacities would be so strong that there would be a continuous phenomenology of representationality, and no naïve realism – neither about the world-component nor about the self-component of the model. That is, there would be no identification with an epistemic agent or an “introspecting self” – one would rather predict a phenomenology of “the whole world effortlessly looking into itself”.

Your “Introspective Superman”-scenario also allows us to ask new questions: Would there be a possibility to keep even the phenomenology of identification, but to tie it to phenomenality per se, namely, by turning the process of conscious experience itself into the unit of identification? In this phenomenal configuration, which is clearly possible from a logical point of view, experiential states would still arise, but they would not be subjective ones, because the underlying subject-object-structure of consciousness had been dissolved. In other words, what, today, we vaguely call the individual “first-person perspective” would have disappeared, because its origin (I call it the “unit of identification”) was now not eliminated, but maximized. Perhaps this is how we should imagine your Introspective Superman? Phenomenologically, such an aperspectival form of consciousness would not be a subjective form of experience any more – rather a globalized conscious experience of “the world looking”. One interesting, and remaining, philosophical question would be if, for this class of states, we would still want to say that they constitute a form of conscious self-representation.

3:AM: Why isn’t what you call the Ego what we might think of as being the self? Doesn’t anything live in the ego tunnel?

TM: Nothing lives in the Ego Tunnel, just as nothing lives in a 3D-movie – even if the audience is completely immersed and fully identifies with the hero. You know, I am not bound to terminological conventions – we could always say “What we have called the self in the past really is X” (where I have tried to offer a theory on what that X is). We could call all systems currently operating under a transparent phenomenal self-model, or all those having the “ability”, the functional potential, “selves”. If that gives things a more politically correct ring from your preferred ideological perspective, or if, psychologically, it helps you with mortality-denial – fine. There is just no entity there, no individual substance, and scientifically we can predict and explain everything we want to predict and explain in a much more parsimonious way. If you are interested in a short sketch of some metaphysical options, there is a chapter called “The No-Self-Alternative” in Shaun Gallagher’s Oxford Handbook of the Self.

3:AM: What makes consciousness a subjective phenomenon and how do you think experiments such as the rubber hand experiment help show that the self is purely experiential?

TM: Consciousness is phenomenologically subjective whenever there is a stable, consciously experienced first-person perspective. To have a first-person perspective in this sense, I have argued, a cognitive system needs a model of the intentionality relation itself: It needs an internal model of itself as currently directed at an intentional object, for example a set of satisfaction conditions (a representation of an action goal, as in practical intentionality) or a set of truth conditions (an object of knowledge, as in theoretical intentionality). Such a flexible, continuously changing model of being dynamically directed at various intentional objects allows a system to consciously experience itself as being not only a part of the world, but of being fully immersed in it through a dense network of causal, perceptual, cognitive, attentional, and agentive relations. The core idea behind this notion of a “phenomenal model of the intentionality relation” is that the decisive feature characterizing the representational architecture of human consciousness lies in continuously co-representing the representational relation itself. This PMIR, however, has nothing to do with possessing a concept of “intentionality” and it also is not something static, abstract, or timeless – I rather think of an embodied, dynamic, circular flow of causality underlying our phenomenal experience of being directed at the world, our inner image of what Margaret Anscombe once called the “arrow of intentionality”. Subjectivity means to catch yourself in the act.

And this might be what it means for consciousness to be subjective in an epistemological sense: In our own case it is the ability to represent knowledge under a highly specific, neurally realized data format. Subjectivity is an ability, the capacity to use a new inner mode of presenting the fact that you currently know something to yourself. For a human being, to possess a consciously experienced first-person perspective means to have acquired a very specific functional profile and distinctive level of representational content in one’s currently active phenomenal self-model: It has, episodically, become a dynamic inner model of a knowing self. Recently, I have begun to call this an “epistemic agent model”. The point then is that representing facts under such a model creates a new epistemic modality. All knowledge is now accessed under a new internal mode of presentation, namely, as knowledge possessed by a self-conscious entity intentionally directed at the world. Therefore, it is subjective knowledge. This notion of a conscious model of oneself as an individual entity actively trying to establish epistemic relations to the world and to oneself, I think, comes very close to what we traditionally mean by notions like “subjectivity” or “possession of a 1PP”.

It is the rubber hand illusion that got me into all of this virtual reality research, the now famous VERE-project, and the attempt to create robust full-body illusions in Olaf Blanke’s lab in Lausanne. I went to these neuroscientists and basically said: “For philosophical reasons having to do with pre-reflexive self-consciousness and the theory of embodiment, I urgently want reproducible out-of-body experiences in healthy subjects and a whole-body variant of the rubber-hand illusion!” They said: “We don’t really understand what you mean, and besides, the brain never sees the whole body from the outside – this is impossible!” What the rubber hand illusion demonstrates is how our Bayesian brains are very sensitive to statistical correlations in the environment, and how the phenomenology of ownership just follows suit if the underlying model of reality changes. Here is a figure from our 2007 paper in SCIENCE:

Creating a whole-body analog of the rubber-hand illusion. (A) Participant (dark blue trousers) sees through a HMD his own virtual body (light blue trousers) in 3D, standing 2 m in front of him and being stroked synchronously or asynchronously at the participant’s back. In other conditions, the participant sees either (B) a virtual fake body (light red trousers) or (C) a virtual noncorporeal object (light gray) being stroked synchronously or asynchronously at the back. Dark colors indicate the actual location of the physical body or object, whereas light colors represent the virtual body or object seen on the HMD. (Image used with kind permission from M. Boyer.)

The self-model theory is not simply one philosophical model among others. It has been laid out as an interdisciplinary research program right from the beginning, as firmly anchored in scientific data as possible. If the basic idea of the self-model theory is on the right track, it yields a whole range of empirical predictions that should be experimentally testable. One of these predictions is that it must in principle be possible to directly connect the conscious self-model in the human brain to external systems – for instance to computers, robots, or artificial body images on the Internet or in virtual realities. This prediction has recently been corroborated. Under the conceptual assumptions of the self-model theory it must in principle be possible to couple the human self-model in a causally direct way with artificial organs for acting and sensing, while bypassing the non-neural, biological body. Through this, we could situate ourselves not only experientially but also functionally in technologically generated environments in completely novel ways. For the last five years I have been working in a research project funded by the European Union, the VERE project, in cooperation with scientists and philosophers from nine countries. One of the research goals of this ambitious project was to go beyond the classical experiments from the year 2007 and stably transfer our sense of selfhood to avatars or robots that can perceive for us, move, and interact with other self-aware agents (“VERE” is the acronym for Virtual Embodiment and Robotic Re-Embodiment). But my official philosophical position still says that we will never really succeed in this.

In an ambitious pilot study, our Israeli colleagues Ori Cohen, Doron Friedman, and their collaborators in France demonstrated that it is possible to read out action intentions of a test subject using real-time functional magnetic resonance imaging. These can then directly be transferred as high-level motor commands to a humanoid robot, which transforms them into bodily actions, while the test subject can simultaneously witness the whole experiment visually through the eyes of the robot. This process is based on generated motor imagery, allowing test subjects to “directly act with their PSM” by remote-controlling a humanoid robot in France from a scanner in Israel.

This technical development is philosophically interesting for a number of reasons, for not only does it enable us to act in the world while, to a large extent, “bypassing the biological body”, but it also allows us to test theories about the emergence of the sense of selfhood more precisely than ever before. Many of these developments are historically new. I still believe that gut feelings, the sense of balance, and spatial self-perception are so firmly coupled to our biological body that we will never be able to leave it experientially on a permanent basis. The human self-model is anchored in interoception; it cannot simply be “copied out” of the brain. All that can happen is that new kinds of tools become functionally integrated with the self-model in our brain – not only rakes or sticks, but also avatars or robots, for example. But I must confess that I am starting to have doubts. For, firstly, it could be that entirely different, newly extended forms of self-consciousness will in the future be generated by ever denser couplings between self-model and avatars or robots – and secondly, technological progress in this area happens surprisingly fast.

For philosophers, this type of technological development – the development of what I call “self-model interfaces” – is interesting for several reasons: firstly, because of its ethical and cultural consequences, but also because it constitutes a historically new form of acting. I have introduced the notion of a “PSM-action” to be able to describe this new element more precisely. PSM-actions are all those actions in which a human being exclusively uses the conscious self-model in his brain to initiate an action. Of course, there will have to be feedback loops for complex actions, for instance, when seeing through the camera eyes of a robot, perhaps adjusting a grasping movement in real time (which is still far from possible today). But the relevant causal starting point of the entire action is no longer the body made of flesh and bones, but only the conscious self-model in our brain. We simulate an action in the self-model, in the inner image of our body, and a machine performs it. For philosophical theories of self-consciousness this is interesting, because it allows us to investigate the “prereflexive” mechanism of identification with a body more closely.

We are systems that continuously extract the causal structure of the world in an attempt to predict what our next sensory input will be, and we use our bodies in active inference and our attentional mechanisms to constantly optimise the precision of our predictions. I think it is a great merit of philosophers like Jakob Hohwy and Andy Clark that they have made the seminal and ground-breaking work of the British mathematician and theoretical neurobiologist Karl Friston accessible to the philosophical community. I also predict that in the empirically informed quarters of philosophy of mind we will soon see a whole new round of the classical internalism/externalism debate. Under this new approach, self-consciousness is an ongoing process of predicting global properties of ourselves, using a unified model – the self-model. Most of this is not introspectively accessible; most self-knowledge we have is unconscious self-knowledge. It is only the conscious partition of this dynamic process that we can direct our attention to – and it certainly is not all “purely experiential”, as you say. Of course, there is systematic self-deception, there are cognitive biases like male overconfidence bias and unrealistic optimism plus an individual bias blind spot, there is also an illusion of transtemporal identity, and there is a lot of philosophically relevant empirical evidence for functionally adequate forms of misrepresentation. But all this doesn’t make self-consciousness “purely experiential”: We simply would not be here if the self-models in the brains of our ancestors had not extracted the relevant causal structure of our bodies, of peripersonal space and our physical environment, and that of other minds and our group sufficiently well.
Our internal models condense millions of years of interacting with this world, in many domains model evidence and statistical reliability are extremely robust – that is why we have even come to explicitly model ourselves as “knowing selves”, Homo sapiens sapiens.
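[Editorial note: the precision-weighted prediction Metzinger alludes to here, following Friston, Hohwy, and Clark, can be caricatured in a few lines of code. This is only an illustrative toy sketch, not anything from the interview or the VERE project: a belief about a hidden quantity is corrected by a prediction error, weighted by the relative precision (inverse variance) of the sensory input versus the prior prediction. All names and numbers below are invented for illustration.]

```python
# Toy sketch of precision-weighted predictive updating (illustrative only).
# A "belief" about a hidden cause is corrected by the prediction error,
# weighted by how precise (inverse-variance) the sensory signal is
# relative to the prior prediction.

def update_belief(belief, observation, prior_precision, sensory_precision):
    """One step of precision-weighted error correction."""
    prediction_error = observation - belief
    # Higher sensory precision -> the error moves the belief more;
    # higher prior precision -> the belief resists the new evidence.
    gain = sensory_precision / (prior_precision + sensory_precision)
    return belief + gain * prediction_error

belief = 0.0
for obs in [1.0, 1.0, 1.0, 1.0]:
    belief = update_belief(belief, obs, prior_precision=4.0, sensory_precision=1.0)
print(belief)  # the belief drifts toward the repeated observation
```

On this caricature, “optimising precision” amounts to tuning the gain: attention would correspond to raising the sensory precision, so that prediction errors are taken more seriously and the model is revised faster.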

3:AM: Typically, in conscious experience we experience ourselves as a subject that is the centre of a world. A couple of questions arise from this. The first is: what does it mean to say that conscious experience is not just related to the world but is related to it as knowing selves, and can there be mental states that are not conscious, or are all mental states conscious at some level, even in those patients who deny they exist? How does examining patients who deny they exist help with this philosophical question?

TM: First, a lot of evidence shows that most of our cognitive processing is unconscious – phenomenal experience is just a very small slice or partition of a much larger space in which mental processing takes place. As a first-order approximation, I would say that phenomenality is “availability for introspective attention”: Consciousness is a property of all those mental contents to which you can in principle direct your attention. That is a good working concept to start with: It is not necessary to form a concept or inner judgment; just the availability for subsymbolic metarepresentation and optimization of precision expectations is enough. That is why many animals are phenomenally conscious, even though they may not have “thoughts” in a more narrow, philosophically charged sense. Also note how the “intro” in “introspective attention” does not necessarily imply that this attention is directed to the organism’s self-model: Directing attention to some sensory content that is internally represented as an aspect of an external, mind-independent object – the blueness of the sky, the redness of an apple – still is “intro”spective in the sense that, from a third-person perspective, it only operates on an exclusively internal model in your brain. At any point in time, there will be a Markov blanket separating your currently active conscious model of reality from extra-organismic reality or other parts of the brain; all behaviour that is based on your conscious experience alone can be predicted from events inside this statistical boundary. This means that, epistemologically and methodologically speaking, nothing outside of it adds any information in terms of predictability. All attention is introspection. Of course, subjectively, all this may be experienced as the “direct” and unmediated perception of an outside world.

One interesting question, closely related to the one you posed above, is whether the same principle holds for unconscious mental states and processes, for things that are outside of the conscious model of reality but which we would still term “mental” or “intentional” states. Another interesting question would be how much of our behaviour is really based “on your conscious experience alone”. This may depend on our epistemic interests and on the temporal scale, the “time-window”, if you will, through which we choose to look at the human mind. Perhaps almost all mental states have a conscious and an unconscious part?

My PhD student Iuliia Pliushch and I have recently invented the “dolphin model of cognition”. Dolphins frequently leap above the water surface. One reason for this behaviour could be that, when travelling longer distances, jumping can save the dolphins energy, as there is less friction while in the air. Typically, the animals will display long, ballistic jumps, alternated with periods of swimming below, but close to, the surface. “Porpoising” is one name for this high-speed surface-piercing motion of dolphins and other species, in which leaps are interspersed with relatively long swimming bouts, often about twice the length of the leap. “Porpoising” may also be the energetically cheapest way to swim rapidly and continuously and to breathe at the same time.

Pliushch and I think that, just as dolphins cross the surface, thought processes often cross the border between conscious and unconscious processing as well, and in both directions. For example, chains of cognitive states may have their origin in unconscious goal-commitments triggered by external stimuli, then transiently become integrated into the conscious self-model for introspective availability and selective control, only to disappear into another unconscious “swimming bout” below the surface. Conversely, information available in the conscious self-model may become “repressed” into an unconscious, modularized form of self-representation where it does not endanger self-esteem or overall integrity. However, in the human mind, the time windows in which leaps into consciousness and subsequent “underwater” processing unfold may be of variable size – and there may actually be more than one dolphin. In fact, there may be a whole race going on! Just like your “Introspective Superman”, the “dolphin model of cognition” has the advantage that it can be gradually enriched by additional assumptions. For example, we can imagine a situation where only one dolphin at a time can actually jump out of the water, briefly leaping out of a larger, continuously competing group. We can imagine the process of “becoming conscious” as a process of transient, dynamic integration of lower-level cognitive contents into extended chains, as a process of “cognitive binding” with the new and integrated contents becoming available for introspection. But we might also point out that individual dolphins are often so close to the surface that they are actually half in the water and half in the air.

What exactly is this process we call “conscious thinking” in the first place? Conscious thinking also exists, for instance, during the night, in states of dreaming. During dreams, we possess no control whatsoever over our thoughts and we are not able to control our attention volitionally. Sometimes there is the possibility to “awaken” within a state of dreaming and regain mental autonomy. Such dreams are called “lucid dreams”, for in such dreams the dreamer realizes that he is currently dreaming, and hence he also regains control over thinking processes and the ability for volitional control of attention.

But what about conscious thinking during the day? Depending on the scientific study, our mind wanders during 30-50% of our conscious waking phases. At night, during our non-lucid dreams and those sleeping stages in which we have complex conscious thought but no pictorial hallucinations, we also lack the ability to suspend or terminate the thinking process – an ability of central importance for mental self-control. You cannot be a rational subject without veto-control on the level of mental action. Then there are also various types of intoxication or light anesthesia, of illness (e.g., fever dreams or depressive rumination), or of insomnia, in which we are in a sort of helpless twilight state, plagued by constantly recurring thoughts we cannot stop. In all these phases our mind wanders and we have no control over our thinking processes or our attention. According to a conservative estimate, the part of our self-model that endows us with real mental autonomy only exists during around one third of our entire conscious life. We do not exactly know when and how children first develop the necessary capacities and layers of their self-model. But it is a plausible assumption that many of us gradually lose them towards the end of our lives. If we consider all empirical findings regarding mind wandering together, we arrive at a surprising result whose philosophical significance can hardly be overestimated: Mental autonomy is the exception; loss and absence of cognitive control is the rule.

As far as inner action is concerned, we are only rarely truly self-determined persons, for the major part of our conscious mental activity is rather an automatic, unintentional form of behavior on the subpersonal level. Cognitive agency and attentional agency are not the standard case, but rather an exception; what we used to call “conscious thinking” is actually most of the time an automatically unfolding subpersonal process. One interesting aspect is that we do not notice this fact – it is highly counterintuitive, at least it seems “a bit exaggerated” to most of us. Not only does there seem to be a widespread form of “introspective neglect”, resembling a form of anosognosia or anosodiaphoria, related to the frequent losses of cognitive self-control characterizing our inner life; the phenomenon of mind wandering is also clearly related to denial, confabulation, and self-deception. I once gave a talk about mind wandering to a group of truly excellent philosophers, pointing out the frequent, brief discontinuities in our mental model of ourselves as epistemic agents, and one participant interestingly remarked: “I think only ordinary people have this. As philosophers, we just don’t have this because we are intellectual athletes!” The introspective experience and the corresponding verbal reports of one’s own mind wandering seem to be strongly distorted by overconfidence bias, by illusions of superiority, and by introspection illusion (in which we falsely assume direct insight into the origins of our mental states, while treating others’ introspections as unreliable). Not only for philosophers of mind, it is probably also influenced by confirmation bias related to one’s own theoretical preconceptions and culturally entrenched notions of “autonomous subjectivity”, by self-serving bias, and possibly by frequent illusions of control on the mental level.

When you are simply observing your breath, you are perceiving an automatically unfolding process in your body. By contrast, when you are observing your wandering mind, you are also experiencing the spontaneous activity of a process in your body. What physical process is that, exactly? A multitude of empirical studies show that areas of our brains responsible for the wandering mind overlap to a large extent with the so-called “default-mode network”. The default-mode network typically becomes active during periods of rest, and as a result, attention is directed to the inside. This is what happens, for instance, during daydreams, unbidden memories, or when we are thinking about ourselves and the future. As soon as a concrete task needs to be done, this part of our brain is deactivated and we concentrate immediately on the solution to the currently pending problem.

My own hypothesis is that the default-mode network mainly serves to keep our autobiographical self-model stable and in good shape: Like an automatic maintenance program, it generates ever new stories, all of which have the function of making us believe that we are actually the same person over time. The default-mode network has a high metabolic price; it costs the organism a lot of energy, and it has been shown that mind wandering diminishes your general quality of life – as a whole person, you pay a psychological price too. What is it that is so precious that we pay such a high price for it? I believe it is the creation of a robust illusion of transtemporal identity. Only as long as we believe in our own identity over time does it make sense for us to make future plans, avoid risks, and treat our fellow human beings fairly – for the consequences of our actions will, in the end, always concern ourselves. My hypothesis is that exactly this was one of the central conditions in the evolution of social cooperation and the emergence of large human societies: It is yourself who will be punished or rewarded in the future, it is yourself who will either enjoy a good reputation in the future or be subjected to retaliation. What we need for that is an intact “narrative self-model”, an illusion of sameness. Then the “stabs of conscience” can make us even more self-conscious, integrating individual preferences with group preferences.

But on closer inspection, the narrative default-mode does not, I believe, actually produce thoughts. It continuously generates an inner environment, something I would describe as “cognitive affordances”, because they afford an opportunity for inner action. They actually are only precursors of thoughts, spontaneously occurring mental contents that, as it were, are constantly calling out “Think me!” to us. Interestingly, such proto-thoughts also possess something like the “affordance character” just mentioned, because they reveal a possibility. That possibility is not a property of the conscious self, and not a property of the little proto-thought currently arising – it is the possibility of establishing a relation by identifying with it.

Imagine you are trying to lose weight and attempting to concentrate on writing an article, but there is a bowl with your favorite chocolate cookies in your field of vision, a permanent immoral offer. If we are capable of rejecting such offers or of postponing them into the future, then we can also concentrate on that which we currently want to do. Now exactly the same principle also holds for our inner actions: If we lose the ability in question for a single moment only, we are immediately hijacked by an aggressive little “Think me!” and our mind begins to wander. Often our wandering mind then automatically follows an inner emotional landscape. Speaking as a phenomenologist, it seems to me that a considerable portion of mind wandering actually is “mental avoidance behaviour”, an attempt to cope with adverse internal stimuli or to protect oneself from a deeper processing of information that threatens self-esteem. It will try, for instance, to flee from unpleasant bodily perceptions and feelings and somehow reach a state that feels better, like a monkey brachiating from branch to branch. Not acting, it seems, is one of the most important human capacities of all, for it is the basic requirement of all higher forms of autonomy. There is outer non-acting, for instance in successful impulse control (“I will not grasp for this bowl of chocolate cookies now!”). And there is inner non-acting, exemplified by the letting go of a train of thought and resting in an open, effortless state of awareness, which can sometimes follow. There is thus an outer and an inner silence. Someone who cannot stop his outer flow of words will soon be unable to communicate with other human beings at all. Whoever loses the capability for inner silence loses contact with himself and soon won’t be able to think clearly any more.
