2015-12-16

This debate is from the comments to a public Facebook post by Eliezer Yudkowsky. Some comments were omitted for brevity. Links were inserted for clarity.

Eliezer Yudkowsky: [This is cross-]posted [with] edits from a comment on [the] Effective Altruism [Facebook group], asking who or what I cared about.

I think that I care about things that would, in your native mental ontology, be imagined as having a sort of tangible red-experience or green-experience, and I prefer such beings not to have pain-experiences. How highly I value happiness is more complicated.

However, my theory of mind also says that the naive theory of mind is very wrong, and suggests that a pig does not have a more-simplified form of tangible experiences. My model says that certain types of reflectivity are critical to being something it is like something to be. The model of a pig as having pain that is like yours, but simpler, is wrong. The pig does have cognitive algorithms similar to the ones that impinge upon your own self-awareness as emotions, but without the reflective self-awareness that creates someone to listen to it.

It takes additional effort of imagination to imagine that what you think of as the qualia of an emotion is actually the impact of the cognitive algorithm upon the complicated person listening to it, and not just the emotion itself. Like it takes additional thought to realize that a desirable mate is desirable-to-you and not inherently-desirable; and without this realization people draw swamp monsters carrying off women in torn dresses.

To spell it out in more detail, though still using naive and wrong language for lack of anything better: my model says that a pig that grunts in satisfaction is not experiencing simplified qualia of pleasure, it’s lacking most of the reflectivity overhead that makes there be someone to experience that pleasure. Intuitively, you don’t expect a simple neural network making an error to feel pain as its weights are adjusted, because you don’t imagine there’s someone inside the network to feel the update as pain. My model says that cognitive reflectivity, a big frontal cortex and so on, is probably critical to create the inner listener that you implicitly imagine being there to ‘watch’ the pig’s pleasure or pain, but which you implicitly imagine not being there to ‘watch’ the neural network having its weights adjusted.

What my model says is that when we have a cognitively reflective, self-modely thing, we can put very simple algorithms on top of that — as simple as a neural network having its weights adjusted — and that will feel like something, there will be something that it is like that thing to be, because there will be something self-modely enough to feel like there’s a thing happening to the person-that-is-this-person.

If the one’s mind imagines pigs as having simpler qualia that still come with a field of awareness, what I suspect is that their mind is playing a shell game wherein they imagine the pig having simple emotions and that feels to them like a quale, but actually the imagined inner listener is being created by their own minds doing the listening. Since they have no complicated model of the inner-listener part, since it feels to them like a solid field of awareness that’s just there for mysterious reasons, they don’t postulate complex inner-listening mechanisms that the pig could potentially lack. You’re asking the question “Does it feel like anything to me when I imagine being a pig?” but the power of your imagination is too great; what we really need to ask is “Can (our model of) the pig supply its own inner listener, so that we don’t need to imagine the pig being inhabited by a listener, we’ll see the listener right there explicitly in the model?”

Contrast to a model in which qualia are just there, just hanging around, and you model other minds as being built out of qualia, in which case the simplest hypothesis explaining a pig is that it has simpler qualia but there’s still qualia there. This is the model that I suspect would go away in the limit of better understanding of subjectivity.

So I suspect that vegetarians might be vegetarians because their models of subjective experience have solid things where my models have more moving parts, and indeed, where a wide variety of models with more moving parts would suggest a different answer. To the extent I think my models are truer, which I do or I wouldn’t have them, I think philosophically sophisticated ethical vegetarians are making a moral error; I don’t think there’s actually a coherent entity that would correspond to their model of a pig. Of course I’m not finished with my reductionism and it’s possible, nay, probable that there’s no real thing that corresponds to my model of a human, but I have to go on guessing with my best current model. And my best current model is that until a species is under selection pressure to develop sophisticated social models of conspecifics, it doesn’t develop the empathic brain-modeling architecture that I visualize as being required to actually implement an inner listener. I wouldn’t be surprised to be told that chimpanzees were conscious, but monkeys would be more surprising.

If there were no health reason to eat cows I would not eat them, and in the limit of unlimited funding I would try to cryopreserve chimpanzees once I’d gotten to the humans. In my actual situation, given that diet is a huge difficulty to me with already-conflicting optimization constraints, given that I don’t believe in the alleged dietary science claiming that I suffer zero disadvantage from eliminating meat, and given that society lets me get away with it, I am doing the utilitarian thing to maximize the welfare of much larger future galaxies, and spending all my worry on other things. If I could actually do things all my own way and indulge my aesthetic preferences to the fullest, I wouldn’t eat any other life form, plant or animal, and I wouldn’t enslave all those mitochondria.

Tyrrell McAllister: I agree with everything you say about the inadequacy of the “pigs have qualia, just simpler” model. But I still don’t eat pigs, and it is for “philosophically sophisticated ethical” reasons (if I do say so myself). When I watch pigs interact with their environment, they seem to me, as best as I can tell, to be doing enough reflective cognition to have an “inner listener”.

Eliezer: Tyrrell, what makes it look to you like a pig’s brain must be modeling itself?

Tyrrell: “Modeling itself” is probably too weak a criterion. But pigs do seem to me to do problem-solving involving themselves and their environments that is best explained by their working with a mental model of themselves and their environment. (See, e.g., Pigs Prove to Be Smart, if Not Vain.)

I acknowledge that this kind of modeling isn’t enough to imply that there is an inner listener, so there I am being more speculative.

Also, I should have written, “they seem to me, as best as I can tell, with sufficient probability given the utilities involved, to be doing enough reflective cognition to have an ‘inner listener'”.

(I do eat fish, because the benefits of eating fish, and the probability of their being conscious, seem to make eating them the right thing to do in that case.)

Eliezer: Tyrrell: This looks to me like environment-modeling performing a visual transform, and while that implies some degree of cognitive control of the visual cortex it doesn’t imply brains modeling brains.

If my model is correct then the mirror test is actually an ethically reasonable place to put a “do not eat” barrier; passing the mirror test may not be sufficient, but it seems necessary-ish (leaving aside obvious caveats about using the right modality).

Jamie F Duerden: Pigs seem reasonably ‘smart’, insofar as recognising names and processes, solving puzzles and so on. I don’t know whether they recognise themselves in a mirror and are aware of their own awareness of that fact, but I would not be especially surprised to discover it was so. Yet I still eat bacon, because it is filling, very tasty, and a great source of protein. I would not, however, eat a pig which I had been keeping as a pet.

This distinction seems to be consistently applied by whichever part of my brain makes intuitive ‘moral’ judgements, because I experience no psychological backlash when contemplating eating people I don’t know, but am disturbed by the idea of eating someone I was friends with. Comparing those sensations to the equivalent responses for ‘farmed/wild animal’ and ‘pet’ yields negligible difference. I have hunted animals for meat, so this is not a failure to visualise unknown animals correctly. I am forced to conclude that ‘does it have qualia?’ is not an important variable in my default ‘is it food?’ function. (Nor apparently in my ‘would it make a good pet?’ function.) As I routinely catch myself empathising with hypothetical AIs, I suspect this may be a more general complete failure to have separate categories for the various sorts of ‘mind’.

Luke Muehlhauser: I think my probability distribution over theories of consciousness (in animals, humans, and machines) looks something like this:

~30%: Basically Eliezer’s model, described above

~30%: Apes and maybe some others have an inner listener, but it results in less salient subjective experience than the human inner listener due to its less integrated-y self-modely nature, and this subjective salience drops to 0 pretty sharply below a certain kind/degree of self-modely-ness (ape level? pig level?), rather than trailing off gradually down to fish or whatever.

< 5%: panpsychism, consciousness is fundamental, consciousness is magical, and similar theories

~35%: other theories not close to the above, most of which I haven’t thought of

I try to limit my meat intake due to how much mass I have on theory categories #2 & 4, but I’m not strictly vegetarian or vegan because I’m choosing to devote most of my “be ethical” willpower/skill points elsewhere. But this does mean that I become more vegetarian/vegan as doing so becomes less skill+willpower-requiring, so e.g. I want that all-vegan supermarket in Germany to open a branch in Berkeley please.

David Pearce: Some errors are potentially ethically catastrophic. This is one of them. Many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Thus in orgasm, for instance, much of the neocortex effectively shuts down. Or compare a mounting sense of panic. As an intense feeling of panic becomes uncontrollable, are we to theorise that the experience somehow ceases to be unpleasant as the capacity for reflective self-awareness is lost? “Blind” panic induced by e.g. a sense of suffocation, or fleeing a fire in a crowded cinema (etc), is one of the most unpleasant experiences anyone can undergo, regardless of race or species. Also, compare microelectrode neural studies of awake subjects probing different brain regions; stimulating various regions of the “primitive” limbic system elicits the most intense experiences. And compare dreams – not least, nightmares – many of which are emotionally intense and characterised precisely by the lack of reflectivity or critical meta-cognitive capacity that we enjoy in waking life.

Anyone who cares about sentience-friendly intelligence should not harm our fellow subjects of experience. Shutting down factory farms and slaughterhouses will eliminate one of the world’s worst forms of severe and readily avoidable suffering.

Jai Dhyani: [Eliezer], how confident are you about this? I would like to bet on the proposition: “In 2050 you will not be willing to eat a cow-as-opposed-to-vat-grown steak in exchange for $50 inflation-adjusted USD.” I think there is at least a 10% chance of this occurring.

Eliezer: Jai Dhyani, if I was accustomed to vat-grown steak I probably would value that ethical combo continuing more than I valued $50.

Mason Hartman: I think you’ve laid the groundwork for a useful model of consciousness, but it’s not clear why you’ve arrived at the position that pigs probably don’t have it. It seems unlikely to me that only a very small handful of species have been or are “under selection pressure to develop sophisticated social models of conspecifics.” Basically, a little red flag quietly squeaks “Check whether this person actually knows much about animal social behavior!” whenever someone says something that seems to imply that humans are the pinnacle of social-animal-ness.

Also, I think consciousness probably “unlocks” a lot of value aside from better dinner conversation – predation, for example, seems like the sort of thing one might be better able to prevent with the ability to model minds, including one’s own mind.

Brent Dill: Incidentally, baboons have quite a rich set of social scheming instincts. All of this guy’s stuff is amazing.

Eliezer: I wouldn’t eat a BABOON. Eek.

Brent: Well, yeah. Eating primates is an excellent way to get really really sick.

Eliezer: They recognize themselves in mirrors! No eat mirror test passers! Anything which has the cognitive support for that could have an inner listener for all I currently know about inner listeners.

Mason: The mirror test has some other problems with it – the big one being that a lot of non-Western kids don’t pass it. I assume Kenyan 6-year-olds will remain off the menu.

William Eden: I am personally more inclined towards the panpsychism view, for various reasons, not least of which is Eliezer’s post, ironically enough.

I care about subjective experience apart from concepts of personhood. Imagine a superintelligence discovered humans, decided that we lacked some critical component of their cognition, and because of this they felt justified in taking us apart to be used as raw atoms, or experimented on in ways we found distressing.

Eliezer: Or even worse, they might not think that paperclips had ethical value? I think if you’re going to go around caring about conscious beings instead of trade partners, you need to accept that your utility function looks weird to a paperclip maximizer. Saying that you care about subjective experience already relegates you to a pretty weird corner of mindspace relative to all the agents that don’t have subjective experience and had different causal histories leading them to assign terminal value to various agent-like objects.

Brent: I think that all ‘pain’, in the sense of ‘inputs that cause an algorithm to change modes specifically to reduce the likelihood of receiving that input again’, is bad.

I think that ‘suffering’, in the sense of ‘loops that a self-referential algorithm gets into when confronted with pain that it cannot reduce the future likelihood of experiencing’, is what carries moral weight.

Social mammals experience much more suffering-per-unit-pain because they have so many layers of modeling built on top of the raw input – they experience the raw input, the model of themselves experiencing the input, the model of their abstracted social entity experiencing the input, the model of their future-self experiencing the input, the models constructed from all their prior linked memories experiencing the input… self-awareness adds extra layers of recursion even on top of this.

One thought that I should really explore further: I think that a strong indicator of ‘suffering’ as opposed to mere ‘pain’ is whether the entity in question attempts to comfort other entities that experience similar sensations. So if we see an animal that exhibits obvious comforting / grooming behavior in response to another animal’s distress, we should definitely pause before slaughtering it for food. The capacity to do so across species boundaries should give us further pause, as should the famed ‘mirror test’. (Note that ‘will comfort other beings showing distress’ is also a good signal for ‘might plausibly cooperate on moral concerns’, so double-win).

At that point, we have a kind of consciousness scorecard, rather than a pure binary ‘conscious / not conscious’ test.

Eliezer: I think you just said that a flywheel governor on a steam engine, or a bimetallic thermostat valve, can feel pain.

Brent: I did, and I intended to. “Pain”, in and of itself, is worthless. “Suffering” has moral weight. “Pain x consciousness = suffering”, and a flywheel governor just isn’t complex enough to be very conscious.

David Brin: See this volume about how altruism is actually very complicated. (I wrote two of the chapters.)

Mason: I have some other thoughts, assuming my understanding of Eliezer’s theory is fairly accurate. These are the claims I’m assuming he’s making:

(a) Meaningful consciousness (i.e. that which would allow experiences Eliezer would care about) in the animal kingdom is exceedingly rare.

(b) Animals only evolve meaningful consciousness when under selection pressure to develop sophisticated social models.

(c) The mirror test is a decent way to determine whether a species possesses, or has the capacity for, meaningful consciousness.

Assuming these claims were true, I’d make some predictions about the world that don’t seem to have been borne out, including:

(1) Given the rarity of meaningful consciousness, animals who pass the mirror test will likely be closely related to one another – but humans, elephants, dolphins, and magpies have all passed the mirror test at one point.

(2) Animals who pass the mirror test should be exceptionally social or descended from animals who are/were exceptionally social. I admittedly don’t know much about magpies, but a quick Google doesn’t seem to imply that they’re exceptionally more social than other birds.

Eliezer: I hadn’t heard of magpies before. African grey parrots are surprisingly smart and apparently play fairly complex games of adultery. Dolphins are the only known creatures besides humans that form coalitions of coalitions. Not sure what elephants do socially but they have large brains and the balance of selection pressures is plausibly affected thereby (i.e., it is more advantageous to have good brain software if you have a larger brain, and if you have a bigger body and brain improvements buy percentages of body-use then you can afford a bigger brain, etc.)

Mason: I think all of the test-passing species are pretty social (some of them possibly more so than us, depending on how you measure social-ness), but they don’t seem exceptionally so. Many, many animals play very complex social games – in my opinion (and much to my surprise) even chickens have fascinating social lives.

The question isn’t “Do we have evidence these animals are highly social?” but “Do we have evidence these animals tend to be social to an extent/in a way which other animals aren’t?”

Rob Bensinger: I don’t believe in phenomenal consciousness. I think if you try to put quotation marks around a patch of your visual field (e.g., by gesturing at ‘that patch of red in the lower-left quadrant of my visual field’), some of the core things included in your implicit intension will make the gesture non-referring. Asking about ‘but what am I really subjectively experiencing?’ is like going: ‘but what is my deja-vu experience really a repetition of?’ The error is trickier, though, because the erroneous metarepresentation is systematic and perception-like (like a hallucination or optical illusion, but one meta level up) rather than judgment-like (like a delusion or hunch).

Like Luke said, believing I’m a zombie in practice just means I value something functionally very similar to consciousness, ‘z-consciousness’. But ‘z-consciousness’ is first and foremost a (promissory note for a) straw-behaviorist, third-person theoretical concept. Thinking in those terms — starting with one box, some of whose (physical, neuronal, behavioral…) components I sometimes misdescribe as ‘mind’, rather than starting with separate conceptual boxes for ‘mind’ and ‘matter’ and trying to glue them together as tightly as possible — has been a really weird (z-)experience. It’s had some interesting effects on my intuitions over the past few years.

1. I’m much more globally skeptical that I can trust my introspection and metacognition.

2. Since (z-)consciousness isn’t a particularly unique kind of information-processing, I expect there to be an enormous number of ‘alien’ analogs of consciousness, things that are comparable to ‘first-person experience’ but don’t technically qualify as ‘conscious’.

3. I’m more inclined to think there are fuzzy ‘half-conscious’ and ‘quarter-conscious’ states in between z-consciousness and z-unconsciousness.

I entertained limited versions of those ideas in my non-eliminative youth, but they’re a lot more salient and personal to me now. And as a consequence of 1-3, I’m much more skeptical that (z-)consciousness is a normatively unique kind of information-processing. Since I think a completed neuroscience will overturn our model of mind fairly radically, and since humans have strong intuitions in favor of egalitarianism and symmetry, it wouldn’t surprise me if certain ‘unconscious’ states acquired the same moral status as ‘conscious’ ones.

The practical problem of deciding which alien minds morally ‘count’ will become acute as we explore transhuman/posthuman mindspace, but the principled problem is already acute; if we expect our ideal, more informed selves to dispense with locating all value in consciousness (or, equivalently, if we expect to locate all value in a bizarrely expansive conception of ‘consciousness’), we should do our best to already reflect that expectation in our ethics.

So, I’m with Eliezer in thinking pig pain isn’t just ‘human pain but simpler’, or ‘human pain but fainter’. But that doesn’t much reassure me: to the extent human-style consciousness is an extremely rare and conjunctive adaptation dependent on complex social modeling, I become that much less confident that that’s the only kind of information-processing I should be concerned for, on a Coherent-Extrapolated-Volition-style reading of ‘should’. My four big worries are:

(a) Pigs might still be ‘slightly’ conscious, if consciousness (ideally conceived) isn’t anywhere near as rare and black-and-white as our non-denoting folk concept of consciousness.

(b) If consciousness is rare and exclusive, that increases the likelihood that our CEV would terminally value some consciousness-like unconscious states. Perhaps the pig lacks a first-person perspective on reality, but has a shmerspective that is strange and beautiful and rich, and we ought ideally to abandon qualia-chauvinism and assign some value to shmuffering.

(c) Supposing our concept of ‘consciousness’ really does turn out to be incoherent unless you build in a ton of highly specific unconscious machinery ‘behind the scenes’, that further increases the likelihood that our CEV would come to care about something more general than consciousness, something that captures the ‘pain’ aspect of our experience in a way that can also apply to systems that are ‘free-floating pain’, pain sans subject. (Or sans a subject as specific and highly developed as a human subject.)

(d) Once we grant pigs might be moral patients, we also need to recognize that they may be utility monsters. E.g., if they’re conscious or quasi-conscious, they might be capable of much more acute (quasi-)suffering than ordinary humans are. (Perhaps they don’t process time in the way we do, so every second of pain is ‘stretched out’.) This may be unlikely, but it should get a big share of our attention because it would be especially bad.

I bounce back and forth between taking the revisionist posthuman consciousness-as-we-think-of-it-isn’t-such-a-big-deal perspective very seriously, and going ‘that’s CRAZY, it doesn’t add up to moral normality, there’s no God to make me endorse such a ridiculous “extrapolation” of my ideals, I’ll modus tollens any such nonsense till I’m blue in the face!’ But I’m not sure the idea of ‘adding up to moral normality’ makes philosophical sense. It may just be a soothing mantra.

We may just need to understand the principles behind consciousness and what-makes-us-value-consciousness on an especially deep level in order to avoid creating new kinds of moral patient; I don’t know whether avoiding neuromorphism, on its own, would completely eliminate this problem for AI.

David: Rob, you might want to explore (d) further. Children with autism have profound deficits of self-modelling as well as social cognition compared to neurotypical folk. So are profoundly autistic humans less intensely conscious than hyper-social people? In extreme cases, do the severely autistic lack consciousness altogether, as Eliezer’s conjecture would suggest? Perhaps compare the accumulating evidence for Henry Markram’s “Intense World” theory of autism.

Eliezer, I wish I could persuade you to quit eating meat – and urge everyone you influence to do likewise.

Kaj Sotala: Eliezer, besides the mirror test, what concrete functional criteria do you have in mind that would require the kind of processing that you think enables subjective experience? In other words, of what things can you say “behavior of kind X clearly requires the kind of cognitive reflectivity that I’m talking about, and it seems safe to assume that pigs don’t have such cognitive reflectivity because they do not exhibit any behaviors falling into that class”?

Also, it seems to me that this theory would imply that we aren’t actually conscious while dreaming, since we seem to lack self-modeling capability in (non-lucid) dreams. Is that correct?

I would also agree that “pigs have qualia, only simpler” seems wrong, but “pigs have qualia, only of fewer types” would seem more defensible. For example, they might lack the qualia for various kinds of sophisticated emotional pain, but their qualia for physical pain could be basically the same as that of humans. This would seem plausible in light of the fact that qualia probably have some functional role rather than being purely epiphenomenal, and avoiding tissue damage is a task with a long evolutionary history.

It would feel somewhat surprising if almost every human came equipped with the neural machinery that reliably correlated tissue damage with the qualia of physical pain, and the qualia for emotional pain looked like outgrowths of the mechanism that originally developed to communicate physical pain, but most of our evolutionary ancestors still wouldn’t have had the qualia for physical pain despite having the same functional roles for the mechanisms that communicate information about tissue damage.

Michael Vassar: My best guess is that there is moral subjectivity and possibly also moral objectivity, and that moral subjectivity works pretty similarly to Eliezer’s description here. However, moral subjectivity is a property of mirror test passers some of the time, but not very much of the time, and how much depends on the individual and varies over quite a large range. It probably also varies in intensity quite a lot, probably in a manner that isn’t simply correlated with its frequency of occurrence but probably is simply correlated with integrated information. It’s probably present in various types of collective. A FAI would probably have this. Neither a mob nor the members of the mob have it.

There’s also probably moral objectivity. Dreamers, pigs, flow-states, and torture victims have this, but most AGIs probably don’t. Most people, most of the time, do have it, but, e.g. certain meditators may not. It’s harder to characterize its properties. My best guess is that it’s a set of heuristics for pruning computational search trees. “measure” might refer to the same thing.

Luke: On the topic of Rob’s skepticism about introspection, see my post. I should also note that when I mentioned “integrated-y” above, I wasn’t endorsing IIT at all. I have the same reaction to IIT as Aaronson, and encouraged Aaronson to write his post on that subject.

Rob: If you’re skeptical about whether you’re conscious at times you aren’t thinking about consciousness (even though you can in many cases think back and remember those experiences later, and consider what their subjective character was like at the time — as you remember it — and learn new things, things which seem consistent with your subjective experiences at other times), it’s possible you should also be skeptical about whether you’re conscious at times you are thinking about consciousness. Especially when you’re merely remembering such times.

If you can misremember your mental state of five minutes ago as a conscious one, what specifically forbids your misremembering your mental state of five minutes ago as a conscious-of-consciousness one?

My own worries go in a different direction. I’m fine with the idea that I might be confabulating some of my experiences. I’m more concerned that large numbers of other subjects in my brain may be undergoing experiences as a mechanism for my undergoing an experience, or as a side-effect. What goes into the sausages?

Some people fear the possibility that anesthetics prevent memory-of-suffering, instead of preventing suffering. I’m a lot more worried that ordinary human cognition (e.g., ordinary long-term memory formation, or ordinary sleep) could involve large amounts of unrecorded ‘micro-suffering’. Subsystems of my own brain are mysterious to me, and I treat those subsystems with the same kind of moral uncertainty as I treat mysterious non-human brains.

Brent: Note that dogs have evolutionary pressure to express depression, because they have spent the last 40,000 years co-evolving with humans, being selected explicitly for their capacity to emulate human emotional communication and bonding.

Mason: Brent – I’m not convinced that dogs have ever been bred to emulate negative human emotions. Until recently, many breeds were used primarily for work. Many – e.g. herding breeds – don’t actually make very good companions unless made to work or perform simulated work (e.g. obedience/agility/herding trials) to manage their physical/intellectual needs. A dog that would cease to work as effectively during periods of emotional stress (e.g. by displaying symptoms of depression) would probably not be selected for. And yet these breeds are often the most expressive across the board (and the most capable of reacting to emotional expression in humans), as evidenced by their extensive use as therapy/emotional support animals.

It seems very likely to me that we either created actual emotional complexity through selective breeding, or that we just took animals that already had very complex emotional lives and bred for a communication style that was intuitive to us. If we had only been breeding for behavior that simulated emotional expression, we would probably have avoided behaviors that aren’t conducive to the work dogs have done throughout their history with humans. Keeping dogs primarily as pets is a very new thing.

Eric Schmidt: Responding to the initial post:

IMO [Eliezer’s] making a huge, unfounded leap in assuming that all qualia only arise in the presence of an intelligent “inner listener” for some mysterious reason. For all we know, you could engender qualia in a Petri dish. For all we know, there are fonts of qualia all around us. You are restricting your conception of qualia to the kind you are most familiar with and which you hold in highest regard: the feeling of human consciousness and intelligence, the feeling of your integrated sense of self. But if qualia can originate in a Petri dish, they could certainly originate in a pig. I used to think like you do when I was younger, but IMO it’s just an unfounded bias towards the familiar, towards ourselves. AFAIK, if you poke a pig w[ith] a pointy stick, some pain feelings will be engendered in the universe. No, no intelligent consciousness will be there to stare at them or reflect on them, and it won’t lead to the various sorts of other qualia that it can for humans (e.g. the qualia of noticing the pain, dwelling on it, remembering it, thinking about it, self pity, whatever), but AFAIK it’s still there in our universe, tarnishing it slightly (assuming pain is in some sense negatively valued and ought to be minimized).

On vegetarianism: Well, demand for meat keeps livestock species’ populations artificially way high. So as long as those livestock are living net-positive-qualia lives, then great. The more the better. (Aside: maybe [the] Fermi paradox is [because] earth is a farm for Gorlax who eats advanced civilizations in a single bite as an exquisite delicacy. I’m totally okay with that: the TV here is that good.) So I think eating slaughtered animals is fine, so long as the animals aren’t miserable. I’d like to see some data on that. In general, I’d like to see a consumer culture that pressured the meat industry to treat the animals decently, somehow assure us that they’re living net-positive-qualia lives.

EDIT: By “as far as I know”/”as far as we know” I mean that it hasn’t been disproven and there’s no compelling reason to believe it’s false.

Eliezer: Nobody knows what science doesn’t know. The correct form of the statement you just made is “For all I [Eric Schmidt] know, you could engender qualia in a Petri dish.”

My remaining confusion about consciousness does not permit that as an open possibility that could fit into the remaining confusion. I am not thinking about an intelligent being required to stare at the quale, and then experience other quales about being aware of things. I am saying that it is a confused state of mind, which I am now confident I have dissolved, to think that you could have a “simple quale” there in the first place. Those amazingly mysterious and confusing things you call qualia do not work the way your mind intuitively thinks they do, and you can’t have simple ones in a petri dish. Is it that impossible to think that this is something that someone else might know for sure, if they had dissolved some of their confusion about qualia?

Confusion isn’t like a solid estimating procedure that gives you broad credibility intervals and says that they can narrow no further without unobtainable info, like the reason I’m skeptical that Kurzweil can legitimately have narrow credibility intervals about when smarter-than-human AI shows up. Confusion means you’re doing something wrong, that somebody else could just as easily do right, and exhale a gentle puff of understanding that blows away your bewilderment like ashes in the wind. I’m confused about anthropics, which means that, in principle, I could read the correct explanation tomorrow and it could be three paragraphs long. You are confused about qualia; it’s fine if you don’t trust me to be less confused, but don’t tell other people what they’re not allowed to know about it.

To be clear, pigs having qualia does fit into remaining confusion; it requires some mixture of inner listeners being simpler than I thought and pigs having more reflectivity than I think. Improbable but not forbidden. Qualia in a petri dish, no.

In ethical terms, where society does not derogate X as socially unacceptable, and where naive utilitarianism says X is not the most important thing to worry about / that it is not worth spending on ~X, I apply a “plan on the mainline” heuristic to my deontology; it’s okay to do something deontologically correct on the mainline that is not socially forbidden and which is the best path according to naive utilitarianism. Chimps being conscious feels to me like it’s on the mainline; pigs being conscious feels to me like it’s off the mainline.

David: Eliezer, we’d both agree that acknowledged experts in a field can’t always be trusted. Yet each of us should take especial care in discounting expert views in cases where one has a clear conflict of interest. [I’m sure every meat eating reader hopes you’re right. For other reasons, I hope you’re right too.]

Eliezer: What do they think they know and how do they think they know it? If they’re saying “Here is how we think an inner listener functions, here is how we identified the associated brain functions, and here is how we found it in animals and that showed that it carries out the same functions” I would be quite impressed. What I expect to see is, “We found this area lights up when humans are sad. Look, pigs have it too.” Emotions are just plain simpler than inner listeners. I’d expect to see analogous brain areas in birds.

David: Eliezer, I and several other commentators raised what we see as substantive problems with your conjecture. I didn’t intend to rehash them – though I’ll certainly be very interested in your response. Rather, I was just urging you to step back and reassign your credences that you’re correct and the specialists in question are mistaken.

Eliezer: I consider myself a specialist on reflectivity and on the dissolution of certain types of confusion. I have no compunction about disagreeing with other alleged specialists on authority; any reasonable disagreement on the details will be evaluated as an object-level argument. From my perspective, I’m not seeing any, “No, this is a non-mysterious theory of qualia that says pigs are sentient…” and a lot of “How do you know it doesn’t…?” to which the only answer I can give is, “I may not be certain, but I’m not going to update my remaining ignorance on your claim to be even more ignorant, because you haven’t yet named a new possibility I haven’t considered, nor pointed out what I consider to be a new problem with the best interim theory, so you’re not giving me a new reason to further spread probability density.”

Mark P Xu Neyer: Do you think people without developed prefrontal cortices – such as children – have an inner listener?

Eliezer: I don’t know. It would not surprise me very much to learn that average children develop inner listeners at age six, nor that they develop them at age two, and I’m not an expert on developmental psychology nor a parent so I have a lot of uncertainty about how average children work and how much they vary. I would certainly be more shocked to discover that a newborn baby was sentient than that a cow was sentient.

Brian Tomasik: I wrote a few paragraphs partially as a response to this discussion. The summary is:

There are many attributes and abilities of a mind that one can consider important, and arguments about whether a given mind is conscious reflect different priorities among those in the discussion about which kinds of mental functions matter most. “Consciousness” is not one single thing; it’s a word used in many ways by many people, and what’s actually at issue is the question of which traits matter more than which other traits.

Also, this discusses why reflectivity may not be ethically essential:

In Scherer’s view, the monitoring process helps coordinate and organize the other systems. But then privileging it seems akin to suggesting that among a team of employees, only the leader who manages the others and watches their work has significance, and the workers themselves are irrelevant.[2] In any event, depending on how we define monitoring and coordination, these processes may happen at many levels, just like a corporate management pyramid has many layers.

Buck Shlegeris: BTW, Eliezer, AFAICT pigs have self awareness according to the mirror test: they only fail it because pigs don’t care if they have mud on their face. They are definitely aware that the pig in the mirror is not another pig. Is that enough uncertainty to not eat them?

From Wikipedia: “Pigs can use visual information seen in a mirror to find food, and show evidence of self-recognition when presented with their reflection. In an experiment, 7 of the 8 pigs tested were able to find a bowl of food hidden behind a wall and revealed using a mirror. The eighth pig looked behind the mirror for the food.[28]”

Eliezer: What I want to see is an entity not previously trained on mirrors, to realize that motions apparent in the mirror are astoundingly correlated to motions that it’s sending to the body, i.e., the aha! that I can control this figure in the mirror, therefore it is me. This seems to me to imply a self-model. If you train pigs to use a mirror generically in order to find food, then what you’re training them to do is control their visual imagination so as to take the mirror-info and normalize it into nearby-space info. This tells me that pigs have a visual imagination which is not very surprising since IIRC the back-projections from higher areas back to the visual cortex were already a known thing.

But if the pig can then map what it sees in the mirror onto its spatial model of surrounding space, and as a special case can identify things colocated in space with itself, you’ve basically trained the pig to ‘solve’ the mirror test via a different pathway that doesn’t need to go through having a self-model. I’m sorry if it seems like I’m moving the goalpost, but when I say “mirror test” I mean a spontaneous mirror test without previous training. There’s similarly a big difference between an AI program that spontaneously starts talking about consciousness and an AI program that researchers have carefully crafted to talk about consciousness. The whole point of the mirror test is to provoke (and check for) an aha! about how the image of yourself in the mirror is behaving like you control it; training on mirrors in general defeats this test.

Buck: I don’t quite buy your reasoning. Most importantly, the pigs are aware that the pig in the mirror is not a different pig. That seems like strong evidence of self awareness. One of the researchers said “We have no conclusive evidence of a sense of self, but you might well conclude that it is likely from our results.”

So pigs haven’t passed or failed the mirror test, but they seem aware that a pig in a mirror is not a different pig, and experts in the field seem to think pigs are likely to have self awareness.

And again, I think that the burden of proof is on the people who are saying that it’s fine to torture the things! Like, I’m only 70% sure that pigs are conscious. But that’s still enough that it’s insane to eat them.

(Also, when you say “an entity not previously trained on mirrors”: literally no species can immediately figure out what a mirror is. Even humans have to spend a while around them to figure them out, which is as much training as we give to pigs.)

Andres Gomez Emilsson: Harry would say: We must investigate consciousness and phenomenal binding empirically.

Let us take brain tissue in a petri dish, or something like that, and use bioengineered neural bridges between our brains and the petri dish culture to find out whether we can incite phenomenal binding of any sort. Try to connect different parts of your brain to the petri dish. E.g. what happens when you connect your motor areas to it, and what happens when you add synapses that go back to your visual field? If you are able to bind phenomenologies to your conscious experience via this method, try changing the chemical concentrations of various neurotransmitters in the petri dish. Etc.

This way we can create a platform that vastly increases the range of our possible empirical explorations.

Eliezer: Um, qualia are not epiphenomenal auras that contaminate objects in physical contact. If you hook up an electrical stimulator to your visual cortex, it can make you see color qualia (even if you’re blind). This is not because the electrical stimulator is injecting magical qualia in there. The petri dish, I predict with extreme confidence, would seem to you to produce exactly the same qualia as an electrical stimulator with the same input/output behavior. Unless you have discovered a new law of physics. Which you will not.

Andres: I’m thinking about increasing the size of the culture in the petri dish until I can show that phenomenal binding is happening by doing something I would not be able to do otherwise:

If I increase the size of my visual field by adding to my brain a sufficient number of neurons in a petri dish appropriately connected to it, I would be able to represent more information visually than I am capable of with my normal visual cortex.

This is thoroughly testable. And I would predict that you can indeed increase the information content of a given experience by this method.

Eliezer: You’re not addressing my central point that while hooking up something to an agent with an inner listener may create what that person regards as qualia, it doesn’t mean you can rip out the petri dish and it still has qualia in it.

David: A neocortex isn’t needed for consciousness, let alone self-consciousness. Perhaps compare demonstrably self-aware magpies, who (like all birds) lack a neocortex.

Eric, many more poor humans could be fed if corn and soya products were fed directly to people rather than to factory-farmed nonhuman animals we then butcher. Such is the thermodynamics of a food chain. Qualia and the binding problem? I’ve put a link below; but my views are probably too idiosyncratic to contribute usefully to the debate here.

Eliezer: The PLOS paper seems to be within the standard paradigm for mirror studies. Pending further confirmation or refutation this is a good reason not to eat magpies or other corvids.

David: Eliezer, faced with the non-negligible possibility that one might be catastrophically mistaken, isn’t there a powerful ethical case for playing safe? If one holds a view that most experts disagree with, e.g. in my case, I’m sceptical that a classical digital computer will ever be conscious, I’d surely do best to defer to consensus wisdom until I’m vindicated / confounded. Or do you regard the possibility that you are mistaken as vanishingly remote?

Francisco Boni Neto: Empathic brain-modeling architecture as a conditio sine qua non for an ‘inner listener’ that is a requirement for context-dependent qualia or what-it-is-likeness that-it-is-like-that-thing-to-be seems like an exaggeration of the “simulation” and “mirroring” theory of affective cognition, overstating top-bottom processes: “there must be a virtualization process, helped by socially complex networks of interactions that are positively selected in modern humans, so I can simulate other humans in my brain while I sustain my own self-modely thingy that adds a varied collection of qualia types in the mammalian phenotype that are lacking in other less complex phenotypes (e.g. pigs)”.

It seems like a very adaptationist thought that overstates the power and prevalence of natural selection towards certain recent branches of the phylogenetic tree against the neural reuse that makes ancient structures so robust and vital in processing vivid what-it-is-likeness, and that makes bottom-up processing so important despite the importance of the pre-frontal cortex in deliberative perceptual adaptation and affective processing. That is why I agree when David Pearce points out that many of our most intensely conscious experiences occur when meta-cognition or reflective self-awareness fails. Super vivid, hyper conscious experiences, phenomenally rich and deep experiences like lucid dreaming and ‘out-of-body’ experiences happen when higher structures responsible for top-bottom processing are suppressed. They lack realistic conviction, especially when you wake up, but they do feel intense and raw along the pain-pleasure axis.

Eliezer: It is impossible to understand my position or pass any sort of Ideological Turing Test on it if you insist on translating it into a hypothesis about some property of hominids like “advanced reflectivity” which for mysterious reasons is the cause of some mysterious quales being present in hominids but not other mammals. As long as qualia are a mysterious substance in the version of my hypothesis you are trying to imagine, of course you will see no motivation but politics for saying the mysterious substance is not in pigs, when for all you know, it could be in lizards, or trapped in the very stones, and those unable to outwardly express it.

This state of confusion is, of course, the whole motivation for trying to think in ways that don’t invoke mysterious qualia as primitive things. It is no use to say what causes them. Anything can be said to cause a mysterious and unreduced property, whether “reflectivity” or “neural emergence” or “God’s blessing”. Progress is only made if you can plausibly disassemble the inner listener. I am claiming to have done this to a significant extent and ended up with a parts list that is actually pretty complicated and involves parts not found in pigs, though very plausibly in chimpanzees, or even dolphins. If you deliberately refuse to imagine this state of mind and insist on imagining it as the hypothesis that these parts merely cause an inner listener by mysteriousness, then you will be unable to understand the position you are arguing with and you will not be able to argue against it effectively to those who do not already disbelieve it.

David: Eliezer, I’ve read Good and Real, agree with you on topics as varied as Everett and Bayesian rationalism; but I still don’t “get” your theory of consciousness. For example, a human undergoing a state of blind uncontrollable panic is no more capable of reflective self-awareness or any other form of meta-cognition than a panicking pig. The same neurotransmitter systems, same neurological pathways and same behavioural responses are involved in the panic response in both pigs and humans. So why is the human in a ghastly state of consciousness but the pig is just an insentient automaton?

Eliezer: One person’s modus ponens is another’s modus tollens: I’m not totally sure people in sufficiently unreflective flow-like states are conscious, and I give serious consideration to the proposition that I am reflective enough for consciousness only during the moments I happen to wonder whether I am conscious. This is not where most of my probability mass lies, but it’s on the table. I think I would be equally surprised to find monkeys conscious, or people in flow states nonsentient.

Daniel Powell: If there existed an entity which was intermittently conscious, would the ethics of interacting with it depend on whether it was conscious at the moment?

What about an entity that has never been conscious, but might become so in the future – for example, the uncompiled code for a general intelligence, or an unconceived homo sapiens?

I’m having a hard time establishing what general principle I could use that says it is wrong to harm a self-aware entity, but not wrong to harm a creature that isn’t self-aware, but wrong to cause “an entity which used to be self-aware and currently isn’t, but status quo will be again” to never again be self aware, but not wrong to cause a complex system that has the possibility to create a self-aware entity not to do so.

I thought I held all four of those beliefs from simple premises, but now I doubt whether my intuitive sense of right and wrong cares about sentience at all.

Eliezer: Daniel, I think the obvious stance would be that having unhappy memories is potentially detrimental / causes suffering, or that traumatic experiences while nonsentient can produce suffering observer-moments later. So I would be very averse to anyone producing pain in a newborn baby, even though I’d be truly shocked (like, fairies-in-the-garden shocked) to find them sentient, because I worry that doing so might lose utility in future sentient-moments later.

Carl Shulman: I find [Eliezer] unconvincing for a few reasons:

1. Behavioral demonstration of self-models along the lines of the mirror test is overkill if you’re looking for the presence of some kind of reflective “thinking about sense inputs/thoughts”; the standard mirror test requires additional processing and motivation, beyond that, to pass the standard presentation. So one should expect that creatures for which there isn’t yet a wikipedia mirror test entry could also pass idealized tests for such reflective processing, including somewhat less capable relatives of those who pass. [Edit: see variation among human cultures on this, even at 6 years old, and the dependence on grooming behavior and exposure to mirrors, as discussed in the SciAm article here.]

2. There is a high degree of continuity in neural architecture and capacities as one moves about the animal kingdom. Piling on additional neural resources to a capacity can increase it, but often with diminishing returns (even honeybees with a million neurons can do some neat tricks). If one allows consciousness for magpies and crows and ravens, you should expect with fairly high probability that some less impressive (or harder to elicit/motivate/test) versions of those capacities are present in other birds, such as chickens.

3. You haven’t offered neuroscience or cognitive science backing for claims about the underlying systems, just behavioral evidence via the mirror test. The claim that other animals don’t have weaker versions of the mechanisms enabling passage of the mirror test, or capable of reflective thought about sense inputs/reinforcement, is one subject to neuroscience methods. You don’t seem to have looked into the relevant neuroscience or behavioral work in any depth.

4. The total identification of moral value with reflected-on processes, or access-conscious (for speech) processes, seems questionable to me. Pleasure which is not reflected on or noticed in any access-conscious way can still condition and reinforce. Say sleeping in a particular place induced strong reinforcement, which was not access-conscious, so that I learned a powerful desire to sleep there, and would not want to lose that desire. I would not say that such a desire is automatically mistaken, simply because the reward is not access-conscious.

5. Related to 4), I don’t see you presenting great evidence that the information processing reflecting on sense inputs (pattern recognition, causal models, etc) is so different in structure.

“Now, a study published September 9 in The Journal of Cross-Cultural Psychology is reinforcing that idea and taking it further. Not only do non-Western kids fail to pass the mirror self-recognition test by 24 months—in some countries, they still are not succeeding at six years old.

What does it mean? Are kids in places like Fiji and Kenya really unable to figure out a mirror? Do these children lack the ability to psychologically separate themselves from other humans? Not likely. Instead researchers say these results point to long-standing debates about what counts as mirror self-recognition, and how results of the test ought to be interpreted.” (link)

Eliezer: More seriously, the problem from my perspective isn’t that I’m confident of my analysis, it’s that it looks to me like any analysis would probably point in the same direction — it would give you a parts list for an inner listener. Whereas what I keep hearing from the opposing position is “Qualia are primitive, so anything that screams probably has pain.” What I need to hear to be persuaded is, “Here is a different parts list for a non-mysterious inner listener. Look, pigs have these parts.” I don’t put any particular weight on prestigious people saying things if this does not appear to be the form of what they’re saying — I would put significant weight on Gary Drescher (who does know what cognitive reductionist philosophy looks like) private-messaging me with, “Eliezer, I did work out my own parts list for inner listeners and I’ve studied pigs more than you have and I do think they’re conscious.”

Carl: Quote from [Stanford Encyclopedia of Philosophy] summarizing some of my objection to Eliezer’s use of the mirror test and restrictive [higher-order theory] views above:

“In contrast to Carruthers’ higher-order thought account of sentience, other theorists such as Armstrong (1980), and Lycan (1996) have preferred a higher-order experience account, where consciousness is explained in terms of inner perception of mental states, a view that can be traced back to Aristotle, and also to John Locke. Because such models do not require the ability to conceptualize mental states, proponents of higher-order experience theories have been slightly more inclined than higher-order theorists to allow that such abilities may be found in other animals.”

Robert Wiblin: [Eliezer], it’s possible that what you are referring to as an ‘inner listener’ is necessary for subjective experience, and that this happened to be added by evolution just before the human line. It’s also possible that consciousness is primitive and everything is conscious to some extent. But why have the prior that almost all non-human animals are not conscious and lack those parts until someone brings you evidence to the contrary (i.e. “What I need to hear to be persuaded is,”)? That just cannot be rational.

You should simply say that you are a) uncertain what causes consciousness, because really nobody knows yet, and b) you don’t know if e.g. pigs have the things that are proposed as being necessary for consciousness, because you haven’t really looked into it.

Carl: Seems to me that Eliezer is just strongly backing a Hofstadter-esque HOT [higher-order theory] view. HOT views are a major school of thought among physicalist accounts of consciousness. Objections should be along the lines of the philosophical debate about HOT theories, about how much credence to give them and importance under uncertainty about those theories, and about implementation of the HOT-relevant (or Higher-Order-Experience relevant) properties in different animals.

BTW, Eliezer, Hofstadter thinks dogs are conscious in this HOT way, which would presumably also cover chickens and cows and pigs (chickens related to corvids, pigs with their high intelligence, cows still reasonably capable if uninterested in performing and with big brains).

“Doug [Hofstadter], on the other hand, has a theory of the self, and thinks that this is just the same as talking about consciousness. Note that this concern with consciousness is not the same concern as whether there is a “subject” that “has” experiences over and above the public self; you can believe that talk of consciousness is irreducible to talk of the built self without thereby positing some different, higher self that is the […]”
