Zombie Fred

A neat zombie post from Barry Arrington (thanks, Barry! I do appreciate, and this is without snark, the succinctness and articulacy of your posts – they encapsulate your ideas extremely cogently, and thus make it much easier for me to see just how and why I disagree with you!)

Barry writes:

In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being. Fred eats, drinks, converses, laughs, cries, etc. exactly like a human being, but he is in fact a biological robot with no subjective consciousness at all. The point of the thought experiment is that I can experience only my own consciousness. Therefore, I can be certain only of my own consciousness. I have to take Fred’s word for his consciousness, and if Fred is in fact a robot programed to lie to me and tell me he is conscious, there is no way I could know he is lying. Here’s the kicker. With respect to any particular person, everyone else in the world may in fact be a zombie Fred, and if they were that person would never be able to know. I may assume that everyone else is conscious, but I cannot know it. I can experience my own consciousness but no other person’s.

Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection. Nature would not have done it that way.

Where does this get us? It is hard to say. At the very least, it seems to me that the next time an anti-ID person employs the “God would not have done it that way” argument, I can respond with “And nature wouldn’t have either so where does that leave us?” response.


If I can attempt to summarise this even more succinctly than Barry has done:

  1. A zombie robot (Fred) would, by definition, behave exactly like a conscious person, and thus be indistinguishable from a conscious person.
  2. Therefore consciousness does not make any detectable difference to behaviour.
  3. Therefore consciousness cannot help a person survive.
  4. Therefore it cannot have evolved.

If Barry reads this and thinks I have misunderstood him, I would welcome correction, either here (where he has OP posting permissions) or at UD (which I will check periodically).

OK, well, here goes: if consciousness, as per Barry’s hypothetical, makes absolutely no difference to the behaviour of the person (I don’t mind if Fred looks like a robot, but it must behave like a person), then Fred should do the following:

  • If I make a sudden unexpected noise, Fred should startle, and look around to see what is happening.
  • If I whisper to Fred, it should come closer in order to hear more clearly.
  • If it doesn’t understand me, it should ask me to repeat or rephrase.
  • If it finds itself short of battery power, but also in danger of being struck by lightning, it ought to be able to weigh up which is more risky: risking a battery outage by waiting for the storm to pass over, or risking a lightning strike by heading straight to the charging point.
  • If I ask it to go to the shop and buy me something nice for supper, but not too fancy, it should be able to find its way there, check the shelves for some things it thinks I might like, weigh up what I might think looks too fancy, pick something, maybe spot the chocolates I like on the way out (hey, a girl can dream), and decide to pay for them out of its own money as a gift, and return home with a smile, explaining what it had selected and why, then surprise me with the chocolates.
  • If it reads a story about a hurricane in the Philippines, it should get on to the internet and donate as much money as it thinks it can afford, leaving itself enough to pay for its annual service and recharging fees.
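The battery-versus-lightning dilemma above is, at bottom, an expected-cost comparison between two risky options. Here is a minimal sketch of that kind of weighing-up; all probabilities and cost figures are invented for illustration, not taken from the post:

```python
# Illustrative expected-cost comparison for Fred's storm dilemma.
# Every number here is an assumption made up for the example.

def expected_cost(p_bad: float, cost_bad: float) -> float:
    """Expected cost of an option with a single salient risk."""
    return p_bad * cost_bad

# Option A: wait out the storm, risking the battery running flat.
wait = expected_cost(p_bad=0.30, cost_bad=100.0)

# Option B: head for the charging point now, risking a lightning strike.
go = expected_cost(p_bad=0.01, cost_bad=10_000.0)

choice = "wait for the storm to pass" if wait < go else "go to the charger"
print(f"wait: {wait:.1f}, go: {go:.1f} -> {choice}")
```

The point is only that the behaviour demanded of Fred involves genuine anticipation of contingencies, not a fixed stimulus-response rule: change the probabilities or the stakes and the better choice flips.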


In other words, Fred has to be able to:

  • React appropriately to unexpected danger signals.
  • Recognise when it needs to take action (e.g. move closer) in order to gain relevant information.
  • Recognise when information is insufficient, and seek clarification.
  • Make decisions that involve anticipating future events and contingencies, and weighing up the least bad of two poor options to avoid serious trouble.
  • Understand non-specific instructions, plan a strategy to fulfill someone else’s goal, weigh up what someone else would decide in the same circumstances, conceive of a novel course of action in order to please another person, and carry it out.  Signal apparent pleasure at having been able to please that person.
  • React to information about people’s distress by conceiving of a course of action that will alleviate it.

If Fred were truly able to do all these things, and more – involving anticipation, choice of strategy, weighing up immediate versus distant goals and deciding on a course of action that would best bring about the chosen goal, being able to anticipate another person’s wishes and regard fulfilling them as a goal worth pursuing (i.e. worth spending energy on), and being able to anticipate another person’s needs and regard alleviating those needs as equally worthwhile – my question is: what is Fred NOT “conscious of” that we would regard a “conscious” person as being “conscious” of?

What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act? What is recognising that information is lacking, if not a metacognitive examination of the state of self-knowledge? What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person? What is wanting to please or help another person if not the capacity to recognise in another being the kinds of needs (recharging? servicing?) that mandate your own actions?

In other words, what is consciousness, if not these very capacities?  And if consciousness is these very capacities, then why should they not evolve?  They are certainly likely to promote successful survival and reproduction.

In other words, I think the premise of the argument breaks down on examination.  I think that human behaviour is what it is because we are conscious of precisely these things.  A human being who is not capable of being aware of an alarming sound, of seeking out further information in response to an interesting stimulus, of anticipating her own needs, of making decisions on her own behalf or on behalf of another person, of responding to another’s needs, would be, well, unconscious. Asleep. Comatose.

Trying to divorce behaviour from consciousness is, I suggest, fundamentally incoherent – consciousness is intimately related (as heads are to tails on a coin, heh) to decision-making, and decision-making involves action, even if that action is merely the moving of an eyeball to a new fixation, in order to gain new relevant sensory information. We are not computers, and nor are robots – the thing about robots is that, like us, they move – they act. Sure, humans can, tragically, be both immobile and conscious, and it is a major medical challenge to find out whether a person is conscious if they cannot physically act. But that is because the way a person acts is a major clue to whether they are conscious. And, interestingly, the most promising ways of using brain imaging to communicate with “locked in” patients are to get them to imagine actions. Even when we are physically immobilised, the brain mechanisms involved in action – in decision-making – can be completely intact.

That doesn’t mean I think that conscious man-made robots are possible. I think life is way too complicated for mere humans (mere intelligent designers :)) to fabricate. If we ever do make “artificial” intelligent beings, I think we will have to use some kind of evolutionary program. Indeed, brains themselves work on a kind of speeded-up “neural Darwinism” in which successful brain patterns repeat and unsuccessful ones are extinguished (Hebb’s rule: “what fires together, wires together”). Which is why, incidentally, in at least one sense I am an “intelligent design” proponent – I do think that life is designed by a system that closely resembles human intelligence (although it differs from it in some key respects), namely evolutionary processes.
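The Hebbian idea mentioned above can be sketched as a simple weight update between co-active units. This is a toy illustration of the rule, with learning and decay rates picked arbitrarily; it is not a model of real neurons:

```python
# Toy Hebbian update: connections between units that are active together
# get strengthened ("what fires together, wires together").
# A small decay term keeps the weights from growing without bound.

def hebbian_step(w, pre, post, lr=0.1, decay=0.01):
    """One Hebbian update of the weight matrix w (pre-units x post-units)."""
    return [
        [wij + lr * pre[i] * post[j] - decay * wij
         for j, wij in enumerate(row)]
        for i, row in enumerate(w)
    ]

# Two input units, two output units; start with all-zero weights.
w = [[0.0, 0.0], [0.0, 0.0]]
pre, post = [1.0, 0.0], [1.0, 0.0]  # only unit 0 fires on each side

for _ in range(5):
    w = hebbian_step(w, pre, post)

# Only the co-active pair (0, 0) has been strengthened; the rest stay at zero.
print(w)
```

The “extinguished” half of the rule shows up in the decay term: a connection that is never reinforced by co-activity shrinks back towards zero.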

But my point in this post is simply to argue that:

  1. If a zombie robot (Fred) behaved exactly like a conscious person, to the point of being indistinguishable from a conscious person,
  2. Fred would necessarily be as conscious as a conscious person
  3. because consciousness is intrinsic to strategic, planned decision-making, anticipation of the actions of others, and selection of action in order to maximise the probability of achieving proximal or distal goals, and thus extremely helpful to survival,
  4. And thus is highly likely to have evolved.

114 thoughts on “Zombie Fred”

  1. Apparently Thomas Nagel’s question “What is it like to be a bat?” is thought to be an interesting philosophical question. Has philosophy produced any interesting answers? Are these all aspects of the same thing? That the soul is or isn’t imaginary and that aspects of consciousness are or aren’t possible to comprehend without imagining an immaterial soul. Inquiring minds wish to know!

    ETA

    From Wikipedia paraphrasing (dumbing down?) Nagel’s argument:

    Objectivity can make sense of imagining what it is like to be a bat, but one cannot be completely unbiased because we are limited to only what we know. This in turn brings back the idea of subjectivity; we can only be sure of our own experiences.

    If this is a fair summary, it’s hard to disagree. If Nagel is actually a subjectivist, why do ID proponents like his arguments?

  2. Alan Fox:
    Apparently Thomas Nagel’s question “What is it like to be a bat?” is thought to be an interesting philosophical question. Has philosophy produced any interesting answers?

    I think it is an appearance versus reality issue. We can say a table has two objective appearances: the everyday appearance which different people agree on, versus a scientific reality according to physics. Here most people are comfortable that science can address all the ordinary properties people talk about.

    Nagel is saying qualia are different. Because they are subjective, ie first-person, there is something about the appearance of qualia – that is, the first-person experience – which will never be addressed in the objective scientific description.

    One can argue (effectively, I think) against the zombie argument by justifying why such things are inconceivable or, if conceivable, not possible. Inverted spectrum arguments are more challenging. Could two people with exactly the same physical structure be looking at the same object, behave the same way towards it, but still be having different color experiences? If so, there is something about vision and color that science cannot capture.

    Note that simple inversion (eg we both look at something that I experience as red but you experience what I experience as green) is not physically possible because of the asymmetric way people physically process and hence behaviorally react to colors. But it may be possible to specify spectrum inversions that are compatible with people’s physical color processing and hence behavior.

    On the surface, spectrum inversion seems harder to dismiss as inconceivable or impossible the way zombies are.

    Philosophers who try to account for neuroscience will spend time explaining why their solution to the relation between appearance and reality solves the spectrum inversion concern.

    I agree with you that such arguments can be hard to follow, often because they assume a technical knowledge of philosophy.

    But another issue, I think, is that the very problem they are trying to solve is how to give an operational definition to something which is not obviously measurable by science. So the usual standards we might be looking for in defining the terms of interest may not apply.

    From Wikipedia paraphrasing (dumbing down?) Nagel’s argument:

    If this is a fair summary, it’s hard to disagree. If Nagel is actually a subjectivist, why do ID proponents like his arguments?

    I don’t think they follow the details. I think they like the general concept that there are aspects of reality beyond science, at least science as we now understand it.

    I’ve only read the reviews, but I think his latest book also says evolution cannot be complete until it allows for true purpose in the physical universe (Nagel is an atheist). So that would resonate with ID proponents as well, I think.

  3. BruceS,

    Thanks for the response BruceS. I hope I’ll have time to respond in more detail later but can I just pick up on this:

    But another issue, I think, is that the very problem they are trying to solve is how to give an operational definition to something which is not obviously measurable by science. So the usual standards we might be looking for in defining the terms of interest may not apply.

    At first sight I see a non sequitur. I’m quite happy not to insist on an operational definition. I’m just looking for what you might find in a set called “Qualia”. My suspicion is that there is a hint of reification. Barry bases his argument on something (qualia) that only holds if qualia are real.

  4. Alan Fox: Apparently Thomas Nagel’s question “What is it like to be a bat?” is thought to be an interesting philosophical question.

    I see Nagel as a mysterian; I see him as making a mystery where there is none. But then I’m not a philosopher (except in the generic sense that we are all philosophers).

    So I ask a different question. I ask “What is it like to be me?”. And I think neither you nor Nagel can answer that question, either. And even I cannot answer that question. So it must be a bogus question. And if that’s a bogus question, so is the bat question.

  5. Alan Fox:
    BruceS,

    At first sight I see a non sequitur. I’m quite happy not to insist on an operational definition. I’m just looking for what you might find in a set called “Qualia”. My suspicion is that there is a hint of reification. Barry bases his argument on something (qualia) that only holds if qualia are real.

    I can only say that not being able to define something precisely is not the same thing as saying it does not exist.

    Doesn’t food “taste” different to you with salt? Doesn’t a brown table “look” different from the same table painted red?
    For these reasons I think qualia are real in some way for me. I don’t believe you (or anyone else) is a zombie, so I think they are real for you in the same way.

    I also agree it is easier to think about differences in how one experiences things rather than try to puzzle out what is meant by saying simply that one is an experiencer (eg what it is like to be me).

    Is there something about the subjective difference between experiencing red and experiencing green that is beyond science, since science deals only with objective facts regarding wavelengths, rods and cones, neurons, neurochemicals, etc? Or is subjective experience another way of talking about certain physical events in the real world that science addresses?

    I think that is what the arguments are about.

    The usual analogies are comparing heat to molecular motion/energy or comparing life to biochemical structures and processes. These are a start, but since both sides of the comparison are objective, they don’t capture the whole situation.

  6. BruceS: Doesn’t food “taste” different to you with salt? Doesn’t a brown table “look” different from the same table painted red?
    For these reasons I think qualia are real in some way for me.

    You are jumping (leaping?) from the ability to discriminate to the existence of qualia. Yet, by definition, a zombie has the same ability to discriminate.

    Yes, it is reification.

  7. Neil Rickert: You are jumping (leaping?) from the ability to discriminate to the existence of qualia. Yet, by definition, a zombie has the same ability to discriminate.
    Yes, it is reification.

    In saying “discriminate” I was not referring to the objectively observable: behavior or fMRI readings for example, but rather trying to appeal to the reader’s experience to provide a limited definition of qualia by this example. I am assuming the reader is not a zombie.

    You and Alan use the word reification but I have to admit I don’t understand how it applies. I take reification as a fallacy to mean incorrectly giving something the properties of a physical thing when that something is an abstraction. But since we are trying to decide what kind of thing a qualia is, and whether it might be neither a physical thing nor an abstraction, I don’t see how the term helps. Unless your definition of abstraction includes the “raw feels” that qualia are supposed to refer to.

  8. BruceS: I take reification as a fallacy to mean incorrectly giving something the properties of a physical thing when that something is an abstraction. But since we are trying to decide what kind of thing a qualia is, and whether it might be neither a physical thing nor an abstraction, I don’t see how the term helps.

    My view is that qualia — whatever they are supposed to be — lack thingness. People give it a name which suggests that it is an objective entity, and then expect an objective reductionist account. But maybe whatever they are trying to talk about is unavoidably subjective, so we should not try to treat it as if objective.

  9. Coincidentally, I spent the last six hours pretending to be a zombie and pursuing students across campus. This activity really brings home the usefulness of being able to model and predict the behavior of others. For example, while even a very simple organism may know if it is seeing me, an additional level of processing is needed for it to know whether I can see it–and this is a very useful thing to know. Similarly, it is one thing to notice that I am not behaving like a predator (i.e. not chasing) and another level of complexity to wonder if I am pretending in order to get closer. (The activity of pretending also seems quite complex–on some level it has to involve modeling the prey’s expectations so as to deceive them. I know that expected behavior for a hunter is to look at the prey, so I approach with eyes averted. If I didn’t know what the prey expected, I might do the wrong thing.)

    My best guess about the evolution of consciousness is that it arises from the capacity to model other individuals’ behavior. This requires a lot of processing power and wetware. Once the processing power and wetware are available, it might be relatively selectively irrelevant whether the organism uses them to model its own behavior or not–but why shouldn’t it?

    Humans have a great many capabilities which it’s not clear we need. I posit that any very capable and versatile natural intelligence will in fact have a great many capabilities which it doesn’t need, because they come “for free” with the complexity and the intelligence. The mystery would be if the organism did *not* have any emergent, selectively neutral capabilities, because that’s not how complex systems naturally behave.

  10. BruceS

    I was picking up on this part of your earlier comment:

    …an operational definition to something which is not obviously measurable by science.

    and my perception that Barry was arguing his case based on the assumption that qualia are a done deal. I think it is premature to talk about qualia as if they have some scientific significance. Personally, I don’t see the concept has much use beyond philosophy. I’m also prejudiced against the abuse of the word intelligence but that’s probably my cognitive bias showing.

  11. Neil Rickert: My view is that qualia — whatever they are supposed to be — lack thingness. People give it a name which suggests that it is an objective entity, and then expect an objective reductionist account. But maybe whatever they are trying to talk about is unavoidably subjective, so we should not try to treat it as if objective.

    Fair enough.

    My view is this: neuroscience will show that a given subjective statement about qualia is talking about the same thing as some to-be-determined objective statement about organism states.

    Alan Fox:
    BruceS

    I was picking up on this part of your earlier comment:

    and my perception that Barry was arguing his case based on the assumption that qualia are a done deal. I think it is premature to talk about qualia as if they have some scientific significance. Personally, I don’t see the concept has much use beyond philosophy. I’m also prejudiced against the abuse of the word intelligence but that’s probably my cognitive bias showing.

    I guess it comes down to whether you think qualia are things that merit scientific explanation. The neuroscientists who write popularizations seem to think so. But I’ve also seen claims (by a philosopher) that most neuroscientists treat qualia as epiphenomenal — they may exist but for research purposes they don’t cause anything that could affect the outcome of experiments.

    ETA: Since Blas has not shown up in this thread, I feel duty bound to add that if you claim that qualia have nothing to do with scientific explanation, you may be close to accepting the zombie premise of BA’s argument.

  12. mkkuhner:

    My best guess about the evolution of consciousness is that it arises from the capacity to model other individuals’ behavior.

    In his book Consciousness and the Social Brain, Graziano makes a similar claim.

    But he focuses on modelling attention.

    Very roughly: First animals developed the ability to model the attention of others (eg for protection from predators or as part of social behavior). If that capability to model attention was then applied to a model of self, consciousness of some sort resulted.

  13. BruceS: I guess it comes down to whether you think qualia are things that merit scientific explanation.

    I think I already used the word premature. I cannot assess or predict whether scientific research may one day benefit from the concept of qualia. I think that a first step would be an attempt at some kind of working definition.

  14. BruceS: Very roughly: First animals developed the ability to model the attention of others (eg for protection from predators or as part of social behavior). If that capability to model attention was then applied to a model of self, consciousness of some sort resulted.

    Evidence of predators (with excellent eyesight) and prey (with formidable armour) exists in the Cambrian period. These organisms were having sex as well.
