Zombie Fred

A neat zombie post from Barry Arrington (thanks, Barry! I do appreciate, and this is without snark, the succinctness and articulacy of your posts – they encapsulate your ideas extremely cogently, and thus make it much easier for me to see just how and why I disagree with you!)

Barry writes:

In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being. Fred eats, drinks, converses, laughs, cries, etc. exactly like a human being, but he is in fact a biological robot with no subjective consciousness at all. The point of the thought experiment is that I can experience only my own consciousness. Therefore, I can be certain only of my own consciousness. I have to take Fred’s word for his consciousness, and if Fred is in fact a robot programed to lie to me and tell me he is conscious, there is no way I could know he is lying. Here’s the kicker. With respect to any particular person, everyone else in the world may in fact be a zombie Fred, and if they were that person would never be able to know. I may assume that everyone else is conscious, but I cannot know it. I can experience my own consciousness but no other person’s.

Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection. Nature would not have done it that way.

Where does this get us? It is hard to say. At the very least, it seems to me that the next time an anti-ID person employs the “God would not have done it that way” argument, I can respond with “And nature wouldn’t have either so where does that leave us?” response.


If I can attempt to summarise this even more succinctly than Barry has done:

  1. A zombie robot (Fred) would, by definition, behave exactly like a conscious person, and thus be indistinguishable from a conscious person.
  2. Therefore consciousness does not make any detectable difference to behaviour.
  3. Therefore consciousness cannot help a person survive.
  4. Therefore it cannot have evolved.

If Barry reads this and thinks I have misunderstood him, I would welcome correction, either here (where he has OP posting permissions) or at UD (which I will check periodically).

OK, well, here goes: if consciousness, as per Barry’s hypothetical, makes absolutely no difference to the behaviour of the person (I don’t mind if Fred looks like a robot, but it must behave like a person), then Fred should do the following:

  • If I make a sudden unexpected noise, Fred should startle, and look around to see what is happening.
  • If I whisper to Fred, it should come closer in order to hear more clearly.
  • If it doesn’t understand me, it should ask me to repeat or rephrase.
  • If it finds itself short of battery power, but also in danger of being struck by lightning, it ought to be able to weigh up which is more risky: risking a battery outage by waiting for the storm to pass over, or risking a lightning strike by heading straight to the charging point.
  • If I ask it to go to the shop and buy me something nice for supper, but not too fancy, it should be able to find its way there, check the shelves for some things it thinks I might like, weigh up what I might think looks too fancy, pick something, maybe spot the chocolates I like on the way out (hey, a girl can dream), and decide to pay for them out of its own money as a gift, and return home with a smile, explaining what it had selected and why, then surprise me with the chocolates.
  • If it reads a story about a hurricane in the Philippines, it should get on to the internet and donate some money – as much as it thinks it can afford while still leaving enough to pay for its annual service and recharging fees.


In other words, Fred has to be able to:

  • React appropriately to unexpected danger signals.
  • Recognise when it needs to take action (e.g. move closer) in order to gain relevant information.
  • Recognise when information is insufficient, and seek clarification.
  • Make decisions that involve anticipating future events and contingencies, and weighing up the least bad of two poor options to avoid serious trouble.
  • Understand non-specific instructions, plan a strategy to fulfill someone else’s goal, weigh up what someone else would decide in the same circumstances, conceive of a novel course of action in order to please another person, and carry it out.  Signal apparent pleasure at having been able to please that person.
  • React to information about people’s distress by conceiving of a course of action that will alleviate it.

If Fred were truly able to do all these things, and more – involving anticipation, choice of strategy, weighing up immediate versus distant goals and deciding on a course of action that would best bring about the chosen goal, being able to anticipate another person’s wishes and regard fulfilling them as a goal worth pursuing (i.e. worth spending energy on), being able to anticipate another person’s needs and regard alleviating them as equally worth pursuing – my question is: what is Fred NOT “conscious of” that we would regard a “conscious” person as being “conscious” of?

What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act? What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge? What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person? What is wanting to please or help another person if not the capacity to recognise in another being the kinds of needs (recharging? servicing?) that mandate your own actions?

In other words, what is consciousness, if not these very capacities?  And if consciousness is these very capacities, then why should they not evolve?  They are certainly likely to promote successful survival and reproduction.

In other words, I think the premise of the argument breaks down on examination.  I think that human behaviour is what it is because we are conscious of precisely these things.  A human being who is not capable of being aware of an alarming sound, of seeking out further information in response to an interesting stimulus, of anticipating her own needs, of making decisions on her own behalf or on behalf of another person, of responding to another’s needs, would be, well, unconscious. Asleep. Comatose.

Trying to divorce behaviour from consciousness is, I suggest, fundamentally incoherent – consciousness is intimately related (as heads are to tails on a coin, heh) to decision-making, and decision-making involves action, even if that action is merely the moving of an eyeball to a new fixation, in order to gain new relevant sensory information. We are not computers, and nor are robots – the thing about robots is that, like us, they move – they act. Sure, humans can, tragically, be both immobile and conscious, and it is a major medical challenge to find out whether a person is conscious if they cannot physically act. But that is because the way a person acts is a major clue to whether they are conscious. And, interestingly, the most promising way of using brain imaging to communicate with “locked in” patients is to get them to imagine actions. Even when we are physically immobilised, the brain mechanisms involved in action – in decision-making – can be completely intact.

That doesn’t mean I think that conscious man-made robots are possible. I think life is way too complicated for mere humans (mere intelligent designers :)) to fabricate. If we ever do make “artificial” intelligent beings, I think we will have to use some kind of evolutionary program. Indeed, brains themselves work on a kind of speeded-up “neural Darwinism”, in which successful brain patterns repeat and unsuccessful ones are extinguished (Hebb’s rule: “what fires together, wires together”). Which is why, incidentally, in at least one sense I am an “intelligent design” proponent – I do think that life is designed by a system that closely resembles human intelligence (although it differs from it in some key respects), namely evolutionary processes.
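Hebb’s rule is simple enough to sketch in a few lines of code. Here is a minimal toy illustration (the numbers are made up and it models no real circuit – just the principle that correlated firing strengthens a connection):

```python
import numpy as np

# Toy Hebbian update: a weight grows whenever pre- and post-synaptic
# activity coincide ("what fires together, wires together").
rng = np.random.default_rng(0)

n_inputs = 5
w = np.zeros(n_inputs)                 # synaptic weights, initially silent
eta = 0.1                              # learning rate (arbitrary)

for _ in range(100):
    x = rng.integers(0, 2, n_inputs)   # presynaptic activity, 0 or 1
    y = x[0]                           # postsynaptic cell driven by input 0
    w += eta * x * y                   # Hebbian rule: co-activity strengthens w

print(w)  # the weight from input 0 ends up roughly twice the others
```

Input 0 always fires together with the output, so its weight is reinforced on every co-active step; the other inputs co-fire only by chance, about half as often.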

But my point in this post is simply to argue that:

  1. If a zombie robot (Fred) behaved exactly like a conscious person, to the point of being indistinguishable from a conscious person,
  2. Fred would necessarily be as conscious as a conscious person
  3. because consciousness is intrinsic to strategic, planned decision-making, anticipation of the actions of others, and selection of action in order to maximise the probability of achieving proximal or distal goals, and thus extremely helpful to survival,
  4. And thus is highly likely to have evolved.

114 thoughts on “Zombie Fred”

  1. Lizzie: Yes, and not a cutting edge rebuttal either. But a useful exercise for me all the same 🙂

    I thought you did pretty well. In terms of “The Chinese Room”, I’d say that your response is about equivalent to “The Systems Reply”, perhaps in its more specialized form as “The Robot Reply.” You explained it well.

  2. Lizzie,

    In other words, what is consciousness, if not these very capacities?

    Qualia, of course. 🙂

    If you take the ‘zombic intuition’ seriously, then information processing and subjective experience (qualia) are separate and possibly separable aspects of consciousness – what Ned Block called ‘access consciousness’ and ‘phenomenal consciousness’, respectively.

    The question is whether we can make sense of such a separation.

    It gets right to the heart of the Hard Problem.

  3. My position is that qualia are covered in what I wrote – that all qualia are a form of consciousness of stuff that is relevant to action.

  4. [B. Arrington, Esq wrote] … Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred….

    That particular Gelernter (who Barry thinks is “fabulous”) is a deranged Bush-toady who is (by reputation) intelligent on the subject of computers, but nothing else. No wonder the RWA UDers find Gelernter’s stupidity convincing.

    The only thing that makes Gelernter tolerably interesting is that he doesn’t worship the zombie named Jesus. (He’s prominent in worshipping Jesus’ hypothetically-nonhuman baby-daddy, though.) Hmm, wonder why the UDers don’t mention Gelernter’s deliberate rejection of their Savior while they’re kissing his … article.

  5. Zombie thought experiments don’t merely postulate robots that behave indistinguishably from human beings – they postulate entities identical to human beings in every physical respect (identical brain tissue, etc.) yet not conscious (so it goes). The paradox is that, given the causal closure of the physical world, they should behave utterly indistinguishably from ourselves, including describing exquisite conscious experiences.

  6. Reciprocating Bill 2:
    Zombie thought experiments don’t merely postulate robots that behave indistinguishably from human beings – they postulate entities identical to human beings in every physical respect (identical brain tissue, etc.) yet not conscious (so it goes). The paradox is that, given the causal closure of the physical world, they should behave utterly indistinguishably from ourselves, including describing exquisite conscious experiences.

    Unless, of course, consciousness is something undetectable from the outside, but additional, which presumably Barry thinks, and I dispute.

    I don’t mind whether the zombie looks like us or not – if I came across a real alien, it probably wouldn’t look like us at all, but if it behaved in a way that evinced consciousness (e.g. as in the OP) I would conclude that it was conscious.

    Because I don’t think “conscious” makes sense unless it is “of” something, even if the something is simply “of being conscious”!

    I might have more to say about qualia tomorrow.

  7. The zombies I’ve consulted tell me that what underscores the “hard” problem for them is that it follows from the supposed causal closure of the physical world that a non-conscious zombie – by definition physically identical in every respect to a conscious person yet devoid of subjective experience – would behave indistinguishably from that person. The subtraction of subjective experience would make no difference. Even zombies agree this is counterintuitive. And you and I would certainly attribute consciousness to the zombie.

    (These discussions were with fast zombies, of course.)

  8. Lizzie, in my training in special education I ran into people who would fail all of your criteria.

  9. Reciprocating Bill 2: The zombies I’ve consulted tell me that what underscores the “hard” problem for them is that it follows from the supposed causal closure of the physical world that a non-conscious zombie – by definition physically identical in every respect to a conscious person yet devoid of subjective experience – would behave indistinguishably from that person.

    You should have consulted with non-zombies. They might have explained that physically identical implies the same subjective experience. The zombies that you consulted with cannot actually exist, which presumably explains why you did not get a correct explanation.

  10. The zombies I’ve consulted…

    Of course, it’s impossible to know whether you are consulting a zombie, and even the zombie doesn’t know.

  11. If you take the ‘zombic intuition’ seriously, then information processing and subjective experience (qualia) are separate and possibly separable aspects of consciousness – what Ned Block called ‘access consciousness’ and ‘phenomenal consciousness’, respectively.

    From a purely subjective standpoint, have any of us ever done anything moderately complex without thinking about it or remembering it?

    I am not a huge fan of Freud, but he did pose some — in my opinion — interesting questions about what part of our own inner behavior “we” (our conscious selves) can observe.

    I think it raises interesting questions about what we mean by free will. If we cannot observe the machinery responsible for our motives, in what sense are we free?

    Personally, I define free will semi-operationally as the capacity to learn from outcomes. Any system that learns has free will, by definition. That is, I think, as far as we can go with objective definitions.

  12. Oh, philosophy of mind! 🙂

    A little bit of relevant back-story: the zombie problem got started as a response to a particular brand of materialism called “identity-theory.” Identity theory just says that each mental event is identical with some physical event, so the mind is identical with the brain (or the brain + body, or brain + body + environment…).

    But wait! some metaphysicians said. Identity is a necessary relation — if x and y are identical, then this must be so in every possible world. (The basis for this claim rests on some “intuition” about the meaning of words that metaphysicians often claim to have.) So, if mental events are identical with physical events, that must be so in every possible world. But then, it would follow that we couldn’t conceive of anything as having all the same physical states (behavior, including speech behavior) and none of the mental states (e.g. consciousness).

    More basically:

    (1) If A and B are identical, then it is necessary that all As are Bs.
    (2) If it is necessary that all As are Bs, then it is impossible that any As could not be Bs.
    (3) And since we cannot conceive of impossible things, the fact that we can conceive of As that aren’t Bs means that it cannot be necessary that As are Bs, and so As cannot be identical with Bs.

    In this particular case, that we can conceive of “zombies” commits us to nothing more than the logical possibility of beings without consciousness but behaviorally indistinguishable from us, and hence (given that identity is necessary), that conscious mental events cannot be identical with physical events.
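    Put schematically (my own hedged rendering of the standard modal form of the argument, not a quotation from anyone in the thread):

    $$
    \begin{aligned}
    (1)\;& (A = B) \rightarrow \Box(A = B) && \text{(identity is necessary)}\\
    (2)\;& \Box(A = B) \rightarrow \neg\Diamond(A \neq B) && \text{(duality of $\Box$ and $\Diamond$)}\\
    (3)\;& \mathrm{Conceivable}(A \neq B) \rightarrow \Diamond(A \neq B) && \text{(conceivability entails possibility)}\\
    \therefore\;& \mathrm{Conceivable}(A \neq B) \rightarrow (A \neq B) && \text{(from 3, then the contrapositives of 2 and 1)}
    \end{aligned}
    $$

    The zombie argument instantiates A as conscious mental events and B as physical events; premise (3) is the contested step.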

  13. petrushka:
    Lizzie, in my training in special education I ran into people who would fail all of your criteria.

    They aren’t criteria, Petrushka – they are paradigm cases.

    I think myself (like Hofstadter) that consciousness comes in levels – that it’s a continuum. But I took the example of a zombie that behaved indistinguishably from a person who we’d all agree was fully, humanly, conscious.

    I think that bats are conscious – of stuff – and dogs, and probably babies in utero in the final weeks; in the first few days, not.

    But what they are conscious of will vary, from a sort of wantiness in contrast with non-wantiness, to full metaconsciousness of self and one’s past and potential futures.

  14. Perhaps I’m trying to drag the discussion where I think Barry wants it to go, which is into the realm of dualism. That’s what this is about, isn’t it? The radio brain. The disembodied true self? The soul?

    So I bring up the existence of real, living persons, who exhibit none of the behaviors that we associate with consciousness. How would Barry infer the state of their consciousness?

  15. Lizzie: But what they are conscious of will vary, from a sort of wantiness in contrast with non-wantiness, to full metaconsciousness of self and one’s past and potential futures.

    I think that IDists are looking for “ensoulment”, a designer-given event. Not a continuum. They also need human exceptionalism, because of the Bible.

    petrushka: Perhaps I’m trying to drag the discussion where I think Barry wants it to go, which is into the realm of dualism. That’s what this is about, isn’t it? The radio brain. The disembodied true self? The soul?

    I was just watching Alan Partridge read from “I, Partridge”, where he has the immortal line “I told my brain to…”

  16. I just visited my 10 month old grandson at Christmas. This is the third child who is in some sense “mine.”

    It’s really interesting to me that you can watch a person who is obviously conscious and self-motivated, but who will remember little or nothing of the first two years of life.

    I wonder if Mapou at UD would define babies and toddlers as meat robots.

    I had an uncle who — due to an aneurysm — had total amnesia. That is, he remembered everything prior to his illness, but nothing afterwards. Every day for the next twenty-five years he woke up thinking he would go to work, and had to be told he had been sick, and years had passed.

    I would like to ask the folks at UD just what part of his real, disembodied mind got disconnected. How does that work?

  17. Neil Rickert: You should have consulted with non-zombies. They might have explained that physically identical implies the same subjective experience. The zombies that you consulted with cannot actually exist, which presumably explains why you did not get a correct explanation.

    I actually agree with you. I think. See the discussion here:

    Conching

  18. Hey, if consciousness is indistinguishable from zombiemode, there’d be no survival advantage to zombiemode and no reason to expect it to evolve over consciousness.

    And then there’s the fact that not all things which evolve initially have any survival advantage, but that’s a bit of a digression 🙂

  19. Mapou at Barry’s UD thread on Zombies

    First of all, consciousness should have been deselected by natural selection because it is a hindrance to survival. It forces the organism to pay a lot of attention to things that are not beneficial to survival (music and the arts).

    But Geoffrey Miller has a neat explanation to offer in his book The Mating Mind. Consider the possibility of sexual selection.

  20. keiths:
    Lizzie,

    The last time we discussed this you argued that qualia are tantamount to knowledge, and that blindsight patients experience visual qualia since their brains have visual knowledge. I disagreed.

    Do you still hold that view?

    Yes, I think so, although I don’t think the qual[ity] of the qualia would be the same!

    Here’s something I think might be analogous: take the difference between watching a movie in 2D and in 3D, or of looking at a Magic Eye picture before and after you “get” it. What is the difference when you get the 3D percept? Difficult to tell – it’s one of quality – qualia – but it’s also, and I suggest that this is closely linked, one of information. This is especially striking with the Magic Eye pictures because you often learn explicit stuff you didn’t know before – a message, or the representation of an object. More practically, in the real world, you know what is further away than what.

    But it’s hard to put your finger on the difference in quality – HOW do you now “know” there is a shark sitting there in front of the background? It has no edge, no shading, no parallax (at least not in a still). And we know the mechanics – your visual system computes the difference between the image sent to each eye and generates the information that some things are further away than others. And I suggest that in a way so attenuated that you are not consciously aware of it, the 3D image is perceived by you as consisting of something that you could reach out and touch, with a different amount of arm extension (or miles to go, if it’s a big scene) between the nearer thing and the far.
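    (As a hedged aside on those mechanics: in the textbook pinhole model of stereopsis – my gloss, not anything specific to Magic Eye pictures – the depth $Z$ of a point relates to its binocular disparity $d$ via the focal length $f$ and the interocular baseline $B$:

    $$ Z \approx \frac{fB}{d} $$

    so small differences between the two eyes’ images encode large distances, and the “pop-out” percept is carrying exactly that quantity.)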

    And I suggest that this 3D quale might be something like the para-visual qualia that might, with enough training or experience, be available to someone with blindsight, once they have got used to trusting the knowledge brought to them by the sensory system they still retain: just as the shark “pops out” of the background noise once you master the Magic Eye trick, so, I suggest, objects may “pop out” of the background noise of not-seeing, once you learn (and we are extremely good at learning) the association between impulse to “catch” something and the fact of its coming your way.

  21. Milner and Goodale’s work on the ventral versus dorsal streams of visual processing presented in “The Visual Brain in Action” suggests other fascinating ways in which non-conscious visual processing nevertheless guides action, and speaks to some blindsight phenomena.

    Wikipedia has a good summary:

    http://en.wikipedia.org/wiki/Two-streams_hypothesis

  22. Kantian Naturalist:
    Oh, philosophy of mind!

    A little bit of relevant back-story: the zombie problem got started as a response to a particular brand of materialism called “identity-theory.”

    I had understood that the Zombie argument was meant to counter any purely physicalist theory of mind. It works against purely physicalist functionalism too, I think. As others in the thread point out, it is usually deployed to argue for some form of dualism.

    Do you find BA’s argument reminiscent of Plantinga’s EAAN? Both start from the premise that evolution depends on behavior and behavior on brain states.

    Plantinga then uses the fact that we lack a fully naturalistic explanation for the relation of brain states and mental content to claim that naturalistic evolution cannot be used to justify reliance on the contents of belief, and hence naturalistic evolution is self-defeating.

    Similarly, BA could be interpreted as arguing that we lack a full explanation of (conscious) mental causation, therefore brain states could cause behavior without mental events, therefore evolution would not select for consciousness.

    If that is a fair summary of what BA means, it is missing an argument for why consciousness would be selected against. Consciousness could be an evolutionary spandrel (which would be epiphenomenalism, I guess).

    But I think the zombie concept itself does not stand up to close scrutiny. The relevant SEP article provides a good summary of its deficiencies. Indeed, from that article, one might conclude that it is Chalmers against the world when it comes to acceptance of the Zombie argument among philosophers.
    SEP on Zombies

  23. I honestly don’t buy the idea that consciousness is some kind of byproduct. I think that is to fail to unpack what we mean by the word. If we actually think about what being conscious means being conscious of then it becomes blindingly clear that it is a huge aid to survival!

    And that includes the qualities of colour. If we couldn’t tell red from green, if red had the same effect on us as green, if it were not the colour of blood, fire, and ripe fruit, would it have the same effect on us as it does?

    And the whole idea that there is some contextless percept of “redness” is, I suggest, illusory. I suggest that if you try to “imagine red” you will find that it attaches itself to objects, and contrasts, if only as splodges of colour against white.

    Think of a red elephant – easy, right? But don’t just think of an elephant icon, a coloured-in drawing of an elephant; think of a real one, with hide, and teeth. Where does the red stop? Is it uniform? Does it look darker under the elephant’s belly? Is it like dyed leather? Where have you seen dyed leather that colour? What about a different shade of red?

    I think we are trying to explain something that disintegrates on closer analysis here.

    And presumably everyone knows that if you look at the world through a red (or any coloured) filter, very rapidly you cease to see anything other than shades of grey, with white objects and red objects looking identically white.

  24. petrushka:
    Peacock feathers.

    Are you saying evolution cannot explain them either? I thought the usual explanation was sexual selection.

  25. Lizzie [in OP]: Trying to divorce behaviour from consciousness is, I suggest, fundamentally incoherent

    Sure, I agree with this, but I think someone who bought the Zombie argument might demand an argument for why it is incoherent. Why do we believe conscious mental events must be there when, from a purely physicalist standpoint, it is brain states that drive behavior (including alarm and planning behavior) and the zombie argument “shows” brain states can be separated from consciousness?

    Hence the SEP link.

  26. BruceS: Are you saying evolution cannot explain them either? I thought the usual explanation was sexual selection.

    It was a lighthearted suggestion that language and consciousness are products of sexual selection. But it’s convoluted, because they are indicators of intelligence, which is useful and correlated with wealth and power.

  27. Lizzie: I honestly don’t buy the idea that consciousness is some kind of byproduct. I think that is to fail to unpack what we mean by the word. If we actually think about what being conscious means being conscious of then it becomes blindingly clear that it is a huge aid to survival!

    The people who see consciousness as an epiphenomenon or an unexplained side effect are not looking at it in the same way as you are. To a first approximation, they are computationalists. They treat the brain as a computer. And then they come to believe that they could, at least in principle, build a computerized robot with the “right” behavior but it would not be conscious. So they would need consciousness as an add-on, though they do not know how to achieve it.

    Computers are relatively new, but computationalism is probably old. Maybe Plato thought of the mind as a logic machine acting on propositions.

  28. BruceS: Sure, I agree with this, but I think someone who bought the Zombie argument might demand an argument for why it is incoherent. Why do we believe conscious mental events must be there when, from a purely physicalist standpoint, it is brain states that drive behavior (including alarm and planning behavior) and the zombie argument “shows” brain states can be separated from consciousness?

    Fair enough, OK. Well, because I think “brain states drive behaviour” is a bit like saying “the current in the electric circuit makes the light bulb glow”. It leaves out a huge and important segment of the causal loop. Brain states are caused by lots of things, and we can broadly categorise them into external inputs (“exogenous” or “bottom up”) and internal (“endogenous” or “top down”), but there is feedback between the two, as what inputs we receive depends on where we look and where we go, what we reach for, what we touch. That’s why I don’t think you can separate the two – we aren’t mech suits with a computer inside – the “suit” itself is an integral part of the “computer”, and the computer is constantly sending out for more information that it collects via the suit, and which determines further action. Everything we do affects what goes in to our brains – and that stuff is the stuff we construct our models of the world from, and it is those models that we are conscious of – not as models (except in meta moments) but as our percepts of the world. (A minimal sketch of this loop follows below.)

    Hence the SEP link.

    SEP?
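    To make the feedback loop above concrete, here is a minimal sketch (the two-location world and the gaze policy are entirely invented for illustration) in which what the “brain” receives at each step depends on what the agent just did:

    ```python
    # Minimal perception-action loop: the internal model is built from input,
    # but which input arrives depends on the agent's own previous action.
    world = {'left': 'food', 'right': 'predator'}   # hidden state of the world
    model = {'left': None, 'right': None}           # the agent's internal model
    gaze = 'left'                                   # current fixation

    for step in range(4):
        percept = world[gaze]                       # bottom-up: input depends on action
        model[gaze] = percept                       # exogenous input updates the model
        unknown = [d for d, v in model.items() if v is None]
        if unknown:
            gaze = unknown[0]                       # top-down: look where the model is blank
        else:
            gaze = [d for d, v in model.items() if v == 'food'][0]   # exploit the model
        print(step, model, '-> next fixation:', gaze)
    ```

    The point of the toy is just the circularity: after the first step there is no way to say whether the model or the action came first – each is continually shaped by the other.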

  29. The real question is, as keiths pointed out, about qualia. If qualia is unnecessary to achieve all of the outward appearances of consciousness, of what evolutionary value is qualia?

    Is my personal, subjective experience of pain necessary for evolution to program in a protective response to certain situations my body encounters? I doubt that; organisms that don’t react in a certain fashion to X stimuli will have less chance of reproducing whether they “feel pain” or not. If one holds that the precise nanotechnology of a cell is not a system that has qualia, then one must admit that all sorts of very precise and necessary actions, reactions and interactions are completely within the realm of nature to produce without qualia. If cellular mechanisms can act as if they are anticipating needs, as if they are consciously (qualia) choosing and responding, surely a non-qualia human can interact successfully in a social way and survive and thrive in the world.

    So, what’s the evolutionary benefit of having qualia, instead of just being non-consciously programmed to do your part in the whole system, like any individual mechanism in a cell? Seems to me we should be more like the Borg, but without the Borg Queen or the capacity to regain “individuality” (like in the “Hugh” episode of TNG).

  30. William J. Murray: The real question is, as keiths pointed out, about qualia. If qualia is unnecessary to achieve all of the outward appearances of consciousness, of what evolutionary value is qualia?

    “Is qualia?” I have always taken “qualia” to be a plural form, with “quale” as the singular.

    In any case, qualia don’t exist so there is nothing to explain 😉

  31. One minor point: the zombie argument has nothing to do with whether consciousness is epiphenomenal.

    In the zombie argument, we’re asked to conceive of the following: someone who is behaviorally identical to a conscious person, but who lacks all ‘inwardness’. All this person has is the outward criteria; there’s no interiority. And the question is, is such a being logically possible? And what follows from either answer, yes or no?

    But even if zombies are logically possible, that doesn’t mean that consciousness is epiphenomenal, because the argument is about the behavioral criteria for the application of the concept of “consciousness” and not about the actual mechanisms for producing that behavior. It’s entirely consistent with the zombie argument that the same behaviors are produced by different mechanisms in the two cases, and that means that consciousness could make a causal difference in those beings that have it — not a causal difference in the outward behavior, but a causal difference in the mechanisms that produce that behavior.

    Likewise, as BruceS points out above, even if consciousness were epiphenomenal, it wouldn’t follow that it is invisible to selection, since it could be a “spandrel” — for example, the neural correlates of consciousness could be (indirect) targets of selection, if they played some other role that is a selective target (e.g. increased intelligence).

    I do think there is a clear parallel between what Arrington is trying to do here and what Plantinga is trying to do in the EAAN, along the lines that BruceS suggested — consciousness and content. In both cases there’s a crypto-dualism and a lot of a priori metaphysics being snuck into the background of the scenario described.

    In Plantinga’s case, this comes through in his refusal to appeal to the cognitive sciences of human and animal behavior and cognition in order to assign a higher probability to the hypothesis that semantic content is causally efficacious than to the other hypotheses he conceives of.

    In Arrington’s case, this comes through in his refusal to appeal to the cognitive sciences of human and animal behavior and cognition in order to recognize that the causal efficacy of consciousness plays an important role in generating and testing empirical theories, and so it isn’t at all something that “naturalists” must eschew or that only “dualism” can accommodate.

  32. William J. Murray: Quale? Isn’t that another term for “feather”?

    If it is, then that is unrelated.

    I didn’t invent the terminology. I just do my best to use it.

    As far as I know, “qualia” is a made-up word that is derived from “qualities”, and “quale” is a back-formation to construct a singular form.

    I see the whole idea as misguided. The word “qualia” contributes nothing useful to the language.

  33. Neil Rickert: “Is qualia?” I have always taken “qualia” to be a plural form, with “quale” as the singular.

    That’s correct — “quale” is singular, “qualia” is plural.

    I don’t know exactly when the term appears in the history of philosophy, though I’m not personally familiar with any usage before C. I. Lewis’s Mind and the World-Order (1929). But his use of the term is quite different from how philosophers of mind like Ned Block, Dan Dennett, and Frank Jackson use it today.

    One article that’s had a huge impact on how philosophers talk about qualia is Dennett’s “Quining Qualia“, which I put here for those who haven’t read it or want to read it again.

  34. Leaving aside the issue of whether the non-conscious robot must actually be conscious (which you have all addressed well), I want to come back to the issue of whether the consciousness of its human model could have evolved.

    Whatever consciousness is, the machinery that produces it is physical. Whether it is a byproduct or an essential part of the intellectual machinery, it costs something. If two mutations are available to the population, one starting the organism down the road to consciousness, and the other down the road to equivalent-behavior zombiedom, then there is no reason to assume that they have the same fitness.

    Thus the secondary consequences of those mutants are available for selection to act on.

    It seems to me that the part of the zombie argument that says that natural selection can’t choose between consciousness and equivalent-zombiedom is mistaken. Only if we require that the physical machinery (neuron circuits, neurotransmitter levels, etc.) be exactly the same in the zombie as in the conscious individual can we escape natural selection on the costs involved, and escape the greater likelihood of occurrence of the necessary mutations in one case as opposed to the other.
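    To put toy numbers on this (my own illustrative sketch – the 1% cost and everything else here are invented): two variants with identical behaviour but different running costs have different fitnesses, and selection sees the difference.

    ```python
    # Toy haploid selection: two behaviourally identical variants, where one
    # wiring is assumed to cost 1% more energy, giving it lower relative
    # fitness. All numbers are illustrative, not measurements.
    p = 0.5                          # initial frequency of the cheaper variant
    w_cheap = 1.00                   # relative fitness of the cheaper machinery
    w_costly = 0.99                  # same behaviour, slightly costlier machinery

    for generation in range(200):
        mean_w = p * w_cheap + (1 - p) * w_costly
        p = p * w_cheap / mean_w     # standard selection recursion

    print(f"frequency after 200 generations: {p:.2f}")   # ~0.88
    ```

    Even this tiny assumed difference moves the cheaper variant most of the way to fixation within a few hundred generations: consciousness and zombiedom would be selectively equivalent only if their machinery were exactly equally costly.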

  35. William J. Murray:
    The real question is, as keiths pointed out, about qualia. If qualia is unnecessary to achieve all of the outward appearances of consciousness, of what evolutionary value is qualia?

    Is my personal, subjective experience of pain necessary for evolution to program in a protective response to certain situations my body encounters?

    It’s a good question, but one to which I think there is an answer. First of all, I think we can say that “pain” isn’t “pain” unless it is “personal and subjective”. A simple aversive (literally “turning away”) doesn’t need to be accompanied by “pain” (as in the personal, subjective version).

    So why should things evolve from, say, a simple aversive response into an experience – something that is both subjective (by which I mean, is experienced) and personal (understood to be happening to the experiencer, specifically, not just a generalised perception that the world has turned nasty)?
    Let me present a toy model (i.e. not one meant to say how this actually happened, just a demonstration in principle):

    We start with a 2D world, which we can represent as squares on a vast sheet of graph paper. It contains three kinds of critters: stingers, sweeties and alberts. Each, at the beginning, moves around its environment one square at a time (including diagonals, like a King in chess), choosing the next square at random, once a second. If a stinger moves on to a square occupied by an albert, it stings it. This reduces the albert’s energy levels, and if they reach zero, it dies. If a stinger comes upon a dead albert, it eats it. If a stinger doesn’t eat often enough, it dies. If it does, eventually it spawns a new stinger.

    If an albert moves on to a square occupied by a sweetie, it eats it. This increases its energy level. If it moves on to a square occupied by another albert, it mates with it, and produces offspring, so there are now three alberts on the square instead of two. Alberts need energy to move, so if they don’t eat enough sweeties, they die, even if they don’t get stung.

    Sweeties don’t need to eat to move as they make sugar from sunshine. When they land on another sweetie they mate with it as with the alberts. They ignore the stingers, and the stingers don’t harm them.

    How do the alberts evolve? (I’ll assume for simplicity that they are the only ones that do, and the others remain in equilibrium.) Now, the first ability that alberts evolve (I could implement this as an evolutionary simulation, but just take my word for it just now) is to detect a sweetie in the next square, move to that one, and eat it. This is clearly advantageous – and those with that ability will clearly do better than those who just rely on stumbling on sweeties by chance. The next ability they evolve is to detect a stinger in the next square, and not move to that one. The third ability they evolve is to detect another albert in the next square and move to that one to mate (this won’t help the albert itself, but its progeny, who share this desire to jump other alberts, will clearly breed more and leave more progeny). So now we have a population of alberts looking for sweeties, avoiding stingers, and chasing other alberts for sex. From the outside, this will look quite cute, even in simulation – the critters will look quite purposeful as they move about the board, like autonomous pacmen. And they do this because each now has an internal representation – a map – of its immediate surroundings, constantly updated every time it moves.

    Not only that, but the central square on that nine-square map represents its own location.

    But we would assume that while they look as though they fear stingers, enjoy sweeties and lust after other alberts, they experience neither fear, nor taste, nor lust. Something seems obviously missing.

    And they still aren’t very effective – if a stinger is on the same square as a sweetie or another albert, where do they go? What if they are running out of energy, and the only choices are a stinger with an albert, and a stinger with a sweetie? The critter needs a new internal representation – not only of its own location in immediate space, but also of its own internal state – how low its energy store is. If it is able to input its own internal state into the decision, it can go for stinger+albert if it is in a good state, and stinger+sweetie if it is running low. In other words, it can evaluate, in a very crude way, whether lust or hunger is the more important need to satisfy, and the extent to which it needs to put up with a sting, or back off and hope for a better selection on the next move.

    But alberts are still hugely limited by having no memory: a huge improvement would be for alberts to be able to store old maps as they updated the current map – they could remember where there was last a bunch of sweeties, or fellow alberts, and head off there, hoping that at least some of them are still around. In other words the alberts would then be able to learn. And once an albert can learn where things were in the past, it can possibly learn to anticipate future moves by other critters – that a herd of alberts heading thataway probably remembers a bunch of sweeties thataway, and so it may be worth going the same way.

    By this point, the alberts’ internal representations of the world have become really quite complex, and include a representation of their own position in space and of their own most pressing current needs, the capacity to model the past, the present, and to combine these into a probabilistic map of the future, in which likely threats, and likely opportunities will result from different courses of actions, from which it can select the one most likely to fulfil its most important needs, even when these are not the most immediately pressing – a hungry albert, for instance, surrounded by a thicket of stingers, may nonetheless figure that it can put up with a few stings, by taking the least densely stinger-populated route to the next bunch of sweeties. Or fill up with sweeties before confronting a forest of stingers to reach more alberts to mate with.

    And – to cut to my conclusion, because this is, after all, just a toy – I suggest that the ability to model oneself in relation to past, present, and likely future, to seek further information by exploratory forays, to combine that information with internal information regarding the organism’s current state, and to weigh up courses of action in the light of that information, changing the course of action if necessary – if, say, there are more stingers stinging than originally anticipated (ouch, yikes, this thicket is going to kill me, I’ve got to get out quick, hungry though I am, perhaps I should just make for my darling albert, and mate one last time before I expire at her dear little feet…) – is what starts to generate the subjective (yikes, this is threatening, I’ve got to get out of here) and the personal (this is happening to me, not my darling; she’s probably OK, so I should make for the safety of her bosom). These are the ingredients of subjective personal experience of pain, pleasure, and desire, as well as of proxy pain, pleasure and desire, and thus, over billions of years, of compassion and humanity.
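    The first, random-walk stage of this world is only a few dozen lines of code. Here is a minimal, hypothetical sketch (grid size, energy levels and payoffs are arbitrary assumptions; reproduction, stinger starvation and sweetie regrowth are omitted for brevity):

    ```python
    import random

    # Minimal sketch of the random-walk stage of the toy world described above.
    # All parameters are arbitrary assumptions, not tuned values.
    SIZE = 20                                  # the world wraps around at the edges
    MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

    class Critter:
        def __init__(self, kind):
            self.kind = kind                   # 'stinger', 'sweetie', or 'albert'
            self.pos = (random.randrange(SIZE), random.randrange(SIZE))
            self.energy = 10

        def step(self):
            if self.kind == 'sweetie':
                return                         # sweeties photosynthesise; they needn't move
            dx, dy = random.choice(MOVES)      # a King-move chosen at random
            self.pos = ((self.pos[0] + dx) % SIZE, (self.pos[1] + dy) % SIZE)
            if self.kind == 'albert':
                self.energy -= 1               # alberts burn energy to move

    def tick(critters):
        for c in critters:
            c.step()
        by_pos = {}
        for c in critters:
            by_pos.setdefault(c.pos, []).append(c)
        eaten = set()                          # sweeties consumed this tick
        for cell in by_pos.values():
            stung = any(c.kind == 'stinger' for c in cell)
            sweeties = [c for c in cell if c.kind == 'sweetie']
            for c in cell:
                if c.kind != 'albert':
                    continue
                if stung:
                    c.energy -= 5              # stung by a co-located stinger
                if sweeties and id(sweeties[0]) not in eaten:
                    c.energy += 3              # eats one sweetie, which is consumed
                    eaten.add(id(sweeties[0]))
        return [c for c in critters
                if id(c) not in eaten and not (c.kind == 'albert' and c.energy <= 0)]

    critters = [Critter(k) for k in ['albert'] * 30 + ['sweetie'] * 40 + ['stinger'] * 10]
    for _ in range(50):
        critters = tick(critters)
    print(sum(c.kind == 'albert' for c in critters), "alberts left after 50 ticks")
    ```

    Run it and the blind random-walkers mostly starve, which is exactly why the sensing, state-weighing and map-storing abilities described above would be selected: they would slot in as progressively smarter replacements for the random step policy.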

  36. There are people trying to answer these questions by building simulations of neurons and brains. IBM claims to have constructed an assemblage of artificial neurons equivalent to ten percent of a human brain.

    There are several interesting factoids regarding this device.

    It doesn’t do anything. It has no outward behavior equivalent to human or animal behavior. No one knows how to solve this problem.

    It’s slow despite the fact that silicon transistors switch at millions of times the speed of neurons.

    It’s really big and really power-hungry. To run at the speed of a human brain would require a power plant equivalent to New York City and Los Angeles combined.

    No one really knows how brains do what they do, in the sense of knowing how to emulate them.

    So the idea that it is easy to build a robotic zombie that could be hollow or otherwise and produce a convincing facsimile of human behavior is a bit presumptuous.

    The evolved body is far more elegant than anything we can design.

    petrushka: It was a lighthearted suggestion that language and consciousness are products of sexual selection. But it’s convoluted, because they are indicators of intelligence, which is useful and correlated with wealth and power.

    I should also say that for some reason I understood horse feathers even though you referred to the peacock variety.

    Yes, my reply was meant as an attempt at humor too. I was really trying to discover if you thought what I said was wrong, in which case I wanted to understand why, or whether this was just a comment on BA’s original argument. I see now that you had a more subtle point that I missed.

  38. Kantian Naturalist:
    One minor point: the zombie argument has nothing to do with whether consciousness is epiphenomenal.

    Yes, understood. I was trying to make the point (which you also make in what follows) that BA’s argument also fails if consciousness is a biological spandrel (and thus epiphenomenal). Not that I believe that that is true.

  39. William J. Murray:
    The real question is, as keiths pointed out, about qualia. If qualia is unnecessary to achieve all of the outward appearances of consciousness, of what evolutionary value is qualia?

    This makes the assumption that qualia are something separate from brain states.

    But if you think the Zombie argument is wrong, as I do for reasons outlined in the SEP article, then a reasonable position is that the qualia are identical to brain states and so the proposed separation by evolution is physically impossible.

    Once you accept that qualia are brain states, then the arguments in Dr Liddle’s OP about why such brain states would be favored by evolution are complete and valid.

  40. BruceS,

    But if you think the Zombie argument is wrong, as I do for reasons outlined in the SEP article, then a reasonable position is that the qualia are identical to brain states and so the proposed separation by evolution is physically impossible.

    I agree that qualia are by all appearances inextricably linked to the accompanying brain states and that they can therefore be explained by evolution, whether they are epiphenomenal or not.

    The mystery is that some brain states (or portions of brain states) are associated with qualia while others aren’t, which is why I brought up the phenomenon of ‘blindsight’. Pace Lizzie, I think it is clear that blindsight is not accompanied by qualia. Why not, and what are the differences in brain state(s) that account for the presence of visual qualia in a normally-sighted person?
