A neat zombie post from Barry Arrington (thanks, Barry! I do appreciate, and this is without snark, the succinctness and articulacy of your posts – they encapsulate your ideas extremely cogently, and thus make it much easier for me to see just how and why I disagree with you!)
In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being. Fred eats, drinks, converses, laughs, cries, etc. exactly like a human being, but he is in fact a biological robot with no subjective consciousness at all. The point of the thought experiment is that I can experience only my own consciousness. Therefore, I can be certain only of my own consciousness. I have to take Fred’s word for his consciousness, and if Fred is in fact a robot programmed to lie to me and tell me he is conscious, there is no way I could know he is lying. Here’s the kicker. With respect to any particular person, everyone else in the world may in fact be a zombie Fred, and if they were that person would never be able to know. I may assume that everyone else is conscious, but I cannot know it. I can experience my own consciousness but no other person’s.
Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection. Nature would not have done it that way.
Where does this get us? It is hard to say. At the very least, it seems to me that the next time an anti-ID person employs the “God would not have done it that way” argument, I can respond with “And nature wouldn’t have either, so where does that leave us?”
If I can attempt to summarise this even more succinctly than Barry has done:
- A zombie robot (Fred) would, by definition, behave exactly like a conscious person, and thus be indistinguishable from a conscious person.
- Therefore consciousness does not make any detectable difference to behaviour.
- Therefore consciousness cannot help a person survive.
- Therefore it cannot have evolved.
If Barry reads this and thinks I have misunderstood him, I would welcome correction, either here (where he has OP posting permissions) or at UD (which I will check periodically).
OK, well, here goes: if consciousness, as per Barry’s hypothetical, makes absolutely no difference to the behaviour of the person (I don’t mind if Fred looks like a robot, but it must behave like a person), then Fred should do the following:
- If I make a sudden unexpected noise, Fred should startle, and look around to see what is happening.
- If I whisper to Fred, it should come closer in order to hear more clearly.
- If it doesn’t understand me, it should ask me to repeat or rephrase.
- If it finds itself short of battery power, but also in danger of being struck by lightning, it ought to be able to weigh up which option is riskier: a battery outage from waiting for the storm to pass, or a lightning strike from heading straight to the charging point.
- If I ask it to go to the shop and buy me something nice for supper, but not too fancy, it should be able to find its way there, check the shelves for some things it thinks I might like, weigh up what I might think looks too fancy, pick something, maybe spot the chocolates I like on the way out (hey, a girl can dream), and decide to pay for them out of its own money as a gift, and return home with a smile, explaining what it had selected and why, then surprise me with the chocolates.
- If it reads a story about a hurricane in the Philippines, it should get on to the internet and donate some money – as much as it thinks it can afford, while still leaving itself enough to pay for its annual service and recharging fees.
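The battery-versus-lightning dilemma above is, at bottom, an expected-cost comparison: how likely is each bad outcome, and how bad would it be? A minimal sketch in Python, with all probabilities and costs invented purely for illustration:

```python
# Expected cost of an option = probability of its bad outcome times
# how bad that outcome is. (All numbers here are made up.)
def expected_cost(p_bad, cost_bad):
    return p_bad * cost_bad

# Option 1: wait out the storm, risking a battery outage (recoverable).
wait = expected_cost(p_bad=0.30, cost_bad=40)
# Option 2: dash to the charger now, risking a lightning strike (catastrophic).
dash = expected_cost(p_bad=0.05, cost_bad=1000)

# Fred picks whichever option carries the lower expected cost.
choice = "wait for the storm to pass" if wait < dash else "dash to the charger"
print(choice)
```

With these particular numbers the rare-but-catastrophic lightning strike outweighs the likelier-but-recoverable outage, so Fred waits – the point being only that “weighing up the least bad of two poor options” is a well-defined computation, not magic.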
In other words, Fred has to be able to:
- React appropriately to unexpected danger signals.
- Recognise when it needs to take action (e.g. move closer) in order to gain relevant information.
- Recognise when information is insufficient, and seek clarification.
- Make decisions that involve anticipating future events and contingencies, and weighing up the least bad of two poor options to avoid serious trouble.
- Understand non-specific instructions, plan a strategy to fulfill someone else’s goal, weigh up what someone else would decide in the same circumstances, conceive of a novel course of action in order to please another person, and carry it out. Signal apparent pleasure at having been able to please that person.
- React to information about people’s distress by conceiving of a course of action that will alleviate it.
If Fred were truly able to do all these things, and more – anticipation, choice of strategy, weighing up immediate versus distant goals and deciding on a course of action that would best bring about the chosen goal; anticipating another person’s wishes and regarding fulfilling them as a goal worth pursuing (i.e. worth spending energy on); anticipating another person’s needs and regarding alleviating them as equally worthwhile – then my question is: what is Fred NOT “conscious of” that we would regard a “conscious” person as being “conscious” of?
What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act? What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge? What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person? What is wanting to please or help another person if not the capacity to recognise in another being the kinds of needs (recharging? servicing?) that mandate your own actions?
In other words, what is consciousness, if not these very capacities? And if consciousness is these very capacities, then why should they not evolve? They are certainly likely to promote successful survival and reproduction.
In other words, I think the premise of the argument breaks down on examination. I think that human behaviour is what it is because we are conscious of precisely these things. A human being who is not capable of being aware of an alarming sound, of seeking out further information in response to an interesting stimulus, of anticipating her own needs, of making decisions on her own behalf or on behalf of another person, of responding to another’s needs, would be, well, unconscious. Asleep. Comatose.
Trying to divorce behaviour from consciousness is, I suggest, fundamentally incoherent – consciousness is intimately related (as heads are to tails on a coin, heh) to decision-making, and decision-making involves action, even if that action is merely the moving of an eyeball to a new fixation, in order to gain new relevant sensory information. We are not computers, and nor are robots – the thing about robots is that, like us, they move – they act. Sure, humans can, tragically, be both immobile and conscious, and it is a major medical challenge to find out whether a person is conscious if they cannot physically act. But that is because the way a person acts is a major clue to whether they are conscious. And, interestingly, one of the most promising ways of using brain imaging to communicate with “locked in” patients is to get them to imagine actions. Even when we are physically immobilised, the brain mechanisms involved in action – in decision-making – can be completely intact.
That doesn’t mean I think that conscious man-made robots are possible. I think life is way too complicated for mere humans (mere intelligent designers :)) to fabricate. If we ever do make “artificial” intelligent beings, I think we will have to use some kind of evolutionary program. Indeed, brains themselves work on a kind of speeded-up “neural Darwinism” in which successful brain patterns repeat and unsuccessful ones are extinguished (Hebb’s rule: “what fires together, wires together”). Which is why, incidentally, in at least one sense I am an “intelligent design” proponent – I do think that life is designed by a system that closely resembles human intelligence (although it differs from it in some key respects), namely evolutionary processes.
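Hebb’s rule, as mentioned above, can be stated in a single line of code: a connection strengthens in proportion to how strongly the units on both ends are active at the same time. A toy sketch (the learning rate and activity values are made up for illustration):

```python
# Toy Hebbian update: the weight between two units grows in proportion
# to the product of their activities ("what fires together, wires together").
def hebb_update(weight, pre, post, rate=0.1):
    return weight + rate * pre * post

w = 0.0
# Repeatedly co-activate both units: their connection strengthens...
for _ in range(10):
    w = hebb_update(w, pre=1.0, post=1.0)

# ...whereas a unit that never fires leaves its connection unchanged.
w_idle = hebb_update(0.0, pre=0.0, post=1.0)
print(w, w_idle)
```

The asymmetry is the point: patterns of co-activation are reinforced and patterns that never co-occur are not, which is the “selection” in neural Darwinism.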
But my point in this post is simply to argue that:
- If a zombie robot (Fred) behaved exactly like a conscious person, to the point of being indistinguishable from a conscious person,
- then Fred would necessarily be as conscious as a conscious person,
- because consciousness is intrinsic to strategic, planned decision-making, anticipation of the actions of others, and selection of action in order to maximise the probability of achieving proximal or distal goals, and thus extremely helpful to survival,
- And thus is highly likely to have evolved.