Zombie Fred

A neat zombie post from Barry Arrington (thanks, Barry! I do appreciate, and this is without snark, the succinctness and articulacy of your posts – they encapsulate your ideas extremely cogently, and thus make it much easier for me to see just how and why I disagree with you!)

Barry writes:

In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being. Fred eats, drinks, converses, laughs, cries, etc. exactly like a human being, but he is in fact a biological robot with no subjective consciousness at all. The point of the thought experiment is that I can experience only my own consciousness. Therefore, I can be certain only of my own consciousness. I have to take Fred’s word for his consciousness, and if Fred is in fact a robot programed to lie to me and tell me he is conscious, there is no way I could know he is lying. Here’s the kicker. With respect to any particular person, everyone else in the world may in fact be a zombie Fred, and if they were that person would never be able to know. I may assume that everyone else is conscious, but I cannot know it. I can experience my own consciousness but no other person’s.

Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection. Nature would not have done it that way.

Where does this get us? It is hard to say. At the very least, it seems to me that the next time an anti-ID person employs the “God would not have done it that way” argument, I can respond with “And nature wouldn’t have either so where does that leave us?” response.

 

If I can attempt to summarise this even more succinctly than Barry has done:

  1. A zombie robot (Fred) would, by definition, behave exactly like a conscious person, and thus be indistinguishable from a conscious person.
  2. Therefore consciousness does not make any detectable difference to behaviour.
  3. Therefore consciousness cannot help a person survive.
  4. Therefore it cannot have evolved.

If Barry reads this and thinks I have misunderstood him, I would welcome correction, either here (where he has OP posting permissions) or at UD (which I will check periodically).

OK, well, here goes: if consciousness, as per Barry’s hypothetical, makes absolutely no difference to the behaviour of the person (I don’t mind if Fred looks like a robot, but it must behave like a person), then Fred should do the following:

  • If I make a sudden unexpected noise, Fred should startle, and look around to see what is happening.
  • If I whisper to Fred, it should come closer in order to hear more clearly.
  • If it doesn’t understand me, it should ask me to repeat or rephrase.
  • If it finds itself short of battery power, but also in danger of being struck by lightning, it ought to be able to weigh up which is riskier: a battery outage while waiting for the storm to pass over, or a lightning strike while heading straight to the charging point.
  • If I ask it to go to the shop and buy me something nice for supper, but not too fancy, it should be able to find its way there, check the shelves for some things it thinks I might like, weigh up what I might think looks too fancy, pick something, maybe spot the chocolates I like on the way out (hey, a girl can dream), and decide to pay for them out of its own money as a gift, and return home with a smile, explaining what it had selected and why, then surprise me with the chocolates.
  • If it reads a story about a hurricane in the Philippines, it should get on to the internet and donate some money, as much as it thinks it can afford while still leaving enough to pay for its annual service and recharging fees.

 

In other words, Fred has to be able to:

  • React appropriately to unexpected danger signals.
  • Recognise when it needs to take action (e.g. move closer) in order to gain relevant information.
  • Recognise when information is insufficient, and seek clarification.
  • Make decisions that involve anticipating future events and contingencies, and weighing up the least bad of two poor options to avoid serious trouble.
  • Understand non-specific instructions, plan a strategy to fulfill someone else’s goal, weigh up what someone else would decide in the same circumstances, conceive of a novel course of action in order to please another person, and carry it out.  Signal apparent pleasure at having been able to please that person.
  • React to information about people’s distress by conceiving of a course of action that will alleviate it.

If Fred were truly able to do all these things, and more – involving anticipation, choice of strategy, weighing up immediate versus distant goals and deciding on a course of action that would best bring about the chosen goal, being able to anticipate another person’s wishes, and regard fulfilling them as a goal worth pursuing (i.e. worth spending energy on), being able to anticipate another person’s needs, and regard alleviating them as equally worth pursuing – my question is: what is Fred NOT “conscious of” that we would regard a “conscious” person as being “conscious” of?

What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act? What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge? What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person? What is wanting to please or help another person if not the capacity to recognise in another being the kinds of needs (recharging? servicing?) that mandate your own actions?

In other words, what is consciousness, if not these very capacities?  And if consciousness is these very capacities, then why should they not evolve?  They are certainly likely to promote successful survival and reproduction.

In other words, I think the premise of the argument breaks down on examination.  I think that human behaviour is what it is because we are conscious of precisely these things.  A human being who is not capable of being aware of an alarming sound, of seeking out further information in response to an interesting stimulus, of anticipating her own needs, of making decisions on her own behalf or on behalf of another person, of responding to another’s needs, would be, well, unconscious. Asleep. Comatose.

Trying to divorce behaviour from consciousness is, I suggest, fundamentally incoherent – consciousness is intimately related (as heads are to tails on a coin, heh) to decision-making, and decision-making involves action, even if that action is merely the moving of an eyeball to a new fixation, in order to gain new relevant sensory information. We are not computers, and nor are robots – the thing about robots is that, like us, they move – they act. Sure, humans can, tragically, be both immobile and conscious, and it is a major medical challenge to find out whether a person is conscious if they cannot physically act. But that is because the way a person acts is a major clue to whether they are conscious. And, interestingly, the most promising ways of using brain imaging to communicate with “locked in” patients are to get them to imagine actions. Even when we are physically immobilised, the brain mechanisms involved in action – in decision-making – can be completely intact.

That doesn’t mean I think that conscious man-made robots are possible. I think life is way too complicated for mere humans (mere intelligent designers :)) to fabricate. If we ever do make “artificial” intelligent beings, I think we will have to use some kind of evolutionary program. Indeed, brains themselves work on a kind of speeded-up “neural Darwinism” in which successful brain patterns repeat and unsuccessful ones are extinguished (“Hebb’s rule”: what fires together, wires together). Which is why, incidentally, in at least one sense I am an “intelligent design” proponent – I do think that life is designed by a system that closely resembles human intelligence (although it differs from it in some key respects), namely evolutionary processes.
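As an aside, the “what fires together, wires together” idea can be caricatured in a few lines of code. This is a toy sketch, not a model of real neurons; the function name, the learning rate, the decay term and the activity values are all invented for illustration:

```python
# Toy Hebbian update: the connection between two units strengthens
# when both are active at the same time, and slowly decays otherwise.
def hebbian_step(weight, pre, post, lr=0.1, decay=0.01):
    """pre and post are activity levels in [0, 1]; lr is the learning rate."""
    return weight + lr * pre * post - decay * weight

w = 0.0
for _ in range(50):                      # repeated co-activation...
    w = hebbian_step(w, pre=1.0, post=1.0)
# ...leaves the connection markedly strengthened, while a pattern
# that never fires together simply fades away.
```

Repeated co-activation drives the weight towards a stable maximum (here lr/decay), while unused connections decay – the “extinguishing” of unsuccessful patterns in the sense above.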

But my point in this post is simply to argue that:

  1. If a zombie robot (Fred) behaved exactly like a conscious person, to the point of being indistinguishable from a conscious person,
  2. Fred would necessarily be as conscious as a conscious person
  3. because consciousness is intrinsic to strategic, planned decision-making, anticipation of the actions of others, and selection of action in order to maximise the probability of achieving proximal or distal goals, and thus extremely helpful to survival,
  4. And thus is highly likely to have evolved.

114 thoughts on “Zombie Fred”

  1. keiths:
    BruceS,

    I agree that qualia are by all appearances inextricably linked to the accompanying brain states and that they can therefore be explained by evolution, whether they are epiphenomenal or not.

    The mystery is that some brain states (or portions of brain states) are associated with qualia while others aren’t, which is why I brought up the phenomenon of ‘blindsight’. Pace Lizzie, I think it is clear that blindsight is not accompanied by qualia. Why not, and what are the differences in brain state(s) that account for the presence of visual qualia in a normally-sighted person?

    Interesting. So how are you distinguishing between qualia and non-qualia, and why do you think that qualia are associated with some “brain states” and not others?

    To come clean – I don’t think the term “brain states” is relevant. What matters is “organism states” and those aren’t particularly coherent unless considered dynamically.

    IMO.

  2. keiths:
    BruceS,

    The mystery is that some brain states (or portions of brain states) are associated with qualia while others aren’t, which is why I brought up the phenomenon of ‘blindsight’. Pace Lizzie, I think it is clear that blindsight is not accompanied by qualia. Why not, and what are the differences in brain state(s) that account for the presence of visual qualia in a normally-sighted person?

    I thought the standard answer for blindsight was that there was a separate path associated with conscious awareness and that path was damaged in people with this condition. I think a previous poster also mentioned this.

    But of course that just dodges the question of why some brain states are qualia and some aren’t. Obviously no one knows. Some of the speculation I have read relates to what those brain states are representing: e.g. meta-representations of the relationship between the body-self representation and self representations (Damasio, if I understand him correctly), or meta-representations of brain representations of attention (Graziano, same caveat). And I think Damasio also believes that the fact that the representing cells are themselves living, with their own need for self-regulation, is important in the explanation of qualia.

    But this is not much more than handwaving, although at best it may suggest research in neuroscience, which I believe is the only way the question of how some brain states are qualia will be answered (and not by possible-world arguments, for example).

    BruceS: I thought the standard answer for blindsight was that there was a separate path associated with conscious awareness and that path was damaged in people with this condition. I think a previous poster also mentioned this.

    But that’s not an answer – because “conscious awareness” isn’t some brain module that blindsight bypasses.

    The big difference is that with normal sight, you can tell the ping pong ball is coming, and then decide to move your bat. With blindsight, you only know the ping pong ball is coming because you move your bat.

    But I see no reason why a person with blindsight couldn’t learn to read her own body’s incipient response, and say “hey, I’m trying to hit the ball, it must be coming!”, just as a sighted person monitors her visual system, and says “hey, my eyes are being drawn to a flying object, the ball must be coming!” The difference is that the second is either pre-wired, or learned in infancy, while the first would involve novel pathways and learning in adulthood.

    In which case, I see no reason why both should not be associated with qualia – they’d just be a bit different, but both would be an awareness of a response set associated with a moving object.

  4. And to support my hypothesis, there are the astonishingly good results for artificial retinas, in which the person has to learn to interpret a quite different set of inputs as information about the world, and rapidly learns to navigate the world in a way that appears to be experienced as “seeing it”.

    Or, for that matter, evidence from inverting lenses – what we perceive as “normal” is simply a mapping that works, whether that is derived from an inverted image on the retina or a rectified one.

    Lizzie: Interesting. So how are you distinguishing between qualia and non-qualia, and why do you think that qualia are associated with some “brain states” and not others?

    To come clean – I don’t think the term “brain states” is relevant. What matters is “organism states” and those aren’t particularly coherent unless considered dynamically.
    IMO.

    I don’t know the answer to your first question. I hope some neuroscientist figures it out in my lifetime (an optimistic hope for me, I suspect). I don’t have any original ideas beyond what neuroscientists write or neuroscience-informed philosophers write. I post here to try to find out if I understand them well enough to express them coherently.

    For your second point: the brain metarepresentations would include representations of representations of the state of the entire organism, so I think that amounts to the same thing as saying the whole organism is involved. I do think a brain, or at least a nervous system of sufficient complexity to build the right kind of representation, is necessary but not sufficient for qualia.

  6. One part of the problem is that “qualia,” as traditionally conceived, don’t fall within the purview of empirical science; they are only available to the introspection of the conscious being whose qualia they are. The “Hard Problem” is hard because it is ‘designed’ to be hard — the integration of the objective and the subjective. (Notice that this is how Nagel sets it up in his “What is it like to be a bat?”)

  7. BruceS: I don’t know the answer to your first question. I hope some neuroscientist figures it out in my lifetime (an optimistic hope for me, I suspect).

    Well, my somewhat hubristic position is that I think it’s already figured out – the problem is that “qualia” is ultimately an incoherent concept!

    But we certainly won’t ever figure it out unless we can define it – so how do you define it enough to know it is associated with some brain states but not others?

  8. BruceS: For your second point: the brain metarepresentations would include representations of representations of the state of the entire organism, so I think that amounts to the same thing as saying the whole organism is involved. I do think a brain, or at least a nervous system of sufficient complexity to build the right kind of representation, is necessary but not sufficient for qualia.

    Well, I think another problem is the word “representation”. I do think that “representations” are involved, but not that they are created by the brain for some occupant of a seat in the Cartesian theatre to appreciate.

  9. Lizzie: I do think that “representations” are involved, but not that they are created by the brain for some occupant of a seat in the Cartesian theatre to appreciate.

    S/he’s a representation too!

  10. Lizzie:
    And to support my hypothesis, there are the astonishingly good results for artificial retinas, in which the person has to learn to interpret a quite different set of inputs as information about the world, and rapidly learns to navigate the world in a way that appears to be experienced as “seeing it”.

    I agree that a separate path does not answer why qualia arise from that path.

    But for the cases you quote, could it not be that one would be using brain plasticity to redevelop the broken pathway (blindsight), or to develop an equivalent to the normal eye-seeing brain pathway (artificial retina), with the type of representations that are qualia, as in the path that is used for normal sight?

    Maybe this has already been settled by research?

    I agree that there is no observer in the brain that is being represented to although there can be representations of representations.

    In one of your previous posts, you said blindsight people have qualia because they have visual knowledge. What did you mean by knowledge, if it is something different physically from a particular kind of representation? (And by representation, I am thinking of the right kind of neuron configuration.)

  11. Lizzie:
    But I see no reason why a person with blindsight couldn’t learn to read her own body’s incipient response, and say “hey, I’m trying to hit the ball, it must be coming!”, just as a sighted person monitors her visual system, and says “hey, my eyes are being drawn to a flying object, the ball must be coming!” The difference is that the second is either pre-wired, or learned in infancy, while the first would involve novel pathways and learning in adulthood.

    Oops, I see you were ahead of me on what I posted above.

    But then what does “monitors her visual system” mean? Does it not mean “forms a separate set of representations which represent the representations in the visual system”?

  12. Kantian Naturalist:
    One part of the problem is that “qualia,” as traditionally conceived, don’t fall within the purview of empirical science; they are only available to the introspection of the conscious being whose qualia they are. The “Hard Problem” is hard because it is ‘designed’ to be hard — the integration of the objective and the subjective. (Notice that this is how Nagel sets it up in his “What is it like to be a bat?”)

    I need more time to think about how to form my reply to this one so I hope you will stay tuned and give me your comments on what I eventually post.

    However, I did lose a bet with myself that you would point to multiple realization as a counter argument to what I said.

  13. BruceS: Oops, I see you were ahead of me on what I posted above.

    But then what does “monitors her visual system” mean? Does it not mean “forms a separate set of representations which represent the representations in the visual system”?

    Well, as I said, I think “representations” are involved, but the projector of the representation is constructed by the whole person and the recipient is also the whole person – I think the infinite regress to a homunculus is relatively easily avoided by relaxing the assumption that consciousness is continuous and instantaneous, like frames in a movie.

    We know that it seems that way, but we also know, from vision science, that vision (ordinary vision) is nothing like a movie camera system – instead of taking “frames” of the visual scene, we scan it on a “need to know” basis, in which the periphery serves to signal something worth foveating (movement; bulky objects) and this results in a saccade to fixate that location, often simply required to confirm that what we assumed was there really was, or has changed in some minor regard. We don’t even “fill in”, as it is sometimes described – it’s more that we are unaware of what we don’t know, because whatever we do need to know – or rather check on – can be checked on as soon as we need to know it (which is why I like the fridge-light analogy – we don’t tend to think of the fridge interior as being dark between door openings, with a light coming on when we open the door – rather we think of the fridge as a consistently illuminated place, because whenever we look, it is. Alternatively, it’s like the fact that we only ever see our face in a mirror making eye contact with our own eyes. We know we are mostly not making eye contact most of the time, but we can’t catch ourselves not doing it!)

    And what the neuroscience seems to suggest is that we make “forward models” of the world, we “imagine” what the world will look like/feel like after an action (at a neural level, we activate the neurons that would be activated if it did) and then, post movement, compare the actual activation with the predicted one, and use the difference to tweak the model, and make the next one. Thus we present the world to ourselves as a series of objects, with current, past and possible future locations and trajectories, and these result from a process of making motor programs of our potential actions with regard to them, and feeding the simulated output from those programs (what will my hand feel when I reach the beer glass?) as input into the next simulation.
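The predict–act–compare–tweak loop described above can be sketched very crudely in code, with a scalar “world” and a one-parameter model standing in for whatever the brain actually does. Every name and number here is an invented illustration, not a claim about neural implementation:

```python
# Forward-model loop: predict the sensory consequence of an action,
# act, compare the prediction with actual feedback, and use the error
# to tweak the model before the next prediction.
def run_forward_model(true_gain=2.0, model_gain=0.5, lr=0.3, steps=20):
    errors = []
    for _ in range(steps):
        action = 1.0
        predicted = model_gain * action    # simulate the expected feedback
        actual = true_gain * action        # what the world actually returns
        error = actual - predicted         # prediction error
        model_gain += lr * error * action  # tweak the model
        errors.append(abs(error))
    return errors
```

Run it and the prediction errors shrink towards zero as the model converges on the world; a large one-off error is what you get when someone changes the world unobserved, as with the gulped-from beer glass.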

    Which is why we occasionally jerk the beer glass upwards if someone else has taken a gulp from it unobserved, and why I jolly nearly threw a friend’s newborn over my shoulder when I picked him up, having scaled the lifting force to a model based on my own strapping 9 month old child.

    But more generally, I suggest that our consciousness of the world works like this writ large – we do not have a timeline of memory constantly with us and extending as predictions into the near future – rather, we reinvent the past and future as and when we need them, by simulating past effects (somewhat unreliably), by activating the actual neural cascades that occurred back then. Memories, in that model, are not so much a “file store” as a repertoire of neural cascades, dominated by motor programs, I suggest, and triggering one of them triggers a full “re-enactment” – or can, especially if the event was “burned in” (almost literally) by a big dopamine surge at the time, resulting in “flashbulb” memories, or, worse, the flashbacks of PTSD.

    I’m rambling a bit, but I guess I’m trying to paint a picture of both vision and consciousness as a continuous active process of reconstruction and forward modelling, and as the process by which we parse the world into objects, models of which (simulations of seeing, or grasping, for instance) we can summon as needed. And I think this is the key to why our brains are so efficient – we don’t see, or know, a fraction as much as we think we do at any given time, but we can recreate it on the fly, so that it doesn’t matter.

    Experience, in this approach, isn’t a continuous exposure but sequential creations and recreations of past and future. This would explain the odd but common experience of driving for miles, apparently safely, but with no recollection of where we were supposed to be going, as we merrily drive way past the turn-off. Were we “unconscious”? No, clearly not – or we’d be in the ditch. But we had turned something off – the process of periodically matching our model of our surroundings with the goal of our journey – and while we might think we have no “consciousness” of passing our turn-off, we can become retrospectively conscious of it – and find ourselves saying “oh yes! I remember seeing a yellow van on the overpass!”

    We were conscious at the time of the turn-off – what we did not do, apparently, was relate that to our planned program of turning off there, and so no turning-off motor cascade was triggered, and as there was nothing else worth noting for more than a passing moment (the yellow van) our default “keep going” program remained in force, the yellow van dismissed, as such things must be if we are not to be completely overloaded (as some people, interestingly, are). But it is still there to be re-summoned to consciousness, should we initiate the cascade that was formed when we passed it – by conjuring the turn-off we should have taken, and thus its match to the yellow van cascade.

    Does that make any sense?

  14. petrushka:
    Peacock feathers.

    It is, in my view, hard to account for the artistic aspect of human consciousness purely in survival terms: the ability to sing and make music, the ability to appreciate music, the ability to create images with pigment, body decoration, poetry in language, story telling (making stuff up!). But add mate choice and preference into the mix and you get positive feedback on traits and morphology that impinge directly on mating success – feedback that may be analogous to peacock tail feathers.

    Well, I think once you get symbol use in there, and there’s a clear advantage in that, the sky’s the limit.

    You can (and we did) bootstrap yourselves to the moon and beyond with that, taking in the B minor mass and the Mona Lisa on the way.

  16. Alan Fox: It is, in my view, hard to account for the artistic aspect of human consciousness purely in survival terms.

    We are a social species. Being able to gain influence with others in the social group is beneficial (aids survival).

  17. Neil Rickert,

    I’m not suggesting all-or-nothing, and I certainly agree that sociability in humans is hugely important in how human society has developed. But there is, I suggest, a large fraction of what makes humans human that may not have evolved through direct selection for survivability. One important thing about sexual selection is its speed, consequent upon the positive feedback element.

  18. Barry starts out:

    In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being.

    And I admit I lost track of it right there. Everything else boils down to “things equal to the same thing are equal to each other.” Barry has presented as a given that we have an exact match. Either Fred is conscious, or Barry is not, because they are indistinguishable in any way.

    Now, what was the argument again?

  19. Consciousness is an anti-evolution force. Human beings are conscious about themselves and the society, so we are conscious about the ill effects of population growth, and the difficulty in bringing up a child, so we, as a species, are reducing our progeny, despite our ability to procreate at least one child a year!

    The future generations will have an evolved, intelligent but older and reproductively useless set of human species. For evolution, natural selection will have to ‘select’ genes from either evolved, intelligent but reproductively useless human beings or less evolved but reproductively more active species.
    Does this mean evolution is ‘devolution’? Did we devolve from a more intelligent but less reproductive ET? Or are humans the maxima of evolution, such that all other species in future will devolve? Or is evolution cyclic – evolving and devolving alternately?

  20. coldcoffee:
    Consciousness is an anti-evolution force. Human beings are conscious about themselves and the society, so we are conscious about the ill effects of population growth, and the difficulty in bringing up a child, so we, as a species, are reducing our progeny, despite our ability to procreate at least one child a year!

    The future generations will have an evolved, intelligent but older and reproductively useless set of human species. For evolution, natural selection will have to ‘select’ genes from either evolved, intelligent but reproductively useless human beings or less evolved but reproductively more active species. Does this mean evolution is ‘devolution’? Did we devolve from a more intelligent but less reproductive ET? Or are humans the maxima of evolution, such that all other species in future will devolve? Or is evolution cyclic – evolving and devolving alternately?

    If what you say is true, coldcoffee, and it might be, that doesn’t mean that consciousness is an “anti evolution” force. It may simply mean that too much consciousness and ability to control our fertility may ultimately lead to our own extinction.

    That doesn’t mean we will “devolve” – that word doesn’t mean what you think it means, and there isn’t one that does! “Evolve” doesn’t mean “getting more intelligent”. It could mean “get less intelligent” if intelligence turns out to be a handicap to successful reproduction. Which it might.

  21. Flint:
    Barry starts out:

    In the zombie thought experiment we are supposed to imagine a person (let’s call him Fred) who looks and acts exactly like a fully conscious human being.

    And I admit I lost track of it right there. Everything else boils down to “things equal to the same thing are equal to each other.” Barry has presented as a given that we have an exact match. Either Fred is conscious, or Barry is not, because they are indistinguishable in any way.

    Now, what was the argument again?

    This was my thinking as well Flint. Basically, Barry’s scenario boils down to, “Assume A and ~A are equal…”

    Many years ago I had an ongoing discussion with a creationist whose arguments against evolution and for creation ultimately relied upon an appeal to fear that without God, everything you experience could just be an elaborate illusion that would appear just like reality, but it would ultimately be meaningless since nothing in it would be real. I kept asking him if there was anything in this “fake reality” that I’d experience differently from the way I would experience such a thing in “real reality” and he kept insisting no. So then I’d ask him why I’d care if I could not discern any difference and he could not fathom the question. From his perspective, meaning and purpose can only exist in context of what is “real” by having it grounded in a deity. Experience means nothing. Barry’s zombie scenario seems to imply the same type of thinking.

    So I guess my question to Barry would be, why should I care that evolution cannot produce consciousness in your thought experiment if there is no way to tell if something has consciousness or not?

  22. Lizzie:
    And what the neuroscience seems to suggest is that we make “forward models” of the world, we “imagine” what the world will look like/feel like after an action (
    […]
    we can become retrospectively conscious of it – and find ourselves saying “oh yes! I remember seeing a yellow van on the overpass!”
    […]
    We were conscious at the time of the turn-off –

    I’m with you on the models part in the sense the brain forms a model of what it expects versus the feedback it receives from body sensations, and that this model is used, eg, to control our movements.

    Where I disagree is where you say “look like/feel” which to me implies that it is the act of forming that model that is equivalent to conscious awareness (eg of qualia).

    Rather, I think those models are being used whether we are conscious of eg the associated body sensations or not. Your beer glass example seems to illustrate that to me: it’s an unconscious action that fails because the model is wrong.

    I’d also explain the yellow van differently. I’d say we were not consciously aware of the yellow van, but the visual processing continued nonetheless, and the yellow van happened to be moved to memory. The conscious experience is of bringing that memory to one’s attention and that is different from the conscious experience of the perception that we would have had had we been aware of the yellow van during driving.

    Consider a musician learning a new guitar piece from sheet music but without memorizing it. While learning, one is conscious of the qualia of seeing the notes on the page, of the body sensations from moving the fingers, of the feel of the strings. One is also conscious of past experience of similar pieces and uses that experience to decide on appropriate fingering. But once the piece is learned, the notes seem to “flow directly” from the sheet music to the fingers without awareness. But the visual processing is still happening: if the sheet music is removed, one can no longer play it.

    I think there are mental representations present while consciously learning, that these representations are in addition to the ones used to process visual information and control movement of the fingers, and that there are facts about those representations that neuroscience can discover which explains why they are conscious, but that we do not know those facts yet.

  23. BruceS: I’d also explain the yellow van differently. I’d say we were not consciously aware of the yellow van, but the visual processing continued nonetheless, and the yellow van happened to be moved to memory. The conscious experience is of bringing that memory to one’s attention and that is different from the conscious experience of the perception that we would have had had we been aware of the yellow van during driving.

    Except that nothing in neuroscience or cognitive psychology supports the model of stuff being “processed unconsciously” then “moved to memory” in this way.

  24. Lizzie: Except that nothing in neuroscience or cognitive psychology supports the model of stuff being “processed unconsciously” then “moved to memory” in this way.

    So you are saying that any perception we remember, we must have been aware of at some time. In the case of the yellow van, we simply forgot that we were aware of it.

    Aren’t there experiments showing that we can be exposed to masked stimuli, report that we were never aware of them, and yet still have our future behavior on some task measurably changed? If I have that right, it would seem to show that brain states can be changed in ways that are not conscious but that affect future behavior. But I guess one could reply that there is no consciously recalled memory in that case either.

  25. Kantian Naturalist:
    One part of the problem is that “qualia,” as traditionally conceived, don’t fall within the purview of empirical science; they are only available to the introspection of the conscious being whose qualia they are. The “Hard Problem” is hard because it is ‘designed’ to be hard — the integration of the objective and the subjective. (Notice that this is how Nagel sets it up in his “What is it like to be a bat?”)

    It’s been a long time since I read that paper; when I reread it I found he had directly addressed the statement that mental events are brain events by questioning whether we could ever have a scientific explanation of that equality in the same way we have a scientific explanation of why matter is energy. But I don’t really understand why he thinks the two cases are different.

    It reminds me of the discussion of Mary the scientist who knows everything about color but has never experienced anything other than black and white. What happens when she first sees something red? Doesn’t she learn a new fact about the world which was not in the science she knew? If that is what Nagel means, then it seems to be open to the many arguments against that interpretation of the Mary experiment (eg she just learns a new ability, or she just learns a new mode of presentation of facts she already knew).

    Or maybe he means that our first-hand acquaintance with qualia means we understand everything about them, and since that understanding does not include brain states, they cannot be brain states. But I don’t think that intuition is reliable in specifying what can be explained scientifically.

    So I am not convinced, but that may be because I am missing his point.

  26. BruceS: So you are saying that any perception we remember, we must have been aware of at some time. In the case of the yellow van, we simply forgot that we were aware of it.

    Sort of – but to stick with the fridge-light model – we are aware of something when we “remember” it – when we aren’t remembering it, we aren’t aware of it 🙂

    That’s why Gerald Edelman called his book on consciousness (the first one) “The Remembered Present” – sure, long-term memory is a little different, and if short-term memories are not rehearsed by short-term “remembering”, aka being aware of the stuff, then the cascade fades before proteins have been expressed to make that cascade more likely in the long term. But it’s fundamentally the same process – the processes by which we remember, imagine, forward-model, and “are aware” are, I argue, essentially the same process.

    Aren’t there experiments showing that we can be exposed to masked stimuli, report that we were never aware of them, and yet still have our future behavior on some task measurably changed? If I have that right, it would seem to show that brain states can be changed in ways that are not conscious but that affect future behavior. But I guess one could reply that there is no consciously recalled memory in that case either.

    Yes indeed. But this is a very short-term effect – the cascade triggered by the cue continues, but simply “pokes” a few cascades that are then potentiated and more likely to be activated by the prime stimulus. So you never “remember” the experience of seeing the cue, because you never truly see the cue (though your retinas transmit the info, and that triggers the cascade that potentiates stuff that is then activated by the stimulus). You do remember your response, and if it was an odd one, you wonder why you thought of it – and the experimenter might tell you that there was a subliminal poke.

    But that’s at terribly short durations – a few tens of milliseconds, and subliminal effects are quite hard to get – too long and the participant notices the cue; too short and it has no effect. Sliding in between the early visual processing and more frontal processing is quite tricky – it’s a very narrow window. Not like seeing a yellow van on an overpass.

  27. Am I missing something here? Barry’s argument seems to be that because you can imagine something other than consciousness producing the same external results, consciousness would not be selected. That’s daft. Natural selection doesn’t work on imaginary mechanisms. It works on real ones that exist. Whatever your theory of consciousness, if you grant that it either produces our behaviour or is a by-product of whatever produces our behaviour, then it is open for selection.

    This seems so obvious I feel I must be missing the point somewhere.

  28. Mark Frank:
    if you grant that it either produces our behaviour

    The Zombie argument, on which Barry is relying, refuses to grant that.

    It claims to show that you could have behavior and brain states without consciousness, and that therefore this could be what occurs in our world. Barry’s argument is then that evolution would not have chosen it, although I think he is missing some premise, such as that consciousness would have added metabolic cost and so would have been selected out, had it arisen.

    There are some philosophers, mainly property dualists, who think the Zombie argument is sound. Most do not think that, however, in which case, of course, BA’s argument as presented fails.

    My personal view is to be suspicious of any argument which tries to limit what science can do by appealing to possible worlds, especially when such an argument is not even accepted by most philosophers.

    But still, it is a fun intellectual exercise to try to understand the logic of it.

  29. BruceS,

    But that’s nuts. Is anyone seriously denying that consciousness either results in behaviour or is linked to behaviour? Just to show that it is logically possible to have the behaviour without the consciousness doesn’t show that it is physically possible, and certainly doesn’t show it could ever have evolved.

  30. Mark Frank:
    BruceS,

    But that’s nuts. Is anyone seriously denying that consciousness either results in behaviour or is linked to behaviour?

    Well, I am not a philosopher, but I think the answer is yes to both questions. KN’s post above explains Zombies and epiphenomenalism better than me, and there is also the Stanford Encyclopedia of Philosophy article on zombies which goes through the arguments, if you like to read that kind of thing.

    SEP on Zombies

  31. It would seem to me that this hypothetical is inextricably tied to AI and the Turing Test. I find it interesting that a few programs have fooled people when the line of questioning is limited.

    There’s a movie exploring this, “Her.” I’ve seen the trailer but not the movie. There have been numerous fictional treatments of artificial intelligence and the question of whether a sufficiently sophisticated robot can be considered alive.

    My own take is that AI is really hard, and we really don’t know how to achieve it. We have scarcely managed to implement learning robots with the abilities of insects. We are only a decade or two removed from the idea that neurons are little more than transistors. Not many people outside the AI community have any idea of how difficult it is to emulate human language outside of very formal constraints. The best machine translators mostly copy translations of phrases made by humans.

    Before I worry about zombie scenarios, I’d like to see something that isn’t hypothetical. What Barry proposes is not unlike stage magic. A crafted illusion that wouldn’t stand up to scrutiny.

  32. Mark Frank,

    Whatever your theory of consciousness, if you grant that it either produces our behaviour or is a by-product of whatever produces our behaviour, then it is open for selection.

    Yes. Gelernter’s argument depends on the unsupported assumption that functionalism is false and that philosophical zombies are therefore possible in our world:

    If zombies and humans behave the same way all the time, one group would be just as able to survive as the other. So why would nature have taken the trouble to invent an elaborate thing like consciousness, when it could have got off without it just as well?

    And:

    Of course the deep and difficult problem of why consciousness exists doesn’t hold for Jews and Christians. Just as God anchors morality, God’s is the viewpoint that knows you are conscious. Knows and cares: Good and evil, sanctity and sin, right and wrong presuppose consciousness.

    And even if we grant for the sake of argument that functionalism is false, it still doesn’t follow that consciousness could not evolve.

    Gelernter hasn’t thought this through. Needless to say, neither has Barry.

  33. Barry A writes:
    Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection. Nature would not have done it that way.

    I see no functional difference between the above and the following rewording:
    “Gelernter points out that from an outside observer’s perspective, a fully conscious, self-aware person cannot be distinguished from a zombie Fred. They behave exactly alike. Here is where it gets interesting. If a conscious person and a zombie behave exactly alike, zombihood does not confer a survival advantage on the zombie person. It follows that zombihood is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that zombihood cannot be accounted for as the product of natural selection. Nature would not have done it that way.”

  34. This is only half a step removed from the argument that animals can’t suffer because they are meat puppets. People seem to be divided into those that empathize and those that don’t. Or more likely, it is a continuum.

    It is an odd thing, because a person who believes that consciousness evolves would be more likely to see consciousness as a continuum and more likely to treat other creatures humanely.

  35. llanitedave: I see no functional difference between the above and the following rewording:
    “… If a conscious person and a zombie behave exactly alike, zombihood does not confer a survival advantage on the zombie person. It follows that zombihood is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that zombihood cannot be accounted for as the product of natural selection. Nature would not have done it that way.”

    One could simply say that there would be no difference in fitness between zombies and conscious beings, if they behave identically.

    As I pointed out earlier in this thread, this does not follow if having that behavior by zombihood and having it by conscioushood are achieved by different structures of the nervous system. That would lead to evolution based on secondary effects of that structure, such as the resources needed, or the ease of occurrence of mutations (or different amounts of standing variation in the population allowing change toward one versus the other).

    The only way you could avoid selection on these correlated effects would be if zombiedom were achieved by exactly the same neural machinery as an equivalently-behaved consciousness. Which I suggest can’t be.
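Felsenstein’s point — that a variant becomes visible to selection as soon as it carries any correlated cost, even when the behavior it produces is identical — can be made concrete with a toy population-genetics sketch (purely illustrative; the function and parameter values here are my own, not anything from the thread):

```python
# Toy haploid selection sketch (illustrative only; numbers are assumptions).
# Two variants produce identical behavior, but one carries a small correlated
# cost s, so their relative fitnesses are (1 - s) versus 1.
def select(p, s, generations):
    """Deterministic per-generation change in frequency p of the costly variant."""
    for _ in range(generations):
        mean_w = p * (1.0 - s) + (1.0 - p)  # population mean fitness
        p = p * (1.0 - s) / mean_w          # standard selection recursion
    return p

# A 0.1% cost, utterly invisible in behavior, still drives the variant out:
print(select(0.5, 0.001, 10000))  # frequency falls from 0.5 to nearly zero
```

A fitness difference far too small to show up in behavior is still enough for selection to act on, which is why the “invisible to natural selection” step would need the extra premise of exactly identical neural machinery.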

  36. Joe Felsenstein: …
    The only way you could avoid selection on these correlated effects would be if zombiedom were achieved by exactly the same neural machinery as an equivalently-behaved consciousness. Which I suggest can’t be.

    It can be if they’re actually the same thing, and “zombie” is just a made-up concept. Hmmmmm….

  37. llanitedave: It can be if they’re actually the same thing, and “zombie” is just a made-up concept. Hmmmmm….

    In which case consciousness is also a made-up concept. I am, however, not claiming that. I think there is such a thing. It is just that it must then be reflected in the neural machinery, and thus its presence or absence, or its extent and detailed nature, must make some difference to fitness.

    Which makes the whole Zombie Fred argument a dead one, albeit a Living Dead one.

  38. Joe Felsenstein:
    As I pointed out earlier in this thread, this does not follow, if having that behavior by zombihood and having it by conscioushood are achieved by different structures of the nervous system.

    FWIW, philosophical zombies are usually assumed to be physically identical to us (who are all presumably conscious) because the purpose of the argument is to disprove physicalism.

  39. keiths:
    Gelernter hasn’t thought this through. Needless to say, neither has Barry.

    Yes, one needs some additional premise about consciousness having some cost that would be selected against (but which would not change physical structure and hence behavior). But since the zombie argument is about disproving physicalism, it is difficult to see how that cost could apply to evolution acting on the physical world.

    Possibly one could argue that consciousness is invisible to evolution and hence the fact that we (or some of us?) have it is pure luck.

    That seems similar to saying that consciousness is epiphenomenal and so does not impact evolution for that reason.

    Technical point: I believe there are functionalists who are property dualists, at least in theory, so one needs physicalism as well as functionalism or some other consciousness theory that is also physicalist.

  40. BruceS:
    FWIW, philosophical zombies are usually assumed to be physically identical to us (who are all presumably conscious) because the purpose of the argument is to disprove physicalism.

    The biology of philosophical zombies is not well-studied. It is not understood how they can have no consciousness but the same neural machinery as the Living Live.

    I can imagine that identical behavior might be achieved by two routes, one of which involves consciousness and one of which doesn’t. But I can’t imagine that this could be done with identical neural machinery, so identical costs, the same mutations and genetic variation, etc.

  41. petrushka:
    This is only half a step removed from the argument that animals can’t suffer because they are meat puppets. People seem to be divided into those that empathize and those that don’t. Or more likely, it is a continuum.

    It is an odd thing, because a person who believes that consciousness evolves would be more likely to see consciousness as a continuum and more likely to treat other creatures humanely.

    Well, it seems pretty clear to me it’s a continuum. I don’t think a conceptus is any more conscious (or rather, conscious of any more) than a plant.

    But a baby clearly is, and a 3rd-trimester foetus too. Though not as much as, say, a 6-week-old baby. And so on.

    (Then in reverse….)

    And of course there’s sleep.

  42. Joe Felsenstein: In which case consciousness is also a made-up concept. I am however not claiming that. I think there is such a thing. It is just that then it must be reflected in the neural machinery and thus its presence or absence or its extent and detailed nature, must make some difference to fitness.

    Which makes the whole Zombie Fred argument a dead one, albeit a Living Dead one.

    Well, it seems to exist only to score rather sterile points off of a pointless argument.

    Here’s my alternative parsing of Barry’s argument:

    1. Assertion — Consciousness is something apart from behavior and has no physical component.
    2. Caveat — But the only way to possibly infer consciousness on the part of another entity is via observing its behavior.
    3. Therefore, an entity can appear to be conscious in its behavior without really possessing it, but there is no way for the observer to tell the difference.
    4. Since consciousness is not truly defined by behavior and has no physical component, it is immune to the effects of natural selection, which can only work on physical and behavioral characteristics.
    5. Triumphant Flourish — Therefore, consciousness cannot be the result of biological evolution, and must be inserted via the Designer.

    The problem with the Triumphant Flourish is that, in making it, Barry concedes that behavior which is indistinguishable from consciousness to outside observers CAN be evolved. So it may very well be that everyone around him behaves as if conscious as a result of evolution, and there is no way for him to tell whether they are or are not. He can only assert that he, himself, is conscious via some miraculous event, but he can’t apply that to anyone else.

    In distinguishing between consciousness and behavior by stating that the former can’t be evolved, he’s pretty much giving up the farm on the latter, and it’s behavior, as Lizzie has so exquisitely pointed out in her response, that is really important.

  43. llanitedave: Well, it seems to exist only to score rather sterile points off of a pointless argument.

    Here’s my alternative parsing of Barry’s argument:

    Is Gelernter’s argument more straightforward? I think it’s Gelernter’s argument, not Barry’s, that needs to be examined.

  44. Joe Felsenstein:
    But I can’t imagine that this could be done with identical neural machinery, so identical costs, the same mutations and genetic variation, etc.

    One has to be a dualist of some sort. (Not necessarily a dualist in Descartes’ sense.) So the neural machinery is only identical in the physicalist world.

    I’m not a dualist and I cannot imagine it either. But there are some very smart people who can, or at least who accept an argument that it’s possible. Maybe 27% of philosophers, according to question 16 of this poll; see also the last question on zombies:
    Philosophers poll

  45. llanitedave:
    2. Caveat — But the only way to possibly infer consciousness on the part of another entity is via observing its behavior.

    In distinguishing between consciousness and behavior by stating that the former can’t be evolved, he’s pretty much giving up the farm on the latter, and it’s behavior, as Lizzie has so exquisitely pointed out in her response, that is really important.

    I’m not sure whose premise 2. is, but I don’t think we need behavior to infer consciousness. In a complete neuroscience, we’d be able to study consciousness by some kind of advanced monitoring of the brain only.
    Even today, there are studies of how to do that on people to understand if they are in a completely vegetative state.

    To accept BA’s argument, you have to accept (1) that consciousness is separable from brain states (which control behavior) and (2) that consciousness would somehow be selected out, and not be neutral or a spandrel, if it arose.

    (1) is justified by the philosophical zombie argument.
    (2) does not seem to be mentioned.

  46. BruceS: I’m not sure whose premise 2. is, but I don’t think we need behavior to infer consciousness. In a complete neuroscience, we’d be able to study consciousness by some kind of advanced monitoring of the brain only.
    Even today, there are studies of how to do that on people to understand if they are in a completely vegetative state.

    To accept BA’s argument, you have to accept (1) that consciousness is separable from brain states (which control behavior) and (2) that consciousness would somehow be selected out, and not be neutral or a spandrel, if it arose.

    (1) is justified by the philosophical zombie argument.
    (2) does not seem to be mentioned.

    Consciousness would manifest itself in behavior, and also, as you note, in brain states. Evolutionarily, it would presumably also be embodied in altered structure of the brain, which could in principle be observed neuroanatomically, if we knew enough to interpret the structures and connections properly. Even if it did not need altered neuroanatomy, it would need altered neurotransmitter levels, and those could in principle be observed.

    In a non-dualist framework, consciousness would be a mechanism that arose to accomplish some tasks. Although people like to think of consciousness as inherently involving Angst, Weltschmerz, and deep thoughts about one’s mortality and the fate of the universe, I suspect that an evolutionary account has to be much more mundane. A mouse in a forest might have a “picture” in its brain which has in it the tree, the bush, the rock, and a representation of the mouse itself (“I am between the bush and the rock”). If the mouse is represented in its own abstract representation of the scene, to me that’s the beginnings of consciousness.

    If consciousness needs changes in the neuroanatomy and/or the neurotransmitter levels, it is then not exempt from natural selection even if the behaviors could also be achieved some other way, without consciousness.

  47. Joe Felsenstein:
    If consciousness needs changes in the neuroanatomy and/or the neurotransmitter levels, it is then not exempt from natural selection even if the behaviors could also be achieved some other way, without consciousness.

    Just to be clear, that is exactly what I think.

    I am only trying to clarify BA’s argument, for essentially the same reason I do (British) crossword puzzles.

  48. BruceS: Just to be clear, that is exactly what I think.

    I am only trying to clarify BA’s argument, for essentially the same reason I do (British) crossword puzzles.

    I understand. I was summarizing my take on this. I was not assuming that you disagreed, that you were a dualist or that you thought consciousness could not be explained by natural selection. This is another case of violent agreement.

  49. Barry Arrington puts up another OP related to “philosophical zombies” here.

    He quotes Lizzie thus:

    What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act? What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge? What is anticipating another’s desires and needs, if not the ability to imagine “what it it is like” to be that person? What is wanting to please or help another person if not the capacity to recognise in another being, the kinds of needs (recharging? servicing?) that mandate your own actions? In other words, what is consciousness, if not these very capacities?

    and responds

    Let’s answer Lizzie’s question using her first example (the reasoning applies to all of her others). To be startled means to be agitated or disturbed suddenly. I can be startled by an unexpected loud noise and jump out of my seat. Zombie Fred would have the same reaction and jump right out of his chair too. Our physical outward actions would be identical. So what is the difference? Simply this. I, as a conscious agent, would have a subjective reaction to the experience of being startled. I would experience a quale – the surprise of being startled. Zombie Fred would not have a subjective reaction to the experience.

    Which I guess might be meaningful if the concept of qualia were meaningful. It must be obvious that I am not well-versed in philosophy and have yet to be convinced of the rigour of philosophical arguments as they tend to rush off into meandering expositions without defining or clarifying the terms used.

    I see from Wikipedia that

    It is worth noting that a necessary condition for the possibility of philosophical zombies is that there be no specific part or parts of the brain that directly give rise to qualia—the zombie can only exist if subjective consciousness is causally separate from the physical brain.

    So it would seem that defining what a quale is would be a prerequisite for talking about philosophical zombies.

    So are qualia real or imaginary?

    ETA

    Barry goes on to suggest a thought experiment where a computer is programmed to assess and respond to a “beautiful sunset”.

    The computer has had no “experience” of the sunset at all. It has no concept of beauty. It cannot experience qualia. It is precisely this subjective experience of the sunset that cannot be accounted for on materialist principles.

    I’d disagree on two counts.

    1) The trivial one that material experience is all that we have evidence for. It is up to those that propose that immaterial elements affect our experience and consciousness to make that case. “I had no need of that hypothesis”.

    2) The pedantic one that artistic concepts, not having an evolutionary survival advantage, could not have evolved. This ignores the proposition that sexual selection could have been an element.
