“The Quale is the Difference”

Barry has graciously posted a counter-rebuttal at UD to my Zombie Fred post rebutting his own original zombie post at UD.  (This debate-at-a-distance procedure isn’t a bad way to proceed, actually!  Although as always, he is welcome to come over here in person if he would like.)

[Image: a red mouse mat]

Barry writes:

Over at TSZ Lizzie disagrees with me regarding my conclusions from the zombie thought experiment (see this post).  Very briefly, in the zombie post I summarized David Gelernter’s argument from the zombie thought experiment:

If a conscious person and a zombie behave exactly alike, consciousness does not confer a survival advantage on the conscious person. It follows that consciousness is invisible to natural selection, which selects for only those traits that provide a survival advantage. And from this it follows that consciousness cannot be accounted for as the product of natural selection.

Lizzie disagrees.  In her post she writes:

What is being startled if not being “conscious” of an alarming signal? What is trying to gain further information, if not a volitional act?  What is recognising that information is lacking if not a metacognitive examination of the state of self-knowledge?  What is anticipating another’s desires and needs, if not the ability to imagine “what it is like” to be that person?  What is wanting to please or help another person if not the capacity to recognise in another being, the kinds of needs (recharging? servicing?) that mandate your own actions?  In other words, what is consciousness, if not these very capacities?

Let’s answer Lizzie’s question using her first example (the reasoning applies to all of her others).  To be startled means to be agitated or disturbed suddenly.  I can be startled by an unexpected loud noise and jump out of my seat.  Zombie Fred would have the same reaction and jump right out of his chair too.  Our physical outward actions would be identical.  So what is the difference?  Simply this.  I as a conscious agent would have a subjective reaction to the experience of being startled.  I would experience a quale – the surprise of being startled.  Zombie Fred would not have a subjective reaction to the experience.

I submit that Barry has not addressed my questions at all.  He has simply repeated his assertion – that physically identical entities (Fred and Zombie Fred) would differ in some key way, namely that one would experience a quale, and the other would not. And of course, I disagree.  But let me unpack Barry’s assertion:

Let me first note that Barry refers to the “physical outward” actions of the two Freds.  I suggest that “outward” is at best unnecessary, and at worst, misleading.  In classic Philosophical Zombie thought experiments, the two Freds are physically identical, right down to the last ion channel in the last neuron.

This means that not only would Zombie Fred’s “outward” (i.e. apparent to someone meeting Zombie Fred at, say, a cocktail party) reactions be identical to Fred’s, but the cascade of biological events generating those reactions would also be identical.  Not only that, but the results of those reactions – for example, changing the direction of gaze, reaching out to touch or grasp something, changing the trajectory of an action, moving to a new location – will bring in new, behaviourally relevant information that ZF would otherwise not have gained.  This itself will impact on the results of further decisions that ZF makes, and therefore on Zombie Fred’s biological equipment, in just the same way as it would on Fred’s biological equipment.  In both cases, that equipment must enable both ZF and F to interrogate the state of their own knowledge, in order to base a decision on that knowledge. If it doesn’t in ZF’s case, ZF’s behaviour will differ from Fred’s.

And my question (or one of them) to Barry was: in what way does ZF’s interrogation of the state of ZF’s own knowledge differ from the meta-cognitive interrogation of our own state of knowledge that we call conscious awareness of our state of knowledge?

I will try to illustrate my point with the stupid red mouse mat illustrated at the top of this OP. My optical computer mouse reacts to a red mouse mat simply by stopping work (because it “can’t tell” whether it is moving when its red laser beam traverses a red surface).  If I put my mouse on a black mouse mat it starts work.  But I do not argue that my mouse experiences red when it meets a red mouse-mat, even though it reacts to it (by stopping work).  So I do not think my mouse experiences a quale.
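To make the contrast concrete, here is a minimal sketch (in Python, with an invented detection threshold – nothing more than an illustration) of the kind of purely reactive logic my mouse embodies: a sensor reading maps straight to an action, and there is no record of, or access to, any state of knowledge for the device to interrogate.

```python
# A minimal sketch of a purely reactive device (values invented for illustration).
# It maps a sensor reading directly to an action; it keeps no model of what it
# "knows", so there is nothing for it to interrogate.

def optical_mouse_step(surface_contrast: float) -> str:
    """Return the mouse's behaviour for a single sensor reading."""
    DETECTION_THRESHOLD = 0.2  # invented: contrast needed to track movement
    if surface_contrast < DETECTION_THRESHOLD:  # e.g. red light on a red mat
        return "stop reporting movement"
    return "report movement"

print(optical_mouse_step(0.05))  # red mat: stop reporting movement
print(optical_mouse_step(0.90))  # black mat: report movement
```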

But I experience the quale of “red” when I see the mouse mat. Or, if you like, when I look at the picture of the red mouse mat at the top of the OP, I “experience redness”. So what do I mean by that?

I suggest that my experience of redness consists not merely of reacting to redness (my red receptors do this, but they are not me), as my mouse does (by stopping work), but also of knowing that the mouse-mat is red, and, moreover, knowing that I know that the mouse-mat is red, and being able to compare that state of knowledge with the state of not knowing that the mousemat was red, as would be the case if, for instance, I saw this picture:

[Image: the same mouse mat in greyscale]

which is a greyscale image of the same mousemat.

Aha, you say – but if I told you that the mousemat was red, you would have that knowledge – what you’d lack would be the quale associated with a red mousemat.

Yes, indeed, I agree. But I suggest that that knowledge (gained from you telling me that the mousemat is red) is qualitatively different from knowing it is red by seeing that it is red in the following ways:

  • I know that my knowledge is contingent on my trust in your honesty, not my own perceptual apparatus
  • I know that I would not know that the mouse mat was red unless you told me
  • I know that if I saw the red mousemat in colour, as opposed to a greyscale image of it, I would know that it was red without you telling me.
  • I also know that if I really saw a red mousemat, I would quite like it, because I know I like red.
  • I also know that I would think it was a silly colour for a mouse mat, because the mouse probably won’t work on it, but maybe a red mouse could be cool.

And I suggest that all these pieces of knowledge are part of what constitutes my experience of directly seeing a red mousemat.  Moreover:

  • I also know that when I see red, and even, to some extent, when I imagine red, and also, to some extent, after you tell me the mousemat is red, I have an idea of what red is (or a red mouse mat would be) like, and that knowing what red is “like” is different from knowing that something is red.

And there’s the rub – what does that last thing mean?  Because now we are close to this ineffable “qualia” business.  And I suggest that the “quale” of red consists not only of all the explicit knowledge I listed first, but also of implicit knowledge of what I feel like when I see red things, gained partly from my life experience, but partly, I suspect, bequeathed to me by evolution in the genes that constructed my infant brain.

And in the case of red, specifically, I suggest it is a slight elevation of sympathetic nervous system activity, resulting from both learned and hard-wired links between things that are red, and which have in common both danger and excitement – fire and blood being the most primary, edible fruit probably as a co-evolutionary outcome, and fire engines, warning lights, stop lights, etc. as learned associations.  And just as we can implicitly find ourselves feeling a touch of anxiety that we cannot pin down, triggered by a reminder of something we know we should have done, but can’t remember what, I suggest that the “quale” of red, and indeed of other colours, is, in addition to our explicit knowledge-about-knowledge, also implicit knowledge about our own internal responses, including idiosyncratic appetitive or aversive responses.

And my point is: all the mechanisms that generate that explicit and implicit knowledge in response to a red stimulus would have to be present in Zombie Fred for Zombie Fred to react to red as Fred does.  And, as a result, ZF would have just the same quale as Fred.  And we could test this: if we use a red stimulus in a priming experiment, does ZF show the same priming effects?  Does ZF’s reaction time to a red stimulus, relative to a green one, change in the same way Fred’s does?  Will ZF be more likely to react to a fire alarm test following a series of red stimuli than following a series of green ones?
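(Such a test is easy to frame; here is a hypothetical sketch, with invented reaction times, of how the red-versus-green priming effect might be compared between the two Freds.)

```python
# Hypothetical sketch: do Fred and Zombie Fred show the same red-vs-green
# priming effect? The reaction times (ms) below are invented for illustration.
from statistics import mean

fred_rt   = {"red": [512, 498, 530, 505], "green": [547, 560, 539, 551]}
zombie_rt = {"red": [509, 501, 527, 512], "green": [550, 556, 542, 548]}

def priming_effect(rt: dict) -> float:
    """Mean change in reaction time for red relative to green stimuli."""
    return mean(rt["green"]) - mean(rt["red"])

print("Fred's priming effect (ms):       ", priming_effect(fred_rt))
print("Zombie Fred's priming effect (ms):", priming_effect(zombie_rt))
# If the two are physically identical, the effects should not differ
# beyond measurement noise.
```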

I suggest, in short, that a “quale” is a highly automated repertoire of possible action sets triggered by a certain stimulus property (classically colour), and that our knowledge that the mouse-mat is red when we see the full-colour picture boils down to the knowledge that it has activated in us a specific repertoire of action sets that we package together, for convenience, as “red”.

And that in order to behave exactly like Fred, those action sets must also be triggered in Zombie Fred by the same stimulus property.  If they aren’t, he will behave differently.  If they are, he will experience qualia, because that’s what qualia are.
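To put that suggestion in cruder, more concrete terms, here is a toy sketch (all names and repertoires invented, nothing more than an illustration) of the difference between merely reacting to red, as my mouse does, and knowing that you know the mat is red: the second system records which repertoire the stimulus activated in it, and can report on that.

```python
# Toy sketch (all names and repertoires invented) of the idea that a "quale"
# is a packaged repertoire of action sets, and that knowing the mat is red
# amounts to knowing that the "red" repertoire has been activated in you.

ACTION_REPERTOIRES = {
    "red":   ["raise alertness", "check for danger", "approach if fruit-like"],
    "green": ["relax", "treat as background"],
}

class ReactiveAgent:
    """Reacts, but keeps no record of what was activated (my optical mouse)."""
    def see(self, colour: str) -> list[str]:
        return ACTION_REPERTOIRES.get(colour, [])

class MetacognitiveAgent(ReactiveAgent):
    """Also records, and can report on, its own activations (Fred -- and,
    I argue, Zombie Fred too, if he is to behave identically)."""
    def __init__(self):
        self.activated = None

    def see(self, colour: str) -> list[str]:
        actions = super().see(colour)
        self.activated = colour  # the agent now has access to what it "knows"
        return actions

    def report(self) -> str:
        if self.activated is None:
            return "I don't know what I'm looking at."
        return f"I know the mat is {self.activated}, and I know that I know it."

fred = MetacognitiveAgent()
fred.see("red")
print(fred.report())  # -> I know the mat is red, and I know that I know it.
```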

Barry finishes his post:

I discussed a similar situation in this post in which I contrasted my experience of a beautiful sunset with that of a computer.  I wrote:

Consider a computer to which someone has attached a camera and a spectrometer (an instrument that measures the properties of light).  They point the camera at the western horizon and write a program that instructs the computer as follows:  “when light conditions are X print out this statement:  ‘Oh, what a beautiful sunset.’” Suppose I say “Oh, what a beautiful sunset” at the precise moment the computer is printing out the same statement according to the program.  Have the computer and I had the same experience of the sunset?  Obviously not.  The computer has had no “experience” of the sunset at all.  It has no concept of beauty.  It cannot experience qualia.  It is precisely this subjective experience of the sunset that cannot be accounted for on materialist principles.
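(For concreteness, the program Barry describes amounts to no more than a hard-coded rule; a hypothetical sketch, with the test for “light conditions X” left as an invented stub, would be roughly this.)

```python
# A hypothetical rendering of the program Barry describes; the test for
# "light conditions X" is a stub with an invented threshold.
def light_conditions_are_X(spectrometer_reading: float) -> bool:
    return spectrometer_reading > 0.8  # invented threshold

def sunset_watcher(spectrometer_reading: float) -> None:
    if light_conditions_are_X(spectrometer_reading):
        print("Oh, what a beautiful sunset.")

sunset_watcher(0.9)  # prints the sentence, with nothing behind it
```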

I completely agree with Barry that the computer is not experiencing the sunset (just as I assume he agrees that my mouse is not experiencing a red mouse mat).  But the computer is not behaving like Barry.  There is a tiny overlap in behaviour – both are outputting an English sentence that conveys the semantic information that the sunset is beautiful.  But to leap from “computer does not experience the sunset” to “this subjective experience of the sunset …cannot be accounted for on materialist principles” is a non sequitur, because the computer does not behave like Barry.  And if we replace the computer with Zombie Barry, “Zombie Barry does not experience the sunset” is mere assertion – unlike the computer, Zombie Barry does behave exactly like Barry, not merely “outwardly” but with every molecule and ion of its being.  So why should we conclude that Zombie Barry has no qualia?

I suggest that Zombie Barry both has qualia and must have them, because those qualia are a necessary consequence of ZB’s interrogation of its own internal state, and if ZB can’t do that, it won’t be able to behave exactly like Barry.

120 thoughts on “The Quale is the Difference”

  1. Neil Rickert: I disagree with that particular criticism.

    I hope you see what I did there. It’s a matter of the level at which we aim our account, so we can have different causal explanations at different levels.

    It’s precisely this type of consideration that I think is missing from the argument linking consciousness to behavior and so evolution.

    I’m all for levels of explanation, but higher levels need to be constrained by lower levels. Explanations of psychological events must be constrained by neuroscientific explanations, in the same way that biological explanations involving mutation must be constrained by the chemistry and physics that explain mutations.

    In other words, there is no downward causation: psychological/mental events cannot cause neuroscientific/brain events. So it is incomplete to argue about the usefulness of consciousness (which I agree is a vague term for psychological events) in evolution without linking it to brain events and behavior.

    The two usual ways of linking are functionalism, where the brain is an implementation detail, and identity theory, where brain state types are identical to psychological state types. As someone with an IT background, functionalism always seemed natural to me, since it is like saying one only needs to look at source code most of the time to understand how a program works. But as I learn more neuroscience, I have come to doubt that it is that simple. But that’s enough for now.

  2. Causation must be linked in some way to emergence. Water can’t cause changes in the attributes of hydrogen and oxygen, but H and O can’t account for the behavior of water.

  3. BruceS: In other words, there is no downward causation: psychological/mental events cannot cause neuroscientific/brain events. So it is incomplete to argue about the usefulness of consciousness (which I agree is a vague term for psychological events) in evolution without linking it to brain events and behavior.

    There will never be a satisfactory explanation in terms of brain events. It’s not possible.

    Try giving an explanation of how to get from New York City to Chicago. But the explanation has to be given in terms of the amount of pressure applied to the steering wheel, the gas pedal, the brake pedal, and the pixel data from stimulation of retinal cells. Such an explanation would be impossibly complex in detail, and would be an account of only one driving exercise that could never be repeated.

    That’s why, instead, you need an account in terms of information and intentions. The question inherently calls for something like a teleological account.

  4. There could be some problems lurking about here about what it means to offer an “explanation” of some phenomena.

    John McDowell, for example, distinguishes between “constitutive explanations” and “enabling explanations” in a criticism of Dennett. On McDowell’s account, constitutive explanations specify what it is for some phenomena to count as “mental” or “psychological” in the first place. A constitutive explanation of mental life could specify the nature of intentionality, consciousness, representation, conceptuality, or rationality. That’s different from offering an enabling explanation that specifies how those phenomena are implemented in the order of nature.

    So the relevance of neuroscience to philosophy of mind depends on whether we’re interested in constitutive explanations (in which case neuroscience doesn’t seem relevant) or in enabling explanations (in which case neuroscience is definitely relevant).

    And because they are different kinds of explanations, they also have different success-conditions. I share Neil’s skepticism about whether a completely satisfying enabling explanation of mental life is in the cards. For that matter, I don’t even know if there’s a completely satisfying constitutive explanation, though there’s much to learn from Kant, Husserl, and Ryle, and also much room for improvement.

  5. Neil Rickert: There will never be a satisfactory explanation in terms of brain events. It’s not possible.

    Try giving an explanation of how to get from New York City to Chicago.

    I agree and I never meant to imply otherwise. Explanations at different levels of science may always be useful and may be all that is available for all practical purposes. That is not the same thing as downward causation.

    The software analogy would be that no one would explain the operation of a program by referring to the states of the electrical components of the computer it was running on. But that does not change the fact that it is those electrical states that have causal power in the real world, not the abstraction that is the software.

    This may seem like nitpicking, but denying that analogous linkage for qualia and brains is the crux of BA’s original argument. That is why I think it is important to include it in any refutation of that argument.

  6. BruceS,

    In other words, there is no downward causation: psychological/mental events cannot cause neuroscientific/brain events.

    I agree. In fact, I am unaware of any valid examples of downward causation in nature.

    This doesn’t render mental events causally inert, however, if mental events are just physical events at a different level of description.

  7. keiths:
    BruceS,

    I agree. In fact, I am unaware of any valid examples of downward causation in nature.

    This doesn’t render mental events causally inert, however, if mental events are just physical events at a different level of description.

    Exactly. And a statement like this is the missing link I was referring to in the argument chain. It’s as simple as that. I made that point in my original feedback to Dr Liddle and was just restating the fact that Torley missed it too.

    ETA: Not that everyone accepts that approach — it’s not quite functionalism as I understand it, for example, where it is more like mental events being realized by brain events in humans but not necessarily Martians. But that can do the same work, except maybe for qualia…

  8. Kantian Naturalist:
    There could be some problems lurking about here about what it means to offer an “explanation” of some phenomena.

    As usual, KN, this is too subtle for me to understand immediately. So another challenge to puzzle out (in a good way).

  9. BruceS,

    ETA: Not that everyone accepts that approach — it’s not quite functionalism as I understand it, for example, where it is more like mental events being realized by brain events in humans but not necessarily Martians. But that can do the same work, except maybe for qualia…

    To me, that’s the real import of the zombie argument.

    Both physicalists and property dualists accept that two physically identical systems must be identical in terms of consciousness (or lack thereof).

    The interesting question is whether two functionally identical systems (in terms of how they process information) must also be identical in terms of consciousness. In other words, must any system that behaves indistinguishably from a human experience phenomenal consciousness, regardless of its “implementation”?

    The answer determines whether zombies are possible.

  10. keiths:
    The interesting question is whether two functionally identical systems (in terms of how they process information)

    I think it depends on what you mean by “how they process information”.
    I believe a functionalist would say that the implementation does not matter, as long as the same outputs and internal state updates occur for given inputs. So in this case, how they process refers to the causal relationships between the states.

    I can accept that works for any state that can be associated with some kind of functional role, things like beliefs, intentions, thoughts. How it works for qualia is not so clear to me. In fact, thought experiments like qualia inversion are sometimes used to show that qualia are independent of function. Of course, there are arguments to show qualia are also functional, but they are not as intuitive to me.

    On the other hand, if by “how they process information” you mean that the implementation matters, at least for qualia, then it would be possible for two functionally identical systems to have different qualia (but I am not so sure about no qualia).

    There are counter-arguments on both sides (e.g. thought experiments on multiple realizability to “prove” that implementation cannot matter and functionalism is right, or the Chinese nation implementation of functionalism to show that qualia cannot be solely functional). In fact, the Prinz book I am reading claims that only an integration of the two concepts works. But I am still puzzling out what he means and why he thinks it works.

  11. Bruce,

    I believe a functionalist would say that the implementation does not matter, as long as the same outputs and internal state updates occur for given inputs.

    A functionalist could go even further and argue that as long as the outputs are the same, the implementation doesn’t matter.

    I can accept that works for any state that can be associated with some kind of functional role, things like beliefs, intentions, thoughts. How it works for qualia is not so clear to me.

    Or to me.

    In fact, thought experiments like qualia inversion are sometimes used to show that qualia are independent of function.

    When I was a teenager, I was disturbed by the thought of a pain/pleasure qualia inversion. I imagined a nightmarish scenario in which my body and brain would be wired to seek out the usual sources of pleasure, but the accompanying qualia would be agonizing and unbearable.

    No one would know that I was in pain, because my body would show all the outward signs of pleasure and satisfaction, and I would continue to bring about the very things that caused my agony.

  12. keiths: When I was a teenager, I was disturbed by the thought of a pain/pleasure qualia inversion. I imagined a nightmarish scenario in which my body and brain would be wired to seek out the usual sources of pleasure, but the accompanying qualia would be agonizing and unbearable.

    No one would know that I was in pain, because my body would show all the outward signs of pleasure and satisfaction, and I would continue to bring about the very things that caused my agony.

    Come to that, how would you know? How could you know that you, yourself, weren’t just deluded?

    This is why I dismiss qualia as any sort of useful or explanatory idea. Such a scenario cannot work in the real world unless you allow “imaginary agony” as having some consequence in the real world, which is utterly illogical.
    I wonder if I can go so far as to say that a concept, which by the very nature of the assumptions in formulating it renders it utterly unable to be demonstrated and shared, is non-existent. Or is that too pragmatic?

    ETA

    Does this relate to the “Twin Earth” thought experiment where “water” isn’t “water” except that it is?

    ETA

    If zombies are indistinguishable from people because they are identical, people are zombies and vice versa?

    (Last comment for a while but perhaps OT, anyone else think Jonah Lehrer’s “The Decisive Moment” – I have his book “Proust was a Neuroscientist”* which seems to make the point that the Arts have a contribution to make to neuroscience – might be worth a read?

    *There’s a chapter on Stravinsky which induced me to play “Rite of Spring”. I see another work by him is entitled “Petrushka”! Spooky!)

    ETA Oops!

    ETA Bizarrely there’s an oblique reference to Lehrer in an OP by “News”

  13. Alan Fox:
    This is why I dismiss qualia as any sort of useful or explanatory idea.

    But do you dismiss their existence entirely? If not, then I think that means they need an explanation. That’s different from saying they are useful as part of a scientific explanation to explain other things.

    Does this relate to the “Twin Earth” thought experiment where “water” isn’t “water” except that it is?

    That’s a thought experiment in philosophy of language (originally) and mental content (by extension) which aims to show that meaning/mental content cannot be just in the head. Two beings with identical internal states on different planets both refer to water. But on one planet, the being is actually referring to twater. So meaning and content must involve more than just internal state. (You have to define twater as behaving identically to water as far as both can tell but somehow being different in some other way. Not sure if there is a physically realistic way to do this: one attempt I’ve seen is to say twater is the same as water except at temperature > 1 million degrees).

    If zombies are indistinguishable from people because they are identical, people are zombies and vice versa?

    Physically identical but not completely identical because they have no qualia, since qualia are not physical (if you accept the zombie argument shows that).

    If you don’t believe qualia exist at all, then we are all zombies, I guess.

  14. keiths:

    A functionalist could go even further and argue that as long as the outputs are the same, the implementation doesn’t matter.

    I would have thought the internal state has to change to cover all possible future outputs. But maybe there is a way to get around that.

    Still I am not sure what happens to things like beliefs — if one just considers outputs, it seems like behaviorism to me.

  15. Alan,

    Come to that, how would you know? How could you know that you, yourself, weren’t just deluded?

    That’s a key point. As an adult, I can see what my teenage self missed: to know that I am in pain is to take a propositional attitude that must necessarily be reflected in my brain state. Otherwise it wouldn’t be possible for me to say “ouch” or reach for the aspirin bottle.

    So even if I were experiencing pleasurable qualia at the time, I would still “know” that I was in pain. This seems nonsensical, strongly suggesting that a pain/pleasure qualia inversion is impossible.

  16. BruceS:

    I believe a functionalist would say that the implementation does not matter, as long as the same outputs and internal state updates occur for given inputs.

    keiths:

    A functionalist could go even further and argue that as long as the outputs are the same, the implementation doesn’t matter.

    BruceS:

    I would have thought the internal state has to change to cover all possible future outputs. But maybe there is a way to get around that.

    Yes, the internal state has to change, but my point is that the state updates don’t have to be identical in order to guarantee identical output.

    In other words, multiple realizability doesn’t just apply to different media, such as silicon vs. neurons. It also applies to different implementations in the same medium.
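    A toy illustration of that point (hypothetical, in Python): two systems realized differently in the same medium, with different internal states and different state updates, yet with identical outputs for every input sequence.

```python
# Toy sketch: two functionally identical systems whose internal states and
# state updates differ. Both report whether the number of inputs seen so far
# is even, but one stores the whole history and the other a single boolean.

class HistoryParity:
    def __init__(self):
        self.history = []            # internal state: everything seen so far

    def step(self, x) -> str:
        self.history.append(x)
        return "even" if len(self.history) % 2 == 0 else "odd"

class BitParity:
    def __init__(self):
        self.even = True             # internal state: one boolean

    def step(self, x) -> str:
        self.even = not self.even
        return "even" if self.even else "odd"

a, b = HistoryParity(), BitParity()
for x in range(10):
    assert a.step(x) == b.step(x)    # identical outputs, different realizations
print("Same behaviour for every input; different internal states throughout.")
```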

  17. keiths:
    This seems nonsensical, strongly suggesting that a pain/pleasure qualia inversion is impossible.

    True. It has to be something that would not affect behavior, like colors.

  18. I would suggest that quite a few people exhibit behavior that brings about what most of us would call pain. We even have a word for it.
