Conching

The problem of consciousness is, notoriously, Hard.  However, it seems to me that much of the hardness is a result of the nature of the word itself.  “Consciousness” is a noun, an abstract noun, derived from an adjective: “conscious”.  And that adjective describes either a state (a person can be “conscious” or “unconscious” or even “barely conscious”) or a status (a person, or an animal, is said to be a “conscious” entity, unlike, for example, a rock).

But what must an entity be like to merit the adjective “conscious”?  What properties must it possess?  Merriam-Webster gives as its first definition: “perceiving, apprehending, or noticing with a degree of controlled thought or observation”.  In other words, the properties of a conscious thing refer to what it is doing, or is capable of doing.  And, indeed, the word is derived from a Latin verb: scire, to know.  “Consciousness”, then, may be better thought of as an abstract noun derived not from an adjective that describes some static attribute of an entity, but from one that implies that that entity is an agent capable of action.

And so, I will coin a new verb: to conch.  And I will coin it as a transitive verb: an entity conches something, i.e. is conscious of something.

Armed with that verb, I suggest, the Hard Problem becomes tractable, and dualism no more of an issue than the dualism that distinguishes legs and running.

And it allows us, crucially, to see that an animal capable of conching itself, and of conching others as other selves, is going to be something rather special.  What is more, an animal capable of conching itself and others as selves will not only experience the world, but will experience itself experiencing the world.

It will, in a sense, have a soul, in Hofstadter’s sense, and even a measure of life beyond death in the conchings of other conchers.

[As has this post....]

123 thoughts on “Conching”

  1. Neil,

    Such modeling [of neural networks] is imperfect… It certainly does not prove that a semantic system can be built on top of purely syntactic processing.

    Why do you think that an imperfect model of a neural network cannot be a semantic system?

  2. keiths: Why do you think that an imperfect model of a neural network cannot be a semantic system?

    I already went through that in an earlier post. If you try to provide semantics through syntactic operations, you will need very fine adjustments of the syntactic processing. That will require a learning capability. By using a model, you will find yourself forced to hard code into the model everything that will have to be learned. This is probably not doable. But, even if it is, that would not work for the brain unless you are claiming that there is an enormous amount of “preprogrammed” knowledge carried by the genome.

Computationalism – lots of highly detailed information processing in the brain – is a designer’s way of thinking about cognition. We should be thinking about what might evolve, and that kind of information processing seems unlikely as a result of evolution. I’ll suggest J.J. Gibson’s ideas as at least closer.

  3. keiths: You seem to be arguing that the phenomenology of consciousness is essential to the underlying information processing, rather than merely accompanying it. What causal role does the phenomenology play, and how would behavior differ if the phenomenology did not accompany the information processing?

    That’s a very odd question. It’s as if you consider the subjective aspects of consciousness to be separate from and isolated from the rest of the mind. While people can philosophize about and ‘reify’ their common subjective experiences, that doesn’t mean those experiences are not part of how the mind interacts with itself and with the world.

    Returning to the flip-switch that swaps pain for pleasure, it seems nonsensical to suppose that such a switch wouldn’t affect behavior. If you flipped the switch, people would avoid caresses and seek pinpricks, even if contrary to their autonomic responses.

  4. Zachriel,

    It’s as if you consider the subjective aspects of consciousness to be separate from and isolated from the rest of the mind.

    Not separate and isolated, but causally impotent. The evidence suggests that subjective awareness depends intimately on neural processing, but not vice-versa. The causal work is done by neurons changing state in response to their inputs, according to the laws of physics. Those physical laws do not take subjective experience into account. How then can subjective experience play a causal role?

    Returning to the flip-switch that swaps pain for pleasure, it seems nonsensical to suppose that such a switch wouldn’t affect behavior.

    Counterintuitive, yes. Nonsensical, no.

    If you flipped the switch, people would avoid caresses and seek pinpricks, even if contrary to their autonomic responses.

    Your statement assumes the causal power of subjective experience, when the point of the thought experiments is to determine whether subjective experience has such power.

    Reciprocating Bill and I think it doesn’t, for the reasons I gave above. The evolution of neuronal states in response to environmental inputs is causally closed, determined only by the laws of physics. Subjective experience has no influence on this evolution. It’s just along for the ride. It’s epiphenomenal.

  5. It’s perfectly possible that the kind of behavior exhibited by humans requires self awareness. We have no counterexamples as far as verbal behavior goes.

    I suppose that leaves open the possibility that self awareness doesn’t require consciousness.

  6. Keiths:

    Your statement assumes the causal power of subjective experience, when the point of the thought experiments is to determine whether subjective experience has such power.
    Reciprocating Bill and I think it doesn’t, for the reasons I gave above.

    I don’t quite follow you all the way into epiphenomenalism. In fact, as above, I think really accepting epiphenomenalism results in incoherence, e.g., we can’t actually be talking about subjective awareness, we just think we are. And if that is so, we haven’t really attained any conclusions about subjective awareness at all – including the conclusion that subjective awareness is epiphenomenal. So I suspect a wrong turn. (This may be a self-inflicted sleight-of-hand, but if so, somebody will have to show me the trick, as I don’t see it.)

    At the risk of repeating myself, this ties back to “a rose by any other name.” True philosophical zombies would be characterized by information processing every bit as nuanced and subtle and multi-leveled as our own, crammed with shifts of something we generally refer to in ourselves as “awareness” or “consciousness.” Zombies would awaken from sleep and take note of their surroundings. They would formulate thoughts, lose them (with some frustration), then recover them again. They would possess an indexical self to which they attribute beliefs, desires, intentions and other features of agency at least as correctly as do we. They would refer to themselves in the first person, and be privy to private information of the sort available only to subjects, and unavailable to others. Mountain vistas would prompt remarks on the majesty of nature, and star fields on the existence of God. Indeed, every causal nuance, however global, ephemeral, focused, dreamlike, or hallucinatory, that I now ascribe to changes in states of awareness would be present. Zombies would attribute to LSD the ability to transform consciousness and deepen insight. Every particle and every archway of the content of subjective awareness I possess would have an analog in my zombie. It follows that none of those features are what are being subtracted in the zombie thought experiment. Indeed, following Wittgenstein in saying that “meaning is use,” we would say that the zombie possesses subjective awareness.

    Meanwhile, the notion of subjective awareness that has been subtracted in the zombie thought experiment has been entirely emptied of content. It has no causal force whatsoever. Nor can we non-zombies be said to be aware of it in any sense that we think about it, reflect upon it, classify it, or wax poetic about it, as those activities would result in differences in information processing, and hence divergence from zombie behavior. Therefore it has no characteristics we can discuss. As above, I don’t see how any of this conversation can actually concern it, as it seems to float elsewhere, utterly disconnected, utterly without significance and utterly beyond the reach of all of the other forms of awareness and reflection ascribed to both zombies and ourselves above. Indeed, not only does it have no causal significance, I wonder if it can even be said to exist. But how can we intelligibly go there?

    In short, when we push the notion of epiphenomenalism, everything of significance concerning subjective awareness, either causally or poetically, is sucked over to i-consciousness before our eyes. The subjective awareness of this thought experiment proves to be an empty notion with no attributes, and nothing has been subtracted from the zombies, after all! So, once again, we don’t appear to be thinking about what we think we are thinking about when we pose the notion of epiphenomenalism.

  7. Arrrgg! I edited a lengthy comment, and now it is “marked for moderation” and has disappeared from others’ view!

  8. This is a repeat post, after the original went unexpectedly into moderation. Liz, pls feel free to delete either this or its duplicate, as they are identical.


  9. keiths: Not separate and isolated, but causally impotent.

    Well, isolated in one direction, like a one-way mirror.

    keiths: The evidence suggests that subjective awareness depends intimately on neural processing, but not vice-versa.

    That seems to overstate the findings. Sure, scientists can sometimes predict choice before the conscious mind is aware of it, but that doesn’t mean the consciousness is completely impotent, any more than a committee chair is impotent (though it may seem so at times). These experiments usually concern simple tasks which the consciousness may assign to a bot in the brain (like driving home mindlessly). It tells the bot to make the decision, then for the bot to let the consciousness know when it has made its decision. Scientists can catch the communication between the bot and the conscious mind, and know before the conscious mind knows. But that doesn’t mean the conscious mind had no say in the matter.

    The mind is funny that way. It assigns the choice to a bot, then discards the answer a couple of times, then takes the next answer, which you might observe as the hand moves towards, then back, then towards the choice, as the person “makes up their mind”. From the mind’s point of view, it made the choice because it doesn’t make a clear distinction between the various bots that comprise it.

    The consciousness is interwoven with all the bots that work with it. Indeed, the consciousness is probably just a bunch of bots all having a say, a committee of committees all clamoring for attention. For a more detailed treatment, see:

    http://www.youtube.com/watch?v=djQ7WZlb140


  10. Reciprocating Bill,

    Sorry for prematurely drafting you into the ranks of the epiphenomenalists. I read more into your previous comment than I should have.

    Here’s my attempt at distilling the essence of the conundrum as it stands:

    1. Some kinds of neural activity are associated with subjective awareness.

    2. Our intuition tells us that subjective awareness has causal power. If I’m thirsty, I drink something, and my drinking seems to be motivated by my subjective thirstiness.

    3. Brains are physical systems whose states evolve according to the laws of physics. Physical laws make no reference to subjective states.

    4. #2 and #3 appear to be contradictory. #2 says that subjective awareness has causal power, and #3 seems to say that it doesn’t. How can we resolve the contradiction?

    5. One way is to argue that subjective states are identical to the corresponding brain states. Under that interpretation, #2 and #3 can be true simultaneously.

    6. A problem with this approach is that a brain state does not merely cause the accompanying subjective feeling. It is the feeling. How is this possible? It’s like being told that sonnets and horseshoes are the same thing — that they may seem different to us, but it’s just an illusion.

    7. Another way out of the contradiction is to jettison #2 and posit that subjective awareness is epiphenomenal, with no causal power.

    8. A problem with this approach, as you have pointed out, is that when we talk about subjective awareness, our comments must actually be informed by something other than subjective awareness itself, because subjective awareness has no causal power.

    I’ll have more to say about all of this, but I thought it was a good time to pause and restate the problem in light of the discussion so far.

  11. I also feel that there is another neglected end of the problem. When discussing subjective awareness, we tend to be hypnotized by the problems of awareness, and pass over the problems presented by the peculiar fact that there are subjects in the world. I like Nagel in “The View from Nowhere” on this:

    “To acquire a more objective understanding of some aspect of life or the world, we step back from our initial view of it and form a new conception which has that view and its relationship to the world as its object. In other words, we place ourselves in the world that is to be understood…

    Every objective advance creates a new conception of the world that includes oneself, and one’s former conception, within its scope; so it inevitably poses the problem of what to do with the older, more subjective view, and how to combine it with the new one….if what we want to understand is the whole world, we can’t forget about those subjective starting points indefinitely; we and our personal perspectives belong to the world. (pp. 4, 5-6)”

    The “View from Nowhere” is the subject-free description of mathematical physics, which, as the ultimate intellectual achievement of Western culture, inherently and progressively leaves behind the question of subjects and subjectivity as it strives to describe the world in a way that is independent of any given subject (i.e., objective). In so doing, it ignores a patent fact of the world (the fact of subjectivity) and creates the problem of how to integrate the subjective into a scientific scheme. I think this is responsible in part for the dilemma posed by your numbers 2. and 3. above.

  12. keiths: 2. Our intuition tells us that subjective awareness has causal power. If I’m thirsty, I drink something, and my drinking seems to be motivated by my subjective thirstiness.

    That seems like a strange thing to say. I’m not sure what it even means to say that subjective awareness has causal power. We humans are causal agents, even if only because our concept of cause is derived from what we are ourselves able to cause. And subjective awareness is part of our experience. But I can’t make sense of separating out subjective awareness as having causal power in its own right.

    3. Brains are physical systems whose states evolve according to the laws of physics. Physical laws make no reference to subjective states.

    I’m not sure of the point there, either. Physics is not a theory of everything. It is not required that physical laws make reference to subjective states. Physical laws don’t make reference to traffic jams, either.

    I can look at a brain in different ways. One way of looking at the brain, is as a collection of molecules in some sort of configuration. Looked at that way, there is not much difference between the brain of a living person and the brain of a person who recently died (assuming that the cause of death was not explicitly brain related). Another way of looking at the brain, is as a system of biochemical processes, and that’s where there would be a lot of difference between the brain of a living person and of one recently dead.

    Physics is very good at explaining the behavior of objects, of things made of molecules. However, there is much less to physics when it comes to processes. And it seems to me that if physics were to explain awareness, that would require substantial development of a general physics of processes. I am not expecting that to happen any time soon.

    4. #2 and #3 appear to be contradictory. #2 says that subjective awareness has causal power, and #3 seems to say that it doesn’t. How can we resolve the contradiction?

    That just seems confused. I don’t see that subjective awareness has causal power, as I have remarked above. And I don’t see that your #3 carries any implication that it doesn’t.

  13. Reciprocating Bill,

    I also feel that there is another neglected end of the problem. When discussing subjective awareness, we tend to be hypnotized by the problems of awareness, and pass over the problems presented by the peculiar fact that there are subjects in the world.

    I find that statement confusing. Don’t we have awareness wherever there are subjects, and vice-versa? Are you (or Nagel, whom I haven’t read) drawing a distinction that I’m missing?

  14. Neil,

    I’m not sure what it even means to say that subjective awareness has causal power.

    Most humans feel intuitively that subjective awareness has causal power (even if some of us come to doubt our intuition upon reflection). That is why the following conversation makes sense to us:

    Q: What are you doing?
    A: Getting a drink.

    Q: Why are you getting a drink?
    A: Because I’m thirsty.

    Q: Would you be getting a drink if you didn’t feel thirsty?
    A: No.

    We humans are causal agents, even if only because our concept of cause is derived from what we are ourselves able to cause.

    If we are able to cause things then we are causal agents by definition.

    It is not required that physical laws make reference to subjective states.

    Who said it was? My point is that the evolution of brain states seems to depend on nothing but physics, and the laws of physics don’t take subjective states into account. Therefore the evolution of brain states, and all the actions our bodies take as a result of our changing brain states, are independent of subjective awareness — unless, as I mentioned earlier, subjective awareness is somehow identical to those physical brain states.

    Physics is very good at explaining the behavior of objects, of things made of molecules.

    What is a brain, if not a thing made of molecules?

  15. keiths: What is a brain, if not a thing made of molecules?

    Consider your brain, and the molecules involved.

    In a few months time, most of those molecules will be gone, dissipated throughout the environment. Should we say that your brain will be gone, or that your brain will be dissipated throughout the environment?

  16. keiths: Most humans feel intuitively that subjective awareness has causal power (even if some of us come to doubt our intuition upon reflection). That is why the following conversation makes sense to us:

    Q: What are you doing?
    A: Getting a drink.

    Q: Why are you getting a drink?
    A: Because I’m thirsty.

    Q: Would you be getting a drink if you didn’t feel thirsty?
    A: No.

    But the thirst does not cause me to drink. My decision to drink is what causes that. The thirst might be part of the motivation. But to say that it is motivational is far weaker than to say that it is causal.

  17. Neil,

    In a few months time, most of those molecules will be gone, dissipated throughout the environment. Should we say that your brain will be gone, or that your brain will be dissipated throughout the environment?

    The collection of molecules that makes up my brain changes over time, but there is continuity. It is reasonable to speak of ‘Keith’s brain at time x’ and ‘Keith’s brain at time y’, even if there are no molecules common to both, just as it makes sense to talk about an ocean wave at time x and the same wave at time y, even if a completely different set of water molecules is involved.

    In any case, that’s beside the point. Explaining how a brain went from state A to state B doesn’t depend on our ability to draw a sharp boundary between brain and environment. It just requires us to apply the laws of physics — and nothing but the laws of physics. That’s the point: the evolution of brain states according to the laws of physics is causally closed. No extraneous causes are needed.

    But the thirst does not cause me to drink. My decision to drink is what causes that. The thirst might be part of the motivation. But to say that it is motivational is far weaker than to say that it is causal.

    Motivations are causes. Think about the etymology. And surely you can see how this causal chain makes intuitive sense:
    1. I felt thirsty.
    2. Feeling thirsty made me decide to get a drink.
    3. My decision to get a drink caused my brain to initiate the muscular motions required for grabbing my water bottle, unscrewing the lid, raising the bottle to my lips, etc.

    No matter how finely you divide it, the causal chain is there, and feeling thirsty is part of it.

    Since I’m arguing the epiphenomenal position in this thread, let me remind you that I don’t claim that the subjective feeling of thirst has any causal power. I’m just pointing out that such a view is intuitively convincing, and that this creates tension with the view that the evolution of brain states is pure physics.

  18. keiths: 2. Feeling thirsty made me decide to get a drink.

    But it didn’t.

    If I am busy doing something else, then I can postpone getting a drink for some time, perhaps for hours. However, if I really like the taste of a particular beverage, I might drink some even when not thirsty.

    Thirst is just one of many inputs that we evaluate as we make our decisions.

  19. Neil,

    The fact that there are multiple necessary causes for an event doesn’t render each one of them causally impotent.

    It is perfectly sensible to say that gravity made my car roll down the driveway even though other conditions were necessary:

    1. My car was on an incline.
    2. The wheels weren’t missing.
    3. The parking brake was disengaged.
    4. The car was in neutral.
    etc.

    Likewise, it is perfectly intuitive to say that my itch caused me to scratch even though other conditions were necessary:

    1. I wasn’t paralyzed.
    2. I wasn’t in a full-body cast.
    3. I wasn’t being offered a million dollars to refrain from scratching.
    etc.

  20. keiths: It is perfectly sensible to say that gravity made my car roll down the driveway even though other conditions were necessary:

    1. My car was on an incline.
    2. The wheels weren’t missing.
    3. The parking brake was disengaged.
    4. The car was in neutral.

    Taken together, those may explain what happened. But I wouldn’t say that any one of them was the cause. I am disagreeing with the way that you are using “cause.”

  21. All of them have causal power in the relevant sense. If we started with identical conditions except that the parking brake was engaged, the car would not roll. Release the parking brake and the car rolls. Releasing the parking brake has causal power.

    Likewise, according to the intuitive view: If we start with identical conditions except that I don’t feel thirsty, I won’t drink. The feeling of thirst has causal power in this scheme.
