Conching

The problem of consciousness is, notoriously, Hard.  However, it seems to me that much of the hardness is a result of the nature of the word itself.  “Consciousness” is a noun, an abstract noun, derived from an adjective: “conscious”.  And that adjective describes either a state (a person can be “conscious” or “unconscious” or even “barely conscious”) or a status (a person, or an animal, is said to be a “conscious” entity, unlike, for example, a rock).

But what must an entity be like to merit the adjective “conscious”?  What properties must it possess?  Merriam-Webster gives as its first definition: “perceiving, apprehending, or noticing with a degree of controlled thought or observation” – that is, doing something, or being capable of doing something.  In other words, the properties of a conscious thing refer to its capacity to act.  And, indeed, the word is derived from a Latin verb: the verb scire, to know.  “Consciousness”, then, may be better thought of as an abstract noun derived not from an adjective that describes some static attribute of an entity, but from one that implies that that entity is an agent capable of action.

And so, I will coin a new verb: to conch.  And I will coin it as a transitive verb: an entity conches something, i.e. is conscious of something.

Armed with that verb, I suggest, the Hard Problem becomes tractable, and dualism no more of an issue than the dualism that distinguishes legs and running.

And it allows us, crucially, to see that an animal capable of conching itself, and conching others as other selves, is going to be something rather special.  What is more, an animal capable of conching itself and others as selves will not only experience the world, but will experience itself experiencing the world.

It will, in a sense, have a soul, in Hofstadter’s sense, and even a measure of life beyond death in the conchings of other conchers.

 

[As has this post….]

123 thoughts on “Conching”

  1. Rilx: I’m sorry but I don’t understand what kind of process you have in mind. IMO we discriminate objects which get our attention – from the beginning of our ability to observe/experience – and build our worldview on those objects and their (later experienced/thought) relations.

    How can an object get our attention, if we are unable to discriminate it?

    You seem to be taking perception for granted as a magical deliverer of facts. Not believing in magic, I take perception as requiring explanation, and I take that as prior to most other questions of cognition.

  2. Reciprocating Bill,

    Would it not follow, were subjective awareness invisible to selection, that, by virtue of drift and the absence of stabilizing selection, most species would come to be devoid of awareness, or acquire bizarre forms of awareness that are irrelevant to their usefulness, correspondence to actual states of affairs, etc.? In particular, if awareness imposes any costs (say, extra metabolic work) in the absence of any other relevance to selection, would we not expect to be zombies?

    Yes, if subjective awareness were both invisible to selection and decoupled from the information processing that drives behavior. My argument is that subjective awareness is not decoupled from this information processing, and in fact is the inevitable product of it. Thus subjective awareness persists and remains in correspondence with reality despite being invisible to selection.

    To recap my argument in detail:
    1. Subjective awareness seems to be an inevitable byproduct of certain kinds of information processing.
    2. We can imagine a world in which identical information processing takes place without the accompanying subjective awareness.
    3. Behavior — being the result of purely physical interactions between the nervous system, the body, and the environment — would be identical in such a world.
    4. Thus, it is the information processing, and not the subjective awareness, that drives behavior in the real world. Subjective awareness is epiphenomenal.
    5. Because information processing drives behavior, it is visible to natural selection.
    6. Because subjective awareness doesn’t affect behavior, it is invisible to selection.
    7. Subjective awareness is shaped by selection despite being invisible to it. Why? Because natural selection shapes information processing, and information processing of a certain kind produces subjective awareness.

    Summary: Subjective awareness is epiphenomenal, but the (apparent) fact that it is an inevitable product of certain kinds of information processing explains why it persists and remains in correspondence with reality despite being invisible to selection.

  3. keiths: Summary: Subjective awareness is epiphenomenal, but the (apparent) fact that it is an inevitable product of certain kinds of information processing explains why it persists and remains in correspondence with reality despite being invisible to selection.

    Awareness seems to be part-and-parcel of how many organisms make sense of their environment. Higher animals reflect on their sensations, and this is a decided selective advantage.

  4. keiths,

    It would also follow from your argument that the presence of subjective awareness is entirely “free,” itself imposing absolutely no costs (as well as conferring no benefit) upon the aware organism relative to the zombie organism. Any such cost would be visible to selection. Of course, the information processing that gives rise to awareness in this model can itself be quite costly.

    Vis-à-vis subjective awareness being the “inevitable byproduct of certain kinds of information processing,” I wonder what your thoughts are on the possibility that specific kinds of embodiment also accompany subjective awareness, as distinct from information processing, which is computational and therefore abstractable from specific physical instantiation. Subjective awareness always (I would expect) entails both experiences and a subject of those experiences (or at least the experience of being a subject having experiences). The simplest experiences – sensations of pain in particular, as well as the avoidance of painful stimuli – have been connected with midbrain structures such as the periaqueductal gray, the activity of which is less apt to be characterized as “information processing” in the sense that its activity can be computationally abstracted from the structures. I don’t have any trouble imagining that these non-computational (less computational?) elements of the sense of being an embodied subject, a subject with a stake in just what sort of events the organism encounters, are grounded in the activity of, and perhaps the structural arrangement of, those midbrain structures.

  5. Zachriel,

    Awareness seems to be part-and-parcel of how many organisms make sense of their environment. Higher animals reflect on their sensations, and this is a decided selective advantage.

    This is where things get tricky. Higher animals do reflect on their sensations, and this confers a selective advantage, as you point out. However, I claim that the selective advantage arises only because of the information processing they do as part of such reflection. Sensory information gets processed by the nervous system, the outputs of the nervous system are modulated accordingly, and the outputs of the nervous system drive behavior, which is visible to selection. Subjective awareness doesn’t fit into this causal chain, as far as I can tell. It sits off to the side, depending on the causal chain but having no influence on it.

    For example, we can imagine a zombie who reflects on his sensations and delivers a vivid description of them without actually experiencing the sensations, or the reflections, in a subjective sense. The information gets processed but “no one is home” to experience it, so to speak.

  6. keiths: Subjective awareness doesn’t fit into this causal chain, as far as I can tell. It sits off to the side, depending on the causal chain but having no influence on it.

    While some decisions can be shown to be made just below the level of consciousness, it is reasonable to suppose that there is an interplay between the various levels, rather than consciousness simply existing as a passive observer, or even a separate system apart from the rest of the brain’s activities.

    keiths: For example, we can imagine a zombie who reflects on his sensations and delivers a vivid description of them without actually experiencing the sensations, or the reflections, in a subjective sense.

    You mean like a robot subsystem that examines (rather than feels) and analyzes the sensory input? That’s fine, but it’s not a necessity. And there certainly are a lot of automated responses in the human brain. But it’s not the only possible way to organize this higher-level reflective ability.

    Your point was that consciousness was invisible to selection, but just because it is possible to imagine another way for the brain to work doesn’t mean that the current system isn’t adaptive, or that it isn’t something evolution actually adapted from more primitive neural systems.

  7. Reciprocating Bill,

    It would also follow from your argument that the presence of subjective awareness is entirely “free,” itself imposing absolutely no costs (as well as conferring no benefit) upon the aware organism relative to the zombie organism. Any such cost would be visible to selection. Of course, the information processing that gives rise to awareness in this model can itself be quite costly.

    Exactly right, and well put.

    There are some unsettling implications of my argument, as well. For example, if I’m right, the subjective painfulness of pain serves no purpose. It’s just something we must suffer as a side effect of the information processing that detects and responds to painful stimuli. Of course by the same logic, subjective pleasure is a pure bonus, also serving no purpose.

    This points to what I think is the weakest aspect of my argument. It seems to me that the same computational logic could underlie both a circuit that recognizes pain and a circuit that recognizes pleasure. If so, what makes the subjective experience noxious in one instance but rewarding in the other?

    You hint at one possible answer when you bring up the issue of embodiment. I’ll comment on that later tonight.

  8. Zachriel,

    While some decisions can be shown to be made just below the level of consciousness, it is reasonable to suppose that there is an interplay between the various levels, rather than consciousness simply existing as a passive observer, or even a separate system apart from the rest of the brain’s activities.

    I’m not suggesting that the systems that generate subjective awareness are passive, or that there is no interplay between levels. I think they are active and have multiple functions. It’s just that subjective awareness per se seems to be purely a product of these systems, with no causal power of its own.

    You mean like a robot subsystem that examines (rather than feels) and analyzes the sensory input? That’s fine, but it’s not a necessity. And there certainly are a lot of automated responses in the human brain. But it’s not the only possible way to organize this higher-level reflective ability.

    Regardless of how this reflective ability is implemented, it amounts to information processing. Any subjective awareness that accompanies this processing looks epiphenomenal to me. It’s just along for the ride.

    Your point was that consciousness was invisible to selection, but just because it is possible to imagine another way for the brain to work doesn’t mean that the current system isn’t adaptive, or that it isn’t something evolution actually adapted from more primitive neural systems.

    I do think that the current system is adaptive and that it evolved from more primitive neural systems. All I’m claiming is that subjective awareness itself was not selected for. Indeed, it could not have been favored by selection if it is epiphenomenal, because epiphenomena have no causal power and therefore must be invisible to selection. The underlying information processing systems were certainly selected for, but subjective awareness itself was not.

  9. Reciprocating Bill,

    Vis-à-vis subjective awareness being the “inevitable byproduct of certain kinds of information processing,” I wonder what your thoughts are on the possibility that specific kinds of embodiment also accompany subjective awareness, as distinct from information processing, which is computational and therefore abstractable from specific physical instantiation.

    I’m open to the idea, but I don’t see a way to confirm or falsify it. The problem of other minds gets in the way, particularly when we start dealing with potential embodiments of consciousness that are quite different from our own. Without direct access to another being’s subjective awareness (or lack thereof) we can’t decide whether “anyone’s home”, regardless of the complexity of the behavior displayed by the being. And it seems to me that we may never have such access.

    The closest we can come to such access is to identify neural correlates of consciousness in our own species (and related species), then look for NCCs in other beings (or systems). But what counts as an NCC in a silicon-based computer system, for example? I have no idea, and I’m not sure we can ever figure it out. Complex behavior is not enough. The problem of other minds foils us again.

    When I can find some time, I’d like to learn more about cephalopod brains. If complexity is the criterion, cephalopod behavior certainly seems indicative of subjective awareness, but cephalopod brain architecture is fundamentally different from that of vertebrates, and both the similarities and differences may be instructive.

    A final thought: there’s a lot of complex information processing going on in the brain, most of which is unconscious. When we know more about which circuits are involved in the generation of subjective awareness, and which aren’t, comparisons between the two groups of circuits may yield important clues about how subjective awareness is and isn’t generated.

    The simplest experiences – sensations of pain in particular, as well as the avoidance of painful stimuli – have been connected with midbrain structures such as the periaqueductal gray, the activity of which is less apt to be characterized as “information processing” in the sense that its activity can be computationally abstracted from the structures.

    Could you expand on that? I don’t see why the activity of midbrain structures couldn’t be functionally abstracted and implemented on another substrate, just as we can do (in principle) for the cortex.

  10. keiths: It’s just that subjective awareness per se seems to be purely a product of these systems, with no causal power of its own.

    Automated responses only go so far. It is awareness that allows the understanding of what those sensory impressions mean. And it is the deep involvement of the consciousness in those sensory impressions that motivates the consciousness.

    keiths: Any subjective awareness that accompanies this processing looks epiphenomenal to me.

    It seems to be intrinsic to the mechanism. The conscious mind is aware of and intimately involved in sensation.

  11. keiths:

    I’m open to the idea, but I don’t see a way to confirm or falsify it. The problem of other minds gets in the way, particularly when we start dealing with potential embodiments of consciousness that are quite different from our own. Without direct access to another being’s subjective awareness (or lack thereof) we can’t decide whether “anyone’s home”, regardless of the complexity of the behavior displayed by the being. And it seems to me that we may never have such access.

    Doesn’t the proposition that some forms of information processing are inherently accompanied by subjective awareness suffer from the same problem? Which forms? How would we know?

    Beyond disquieting implications, there are paradoxes that arise from the apparent epiphenomenalism of subjective awareness.

    Another thought experiment:

    Let us imagine a world in which information processing sometimes is, and other times is not, accompanied by awareness, and we have figured out a way to turn it on and off. Each person is equipped with a small switch capable of activating or deactivating subjective awareness. So we each get to choose either to be aware, or to be a functioning zombie.

    Given epiphenomenalism, one’s behavior would be absolutely unchanged regardless of the position of the switch, and it indeed would be impossible to discover from an investigation of an individual’s activities (including brain activity) which state they were in. Both outward behavior and internal information processing would be unchanged.

    Such a switch might seem like a good idea in some circumstances. As noted above, both suffering and pleasure come along as components of subjective awareness. Were I about to be tortured (say, my fingernails pulled out to extract information), I might wish to switch off awareness for the duration. Of course, my zombified self would still scream and give up the intelligence just as quickly, but I would be subjectively unaware of any suffering (and anything else). OTOH, were I about to enter a sexual encounter, I might wish to be, well, turned on.

    However, it follows from epiphenomenalism that neither the fact nor the contents of awareness will have any bearing upon whether and when I would throw the switch, whether or not I am a zombie. Given epiphenomenalism, all behaviors – including switch throwing – are mediated by physical events (including certain forms of information processing) without regard to the presence of subjective states, past or present. A decision to turn off awareness in anticipation of torture cannot flow causally from prior subjective experiences of pain – even when that decision is taken by the version of me that possesses subjective awareness. Given epiphenomenalism, the fact of subjective awareness, or the absence of same, can itself have no causal bearing upon whether I throw the switch in advance of certain suffering. Hence both me and my zombie will be equally motivated, and motivated for the same reasons, to deactivate awareness when the pliers come out.

    That seems very peculiar.

    Moreover, it seems to me that if I elected to turn off awareness for the duration of a session of torture (or any other experience), upon reactivating my subjective awareness I would fully recall the torture, and would experience ongoing trauma caused by the torture in exactly the way I would experience trauma had I been aware. This would be true for any experience during which subjective awareness was absent (switch was off), because information processing would have continued unchanged, including the formation of memories.

    More generally, it strikes me that if epiphenomenalism is true, then this discussion is not, after all, about subjective awareness. It cannot be causally motivated by the fact that I, and you, are subjectively aware, believe certain things about our experience of subjective awareness, etc. The zombie versions of ourselves would be having an identical conversation, taking identical positions and reasoning to identical conclusions, without ever having possessed subjective awareness. For our zombie selves this conversation would have no referent, yet our conclusions would be identical. So, for example, were I to maintain that I simply can’t believe that the subjective awareness I am now experiencing is epiphenomenal and plays no causal role in my behavior, that utterance cannot in fact be grounded in my experience of subjective awareness, even in the world in which information processing of certain kinds yields subjective experience. Therefore, it seems to me that, in either world, the proposition that “subjective awareness is epiphenomenal” is in some way defective, because it cannot, in any instance, be motivated, caused, or justified by the actual state of affairs concerning subjective awareness.

    This all makes me wonder if epiphenomenalism hasn’t taken a wrong turn somewhere. (But mayhaps I have taken the wrong turn).

  12. Zachriel,

    Automated responses only go so far. It is awareness that allows the understanding of what those sensory impressions mean.

    Does this mean that you think consciousness is not fundamentally mechanistic?

  13. keiths: Does this mean that you think consciousness is not fundamentally mechanistic?

    Not really. By automated responses, we meant those that happen without conscious awareness. People can even drive their cars unconsciously. When people are conscious of their sensations, then they can think about them, reflect or imagine new situations. This is a decided evolutionary advantage. Furthermore, the conscious mind is invested in the sensory world. It stands to gain or lose depending on its helpfulness in achieving pleasure or pain.

    keiths: Argument 2b: …
    2. In a world where such information processing could occur without subjective awareness, behavior would be unchanged.

    On #2. While complex information processing can certainly occur without consciousness, it wouldn’t be the same processing, so behavior would change. Unless, instead of a zombie, it’s a mimic. One could imagine the latter situation, but there would probably be a tell.

  14. Reciprocating Bill,

    Doesn’t the proposition that some forms of information processing are inherently accompanied by subjective awareness suffer from the same problem [of other minds]? Which forms? How would we know?

    Yes, and I don’t think there’s any way around it. The only way to approach the problem of other minds, as far as I can see, is on the basis of similarity. I experience subjective awareness, and since the brains of other humans are similar to mine, I figure they are probably aware also. At best, this is a probabilistic argument: the similarities among human brains seem to vastly outnumber the differences, so the odds are that the differences do not determine the presence or absence of subjective awareness. As a probabilistic argument, it carries less weight as the differences among brains become larger, which is why it is so interesting to consider the case of cephalopods.

    Regarding your thought experiment in which a switch turns subjective awareness on and off, I proffer the following interesting twist: what if the switch altered the polarity of the subjective experience rather than turning it on or off? In other words, with the switch in one position, subjective experience would be normal: sex would be pleasurable, torture painful. With the switch in the other position (let’s call it the “inverting” position), things would be reversed. Sex would be an unbearable agony and torture would be blissful. Judicious management of the switch position would seemingly ensure that experience was never worse than neutral, and often quite pleasurable.

    Of course, things are not quite so simple. Under the terms of the thought experiment, brain activity is unchanged with the polarity switch in either position, just as it was in your on-off switch experiment. Thus there is no reason for the brain to prefer either position, even though it makes all the difference in the world to the nature of the subjective experience.

    More on all of this later.

  15. Semi-OT (to quote ba77):
    A reflection on the vagaries of associative memory and subconscious processing: I just noticed the Clash song “Rock the Casbah” going through my head, though it hadn’t been there just moments before.

    It took me a few seconds to figure out what had happened. I had right-clicked my Windows taskbar, and one of the menu options that appeared was “Lock the taskbar”, which rhymes more or less with “Rock the Casbah”, and my brain was (subconsciously) off and running.

    This happens to me all the time. Songs pop into my head unbidden, for no apparent reason, but if I have the time and the inclination, I’m sometimes able to figure out what environmental cue (such as “Lock the taskbar”) was responsible.

    Do other readers of this thread have that experience?

  16. And how many of you now have “Rock the Casbah” going through your heads? 🙂

  17. Indeed, it could not have been favored by selection if it is epiphenomenal, because epiphenomena have no causal power and therefore must be invisible to selection.

    I’m not sure what an epiphenomenon is, but awareness of others is certainly useful and selectable, and the ability to observe oneself is certainly added on top of awareness of others. I accept consciousness as something of a mystery, but the evolutionary path is clear.

  18. keiths:
    And how many of you now have “Rock the Casbah” going through your heads?

    Not I. In fact, I had never heard of “Rock the Casbah”.

    Getting back to your “information processing” ideas, it seems to me that what we normally mean by “information processing” is what computers do, and that is entirely a matter of syntactic processing. However, if what minds do can be called “information processing”, then that is predominantly semantic processing. So I am doubting that you will have the same information processing without consciousness.

  19. I don’t think what brains do is information processing in the sense of what computers do. What brains do is more nearly analog. The critical measure of activity is the firing rate of neurons. What neurons are doing is not pushing digital bits, but voting. It is the rate of voting that counts.
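
    To make the “voting” picture concrete, here is a toy simulation – a minimal sketch in which the rates, the threshold, and the Poisson spike generator are all invented for illustration, not anyone’s actual model of a neuron:

```python
import numpy as np

# Toy sketch of rate coding: the downstream outcome depends on the
# aggregate firing RATE of the inputs, not on any individual spike.
rng = np.random.default_rng(0)

def spike_train(rate_hz, duration_s=1.0, dt=0.001):
    """Poisson-style spike train: True = a spike in that 1 ms bin."""
    return rng.random(int(duration_s / dt)) < rate_hz * dt

# Ten presynaptic neurons "vote" at different rates (spikes/sec).
rates = [5, 20, 20, 40, 40, 40, 60, 60, 80, 100]
trains = [spike_train(r) for r in rates]

# Only the pooled rate matters: jitter or delete any single spike
# and the outcome is unchanged.
pooled_hz = np.mean([t.sum() for t in trains])
print(f"pooled rate: {pooled_hz:.1f} Hz ->",
      "fires" if pooled_hz > 30 else "silent")
```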

  20. Petrushka,

    I’m not sure what an epiphenomenon is…

    To assert that subjective awareness is epiphenomenal is to claim that it is a byproduct of the physical activity of the brain, with no causal power of its own. It “sits off to the side”, so to speak. The physical activity of the brain determines subjective awareness, but subjective awareness has no effect on the physical activity of the brain. The causality is unidirectional.

    …but awareness of others is certainly useful and selectable, and the ability to observe oneself is certainly added on top of awareness of others. I accept consciousness as something of a mystery, but the evolutionary path is clear.

    ‘Consciousness’ and ‘awareness’ have multiple meanings, and this causes some confusion. I’ve been deliberately using the phrase ‘subjective awareness’ in this thread to emphasize that what I’m talking about is the feeling, the subjective experience that accompanies brain activity.

    The kind of awareness you’re talking about is the faculty for representing others, and oneself, in one’s brain. It’s a form of information processing. That has enormous evolutionary value, as you note, and is no doubt favored by selection. The subjective experience of being conscious of and manipulating these representations, on the other hand, seems to have no effect on behavior. It appears (to me, at least) to be epiphenomenal.

  21. Zachriel,

    When people are conscious of their sensations, then they can think about them, reflect or imagine new situations. This is a decided evolutionary advantage.

    As I remarked to Petrushka, there is certainly an evolutionary advantage in being able to form representations and manipulate them. But what is the advantage of actually feeling one’s self forming and manipulating them? In other words, what is the advantage of subjective awareness as opposed to mere representation? What can subjective awareness add to what sophisticated representation and manipulation already accomplishes?

    Furthermore, the conscious mind is invested in the sensory world. It stands to gain or lose depending on its helpfulness in achieving pleasure or pain.

    That just boils down to saying that if you present a sophisticated, conscious brain with painful stimuli as inputs, its outputs will be driven in a way that tends to reduce or eliminate the painful stimuli and internal states, and to increase pleasurable stimuli and internal states. The mapping of inputs + current brain state to outputs + new brain state seems to be specifiable (in principle) as a pure information processing operation. The subjective experience of pleasure and pain seems to serve no purpose that isn’t already fulfilled by the physical nervous system responding to physical stimuli in strict accordance with physical law. If the subjective awareness somehow vanished, the physical laws would still hold and the state of the nervous system would evolve in exactly the same way as before, producing the same outputs and consequently the same behavior.
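
    That mapping can be made concrete with a toy state machine – a minimal sketch, with invented states, stimuli, and transition rules; the point is only that the specification nowhere mentions how anything feels:

```python
from typing import NamedTuple

class Step(NamedTuple):
    output: str
    next_state: str

# A toy "nervous system" as a transition table:
# (current state, input) -> (output, next state).
TRANSITIONS = {
    ("rest",  "painful stimulus"):  Step("withdraw", "alert"),
    ("rest",  "pleasant stimulus"): Step("approach", "rest"),
    ("alert", "painful stimulus"):  Step("flee", "alert"),
    ("alert", "no stimulus"):       Step("relax", "rest"),
}

def respond(state: str, stimulus: str) -> Step:
    # A pure function of input and current state; no term for feeling.
    return TRANSITIONS.get((state, stimulus), Step("idle", state))

state = "rest"
for stimulus in ("painful stimulus", "painful stimulus", "no stimulus"):
    output, state = respond(state, stimulus)
    print(f"{stimulus!r} -> {output!r} (state: {state!r})")
```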

  22. Neil Rickert,

    Getting back to your “information processing” ideas, it seems to me that what we normally mean by “information processing” is what computers do, and that is entirely a matter of syntactic processing. However, if what minds do can be called “information processing”, then that is predominantly semantic processing. So I am doubting that you will have the same information processing without consciousness.

    I think that semantic systems can be built out of ‘mere’ syntactic processing units. Unless you are a dualist, or unless you think that what neurons do somehow transcends computation, I’m not sure how you can argue otherwise. Neural firings are determined solely by physics and are utterly uninfluenced by the interpretations attached to them by the rest of the nervous system.

  23. keiths,

    But what is the advantage of actually feeling one’s self forming and manipulating them? In other words, what is the advantage of subjective awareness as opposed to mere representation? What can subjective awareness add to what sophisticated representation and manipulation already accomplishes?

    So the word epiphenomenon has no entailment and could be dismissed for parsimony. I’m going to repeat and elaborate what I’ve been saying.

    There is no representation of things, events, or memories in the brain. There is nothing analogous to what happens in a digital computer. That’s something of an exaggeration. There are some circuits in the auditory and visual input that can be modeled by computer.

    For the most part the analogy with digital computers is misleading. It’s one of those cases where you don’t understand it until you can make it. As far as I can see we have been able to make flying machines that are almost as intelligent as flying insects, and that’s about the limit of our understanding.

  24. keiths: Regarding your thought experiment in which a switch turns subjective awareness on and off, I proffer the following interesting twist: what if the switch altered the polarity of the subjective experience rather than turning it on or off? In other words, with the switch in one position, subjective experience would be normal: sex would be pleasurable, torture painful. With the switch in the other position (let’s call it the “inverting” position), things would be reversed. Sex would be an unbearable agony and torture would be blissful. Judicious management of the switch position would seemingly ensure that experience was never worse than neutral, and often quite pleasurable.

    Then the conscious mind would work to reduce sexual contact and increase physical injury. It would have to battle the autonomic systems, but certainly would have an effect on behavior. And that effect would be a selectable disadvantage (all else being equal).

    keiths: But what is the advantage of actually feeling one’s self forming and manipulating them?

    Because that is what motivates the consciousness to work towards selectable ends, sex in the positive sense, injury in the negative sense.

    Yes, it could be strictly autonomic, like driving your car mindlessly. But once you have an independent system that reflects on the situation, then what does it think about? Does it care? Because if it doesn’t care, then why does it bother? It is the passions that galvanize action. Without passion, without hunger, humans are languid and inert.

    We think of computers that are capable of amazing feats of analysis without consciousness, but then, they don’t care. They will just as well find a cure for cancer as direct a nuclear bomb to detonate over a city, or walk themselves off a cliff without any regrets whatsoever. It’s certainly possible to imagine a robot civilization that reproduces and protects itself unconsciously, like ants do. But that’s not the course that evolution followed in the case of humans and their organic relatives. The conscious mind feels, reflects and responds.

  25. keiths: I think that semantic systems can be built out of ‘mere’ syntactic processing units.

    I would like to know how that could be done.

    We have seen attempts by AI researchers. It seems to always require a lot of prior knowledge (rules of syntax that emulate semantic properties), and it seems to at best yield very crude results. I doubt that the human genome has sufficient capacity to encode all of the prior knowledge that would be required. And if that knowledge is instead acquired through learning, then there is nothing coming out of ML (machine learning) research that would indicate how to do that.

    Unless you are a dualist, or unless you think that what neurons do somehow transcends computation, I’m not sure how you can argue otherwise.

    No, I am not suggesting anything that transcends computation. I actually doubt that the brain is doing much computation. I am talking about what precedes computation. You cannot do information processing until there is information. We should be looking at perception as an information producer, not an information processor.

    As I see it, getting information about the world involves action. I see consciousness as the result of the activity involved in getting information about the environment, combined with our ability to perceive ourselves carrying out these actions.

    Neural firings are determined solely by physics and are utterly uninfluenced by the interpretations attached to them by the rest of the nervous system.

    True, but mostly a pointless diversion. Spark plug firings are determined solely by physics and are utterly uninfluenced by the interpretations attached to them. Nevertheless, those spark plug firings won’t happen unless I insert the ignition key and then operate the starter – two actions that are very much influenced by the interpretations that I give to them.

  26. keiths: As I remarked to Petrushka, there is certainly an evolutionary advantage in being able to form representations and manipulate them.

    Perception is representational. People who talk of information processing the way that you do seem to suggest that there is another layer of “hidden” representation. I am skeptical of that. As for “manipulate them” – I see no selective advantage in being able to “photo-shop” our representations to make them look nice. The selective advantage is for our perceptions to be as representational as possible of the way that reality is.

    But what is the advantage of actually feeling one’s self forming and manipulating them? In other words, what is the advantage of subjective awareness as opposed to mere representation? What can subjective awareness add to what sophisticated representation and manipulation already accomplishes?

    I take the view that information is a construct, in which case the awareness is what enables us to manage that construction of information so that we get the kind of information that we want. That’s what makes picking up information an action.

    Some people take the view that information is entirely physical and exists in the world independent of us. Well, fair enough. But with that view of information, if we were to pick up all of the physical information that our sensory cells are capable of detecting, the signal to noise ratio would be close to zero. So if you take that view of information, then the action involved in perception is one of careful filtering so that you only get information that is useful or potentially useful. In that case, awareness allows you to control the filtering based on pragmatic considerations.

  27. Neil Rickert: As for “manipulate them” – I see no selective advantage in being able to “photo-shop” our representations to make them look nice. The selective advantage is for our perceptions to be as representational as possible of the way that reality is.

    I don’t think Zachriel intends “manipulate” in the sense of retouching or modifying those representations, Photoshop-wise. Were I assembling a machine constructed of several geometrically complex parts, I may need to visualize the rotation and positioning of a part in order to determine how it docks into the mechanism. In so doing I am manipulating (mentally rotating) a representation of the object. At a social level, I might attempt to predict how a person will react to actions I am contemplating by consulting representations of that person’s beliefs, desires, and dispositions. In short, we manipulate representations for the purpose of selecting and executing behaviors, not to gild reality.

  28. Petrushka,

    There is no representation of things, events, or memories in the brain.

    When I imagine what would happen if I placed a hundred-pound anvil on top of an empty cardboard box, you don’t think my brain is representing the anvil, the box, and the laws of physics?

    There is nothing analogous to what happens in a digital computer. That’s something of an exaggeration. There are some circuits in the auditory and visual input that can be modeled by computer.

    Whether brains are analogous to digital computers and whether they can be modeled by digital computers are two distinct questions. Digital computers are routinely used to model non-digital systems and processes.

  29. DrBot,

    Thanks for the link. I hope they’ll post a video of the debate afterwards.

    Anil Seth’s Guardian article poses the following question:

    5. What is the function of consciousness? What are experiences for?

    Researchers have now discovered that many cognitive functions can take place in the absence of consciousness. We can perceive objects, make decisions, and even perform apparently voluntary actions without consciousness intervening. One possibility stands out: consciousness integrates information. According to this view, each of our experiences rules out an enormous number of alternative possibilities, and in doing so generates an incredibly large amount of information.

    It may well be that the function of consciousness is to integrate information, but that doesn’t explain why the integration is accompanied by subjective awareness.

  30. I should be more specific. There are no specific locations for mental objects, as there would be in a data processing model. A better analogy would be a hologram.

  31. Petrushka,

    While it’s true that representations are “spread out” in neural networks, that doesn’t mean that they aren’t data processing systems. Data processing encompasses much more than just serial program execution on digital computers. Google “data processing neural networks” and you’ll see what I mean.
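
    A minimal sketch of the point, with invented sizes and random data – the “representation” below is a pattern spread across all of the hidden units, with no single storage location, yet the whole mapping is perfectly well-defined data processing:

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.normal(size=(4, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 2))  # hidden -> output weights

def forward(x):
    # The "representation" of x is the whole activation pattern over
    # the hidden layer; it has no single address, but the computation
    # it participates in is fully specified.
    hidden = np.tanh(x @ W1)
    return hidden @ W2

print(forward(np.array([1.0, 0.0, -1.0, 0.5])))
```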

  32. I would prefer to say that computing encompasses more than data processing. But it’s probably just my idiosyncratic terminology. I would argue that brains have an enormous amount of stochastic stuff going on. Timing, firing rates, all influenced by changing chemistry.

    I think it is possible to replicate some of this in silicon, but not to duplicate an individual mind or to record or upload a mind.

  33. Neil,

    I wrote:

    I think that semantic systems can be built out of ‘mere’ syntactic processing units.

    You responded:

    I would like to know how that could be done.

    Many computer programs deal with meanings, but the underlying operations are purely syntactic.
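
    As a toy illustration (the names and facts below are invented): at the semantic level this fragment “knows” who Alice’s grandparent is, while at the syntactic level every operation is nothing but symbol matching and table lookup:

```python
from typing import Optional

# Semantic description: the program answers questions about family
# relationships. Syntactic reality: string keys and dictionary lookups.
PARENT = {"alice": "bob", "bob": "carol"}

def grandparent(person: str) -> Optional[str]:
    parent = PARENT.get(person)  # pure symbol lookup
    return PARENT.get(parent) if parent else None

print(grandparent("alice"))  # -> carol
```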

    In response, you might argue that programs are syntactic from top to bottom, and that semantics comes into play only when a human interprets the output of the program. However, that argument is problematic because it could be applied to the human as well. A neuron is syntactic in the sense that its behavior depends only on its own state, the inputs it receives from its environment, and the laws of physics. The nervous system is constructed out of these purely syntactic neurons. If semantic systems can’t be constructed out of syntactic parts, as you claim, then it should be impossible for human nervous systems to deal with meanings.

    What do you think forms the basis for semantics in humans?

  34. Zachriel,

    Regarding my thought experiment in which the “polarity switch” made sex painful and torture pleasurable, you wrote:

    Then the conscious mind would work to reduce sexual contact and increase physical injury. It would have to battle the autonomic systems, but certainly would have an effect on behavior. And that effect would be a selectable disadvantage (all else being equal).

    You’re forgetting that under the terms of the thought experiment, neural activity is unaltered by the position of the switch. Behavior would therefore be unchanged, and only the accompanying subjective experience would change. The nervous system would continue to seek out sex and avoid torture, despite the horrific subjective experiences that would ensue.

    That’s my point: unless subjective experiences have some ability to alter the physical behavior of the nervous system, they are causally impotent – that is, epiphenomenal.

    I wrote:

    But what is the advantage of actually feeling one’s self forming and manipulating [representations]?

    You replied:

    Because that is what motivates the consciousness to work towards selectable ends, sex in the positive sense, injury in the negative sense.

    But if consciousness is generated by neural activity, then ‘motivation’ is also a strictly neural process: neurons are interconnected in such a way that the organism seeks out sex and avoids injury. The neurons will do their thing regardless of whether there is an accompanying feeling of pleasure or pain. The subjective experience is epiphenomenal.

    It is the passions that galvanize action. Without passion, without hunger, humans are languid and inert.

    The passions themselves are ultimately just patterns of interactions among neurons and the environment. Why is there an accompanying feeling? The neurons will do their thing with or without the feeling. I see no reason why we couldn’t have passionate behavior without passionate feeling. The feeling appears to be epiphenomenal.

    We think of computers that are capable of amazing feats of analysis without consciousness, but then, they don’t care. They will just as well find a cure for cancer as direct a nuclear bomb to detonate over a city, or walk themselves off a cliff without any regrets whatsoever.

    A computer that is programmed to preserve itself will not walk itself off a cliff if it foresees the consequences. A computer that is programmed to avoid harming humans will not detonate a nuclear bomb over a populous city. These behaviors are possible without any accompanying subjective feeling. Again, any subjective experience accompanying these behaviors, whether in computers or in humans, would seem to be epiphenomenal.

  35. Reciprocating Bill,

    I reread the comment in which you introduced the thought experiment involving the on/off switch for subjective awareness.

    You wrote:

    However, it follows from epiphenomenalism that neither the fact nor the contents of awareness will have any bearing upon whether and when I would throw the switch, whether or not I am a zombie. Given epiphenomenalism, all behaviors – including switch throwing – are mediated by physical events (including certain forms of information processing) without regard to the presence of subjective states, past or present. A decision to turn off awareness in anticipation of torture cannot flow causally from prior subjective experiences of pain – even when that decision is taken by the version of me that possesses subjective awareness. Given epiphenomenalism, the fact of subjective awareness, or the absence of same, can itself have no causal bearing upon whether I throw the switch in advance of certain suffering.

    I agree with all of that, but not with the next sentence:

    Hence both me and my zombie will be equally motivated, and motivated for the same reasons, to deactivate awareness when the pliers come out.

    Actually, under the terms of the thought experiment, the activity of your nervous system is utterly unaffected by the position of the switch. The switch has no effect, as far as the nervous system is concerned, and there is no reason to change its position — despite the fact that its position has huge consequences for subjective awareness.

    I think this supports my position that subjective experience is causally inert, and only the information processing done by the nervous system has causal efficacy.

  36. keiths: In response, you might argue that programs are syntactic from top to bottom, and that semantics comes into play only when a human interprets the output of the program.

    No, I don’t agree with that. Sure, there is some semblance of semantic behavior in a computer. Part of that comes from the input devices which presumably are connected to real world phenomena. And part of the semantic behavior comes from encoding rules into the system – that is, it is derived from the programmer’s semantics.

    The problem is that this requires a huge amount of prior programming. If the brain works that way, it would require a huge amount of innate knowledge. This does not match our experience, and it is doubtful that the genome has sufficient capacity for that to all be encoded. What we see coming from machine learning research does not seem capable of filling the gap.

    A neuron is syntactic in the sense that its behavior depends only on its own state, the inputs it receives from its environment, and the laws of physics.

    I’m not sure why you would think that makes it syntactic. That neurons are able to adaptively modify their behavior (as in Hebbian learning) at least suggests that they might have a semantic aspect to their behavior.

    What do you think forms the basis for semantics in humans?

    The adaptivity of homeostatic processes.

  37. keiths: I agree with all of that, but not with the next sentence:

    Hence both me and my zombie will be equally motivated, and motivated for the same reasons, to deactivate awareness when the pliers come out.

    Actually, under the terms of the thought experiment, the activity of your nervous system is utterly unaffected by the position of the switch. The switch has no effect, as far as the nervous system is concerned, and there is no reason to change its position — despite the fact that its position has huge consequences for subjective awareness.

    That is what I intended to convey – and why I said that zombie and non-zombie will be equally motivated. They are equally motivated because only nervous system activity determines behavior, including switch-setting, regardless of the consequences for subjective experience. The subjective experience of torture cannot influence the choice to set the switch, or motivate the avoidance of the experience of torture (although information processing may). That seems peculiar.

    Other interesting consequences follow from epiphenomenalism. Above you said,

    It may well be that the function of consciousness is to integrate information, but that doesn’t explain why the integration is accompanied by subjective awareness.

    Which sounds perilously close to saying that we can experience consciousness without subjective awareness. That prompted me to remark on some experiences I have had. Zombies must have information processing analogs of these experiences, too:

    – I am reading. As I read, I become aware, first somewhere in the back but then front and center, that a faucet is dripping. I get up and tighten the faucet.

    – I am in a crowded ballroom, with a thousand people and a hubbub of voices. No individual words can be made out due to the hubbub. I become aware that someone is repeating my name, calling me from across the room. I look to see who is calling me.

    – I recall a dream fragment from last night. I dreamt that my father, 92 years of age, was asleep in my top dresser drawer, little more substantial than a small bundle of sticks tied with a string, light and helpless. He called me and I picked up his straw body, crying. I become resolved to spend more time with my very elderly parents.

    Epiphenomenalism holds that these subjective experiences reflect the underlying information processing, only, and would occur unchanged “in the dark,” regardless of subjective awareness. Because both behavior and the underlying processing are unchanged regardless of the presence or absence of subjective awareness, and zombies are capable of identical shifts of information processing, zombies too “become aware” of previously unnoticed dripping faucets (and thereby take action), of their names being called, and of dreams they have had, including the emotional content of those dreams. But they do so without subjective awareness of having “become aware.” We have to call these shifts of processing from “lower” levels characterized by unconscious automaticities to “higher” levels characterized by conscious executive attention something else. But one begins to wonder whether “a rose by any other name” becomes apt.

    Lastly, it seems to me that it follows from epiphenomenalism that, even in non-zombies, subjective experiences cannot become the object of information processing. Not only is subjective awareness causally irrelevant to information processing (no causation flows from subjective experience to behavior or the underlying processing), subjective experience must be unavailable to information processing, or it would make a difference, and the behavior of zombies and non-zombies would diverge. It follows that you and I do not recall, reflect upon, reason over, appreciate, classify, imagine, or talk about subjective experiences. We “think we do,” but that is an illusion. It also seems to follow that this conversation is not really “about” subjective experience. How can it be?

    At some point, this all strikes me as not just disquieting, but incoherent. So I fall back on my initial statement above, with which you agree, that we are clearly missing something fundamental about subjective experience and consciousness. The consequences of epiphenomenalism (which is difficult to refute, unless one is willing to abandon the causal closure of the physical world) deepen the sense that we do not yet know how to think about this problem.

  38. keiths: You’re forgetting that under the terms of the thought experiment, neural activity is unaltered by the position of the switch. Behavior would therefore be unchanged, and only the accompanying subjective experience would change.

    If the consciousness is not able to affect behavior, then, of course, it’s not selective. But that clearly doesn’t appear to be the case *in humans*. That’s why zombies typically act so unhuman-like: They are animated, but not conscious.

    keiths: These behaviors are possible without any accompanying subjective feeling.

    We can imagine an unconscious organism with very complex behaviors, even exceeding that of humans. Ant colonies are capable of many complex activities, but they don’t exhibit consciousness. Consciousness is a *particular and peculiar* adaptation. Consciousness is not the only type of adaptation capable of complex behavior; it’s just the evolutionary path humans are on.

    Oh, and consciousness is distinctive. Even an alien mimic without consciousness would probably be distinguishable with familiarity, though perhaps not on first meeting {nod head and smile a lot. try not to drool or stare at their brains}.

  39. Zachriel,

    If the consciousness is not able to affect behavior, then, of course, it’s not selective. But that clearly doesn’t appear to be the case *in humans*. That’s why zombies typically act so unhuman-like: They are animated, but not conscious.

    Zombies in the popular imagination are unhuman-like, but philosophical zombies – who process information exactly as humans do, but without subjective awareness – behave indistinguishably from humans.

    Can you think of any behavior that depends specifically on phenomenology, and not on the underlying information processing, for its selective value? I can’t.

    We can imagine an unconscious organism with very complex behaviors, even exceeding that of humans. Ant colonies are capable of many complex activities, but they don’t exhibit consciousness.

    We don’t think of ant colonies as being conscious, but how could we know that? It’s the problem of other minds again. Also, see Hofstadter’s Gödel, Escher, Bach for an entertaining interlude featuring a conscious anthill who calls herself Aunt Hillary and depends on her friend the anteater for corrective “neurosurgery”.

    Consciousness is a *particular and peculiar* adaptation. Consciousness is not the only type of adaptation capable of complex behavior; it’s just the evolutionary path humans are on.

    But again, I ask if subjective awareness per se has any adaptive value above and beyond the value of the underlying information processing. To me it seems obvious that it does not, because all the causal power rests with the information processing — or if you prefer, with the physical substrate on which the information processing is implemented.

  40. Reciprocating Bill,

    That is what I intended to convey – and why I said that zombie and non-zombie will be equally motivated. They are equally motivated because only nervous system activity determines behavior, including switch-setting, regardless of the consequences for subjective experience. The subjective experience of torture cannot influence the choice to set the switch, or motivate the avoidance of the experience of torture (although information processing may). That seems peculiar.

    Agreed. I was objecting only to the assertion that you and your zombie would both be “motivated for the same reasons, to deactivate awareness when the pliers come out.” Turning the switch off has no effect on information processing, which means it does not relieve the information-processing analog of pain (let’s call the information-processing aspect of consciousness ‘i-consciousness’). That in turn means that neither you nor your zombie will be motivated to ‘deactivate awareness’ by throwing the switch, though your subjective awareness (let’s call it ‘a-consciousness’), if it had a voice in the matter, would surely beg for the switch to be turned off.

    Thus we have a bizarre situation in which 1) you are experiencing great subjective pain, 2) you have the means to eliminate it, but 3) you feel no motivation to throw the switch, because the subjective experience has no causal power. If someone asked you, “Does the switch make any difference?”, your response would be an unequivocal “no”. In a very real sense, you don’t know that the switch eliminates pain, though it does. Your i-consciousness is aware of no difference when the switch is thrown, but your a-consciousness is.

    And in my polarity switch example, if you had the switch in the inverting position, you could be experiencing great subjective pain, yet be begging for the experience to continue, because your i-consciousness would be i-enjoying itself while your a-consciousness was in a-agony.

    These paradoxical situations may contain the kernel of an argument for why i-consciousness and a-consciousness must always remain in sync, although I don’t think they explain why a-consciousness arises in the first place — unless i- and a-consciousness are really the same thing, somehow.

    Above you said,

    It may well be that the function of consciousness is to integrate information, but that doesn’t explain why the integration is accompanied by subjective awareness.

    Which sounds perilously close to saying that we can experience consciousness without subjective awareness.

    Or more precisely, that we can at least conceive of having i-consciousness without an accompanying a-consciousness. It may or may not be possible in our world for i-consciousness to exist apart from a-consciousness, but it seems to lead to no obvious logical contradiction.

    That prompted me to remark on some experiences I have had. Zombies must have information-processing analogs of these experiences, too…
    Epiphenomenalism holds that these subjective experiences merely reflect the underlying information processing, and would occur unchanged “in the dark,” regardless of subjective awareness. Because both behavior and the underlying processing are unchanged regardless of the presence or absence of subjective awareness, and zombies are capable of identical shifts of information processing, zombies too “become aware” of previously unnoticed dripping faucets (and thereby take action), of their names being called, and of dreams they have had, including the emotional content of those dreams. But they do so without subjective awareness of having “become aware.” We have to call these shifts of processing, from “lower” levels characterized by unconscious automaticities to “higher” levels characterized by conscious executive attention, something else. But one begins to wonder whether “a rose by any other name” becomes apt.

    Yes, but I’m left with the nagging feeling that we’re still talking about i-consciousness but not a-consciousness. Where does the a-consciousness come from? Thinking about this makes my brain hurt (both i-hurt and a-hurt).

    Lastly, it seems to me that it follows from epiphenomenalism that, even in non-zombies, subjective experiences cannot become the object of information processing. Not only is subjective awareness causally irrelevant to information processing (no causation flows from subjective experience to behavior or the underlying processing); subjective experience must also be unavailable to information processing, or it would make a difference, and the behavior of zombies and non-zombies would diverge. It follows that you and I do not recall, reflect upon, reason over, appreciate, classify, imagine, or talk about subjective experiences. We “think we do,” but that is an illusion. It also seems to follow that this conversation is not really “about” subjective experience. How can it be?

    My instinct is to suggest that when we talk about subjective experience, we are really talking about i-subjective experience, not a-subjective experience. There is a referent, both for us and our zombies, since we all have i-consciousness. I need to think about this some more, though.

  41. Neil,

    I wrote:

    A neuron is syntactic in the sense that its behavior depends only on its own state, the inputs it receives from its environment, and the laws of physics.

    You responded:

    I’m not sure why you would think that makes it syntactic. That neurons are able to adaptively modify their behavior (as in Hebbian learning) at least suggests that they might have a semantic aspect to their behavior.

    It’s syntactic because its behavior, including any adaptive self-modification, can be modeled purely formally — i.e., syntactically. A neural net can be modeled on a universal Turing machine. Can’t get much more formal than that.
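
    To make that concrete, here is a toy sketch of my own (purely illustrative; the unit, learning rate, and threshold are invented for the example): a single threshold neuron with the classic Hebbian update, written in Python as a formal state-transition rule. Nothing in it “knows” what its inputs mean; the next state is computed from the current state and the inputs alone.

        # Hypothetical toy model: a threshold neuron with a Hebbian update,
        # expressed as a purely formal (syntactic) state-transition rule.
        def step(weights, inputs, rate=0.1, threshold=1.0):
            """Fire if the weighted input sum crosses the threshold, then
            strengthen the weights on co-active inputs (Hebb's rule)."""
            activation = sum(w * x for w, x in zip(weights, inputs))
            output = 1 if activation >= threshold else 0
            weights = [w + rate * output * x for w, x in zip(weights, inputs)]
            return weights, output

        w = [0.4, 0.6]
        for _ in range(3):
            w, out = step(w, [1, 1])  # repeated co-activation strengthens w
            print(w, out)

    The adaptive self-modification (the weight change) is itself just another formal rule, which is the point: the “learning” adds nothing non-syntactic.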

    I asked:

    What do you think forms the basis for semantics in humans?

    You responded:

    The adaptivity of homeostatic processes.

    That’s pretty cryptic. Are you suggesting that intentionality arising within a system as it maintains homeostasis is somehow original intentionality, versus the derived intentionality of a designed system? If so, how do you justify the idea that a system ‘designed’ by natural selection exhibits original intentionality?

  42. keiths: It’s syntactic because its behavior, including any adaptive self-modification, can be modeled purely formally — i.e., syntactically.

    I am not persuaded by that reasoning.

    My car is not syntactic. Maybe it isn’t semantic either, but it is not syntactic. Yet it could be modeled.

    You responded:

    “The adaptivity of homeostatic processes.”

    That’s pretty cryptic. Are you suggesting that intentionality arising within a system as it maintains homeostasis is somehow original intentionality, versus the derived intentionality of a designed system?

    I am suggesting that it is the basis for original intentionality. But you won’t see much actual original intentionality in an amoeba or in a thermostat. System organization also matters.

    If so, how do you justify the idea that a system ‘designed’ by natural selection exhibits original intentionality?

    Nothing is actually designed by natural selection. I’m not a big fan of neo-Darwinism. I would prefer an account that emphasizes self-design by the organisms. In any case, the intentionality we see in evolved organisms is not directly a result of natural selection. It is a result of the intentionality that is already present in natural biological processes. If natural selection is seen as a filter, then the driving force of evolution is the biological reproduction that pumps stuff through that filter.

  43. keiths: Zombies in the popular imagination are unhuman-like, but philosophical zombies – who process information exactly as humans do, but without subjective awareness – behave indistinguishably from humans.

    Yes. However, it’s a self-contradiction. If they process information exactly as humans do, then they are conscious, because the human processing of information includes consciousness. Even if a nonconscious entity could behave precisely as humans do, that is not how humans do it.

    keiths: Also, see Hofstadter’s Gödel, Escher, Bach for an entertaining interlude featuring a conscious anthill who calls herself Aunt Hillary and depends on her friend the anteater for corrective “neurosurgery”.

    Sure. But again, that’s just not how ant colonies currently process information. If they did, then the consciousness would presumably be able to affect the colony’s behavior, such as Aunt Hillary’s eccentric insistence on being called “Aunt” even though she’s nobody’s aunt.

  44. Zachriel,

    You didn’t address the two most important questions in my comment:

    Can you think of any behavior that depends specifically on phenomenology, and not on the underlying information processing, for its selective value? I can’t.

    And:

    But again, I ask if subjective awareness per se has any adaptive value above and beyond the value of the underlying information processing. To me it seems obvious that it does not, because all the causal power rests with the information processing — or if you prefer, with the physical substrate on which the information processing is implemented.

    We’re in agreement regarding the adaptive value of the information processing that underlies consciousness, but I don’t think the accompanying phenomenology is even visible to selection, much less adaptive. Since you seem to disagree, could you explain why?

  45. Neil,

    Recall how this discussion started. I made this claim:

    I think that semantic systems can be built out of ‘mere’ syntactic processing units.

    You responded:

    I would like to know how that could be done.

    You also stated:

    That neurons are able to adaptively modify their behavior (as in Hebbian learning) at least suggests that they might have a semantic aspect to their behavior.

    I claim that a neural network can be modeled purely formally, and you haven’t disagreed. Why then doesn’t this qualify as an example of how a semantic system can be built on top of purely syntactic processing?

  46. keiths: Can you think of any behavior that depends specifically on phenomenology, and not on the underlying information processing, for its selective value? I can’t.

    Other than consciousness, no.

    keiths: We’re in agreement regarding the adaptive value of the information processing that underlies consciousness, but I don’t think the accompanying phenomenology is even visible to selection, much less adaptive. Since you seem to disagree, could you explain why?

    Because the accompanying phenomenology is part and parcel of the information processing that makes up the human mind.

  47. keiths: I claim that a neural network can be modeled purely formally, and you haven’t disagreed.

    Such modeling is imperfect.

    Why then doesn’t this qualify as an example of how a semantic system can be built on top of purely syntactic processing?

    It doesn’t have much to do with anything. It certainly does not prove that a semantic system can be built on top of purely syntactic processing.

    I’ve encountered that argument many times. The people who make that argument are presumably convinced. Nevertheless, it is now 62 years since Turing wrote his 1950 “Mind” paper, and AI still has problems with semantics.

  48. Zachriel,

    You seem to be arguing that the phenomenology of consciousness is essential to the underlying information processing, rather than merely accompanying it. What causal role does the phenomenology play, and how would behavior differ if the phenomenology did not accompany the information processing?
